Sample records for test existing models

  1. Model-based testing with UML applied to a roaming algorithm for bluetooth devices.

    PubMed

    Dai, Zhen Ru; Grabowski, Jens; Neukirchen, Helmut; Pals, Holger

    2004-11-01

    In late 2001, the Object Management Group issued a Request for Proposal to develop a testing profile for UML 2.0. In June 2003, the work on the UML 2.0 Testing Profile was finally adopted by the OMG, and since March 2004 it has been an official OMG standard. The UML 2.0 Testing Profile provides support for UML-based model-driven testing. This paper introduces a methodology for using the testing profile to modify and extend an existing UML design model for testing purposes. The application of the methodology is illustrated on an existing UML model for a Bluetooth device.

  2. Theoretical models of parental HIV disclosure: a critical review.

    PubMed

    Qiao, Shan; Li, Xiaoming; Stanton, Bonita

    2013-01-01

    This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]), and the existing studies that could provide empirical support for these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components of these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.

  3. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests that fulfill a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…
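
    As a note on mechanics: set covering is NP-hard, so greedy approximations are common in practice. The sketch below (Python, with invented items and constraint sets, not Huitzing's actual SCIS procedure) shows the basic greedy cover that such analyses build on.

      # Greedy approximation to set covering: at each step, pick the set
      # covering the most still-uncovered elements. Illustrative only.

      def greedy_set_cover(universe, subsets):
          """universe: set of elements; subsets: dict name -> set of elements."""
          uncovered = set(universe)
          chosen = []
          while uncovered:
              # Select the subset covering the largest number of uncovered elements.
              best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
              if not subsets[best] & uncovered:
                  raise ValueError("No feasible cover exists")  # infeasibility signal
              chosen.append(best)
              uncovered -= subsets[best]
          return chosen

      # Hypothetical constraints (elements) and items (sets of constraints they satisfy).
      constraints = {1, 2, 3, 4, 5}
      items = {"item_A": {1, 2}, "item_B": {2, 3, 4}, "item_C": {4, 5}}
      print(greedy_set_cover(constraints, items))  # e.g. ['item_B', 'item_A', 'item_C']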

  4. NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure to increase the quality and accuracy of energy analysis tools for the building retrofit market. The Building Energy Simulation Test for Existing Homes (BESTEST-EX) is a test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. The diagram illustrates the utility bill calibration test cases. Participants are given input ranges and synthetic utility bills. Software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participant energy savings predictions using calibrated models are compared to NREL predictions using state-of-the-art building energy simulation programs.

  5. Animal models for testing anti-prion drugs.

    PubMed

    Fernández-Borges, Natalia; Elezgarai, Saioa R; Eraña, Hasier; Castilla, Joaquín

    2013-01-01

    Prion diseases belong to a group of fatal infectious diseases with no effective therapies available. Throughout the last 35 years, fewer than 50 different drugs have been tested in different experimental animal models without hopeful results. An important limitation when searching for new drugs is the existence of appropriate models of the disease. The three different possible origins of prion diseases require the existence of different animal models for testing anti-prion compounds. Wild-type mice, over-expressing transgenic mice and other more sophisticated animal models have been used to evaluate a diversity of compounds, some of which were previously tested in different in vitro experimental models. The complexity of prion diseases will require more pre-screening studies, reliable sporadic (or spontaneous) animal models and accurate chemical modifications of the selected compounds before an effective therapy against human prion diseases is available. This review is intended to present the most relevant animal models that have been used in the search for new anti-prion therapies and to describe some possible procedures for handling chemical compounds presumed to have anti-prion activity prior to testing them in animal models.

  6. A Transactional Model of Bullying and Victimization

    ERIC Educational Resources Information Center

    Georgiou, Stelios N.; Fanti, Kostas A.

    2010-01-01

    The purpose of the current study was to develop and test a transactional model, based on longitudinal data, capable of describing the existing interrelation between maternal behavior and child bullying and victimization experiences over time. The results confirmed the existence of such a model for bullying, but not for victimization in terms of…

  7. General Model Study of Scour at Proposed Pier Extensions - Santa Ana River at BNSF Bridge, Corona, California

    DTIC Science & Technology

    2017-11-01

    model of the bridge piers, other related structures, and the adjacent channel. Data from the model provided a qualitative and quantitative evaluation of... [List-of-figures residue: Figure 38, Test 1 (30,000 cfs, existing conditions) pre- minus post-test lidar survey; Figure 39, Test 7 (15,000 cfs, original proposed conditions) pre- minus post-test lidar survey.]

  8. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  9. Methods for the Joint Meta-Analysis of Multiple Tests

    ERIC Educational Resources Information Center

    Trikalinos, Thomas A.; Hoaglin, David C.; Small, Kevin M.; Terrin, Norma; Schmid, Christopher H.

    2014-01-01

    Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests'…

  10. 40 CFR 91.501 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... information used as a basis for the FEL (e.g., previous emission tests, development tests), the specific...), (2) and (3) of this section. (1) The provisions of this subpart are waived for existing technology OB... provisions of this subpart for existing technology OB/PWC for a specific engine family through model year...

  11. Computer evaluation of existing and proposed fire lookouts

    Treesearch

    Romain M. Mees

    1976-01-01

    A computer simulation model has been developed for evaluating the fire detection capabilities of existing and proposed lookout stations. The model uses coordinate location of fires and lookouts, tower elevation, and topographic data to judge location of stations, and to determine where a fire can be seen. The model was tested by comparing it with manual detection on a...
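
    The core of such a detection model is a terrain line-of-sight check from the lookout to the fire. Below is a minimal Python sketch under simplifying assumptions (uniform elevation grid, straight sight line, invented names and parameters; not Mees's actual program).

      import numpy as np

      def has_line_of_sight(elev, tower, tower_h, fire, n_samples=200):
          """elev: 2-D elevation grid; tower/fire: (row, col) cells; tower_h: tower height.
          Returns True if no terrain sample blocks the sight line (simplified model)."""
          (r0, c0), (r1, c1) = tower, fire
          z0 = elev[r0, c0] + tower_h            # eye height at the lookout
          z1 = elev[r1, c1]                      # fire assumed at ground level
          for t in np.linspace(0.0, 1.0, n_samples)[1:-1]:
              r = int(round(r0 + t * (r1 - r0)))
              c = int(round(c0 + t * (c1 - c0)))
              sight_z = z0 + t * (z1 - z0)       # height of the sight line at t
              if elev[r, c] > sight_z:           # terrain rises above the line
                  return False
          return True

      terrain = np.zeros((100, 100))
      terrain[50, 40:60] = 30.0                  # a ridge between tower and fire
      print(has_line_of_sight(terrain, (50, 10), 15.0, (50, 90)))  # False: ridge blocks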

  12. A Statistical Test for Comparing Nonnested Covariance Structure Models.

    ERIC Educational Resources Information Center

    Levy, Roy; Hancock, Gregory R.

    While statistical procedures are well known for comparing hierarchically related (nested) covariance structure models, statistical tests for comparing nonhierarchically related (nonnested) models have proven more elusive. While isolated attempts have been made, none exists within the commonly used maximum likelihood estimation framework, thereby…

  13. Population genetic testing for cancer susceptibility: founder mutations to genomes.

    PubMed

    Foulkes, William D; Knoppers, Bartha Maria; Turnbull, Clare

    2016-01-01

    The current standard model for identifying carriers of high-risk mutations in cancer-susceptibility genes (CSGs) generally involves a process that is not amenable to population-based testing: access to genetic tests is typically regulated by health-care providers on the basis of a labour-intensive assessment of an individual's personal and family history of cancer, with face-to-face genetic counselling performed before mutation testing. Several studies have shown that application of these selection criteria results in a substantial proportion of mutation carriers being missed. Population-based genetic testing has been proposed as an alternative approach to determining cancer susceptibility, and aims for a more-comprehensive detection of mutation carriers. Herein, we review the existing data on population-based genetic testing, and consider some of the barriers, pitfalls, and challenges related to the possible expansion of this approach. We consider mechanisms by which population-based genetic testing for cancer susceptibility could be delivered, and suggest how such genetic testing might be integrated into existing and emerging health-care structures. The existing models of genetic testing (including issues relating to informed consent) will very likely require considerable alteration if the potential benefits of population-based genetic testing are to be fully realized.

  14. Analysis of the Sheltered Instruction Observation Protocol Model on Academic Performance of English Language Learners

    NASA Astrophysics Data System (ADS)

    Ingram, Sandra W.

    This quantitative comparative descriptive study involved analyzing archival data from end-of-course (EOC) test scores in biology of English language learners (ELLs) taught or not taught using the sheltered instruction observation protocol (SIOP) model. The study includes descriptions and explanations of the benefits of the SIOP model to ELLs, especially in content area subjects such as biology. Researchers have shown that ELLs in high school lag behind their peers in academic achievement in content area subjects. Much of the research on the SIOP model took place in elementary and middle schools, so more research was needed at the high school level. This study involved analyzing student records from archival data to determine whether the SIOP model had an effect on the EOC test scores of ELLs taught or not taught using it. The sample consisted of 527 Hispanic students (283 females and 244 males) from Grades 9-12. An independent-samples t-test determined whether a significant difference existed in the mean EOC test scores of ELLs taught using the SIOP model as opposed to ELLs not taught using the SIOP model. The results indicated that a significant difference existed between EOC test scores of ELLs taught using the SIOP model and ELLs not taught using the SIOP model (p = .02). A regression analysis, controlling for free and reduced-price lunch (p = .001), indicated that a significant difference existed in the academic performance of ELLs taught using the SIOP model in high school science when predicting passing scores on the EOC test in biology at the school level. When free and reduced-price lunch data and SIOP data were analyzed together, the combination was not significant (p = .175) for predicting passing scores on the EOC test in high school biology. Future researchers should repeat the study with student-level data as opposed to school-level data, and data should span at least three years.
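
    For readers unfamiliar with the procedure, an independent-samples t test of two groups' scores looks like the following minimal scipy sketch; the score arrays are synthetic stand-ins, not the study's records (the real study used 527 student records).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Hypothetical EOC biology scores for the two instructional groups.
      siop_scores = rng.normal(loc=72.0, scale=10.0, size=260)      # taught with SIOP
      non_siop_scores = rng.normal(loc=68.0, scale=10.0, size=267)  # not taught with SIOP

      # Independent-samples t test (Welch's version, not assuming equal variances).
      t_stat, p_value = stats.ttest_ind(siop_scores, non_siop_scores, equal_var=False)
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < .05 suggests a group difference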

  15. Measuring Misinformation in Repeat Trial Pick 1 of 2 Tests.

    ERIC Educational Resources Information Center

    Henderson, Pamela W.; Buchanan, Bruce

    1992-01-01

    An extension is described to a product-testing model to account for misinformation among subjects that would lead them to perform incorrectly on "pick one of two" tests. The model is applied to a data set of 367 subjects picking 1 of 2 colas. Misinformation does exist. (SLD)

  16. Using Bayesian Networks to Improve Knowledge Assessment

    ERIC Educational Resources Information Center

    Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra

    2013-01-01

    In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…
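
    As background, the simplest Bayesian student model updates the probability of skill mastery after each observed answer. The sketch below is only the single-skill special case with assumed slip and guess rates; the GBSM itself is a full Bayesian network.

      def update_mastery(p_mastery, correct, slip=0.1, guess=0.2):
          """Posterior P(mastered | answer) from Bayes' rule, single-skill model."""
          if correct:
              num = p_mastery * (1 - slip)                  # mastered and did not slip
              den = num + (1 - p_mastery) * guess           # or unmastered and guessed
          else:
              num = p_mastery * slip                        # mastered but slipped
              den = num + (1 - p_mastery) * (1 - guess)
          return num / den

      p = 0.5                                               # prior belief of mastery
      for answer in [True, True, False, True]:
          p = update_mastery(p, answer)
          print(f"P(mastered) = {p:.3f}")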

  17. Testing framework for embedded languages

    NASA Astrophysics Data System (ADS)

    Leskó, Dániel; Tejfel, Máté

    2012-09-01

    Embedding a new programming language into an existing one is a widely used technique, because it speeds up the development process and provides part of the language infrastructure for free (e.g. lexical and syntactical analyzers). In this paper we present a further advantage of this development approach: adding testing support for these new languages. Tool support for testing is a crucial point for a newly designed programming language. It could be provided the hard way, by creating a testing tool from scratch, or we could try to reuse existing testing tools by extending them with an interface to our new language. The second approach requires less work, and it also fits the embedded approach very well. The problem is that the creation of such interfaces is not straightforward at all, because most existing testing tools were not designed to be extensible or to deal with new languages. This paper presents an extendable and modular model of a testing framework, in which the most basic design decision was to keep the previously mentioned interface creation simple and straightforward. Other important aspects of our model are test data generation, the oracle problem and the customizability of the whole testing phase.

  18. Redshift drift constraints on holographic dark energy

    NASA Astrophysics Data System (ADS)

    He, Dong-Ze; Zhang, Jing-Fei; Zhang, Xin

    2017-03-01

    The Sandage-Loeb (SL) test is a promising method for probing dark energy because it measures the redshift drift in the spectra of the Lyman-α forest of distant quasars, covering the "redshift desert" of 2 ≲ z ≲ 5, which is not covered by existing cosmological observations. Therefore, it could provide an important supplement to current cosmological observations. In this paper, we explore the impact of the SL test on the precision of cosmological constraints for two typical holographic dark energy models, i.e., the original holographic dark energy (HDE) model and the Ricci holographic dark energy (RDE) model. To avoid data inconsistency, we use the best-fit models based on current combined observational data as the fiducial models to simulate 30 mock SL test data points. The results show that the SL test can effectively break the strong degeneracy between the present-day matter density Ωm0 and the Hubble constant H0 that exists in other cosmological observations. For the two dark energy models considered, a 30-year observation of the SL test not only improves the constraint precision of Ωm0 and h dramatically, but also enhances the constraint precision of the model parameters c and α significantly.
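
    For reference, the redshift drift measured by the SL test is conventionally written as follows (standard cosmology notation, not quoted from the paper):

      \dot{z} = (1+z)\,H_0 - H(z), \qquad
      \Delta v = \frac{c\,\Delta z}{1+z}
               = c\,H_0\,\Delta t_{\rm obs}\left[1 - \frac{E(z)}{1+z}\right], \qquad
      E(z) \equiv \frac{H(z)}{H_0}.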

  19. Turbine Engine Research Center (TERC) Data System Enhancement and Test Article Evaluation. Delivery Order 0002: TERC Aeromechanical Characterization

    DTIC Science & Technology

    2005-06-01

    test, the entire turbulence model was changed from standard k-epsilon to Spalart-Allmaras. Using these different tools of turbulence models, a few...this research, leaving only pre-existing finite element models to be used. At some point a NASTRAN model was developed for vibrations analysis but

  20. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.

  1. Four applications of permutation methods to testing a single-mediator model.

    PubMed

    Taylor, Aaron B; MacKinnon, David P

    2012-09-01

    Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
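
    To make the first method concrete, a permutation test of ab can be sketched as follows. This simplified version permutes the mediator to generate the null distribution, which is one of several possible schemes and not necessarily the authors' exact implementation; the data are synthetic.

      import numpy as np

      def ab_estimate(x, m, y):
          """Indirect effect ab from two OLS fits: M ~ X and Y ~ X + M."""
          a = np.polyfit(x, m, 1)[0]                       # slope of M on X
          X = np.column_stack([np.ones_like(x), x, m])
          b = np.linalg.lstsq(X, y, rcond=None)[0][2]      # coefficient of M in Y ~ X + M
          return a * b

      rng = np.random.default_rng(1)
      n = 200
      x = rng.normal(size=n)
      m = 0.4 * x + rng.normal(size=n)                     # true a = 0.4
      y = 0.3 * m + rng.normal(size=n)                     # true b = 0.3

      obs = ab_estimate(x, m, y)
      # Null distribution: permute M to break the X -> M -> Y chain (a simplification
      # of the permutation schemes evaluated in the paper).
      null = [ab_estimate(x, rng.permutation(m), y) for _ in range(2000)]
      p = np.mean(np.abs(null) >= abs(obs))
      print(f"ab = {obs:.3f}, permutation p = {p:.4f}")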

  2. A new fit-for-purpose model testing framework: Decision Crash Tests

    NASA Astrophysics Data System (ADS)

    Tolson, Bryan; Craig, James

    2016-04-01

    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have identified that a good standard framework for model testing, the Klemeš Crash Tests (KCTs) (the classic model validation procedures from Klemeš (1986), renamed KCTs by Andréassian et al. (2009)), has yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing if the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions, and ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or do not upgrade the existing flood control structure) under two different sets of model building decisions. In one case, we show the set of model building decisions has a low probability of correctly supporting the upgrade decision. In the other case, we show evidence suggesting another set of model building decisions has a high probability of correctly supporting the decision. The proposed DCT framework focuses on what model users typically care about: the management decision in question. The DCT framework will often be very strict and will produce easy-to-interpret results enabling clear unsuitability determinations. In the past, hydrologic modelling progress has necessarily meant new models and model building methods. Continued progress in hydrologic modelling requires finding clear evidence to motivate researchers to disregard unproductive models and methods, and the DCT framework is built to produce this kind of evidence. References: Andréassian, V., C. Perrin, L. Berthet, N. Le Moine, J. Lerat, C. Loumagne, L. Oudin, T. Mathevet, M.-H. Ramos, and A. Valéry (2009), Crash tests for a standardized evaluation of hydrological models. Hydrology and Earth System Sciences, 13, 1757-1764. Klemeš, V. (1986), Operational testing of hydrological simulation models. Hydrological Sciences Journal, 31 (1), 13-24.

  3. Selecting Single Model in Combination Forecasting Based on Cointegration Test and Encompassing Test

    PubMed Central

    Jiang, Chuanjin; Zhang, Jing; Song, Fugen

    2014-01-01

    Combination forecasting takes all the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models arbitrarily, neglecting the internal characteristics of the forecasting object. After discussing the role of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives the following guidance for single-model selection: no more than five suitable single models should be selected from the many alternative single models for a certain forecasting target, which increases accuracy and stability. PMID:24892061

  4. Selecting single model in combination forecasting based on cointegration test and encompassing test.

    PubMed

    Jiang, Chuanjin; Zhang, Jing; Song, Fugen

    2014-01-01

    Combination forecasting takes all the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models arbitrarily, neglecting the internal characteristics of the forecasting object. After discussing the role of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives the following guidance for single-model selection: no more than five suitable single models should be selected from the many alternative single models for a certain forecasting target, which increases accuracy and stability.
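
    A cointegration screen of the kind described can be run with statsmodels, as in the sketch below; the series are synthetic, and the Engle-Granger test shown is one common choice rather than necessarily the paper's exact test.

      import numpy as np
      from statsmodels.tsa.stattools import coint

      rng = np.random.default_rng(2)
      n = 300
      trend = np.cumsum(rng.normal(size=n))             # shared stochastic trend
      actual = trend + rng.normal(scale=0.5, size=n)    # forecasting target
      forecast = trend + rng.normal(scale=0.5, size=n)  # a candidate model's forecast

      # Engle-Granger cointegration test: a small p-value suggests the forecast
      # and the target share a long-run equilibrium relationship.
      t_stat, p_value, _ = coint(actual, forecast)
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")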

  5. Force and moment tests to determine the interaction effects of the reaction control system jet plumes on the space shuttle Orbiter aerodynamics at Mach Number 6 (Test OA352)

    NASA Technical Reports Server (NTRS)

    Cayse, Robert W.

    1987-01-01

    The purpose of this test was to expand the existing Space Shuttle aerodynamics and Reaction Control System (RCS) data base to support the Glide Return to Launch Site (GRTLS) abort trajectory and the new Digital Autopilot. An existing model of the orbiter was used to investigate the aerodynamic effects of several combinations of RCS thrusters and thruster momentum ratios at Mach number 6. Two separate model installations were used to achieve an angle-of-attack range of -11 to 46 deg. The test was conducted at a unit Reynolds number of 0.8 × 10⁶ per foot.

  6. Numerical Modelling of Extended Leak-Off Test with a Pre-Existing Fracture

    NASA Astrophysics Data System (ADS)

    Lavrov, A.; Larsen, I.; Bauer, A.

    2016-04-01

    Extended leak-off test (XLOT) is one of the few techniques available for stress measurements in oil and gas wells. Interpretation of the test is often difficult since the results depend on a multitude of factors, including the presence of natural or drilling-induced fractures in the near-well area. Coupled numerical modelling of XLOT has been performed to investigate the pressure behaviour during the flowback phase as well as the effect of a pre-existing fracture on the test results in a low-permeability formation. Essential features of XLOT known from field measurements are captured by the model, including the saw-tooth shape of the pressure vs injected volume curve, and the change of slope in the pressure vs time curve during flowback used by operators as an indicator of the bottomhole pressure reaching the minimum in situ stress. Simulations with a pre-existing fracture running from the borehole wall in the radial direction have revealed that the results of XLOT are quite sensitive to the orientation of the pre-existing fracture. In particular, the fracture initiation pressure and the formation breakdown pressure increase steadily with decreasing angle between the fracture and the minimum in situ stress. Our findings seem to invalidate the use of the fracture initiation pressure and the formation breakdown pressure for stress measurements or rock strength evaluation purposes.

  7. Analysis of Weibull Grading Test for Solid Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2010-01-01

    The Weibull grading test is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate calculation method. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
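
    The general log-linear life-stress relationship mentioned in the last sentence can be written as ln η = a₀ + a₁s₁ + a₂s₂ and solved from a few test conditions. Below is a minimal numpy sketch; the stress transforms (ln V for voltage, 1/T for temperature) and the HALT lives are illustrative assumptions, not the paper's data.

      import numpy as np

      # General log-linear model for Weibull characteristic life eta:
      #   ln(eta) = a0 + a1 * s1 + a2 * s2
      # with s1 = ln(V) (voltage stress) and s2 = 1/T (Arrhenius temperature term).
      conditions = np.array([
          #  ln(V),       1/T (K^-1)
          [np.log(10.0), 1 / 358.0],
          [np.log(16.0), 1 / 358.0],
          [np.log(16.0), 1 / 398.0],
      ])
      eta_hours = np.array([5000.0, 800.0, 150.0])   # fitted characteristic lives

      A = np.column_stack([np.ones(3), conditions])  # design matrix [1, s1, s2]
      a0, a1, a2 = np.linalg.solve(A, np.log(eta_hours))

      def eta(V, T):
          return np.exp(a0 + a1 * np.log(V) + a2 / T)

      # Acceleration factor from use conditions (6 V, 318 K) to test (16 V, 398 K).
      print(f"AF = {eta(6.0, 318.0) / eta(16.0, 398.0):.1f}")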

  8. Aeroservoelastic Modeling of Body Freedom Flutter for Control System Design

    NASA Technical Reports Server (NTRS)

    Ouellette, Jeffrey

    2017-01-01

    One of the most severe forms of coupling between aeroelasticity and flight dynamics is an instability called body freedom flutter. The existing tools often assume relatively weak coupling and are therefore unable to accurately model body freedom flutter. Because the existing tools were developed from traditional flutter analysis models, the resulting models contain inconsistencies that make them incompatible with control system design tools. To resolve these issues, a number of small but significant changes have been made to the existing approaches. A frequency domain transformation is used with the unsteady aerodynamics to ensure a more physically consistent stability-axis rational function approximation of the unsteady aerodynamic model. The aerodynamic model is augmented with additional terms to account for limitations of the baseline unsteady aerodynamic model and for the gravity forces. An assumed modes method is used for the structural model to ensure a consistent definition of the aircraft states across the flight envelope. The X-56A stiff-wing flight-test data were used to validate the current modeling approach. The flight-test data do not show body freedom flutter, but do show coupling between the flight dynamics and the aeroelastic dynamics, as well as the effects of the fuel weight.

  9. Utilization of data estimation via existing models, within a tiered data quality system, for populating species sensitivity distributions

    EPA Science Inventory

    The acquisition of toxicity test data of sufficient quality from the open literature to fulfill taxonomic diversity requirements can be a limiting factor in the creation of new 304(a) Aquatic Life Criteria. The use of existing models (WebICE and ACE) that estimate acute and chronic eff...

  10. Modeling Potential Carbon Monoxide Exposure Due to Operation of a Major Rocket Engine Altitude Test Facility Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Blotzer, Michael J.; Woods, Jody L.

    2009-01-01

    This viewgraph presentation reviews computational fluid dynamics as a tool for modelling the dispersion of carbon monoxide at the Stennis Space Center's A3 Test Stand. The contents include: 1) Constellation Program; 2) Constellation Launch Vehicles; 3) J2X Engine; 4) A-3 Test Stand; 5) Chemical Steam Generators; 6) Emission Estimates; 7) Located in Existing Test Complex; 8) Computational Fluid Dynamics; 9) Computational Tools; 10) CO Modeling; 11) CO Model results; and 12) Next steps.

  11. Heteroscedastic Latent Trait Models for Dichotomous Data.

    PubMed

    Molenaar, Dylan

    2015-09-01

    Effort has been devoted to accounting for heteroscedasticity with respect to observed or latent moderator variables in item or test scores. For instance, in the multi-group generalized linear latent trait model, it can be tested whether the observed (polychoric) covariance matrix differs across the levels of an observed moderator variable. In the case that heteroscedasticity arises across the latent trait itself, existing models commonly distinguish between heteroscedastic residuals and a skewed trait distribution. These models have valuable applications in intelligence, personality and psychopathology research. However, existing approaches are limited to continuous and polytomous data, while dichotomous data are common in intelligence and psychopathology research. Therefore, in the present paper, a heteroscedastic latent trait model is presented for dichotomous data. The model is studied in a simulation study and applied to data pertaining to alcohol use and cognitive ability.

  12. Distribution system model calibration with big data from AMI and PV inverters

    DOE PAGES

    Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...

    2016-03-03

    Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.

  13. Distribution system model calibration with big data from AMI and PV inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.

    Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
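
    A toy instance of this parameter estimation problem is fitting a single service-line resistance from AMI readings by least squares. The sketch below uses synthetic data and a far simpler model than the paper's algorithms; all names and values are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(3)
      # Synthetic AMI data: meter voltage vs. load current on one service line.
      true_R, true_Vhead = 0.08, 240.0           # ohms, volts (illustrative values)
      I = rng.uniform(2.0, 60.0, size=500)       # load current samples (A)
      V = true_Vhead - true_R * I + rng.normal(scale=0.2, size=500)  # meter readings

      # Least squares for [Vhead, R] in the model V = Vhead - R * I.
      A = np.column_stack([np.ones_like(I), -I])
      (v_head, R_est), *_ = np.linalg.lstsq(A, V, rcond=None)
      print(f"estimated R = {R_est:.3f} ohm, source voltage = {v_head:.1f} V")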

  14. The Impact of Prior Deployment Experience on Civilian Employment After Military Service

    DTIC Science & Technology

    2013-03-21

    covariates mentioned. Given the exploratory nature of this study, all defined variables were included. Model diagnostic tests were conducted and we...assessed model fit using the Hosmer–Lemeshow goodness-of-fit test. To identify the existence of collinearity, we examined all variance inflation factors...separation, and reason for separation and service branch were tested. Both interactions were significant at p < .10. Three models were built to examine

  15. St. Paul Harbor, St. Paul Island, Alaska; Design for Wave and Shoaling Protection; Hydraulic Model Investigation

    DTIC Science & Technology

    1988-09-01

    ...and selection of test waves. 30. Measured prototype wave data on which a comprehensive statistical analysis of wave conditions could be based were... Tests: Existing conditions. 32. Prior to testing of the various improvement plans, comprehensive tests were conducted for existing conditions (Plate 1

  16. Force on Force Modeling with Formal Task Structures and Dynamic Geometry

    DTIC Science & Technology

    2017-03-24

    task framework, derived using the MMF methodology to structure a complex mission. It further demonstrated the integration of effects from a range of...application methodology was intended to support a combined developmental testing (DT) and operational testing (OT) strategy for selected systems under test... methodology to develop new or modify existing Models and Simulations (M&S) to: • Apply data from multiple, distributed sources (including test

  17. Rethinking developmental toxicity testing: Evolution or revolution?

    PubMed

    Scialli, Anthony R; Daston, George; Chen, Connie; Coder, Prägati S; Euling, Susan Y; Foreman, Jennifer; Hoberman, Alan M; Hui, Julia; Knudsen, Thomas; Makris, Susan L; Morford, LaRonda; Piersma, Aldert H; Stanislaus, Dinesh; Thompson, Kary E

    2018-06-01

    Current developmental toxicity testing adheres largely to protocols suggested in 1966 involving the administration of test compound to pregnant laboratory animals. After more than 50 years of embryo-fetal development testing, are we ready to consider a different approach to human developmental toxicity testing? A workshop was held under the auspices of the Developmental and Reproductive Toxicology Technical Committee of the ILSI Health and Environmental Sciences Institute to consider how we might design developmental toxicity testing if we started over with 21st century knowledge and techniques (revolution). We first consider what changes to the current protocols might be recommended to make them more predictive for human risk (evolution). The evolutionary approach includes modifications of existing protocols and can include humanized models, disease models, more accurate assessment and testing of metabolites, and informed approaches to dose selection. The revolution could start with hypothesis-driven testing where we take what we know about a compound or close analog and answer specific questions using targeted experimental techniques rather than a one-protocol-fits-all approach. Central to the idea of hypothesis-driven testing is the concept that testing can be done at the level of mode of action. It might be feasible to identify a small number of key events at a molecular or cellular level that predict an adverse outcome and for which testing could be performed in vitro or in silico or, rarely, using limited in vivo models. Techniques for evaluating these key events exist today or are in development. Opportunities exist for refining and then replacing current developmental toxicity testing protocols using techniques that have already been developed or are within reach. © 2018 The Authors. Birth Defects Research Published by Wiley Periodicals, Inc.

  18. Applying Multidimensional Item Response Theory Models in Validating Test Dimensionality: An Example of K-12 Large-Scale Science Assessment

    ERIC Educational Resources Information Center

    Li, Ying; Jiao, Hong; Lissitz, Robert W.

    2012-01-01

    This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…

  19. Generalized disequilibrium test for association in qualitative traits incorporating imprinting effects based on extended pedigrees.

    PubMed

    Li, Jian-Long; Wang, Peng; Fung, Wing Kam; Zhou, Ji-Yuan

    2017-10-16

    For dichotomous traits, the generalized disequilibrium test with the moment estimate of the variance (GDT-ME) is a powerful family-based association method. Genomic imprinting is an important epigenetic phenomenon, and there has been increasing interest in incorporating imprinting effects to improve the power of association analysis. However, GDT-ME does not take imprinting effects into account, and it has not been investigated whether it can be used for association analysis when such effects indeed exist. In this article, based on a novel decomposition of the genotype score according to the paternal or maternal source of the allele, we propose the generalized disequilibrium test with imprinting (GDTI) for complete pedigrees without any missing genotypes. We then extend GDTI and GDT-ME to accommodate incomplete pedigrees in which some pedigrees have missing genotypes, using a Monte Carlo (MC) sampling and estimation scheme to infer missing genotypes given the available genotypes in each pedigree; these extensions are denoted MCGDTI and MCGDT-ME, respectively. The proposed GDTI and MCGDTI methods evaluate the differences of the paternal as well as maternal allele scores for all discordant relative pairs in a pedigree, including beyond first-degree relative pairs. Advantages of the proposed GDTI and MCGDTI test statistics over existing methods are demonstrated by simulation studies under various settings and by application to the rheumatoid arthritis dataset. Simulation results show that the proposed tests control the size well under the null hypothesis of no association and outperform the existing methods under various imprinting effect models. The existing GDT-ME and the proposed MCGDT-ME can be used to test for association even when imprinting effects exist. In the application to the rheumatoid arthritis data, compared to the existing methods, MCGDTI identifies more loci statistically significantly associated with the disease. Under complete and incomplete imprinting effect models, our proposed GDTI and MCGDTI methods, by considering the information on imprinting effects and all discordant relative pairs within each pedigree, outperform all the existing test statistics, and MCGDTI can recapture much of the missing information. Therefore, MCGDTI is recommended in practice.

  20. Spatial Dynamics and Determinants of County-Level Education Expenditure in China

    ERIC Educational Resources Information Center

    Gu, Jiafeng

    2012-01-01

    In this paper, a multivariate spatial autoregressive model of local public education expenditure determination with autoregressive disturbance is developed and estimated. The existence of spatial interdependence is tested using Moran's I statistic and Lagrange multiplier test statistics for both the spatial error and spatial lag models. The full…
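
    For reference, once a spatial weights matrix W is fixed, Moran's I is I = (n/S₀)·(zᵀWz)/(zᵀz), with z the mean-centered values and S₀ the sum of all weights. A minimal numpy sketch with an invented contiguity matrix follows; the spending values are made up for illustration.

      import numpy as np

      def morans_i(x, W):
          """Moran's I for values x and spatial weight matrix W."""
          z = x - x.mean()
          s0 = W.sum()
          n = len(x)
          return (n / s0) * (z @ W @ z) / (z @ z)

      # Four counties on a line, each neighboring the next (illustrative weights).
      W = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
      spending = np.array([100.0, 110.0, 240.0, 250.0])   # clustered high/low values
      print(f"Moran's I = {morans_i(spending, W):.3f}")   # positive -> clustering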

  1. Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Garcia, A., III; Minning, C. P.; Cuddihy, E. F.

    A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.

  2. Development of RF plasma simulations of in-reactor tests of small models of the nuclear light bulb fuel region

    NASA Technical Reports Server (NTRS)

    Roman, W. C.; Jaminet, J. F.

    1972-01-01

    Experiments were conducted to develop test configurations and technology necessary to simulate the thermal environment and fuel region expected to exist in in-reactor tests of small models of nuclear light bulb configurations. Particular emphasis was directed at rf plasma tests of approximately full-scale models of an in-reactor cell suitable for tests in Los Alamos Scientific Laboratory's Nuclear Furnace. The in-reactor tests will involve vortex-stabilized fissioning uranium plasmas of approximately 200-kW power, 500-atm pressure and equivalent black-body radiating temperatures between 3220 and 3510 K.

  3. Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.

    PubMed

    Ji, Ming; Xiong, Chengjie; Grundman, Michael

    2003-10-01

    In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on the Akaike Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative, using our hypothesis testing method to analyze Mini-Mental Status Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our results show that, despite a large amount of missing data, accelerated decline did occur in MMSE scores among AD patients. Our finding supports the clinical belief that a change point exists during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
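
    The change-point location step can be illustrated by a grid search that fits the bilinear model at each candidate change point and keeps the AIC-minimizing one. The sketch below uses simulated decline data and fixed effects only; the paper's model also includes random intercepts and slopes.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.arange(0, 10, 0.25)
      # Simulated scores: slow decline, then accelerated decline after t = 6.
      y = 28 - 0.5 * t - 2.0 * np.maximum(t - 6.0, 0.0) + rng.normal(scale=0.8, size=t.size)

      def aic_at(tau):
          """AIC of the bilinear (broken-stick) model with change point tau."""
          X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = np.sum((y - X @ beta) ** 2)
          k = 4                                    # 3 coefficients + error variance
          return t.size * np.log(rss / t.size) + 2 * k

      candidates = np.arange(2.0, 8.5, 0.25)
      best = candidates[np.argmin([aic_at(tau) for tau in candidates])]
      print(f"estimated change point at t = {best:.2f}")   # near the true value 6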

  4. Cryogenic Wind Tunnel Models. Design and Fabrication

    NASA Technical Reports Server (NTRS)

    Young, C. P., Jr. (Compiler); Gloss, B. B. (Compiler)

    1983-01-01

    The principal motivating factor was the National Transonic Facility (NTF). Since the NTF can achieve significantly higher Reynolds numbers at transonic speeds than other wind tunnels in the world, and will therefore occupy a unique position among ground test facilities, every effort is being made to ensure that model design and fabrication technology exists to allow researchers to take advantage of this high Reynolds number capability. Since a great deal of experience in designing and fabricating cryogenic wind tunnel models does not exist, and since the experience that does exist is scattered over a number of organizations, there is a need to bring existing experience in these areas together and share it among all interested parties. Representatives from government, the airframe industry, and universities are included.

  5. Large-scale aeroacoustic research feasibility and conceptual design of test-section inserts for the Ames 80- by 120-foot wind tunnel

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Olsen, Larry E.

    1990-01-01

    An engineering feasibility study was made of aeroacoustic inserts designed for large-scale acoustic research on aircraft models in the 80 by 120 foot Wind Tunnel at NASA Ames Research Center. The advantages and disadvantages of likely designs were analyzed. Results indicate that the required maximum airspeed leads to the design of a particular insert. Using goals of 200, 150, and 100 knots airspeed, the analysis indicated a 30 x 60 ft open-jet test section, a 40 x 80 ft open-jet test section, and a 70 x 100 ft closed test section with enhanced wall lining, respectively. The open-jet inserts would be composed of a nozzle, collector, diffuser, and acoustic wedges incorporated in the existing 80 x 120 test section. The closed test section would be composed of approximately 5 ft acoustic wedges covered by a porous plate attached to the test section walls of the existing 80 x 120. All designs would require a double row of acoustic vanes between the test section and fan drive to attenuate fan noise and, in the case of the open-jet designs, to control flow separation at the diffuser downstream end. The inserts would allow virtually anechoic acoustic studies of large helicopter models, jets, and V/STOL aircraft models in simulated flight. Model scale studies would be necessary to optimize the aerodynamic and acoustic performance of any of the designs. In all designs studied, the existing structure would have to be reinforced. Successful development of acoustically transparent walls, though not strictly necessary to the project, would lead to a porous-wall test section that could be substituted for any of the open-jet designs, and thereby eliminate many aerodynamic and acoustic problems characteristic of open-jet shear layers. The larger size of the facility would make installation and removal of the insert components difficult. Consequently, scheduling of the existing 80 x 120 aerodynamic test section and scheduling of the open-jet test section would likely be made on an annual or longer basis. The enhanced wall-lining insert would likely be permanent. Although the modifications are technically feasible, the economic practicality of the project was not evaluated.

  6. Large scale landslide susceptibility assessment using the statistical methods of logistic regression and BSA - study case: the sub-basin of the small Niraj (Transylvania Depression, Romania)

    NASA Astrophysics Data System (ADS)

    Roşca, S.; Bilaşco, Ş.; Petrea, D.; Fodorean, I.; Vescan, I.; Filip, S.; Măguţ, F.-L.

    2015-11-01

    The existence of a large number of GIS models for the identification of landslide occurrence probability makes the selection of a specific one difficult. The present study focuses on the application of two quantitative models: the logistic model and the BSA model. The comparative analysis of the results aims at identifying the most suitable model. The territory corresponding to the Niraj Mic (Small Niraj) Basin (87 km²) is an area characterised by a wide variety of landforms, with varied morphometric, morphographic and geological characteristics, as well as by a high complexity of land use types, and active landslides exist there. This is the reason why it serves as the test area for applying the two models and comparing the results. The large complexity of input variables is illustrated by 16 factors, represented as 72 dummy variables and analysed on the basis of their importance within the model structures. Testing the statistical significance of each variable reduced the number of dummy variables to 12 that were considered significant for the test area within the logistic model, whereas for the BSA model all the variables were employed. The predictive capability of the models was tested by computing the area under the ROC curve, which indicated good accuracy (AUROC = 0.86 for the testing area) and predictability for the logistic model (AUROC = 0.63 for the validation area).
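
    The logistic-model workflow (fit on a training area, validate by AUROC elsewhere) can be sketched with scikit-learn. The predictor matrix below is random stand-in data for the 12 significant dummy variables, not the Niraj Mic dataset.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(5)
      n, p = 2000, 12                       # map cells x dummy predictor variables
      X = rng.integers(0, 2, size=(n, p)).astype(float)
      logit = -2.0 + X @ rng.normal(scale=0.8, size=p)      # synthetic susceptibility
      y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
      print(f"validation AUROC = {auc:.2f}")   # the paper reports 0.86 for its test area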

  7. Test-Case Generation using an Explicit State Model Checker Final Report

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Gao, Jimin

    2003-01-01

    In the project 'Test-Case Generation using an Explicit State Model Checker' we have extended an existing tool infrastructure for formal modeling to export Java code so that we can use the NASA Ames tool Java Pathfinder (JPF) for test case generation. We have completed a translator from our source language RSML-e to Java and conducted initial studies of how JPF can be used as a testing tool. In this final report, we provide a detailed description of the translation approach as implemented in our tools.

  8. A Power Hardware-in-the-Loop Platform with Remote Distribution Circuit Cosimulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Lundstrom, Blake; Chakraborty, Sudipta

    2015-04-01

    This paper demonstrates the use of a novel cosimulation architecture that integrates hardware testing using Power Hardware-in-the-Loop (PHIL) with larger-scale electric grid models using off-the-shelf, non-PHIL software tools. This architecture enables utilities to study the impacts of emerging energy technologies on their system and manufacturers to explore the interactions of new devices with existing and emerging devices on the power system, both without the need to convert existing grid models to a new platform or to conduct in-field trials. The paper describes an implementation of this architecture for testing two residential-scale advanced solar inverters at separate points of common coupling. The same hardware setup is tested with two different distribution feeders (IEEE 123 and 8500 node test systems) modeled using GridLAB-D. In addition to simplifying testing with multiple feeders, the architecture demonstrates additional flexibility with hardware testing in one location linked via the Internet to software modeling in a remote location. In testing, inverter current, real and reactive power, and PCC voltage are well captured by the co-simulation platform. Testing of the inverter advanced control features is currently somewhat limited by the software model time step (1 sec) and tested communication latency (24 msec). Overshoot-induced oscillations are observed with volt/VAR control delays of 0 and 1.5 sec, while 3.4 sec and 5.5 sec delays produced little or no oscillation. These limitations could be overcome using faster modeling and communication within the same co-simulation architecture.

  9. Determining if Instructional Delivery Model Differences Exist in Remedial English

    ERIC Educational Resources Information Center

    Carter, LaTanya Woods

    2012-01-01

    The purpose of this causal comparative study is to test the theory of no significant difference that compares pre- and post-test assessment scores, controlling for the instructional delivery model of online and face-to-face students at a Mid-Atlantic university. Online education and virtual distance learning programs have increased in popularity…

  10. An Investigation of Sample Size Splitting on ATFIND and DIMTEST

    ERIC Educational Resources Information Center

    Socha, Alan; DeMars, Christine E.

    2013-01-01

    Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…

  11. Formal methods for test case generation

    NASA Technical Reports Server (NTRS)

    Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)

    2011-01-01

    The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.

  12. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.

  13. DSN system performance test Doppler noise models; noncoherent configuration

    NASA Technical Reports Server (NTRS)

    Bunce, R.

    1977-01-01

    The newer model for variance, the Allan technique, now adopted for testing, is analyzed in the subject mode. A model is generated (including a considerable contribution from the station secondary frequency standard) and rationalized with existing data. The variance model is definitely sound; the Allan technique mates theory and measurement. The mean-frequency model is an estimate; this problem has yet to be rigorously resolved. The unaltered defining expressions are nonconvergent, and the observed mean is quite erratic.
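
    The two-sample (Allan) variance adopted here is simple to compute. Below is a minimal sketch, assuming evenly spaced fractional-frequency samples, of the statistic sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2> at averaging time tau = m * tau0; the data are simulated, not DSN measurements.

    ```python
    import numpy as np

    def allan_variance(y, m=1):
        """Non-overlapping Allan variance at averaging factor m:
        sigma_y^2(m*tau0) = 0.5 * mean( (ybar_{k+1} - ybar_k)^2 ),
        where ybar_k are means over consecutive blocks of m samples."""
        y = np.asarray(y, dtype=float)
        n = y.size // m
        ybar = y[: n * m].reshape(n, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(ybar) ** 2)

    # White frequency noise: expect sigma_y^2 to fall roughly as 1/tau.
    rng = np.random.default_rng(0)
    y = rng.normal(0.0, 1e-12, size=4096)   # simulated fractional frequency
    for m in (1, 4, 16, 64):
        print(f"m={m:>2}  sigma_y^2 = {allan_variance(y, m):.3e}")
    ```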

  14. Policy Implications for Continuous Employment Decisions of High School Principals: An Alternative Methodological Approach for Using High-Stakes Testing Outcomes

    ERIC Educational Resources Information Center

    Young, I. Phillip; Fawcett, Paul

    2013-01-01

    Several teacher models exist for using high-stakes testing outcomes to make continuous employment decisions for principals. These models are reviewed, and specific flaws are noted if these models are retrofitted for principals. To address these flaws, a different methodology is proposed on the basis of actual field data. Specifically addressed are…

  15. Experimental study of main rotor tip geometry and tail rotor interactions in hover. Volume 2: Run log and tabulated data

    NASA Technical Reports Server (NTRS)

    Balch, D. T.; Lombardi, J.

    1985-01-01

    A model scale hover test was conducted in the Sikorsky Aircraft Model Rotor Hover Facility to identify and quantify the impact of the tail rotor on the demonstrated advantages of advanced geometry tip configurations. The existence of mutual interference between a hovering main rotor and a tail rotor was acknowledged in the test. The test was conducted using the Basic Model Test Rig and two scaled main rotor systems, one representing a 1/5.727 scale UH-60A BLACK HAWK and the other a 1/4.71 scale S-76. Eight alternate rotor tip configurations were tested, 3 on the BLACK HAWK rotor and 6 on the S-76 rotor. Four of these tips were then selected for testing in close proximity to an operating tail rotor (operating in both tractor and pusher modes) to determine if the performance advantages that could be obtained from the use of advanced geometry tips in a main rotor only environment would still exist in the more complex flow field involving a tail rotor. This volume contains the test run log and tabulated data.

  16. Characterization of Orbital Debris Via Hyper-Velocity Ground-Based Tests

    NASA Technical Reports Server (NTRS)

    Cowardin, Heather

    2015-01-01

    The goal is to replicate a hyper-velocity fragmentation event using modern-day spacecraft materials and construction techniques to better improve the existing DoD and NASA breakup models. DebriSat is intended to be representative of modern LEO satellites. Major design decisions were reviewed and approved by Aerospace subject matter experts from different disciplines. DebriSat includes 7 major subsystems: attitude determination and control system (ADCS), command and data handling (C&DH), electrical power system (EPS), payload, propulsion, telemetry tracking and command (TT&C), and thermal management. To reduce cost, most components are emulated based on existing designs of flight hardware and fabricated with the same materials. A key laboratory-based test, the Satellite Orbital debris Characterization Impact Test (SOCIT), supporting the development of the DoD and NASA satellite breakup models, was conducted at AEDC in 1992. Breakup models based on SOCIT have supported many applications and matched on-orbit events reasonably well over the years.

  17. Reliability of High-Voltage Tantalum Capacitors (Parts 3 and 4)

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2010-01-01

    The Weibull grading test is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate method of calculation. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
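
    As a concrete reading of the last sentence, here is a sketch of a general log-linear model for the Weibull characteristic life, ln(eta) = a0 + a1*ln(V) + a2/T (a voltage power law combined with an Arrhenius term). The HALT cells and the use condition below are invented for illustration; the paper's fitted constants are not reproduced here.

    ```python
    import numpy as np

    # Hypothetical HALT cells: (voltage V, absolute temperature K, char. life h)
    cells = [(63.0, 358.0, 120.0),
             (75.0, 358.0, 35.0),
             (63.0, 378.0, 40.0)]

    X = np.array([[1.0, np.log(v), 1.0 / t] for v, t, _ in cells])
    y = np.log([eta for _, _, eta in cells])
    a0, a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]

    def char_life(v, t):
        """General log-linear characteristic life: ln(eta) = a0 + a1*lnV + a2/T."""
        return np.exp(a0 + a1 * np.log(v) + a2 / t)

    # Acceleration factor between a milder use condition and the grading stress
    af = char_life(35.0, 318.0) / char_life(63.0, 358.0)
    print(f"a1 (voltage exponent) = {a1:.2f}, a2 (Arrhenius, K) = {a2:.0f}")
    print(f"acceleration factor (use vs. grading stress) = {af:.0f}")
    ```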

  18. Application of conditional moment tests to model checking for generalized linear models.

    PubMed

    Pan, Wei

    2002-06-01

    Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimating equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.
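
    The generic construction behind a CMT can be sketched as follows: with fitted residuals r_i and test functions w_i (for example, an omitted covariate), the moment contributions m_i = r_i * w_i should average to zero under a correct mean model, and n * mbar' * Vhat^(-1) * mbar is asymptotically chi-square. This simplified sketch omits the correction for estimated parameters (and the GEE working correlation) that the full CMT machinery supplies.

    ```python
    import numpy as np
    from scipy import stats

    def conditional_moment_test(resid, W):
        """Quadratic-form CMT sketch.  resid: (n,) residuals from the fitted
        model; W: (n, q) test functions.  Returns (statistic, p-value).
        NOTE: ignores the parameter-estimation correction of the full CMT."""
        m = resid[:, None] * W                 # n x q moment contributions
        n, q = m.shape
        mbar = m.mean(axis=0)
        V = np.cov(m, rowvar=False).reshape(q, q)
        T = float(n * mbar @ np.linalg.solve(V, mbar))
        return T, stats.chi2.sf(T, df=q)

    # Demo: a linear fit that omits covariate z -> the CMT should reject.
    rng = np.random.default_rng(1)
    x, z = rng.normal(size=500), rng.normal(size=500)
    yy = 1.0 + 2.0 * x + 0.5 * z + rng.normal(size=500)
    resid = yy - np.polyval(np.polyfit(x, yy, 1), x)
    print(conditional_moment_test(resid, z[:, None]))
    ```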

  19. A Risk-Based Approach for Aerothermal/TPS Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak

    2007-01-01

    The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.

  20. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests.

    PubMed

    Thompson, Matthew; Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests.

  1. A study on seismic behavior of pile foundations of bridge abutment on liquefiable ground through shaking table tests

    NASA Astrophysics Data System (ADS)

    Nakata, Mitsuhiko; Tanimoto, Shunsuke; Ishida, Shuichi; Ohsumi, Michio; Hoshikuma, Jun-ichi

    2017-10-01

    Bridge foundations are at risk of damage from liquefaction-induced lateral spreading of the ground, and once damaged they take considerable time to restore. It is therefore important to properly assess the seismic behavior of foundations on liquefiable ground. In this study, shaking table tests of 1/10-scale models were conducted on the large shaking table at the Public Works Research Institute, Japan, to investigate the seismic behavior of pile-supported bridge abutments on liquefiable ground. The shaking table tests were conducted for three types of model: two modeled an existing bridge built without any design provisions for liquefaction, and the third modeled a bridge designed to the current Japanese design specifications for highway bridges. As a result, the bending strains in the piles of the abutment designed to the current specifications were less than those of the existing bridge.

  2. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.

  3. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed Central

    Kong, A; Cox, N J

    1997-01-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested. PMID:9345087
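
    A sketch of how a one-parameter allele-sharing likelihood of the linear type described here can be maximized: with standardized sharing scores Z_i per pedigree, l(delta) = sum_i log(1 + delta * Z_i) is maximized over delta >= 0 subject to keeping every factor positive, and LOD = l(delta_hat) / ln(10). This follows our reading of the abstract, and the scores below are invented; consult the paper for the exact weighting and constraints.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def linear_model_lod(Z):
        """Maximize l(delta) = sum_i log(1 + delta * Z_i) over delta >= 0,
        keeping every factor positive; return (delta_hat, LOD)."""
        Z = np.asarray(Z, dtype=float)
        neg = Z < 0
        upper = 0.999 / (-Z[neg]).max() if neg.any() else 10.0
        nll = lambda d: -np.sum(np.log1p(d * Z))
        res = minimize_scalar(nll, bounds=(0.0, upper), method="bounded")
        return res.x, -nll(res.x) / np.log(10.0)

    # Invented standardized sharing scores for five pedigrees
    print(linear_model_lod([1.2, 0.4, -0.3, 2.1, 0.8]))
    ```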

  4. Workshop Introduction: Systems Biology and Biological Models

    EPA Science Inventory

    As we consider the future of toxicity testing, the importance of applying biological models to this problem is clear. Modeling efforts exist along a continuum with respect to the level of organization (e.g. cell, tissue, organism) linked to the resolution of the model. Generally,...

  5. Multitasking TORT under UNICOS: Parallel performance models and measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, A.; Azmy, Y.Y.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were then compared with measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  6. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmy, Y.Y.; Barnett, D.A.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were then compared with measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  7. An Analysis of Test Equating Models for the Alabama High School Graduation Examination.

    ERIC Educational Resources Information Center

    Glowacki, Margaret L.

    The purpose of this study was to determine which equating models are appropriate for the Alabama High School Graduation Examination (AHSGE) by equating two previously administered fall forms for each subject area of the AHSGE and determining whether differences exist in the test score distributions or passing scores resulting from the equating…

  8. Standardizing Acute Toxicity Data for use in Ecotoxicology Models: Influence of Test Type, Life Stage, and Concentration Reporting

    EPA Science Inventory

    Ecotoxicological models generally have large data requirements and are frequently based on existing information from diverse sources. Standardizing data for toxicological models may be necessary to reduce extraneous variation and to ensure models reflect intrinsic relationships. ...

  9. A method for evaluating the importance of system state observations to model predictions, with application to the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.; O'Brien, Grady M.

    2004-01-01

    We develop a new observation‐prediction (OPR) statistic for evaluating the importance of system state observations to model predictions. The OPR statistic measures the change in prediction uncertainty produced when an observation is added to or removed from an existing monitoring network, and it can be used to guide refinement and enhancement of the network. Prediction uncertainty is approximated using a first‐order second‐moment method. We apply the OPR statistic to a model of the Death Valley regional groundwater flow system (DVRFS) to evaluate the importance of existing and potential hydraulic head observations to predicted advective transport paths in the saturated zone underlying Yucca Mountain and underground testing areas on the Nevada Test Site. Important existing observations tend to be far from the predicted paths, and many unimportant observations are in areas of high observation density. These results can be used to select locations at which increased observation accuracy would be beneficial and locations that could be removed from the network. Important potential observations are mostly in areas of high hydraulic gradient far from the paths. Results for both existing and potential observations are related to the flow system dynamics and coarse parameter zonation in the DVRFS model. If system properties in different locations are as similar as the zonation assumes, then the OPR results illustrate a data collection opportunity whereby observations in distant, high‐gradient areas can provide information about properties in flatter‐gradient areas near the paths. If this similarity is suspect, then the analysis produces a different type of data collection opportunity involving testing of model assumptions critical to the OPR results.
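
    A sketch of the linear algebra behind such a statistic, under first-order second-moment assumptions: with an observation sensitivity matrix X, observation weights w, and prediction sensitivities z, the prediction standard deviation is s_p = sqrt( z' (X' diag(w) X)^(-1) z ), and the OPR-style measure is the percent change in s_p when one observation row is removed (or added). The toy network below is hypothetical, not the DVRFS model.

    ```python
    import numpy as np

    def pred_sd(X, w, z):
        """First-order second-moment prediction standard deviation:
        s_p = sqrt( z' (X' diag(w) X)^(-1) z )."""
        XtWX = X.T @ (w[:, None] * X)
        return float(np.sqrt(z @ np.linalg.solve(XtWX, z)))

    def opr_percent(X, w, z, i):
        """Percent increase in s_p when observation i is removed."""
        keep = np.ones(w.size, dtype=bool)
        keep[i] = False
        s0 = pred_sd(X, w, z)
        return 100.0 * (pred_sd(X[keep], w[keep], z) - s0) / s0

    rng = np.random.default_rng(3)
    X = rng.normal(size=(8, 3))        # 8 head observations, 3 parameters
    w = np.full(8, 4.0)                # weights = 1 / observation variance
    z = np.array([0.5, -1.0, 0.2])     # prediction sensitivity to parameters
    print([round(opr_percent(X, w, z, i), 1) for i in range(8)])
    ```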

  10. Binomial test statistics using Psi functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, Kimiko o

    2007-01-01

    For the negative binomial model (probability generating function (p + 1 - pt)^(-k)), a logarithmic derivative is the Psi function difference ψ(k + x) - ψ(k); this and its derivatives lead to a test statistic to decide on the validity of a specified model. The test statistic uses a data base, so a comparison is available between theory and application. Note that the test function is not dominated by outliers. Applications to (i) Fisher's tick data, (ii) accidents data, and (iii) Weldon's dice data are included.
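
    Under the pgf parameterization above, G(t) = (p + 1 - pt)^(-k) gives P(X = x) = Γ(k+x)/(Γ(k) x!) · (p/(1+p))^x · (1+p)^(-k), so the k-derivative of the log-likelihood is exactly the quoted digamma difference minus log(1+p). A short sketch of evaluating that score, with invented counts:

    ```python
    import numpy as np
    from scipy.special import digamma

    def k_score(x, k, p):
        """d/dk of the negative binomial log-likelihood (pgf (p+1-pt)^(-k)):
        sum_i [ psi(k + x_i) - psi(k) ] - n * log(1 + p).
        Zero at the maximum-likelihood k."""
        x = np.asarray(x)
        return float(np.sum(digamma(k + x) - digamma(k)) - x.size * np.log1p(p))

    counts = np.array([0, 1, 1, 2, 3, 0, 5, 2, 1, 0])   # invented data
    for k in (0.5, 1.0, 2.0, 4.0):
        p = counts.mean() / k            # moment-match the mean k*p
        print(f"k={k}: score = {k_score(counts, k, p):+.3f}")
    ```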

  11. Computer aided modeling of soil mix designs to predict characteristics and properties of stabilized road bases.

    DOT National Transportation Integrated Search

    2009-07-01

    "Considerable data exists for soils that were tested and documented, both for native properties and : properties with pozzolan stabilization. While the data exists there was no database for the Nebraska : Department of Roads to retrieve this data for...

  12. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.

  13. Testing Galactic Cosmic Ray Models

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2009-01-01

    Models of the Galactic Cosmic Ray Environment are used for designing and planning space missions. The existing models will be reviewed. Spectral representations from these models will be compared with measurements of galactic cosmic ray spectra made on balloon flights and satellite flights over a period of more than 50 years.

  14. Third phase of pocket-sized electronic dosimeter testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, R.A.; Hooker, C.D.; Hogan, B.T.

    1982-05-01

    The experiences of industrial radiographers have indicated that electronic radiation-warning devices become inoperative when they are used under some types of ambient conditions. This report, as a follow-up to NUREG/CR-0554 and NUREG/CR-1452, documents the nature of tests performed on several additional commercially available models. None of the four models tested passed the tests for ruggedness and severe environmental conditions. However, all models passed most of the requirements of a Health Physics Society draft standard of performance specifications for these devices. The test procedures used in the project and the results obtained are discussed. Conclusions from the tests and recommendations concerning potentially useful modifications to existing devices are presented.

  15. Principles and Practice of Scaled Difference Chi-Square Testing

    ERIC Educational Resources Information Center

    Bryant, Fred B.; Satorra, Albert

    2012-01-01

    We highlight critical conceptual and statistical issues and how to resolve them in conducting Satorra-Bentler (SB) scaled difference chi-square tests. Concerning the original (Satorra & Bentler, 2001) and new (Satorra & Bentler, 2010) scaled difference tests, a fundamental difference exists in how to compute properly a model's scaling correction…

  16. Alternatives to the fish early life-stage test: Developing a conceptual model for early fish development

    EPA Science Inventory

    Chronic fish toxicity is a key parameter for hazard classification and environmental risk assessment of chemicals, and the OECD 210 fish early life-stage (FELS) test is the primary guideline test used for various international regulatory programs. There exists a need to develop ...

  17. Comparison of the new intermediate complex atmospheric research (ICAR) model with the WRF model in a mesoscale catchment in Central Europe

    NASA Astrophysics Data System (ADS)

    Härer, Stefan; Bernhardt, Matthias; Gutmann, Ethan; Bauer, Hans-Stefan; Schulz, Karsten

    2017-04-01

    Until recently, a large gap existed between atmospheric downscaling strategies. On the one hand, computationally efficient statistical approaches are widely used; on the other hand, dynamic but CPU-intensive numerical atmospheric models like the Weather Research and Forecasting (WRF) model exist. The Intermediate Complexity Atmospheric Research (ICAR) model developed at NCAR (Boulder, Colorado, USA) addresses this gap by combining the strengths of both approaches: the process-based structure of a dynamic model and its applicability in a changing climate, as well as the speed of a parsimonious modelling approach, which facilitates the modelling of ensembles and offers a straightforward way to test new parametrization schemes and various input data sources. However, the ICAR model had not yet been tested in Europe or on gently undulating terrain. This study evaluates, for the first time, the ICAR model against WRF model runs in Central Europe, comparing a complete year of model results in the mesoscale Attert catchment (Luxembourg). In addition to these modelling results, we also describe the first implementation of ICAR on an Intel Phi architecture and perform speed tests between the Vienna cluster, a standard workstation, and the use of an Intel Phi coprocessor. Finally, the study gives an outlook on sensitivity studies using slightly different input data sources.

  18. The epistemological status of general circulation models

    NASA Astrophysics Data System (ADS)

    Loehle, Craig

    2018-03-01

    Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.

  19. A Better Leveled Playing Field for Assessing Satisfactory Job Performance of Superintendents on the Basis of High-Stakes Testing Outcomes

    ERIC Educational Resources Information Center

    Young, I. Phillip; Cox, Edward P.; Buckman, David G.

    2014-01-01

    To assess satisfactory job performance of superintendents on the basis of school districts' high-stakes testing outcomes, existing teacher models were reviewed and critiqued as potential options for retrofit. For these models, specific problems were identified relative to the choice of referent groups. An alternate referent group (statewide…

  20. PiTS-1: Carbon Partitioning in Loblolly Pine after 13C Labeling and Shade Treatments

    DOE Data Explorer

    Warren, J. M.; Iversen, C. M.; Garten, Jr., C. T.; Norby, R. J.; Childs, J.; Brice, D.; Evans, R. M.; Gu, L.; Thornton, P.; Weston, D. J.

    2013-01-01

    The PiTS task was established with the objective of improving the C partitioning routines in existing ecosystem models by exploring mechanistic model representations of partitioning tested against field observations. We used short-term field manipulations of C flow, through 13CO2 labeling, canopy shading and stem girdling, to dramatically alter C partitioning, and resultant data are being used to test model representation of C partitioning processes in the Community Land Model (CLM4 or CLM4.5).

  1. RAPID ASSESSMENT OF URBAN WETLANDS: FUNCTIONAL ASSESSMENT MODEL DEVELOPMENT AND EVALUATION

    EPA Science Inventory

    The objective of this study was to test the ability of existing hydrogeomorphic (HGM) functional assessment models and our own proposed models to predict rates of nitrate production and removal, functions critical to water quality protection, in forested riparian wetlands in nort...

  2. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  3. The NASA modern technology rotors program

    NASA Technical Reports Server (NTRS)

    Watts, M. E.; Cross, J. L.

    1986-01-01

    Existing data bases regarding helicopters are based on work conducted on 'old-technology' rotor systems. The Modern Technology Rotors (MTR) Program is to provide extensive data bases on rotor systems using present and emerging technology. The MTR is concerned with modern, four-bladed, rotor systems presently being manufactured or under development. Aspects of MTR philosophy are considered along with instrumentation, the MTR test program, the BV 360 Rotor, and the UH-60 Black Hawk. The program phases include computer modelling, shake test, model-scale test, minimally instrumented flight test, extensively pressure-instrumented-blade flight test, and full-scale wind tunnel test.

  4. A Way Forward Commentary

    EPA Science Inventory

    Models for predicting adverse outcomes can help reduce and focus animal testing with new and existing chemicals. This short "thought starter" describes how quantitative-structure activity relationship and systems biology models can be used to help define toxicity pathways and li...

  5. Improved dual-porosity models for petrophysical analysis of vuggy reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Haitao

    2017-08-01

    A new vug interconnection, isolated vug (IVG), was investigated through resistivity modeling and the dual-porosity model for connected vug (CVG) vuggy reservoirs was tested. The vuggy models were built by pore-scale modeling, and their electrical resistivity was calculated by the finite difference method. For CVG vuggy reservoirs, the CVG reduced formation factors and increased the porosity exponents, and the existing dual-porosity model failed to match these results. Based on the existing dual-porosity model, a conceptual dual-porosity model for CVG was developed by introducing a decoupled term to reduce the resistivity of the model. For IVG vuggy reservoirs, IVG increased the formation factors and porosity exponents. The existing dual-porosity model succeeded due to accurate calculation of the formation factors of the deformed interparticle porous media caused by the insertion of the IVG. Based on the existing dual-porosity model, a new porosity model for IVG vuggy reservoirs was developed by simultaneously recalculating the formation factors of the altered interparticle pore-scale models. The formation factors and porosity exponents from the improved and extended dual-porosity models for CVG and IVG vuggy reservoirs well matched the simulated formation factors and porosity exponents. This work is helpful for understanding the influence of connected and disconnected vugs on resistivity factors—an issue of particular importance in carbonates.

  6. Corrected goodness-of-fit test in covariance structure analysis.

    PubMed

    Hayakawa, Kazuhiko

    2018-05-17

    Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases.

  7. Characterization of Orbital Debris via Hyper-Velocity Laboratory-Based Tests

    NASA Technical Reports Server (NTRS)

    Cowardin, Heather; Liou, J.-C.; Anz-Meador, Phillip; Sorge, Marlon; Opiela, John; Fitz-Coy, Norman; Huynh, Tom; Krisko, Paula

    2017-01-01

    Existing DOD and NASA satellite breakup models are based on a key laboratory test, Satellite Orbital debris Characterization Impact Test (SOCIT), which has supported many applications and matched on-orbit events involving older satellite designs reasonably well over the years. In order to update and improve these models, the NASA Orbital Debris Program Office, in collaboration with the Air Force Space and Missile Systems Center, The Aerospace Corporation, and the University of Florida, replicated a hypervelocity impact using a mock-up satellite, DebriSat, in controlled laboratory conditions. DebriSat is representative of present-day LEO satellites, built with modern spacecraft materials and construction techniques. Fragments down to 2 mm in size will be characterized by their physical and derived properties. A subset of fragments will be further analyzed in laboratory radar and optical facilities to update the existing radar-based NASA Size Estimation Model (SEM) and develop a comparable optical-based SEM. A historical overview of the project, status of the characterization process, and plans for integrating the data into various models will be discussed herein.

  8. Characterization of Orbital Debris via Hyper-Velocity Laboratory-Based Tests

    NASA Technical Reports Server (NTRS)

    Cowardin, Heather; Liou, J.-C.; Krisko, Paula; Opiela, John; Fitz-Coy, Norman; Sorge, Marlon; Huynh, Tom

    2017-01-01

    Existing DoD and NASA satellite breakup models are based on a key laboratory test, Satellite Orbital debris Characterization Impact Test (SOCIT), which has supported many applications and matched on-orbit events involving older satellite designs reasonably well over the years. In order to update and improve these models, the NASA Orbital Debris Program Office, in collaboration with the Air Force Space and Missile Systems Center, The Aerospace Corporation, and the University of Florida, replicated a hypervelocity impact using a mock-up satellite, DebriSat, in controlled laboratory conditions. DebriSat is representative of present-day LEO satellites, built with modern spacecraft materials and construction techniques. Fragments down to 2 mm in size will be characterized by their physical and derived properties. A subset of fragments will be further analyzed in laboratory radar and optical facilities to update the existing radar-based NASA Size Estimation Model (SEM) and develop a comparable optical-based SEM. A historical overview of the project, status of the characterization process, and plans for integrating the data into various models will be discussed herein.

  9. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) the fault detection rate commonly changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model aiming to incorporate the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
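
    For orientation, a sketch of fitting the simplest NHPP mean-value function, the Goel-Okumoto form m(t) = a(1 - e^(-bt)), which models like the one proposed here extend with testing-coverage, fault-removal-efficiency, and error-generation terms. The failure counts below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def m_go(t, a, b):
        """Goel-Okumoto NHPP mean value function (expected cumulative faults)."""
        return a * (1.0 - np.exp(-b * t))

    t = np.arange(1.0, 13.0)                                   # test weeks
    n = np.array([4, 9, 13, 18, 21, 25, 27, 29, 30, 32, 32, 33], float)

    (a_hat, b_hat), _ = curve_fit(m_go, t, n, p0=(n[-1], 0.1))
    mse = np.mean((m_go(t, a_hat, b_hat) - n) ** 2)            # a common criterion
    print(f"a={a_hat:.1f} total faults, b={b_hat:.3f}/week, MSE={mse:.2f}")
    ```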

  10. DebriSat: The New Hypervelocity Impact Test for Satellite Breakup Fragment Characterization

    NASA Technical Reports Server (NTRS)

    Cowardin, Heather

    2015-01-01

    The goal is to replicate a hyper-velocity fragmentation event using modern-day spacecraft materials and construction techniques to better improve the existing DoD and NASA breakup models. DebriSat is intended to be representative of modern LEO satellites. Major design decisions were reviewed and approved by Aerospace subject matter experts from different disciplines. DebriSat includes 7 major subsystems: attitude determination and control system (ADCS), command and data handling (C&DH), electrical power system (EPS), payload, propulsion, telemetry tracking and command (TT&C), and thermal management. To reduce cost, most components are emulated based on existing designs of flight hardware and fabricated with the same materials. A key laboratory-based test, the Satellite Orbital debris Characterization Impact Test (SOCIT), supporting the development of the DoD and NASA satellite breakup models, was conducted at AEDC in 1992. Breakup models based on SOCIT have supported many applications and matched on-orbit events reasonably well over the years.

  11. Physics of Accretion in X-Ray Binaries

    NASA Technical Reports Server (NTRS)

    Vrtilek, Saeqa D.

    2004-01-01

    This project consists of several related investigations directed to the study of mass transfer processes in X-ray binaries. Models developed over several years incorporating highly detailed physics will be tested on a balanced mix of existing data and planned observations with both ground and space-based observatories. The extended time coverage of the observations and the existence of simultaneous X-ray, ultraviolet, and optical observations will be particularly beneficial for studying the accretion flows. These investigations, which take as detailed a look at the accretion process in X-ray binaries as is now possible, test current models to their limits, and force us to extend them. We now have the ability to do simultaneous ultraviolet/X-ray/optical spectroscopy with HST, Chandra, XMM, and ground-based observatories. The rich spectroscopy that these observations give us must be interpreted principally by reference to detailed models, the development of which is already well underway; tests of these essential interpretive tools are an important product of the proposed investigations.

  12. The Physics of Accretion in X-Ray Binaries

    NASA Technical Reports Server (NTRS)

    Vrtilek, S.; Oliversen, Ronald (Technical Monitor)

    2001-01-01

    This project consists of several related investigations directed to the study of mass transfer processes in X-ray binaries. Models developed over several years incorporating highly detailed physics will be tested on a balanced mix of existing data and planned observations with both ground and space-based observatories. The extended time coverage of the observations and the existence of simultaneous X-ray, ultraviolet, and optical observations will be particularly beneficial for studying the accretion flows. These investigations, which take as detailed a look at the accretion process in X-ray binaries as is now possible, test current models to their limits, and force us to extend them. We now have the ability to do simultaneous ultraviolet/X-ray/optical spectroscopy with HST, Chandra, XMM, and ground-based observatories. The rich spectroscopy that these observations give us must be interpreted principally by reference to detailed models, the development of which is already well underway; tests of these essential interpretive tools are an important product of the proposed investigations.

  13. Measuring and Advancing Experimental Design Ability in an Introductory Course without Altering Existing Lab Curriculum.

    PubMed

    Shanks, Ryan A; Robertson, Chuck L; Haygood, Christian S; Herdliksa, Anna M; Herdliska, Heather R; Lloyd, Steven A

    2017-01-01

    Introductory biology courses provide an important opportunity to prepare students for future courses, yet existing cookbook labs, although important in their own way, fail to provide many of the advantages of semester-long research experiences. Engaging, authentic research experiences aid biology students in meeting many learning goals. Therefore, overlaying a research experience onto the existing lab structure allows faculty to overcome barriers involving curricular change. Here we propose a working model for this overlay design in an introductory biology course and detail a means to conduct this lab with minimal increases in student and faculty workloads. Furthermore, we conducted exploratory factor analysis of the Experimental Design Ability Test (EDAT) and uncovered two latent factors which provide valid means to assess this overlay model's ability to increase advanced experimental design abilities. In a pre-test/post-test design, we demonstrate significant increases in both basic and advanced experimental design abilities in an experimental and comparison group. We measured significantly higher gains in advanced experimental design understanding in students in the experimental group. We believe this overlay model and EDAT factor analysis contribute a novel means to conduct and assess the effectiveness of authentic research experiences in an introductory course without major changes to the course curriculum and with minimal increases in faculty and student workloads.

  14. NASTRAN Modeling of Flight Test Components for UH-60A Airloads Program Test Configuration

    NASA Technical Reports Server (NTRS)

    Idosor, Florentino R.; Seible, Frieder

    1993-01-01

    Based upon the recommendations of the UH-60A Airloads Program Review Committee, work towards a NASTRAN remodeling effort has been conducted. This effort modeled and added the necessary structural/mass components to the existing UH-60A baseline NASTRAN model to reflect the addition of the flight test components currently in place on the UH-60A Airloads Program Test Configuration used in NASA-Ames Research Center's Modern Technology Rotor Airloads Program. These components include necessary flight hardware such as instrument booms, a movable ballast cart, equipment mounting racks, etc. Recent modeling revisions have also been included in the analyses to reflect new and updated primary and secondary structural components (i.e., tail rotor shaft service cover, tail rotor pylon) and improvements to the existing finite element mesh (i.e., revisions of material property estimates). Mode frequency and shape results have shown that components such as the Trimmable Ballast System baseplate and its payload ballast cause a significant frequency change in a limited number of modes, while the other MTRAP flight components bring about only small percentage changes in mode frequency. With the addition of the MTRAP flight components, the update of the primary and secondary structural model, and the imposition of the final MTRAP weight distribution, the computed modal results are representative of the 'best' model presently available.

  15. Military Potential Test of the Model PA23-250B Fixed-Wing Instrument Trainer

    DTIC Science & Technology

    1964-11-30

    cabin heater was installed in the test airplane. Existing climatic conditions precluded actual tests to determine the capability of the heater to... housed within the engine control pedestal under the engine control levers. Hydraulic pressure is supplied to the control unit by an engine-driven

  16. Ship Model Testing

    DTIC Science & Technology

    2016-01-15

    state-of-the-art equipment and to continue to produce excellent graduates in our field. Technical Approach: In order to address our current testing ... New Additions:
    • New material testing machine with environmental chamber
    • New dual-fuel test bed for Haeberle Laboratory
    • Upgrade existing ... Southwark Emery universal test machine
    • 3D printer with ultra-high surface definition
    • CFD Workstations
    Since the inception of this grant, Webb

  17. 75 FR 27409 - Airworthiness Directives; Sikorsky Aircraft Corporation (Sikorsky) Model S-92A Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-17

    ... gearbox (MGB) filter bowl assembly with a two-piece MGB filter bowl assembly and replacing the existing mounting studs. The AD also requires inspecting the MGB lube system filters, the housing, the housing... prompted by tests indicating that an existing MGB filter bowl assembly can fail under certain loading...

  18. Revisiting the co-existence of Attention-Deficit/Hyperactivity Disorder and Chronic Tic Disorder in childhood-The case of colour discrimination, sustained attention and interference control.

    PubMed

    Uebel-von Sandersleben, Henrik; Albrecht, Björn; Rothenberger, Aribert; Fillmer-Heise, Anke; Roessner, Veit; Sergeant, Joseph; Tannock, Rosemary; Banaschewski, Tobias

    2017-01-01

    Attention Deficit / Hyperactivity Disorder (ADHD) and Chronic Tic Disorder (CTD) are two common and frequently co-existing disorders, probably following an additive model. But this is not yet clear for the basic sensory function of colour processing sensitive to dopaminergic functioning in the retina and higher cognitive functions like attention and interference control. The latter two reflect important aspects for psychoeducation and behavioural treatment approaches. Colour discrimination using the Farnsworth-Munsell 100-hue Test, sustained attention during the Frankfurt Attention Inventory (FAIR), and interference liability during Colour- and Counting-Stroop-Tests were assessed to further clarify the cognitive profile of the co-existence of ADHD and CTD. Altogether 69 children were classified into four groups: ADHD (N = 14), CTD (N = 20), ADHD+CTD (N = 20) and healthy Controls (N = 15) and compared in cognitive functioning in a 2×2-factorial statistical model. Difficulties with colour discrimination were associated with both ADHD and CTD factors following an additive model, but in ADHD these difficulties tended to be more pronounced on the blue-yellow axis. Attention problems were characteristic for ADHD but not CTD. Interference load was significant in both Colour- and Counting-Stroop-Tests and unrelated to colour discrimination. Compared to Controls, interference load in the Colour-Stroop was higher in pure ADHD and in pure CTD, but not in ADHD+CTD, following a sub-additive model. In contrast, interference load in the Counting-Stroop did not reveal ADHD or CTD effects. The co-existence of ADHD and CTD is characterized by additive as well as sub-additive performance impairments, suggesting that their co-existence may show simple additive characteristics of both disorders or a more complex interaction, depending on demand. The equivocal findings on interference control may indicate limited validity of the Stroop-Paradigm for clinical assessments.

  19. Revisiting the co-existence of Attention-Deficit/Hyperactivity Disorder and Chronic Tic Disorder in childhood—The case of colour discrimination, sustained attention and interference control

    PubMed Central

    Rothenberger, Aribert; Fillmer-Heise, Anke; Roessner, Veit; Sergeant, Joseph; Tannock, Rosemary; Banaschewski, Tobias

    2017-01-01

    Objective: Attention Deficit / Hyperactivity Disorder (ADHD) and Chronic Tic Disorder (CTD) are two common and frequently co-existing disorders, probably following an additive model. But this is not yet clear for the basic sensory function of colour processing sensitive to dopaminergic functioning in the retina and higher cognitive functions like attention and interference control. The latter two reflect important aspects for psychoeducation and behavioural treatment approaches. Methods: Colour discrimination using the Farnsworth-Munsell 100-hue Test, sustained attention during the Frankfurt Attention Inventory (FAIR), and interference liability during Colour- and Counting-Stroop-Tests were assessed to further clarify the cognitive profile of the co-existence of ADHD and CTD. Altogether 69 children were classified into four groups: ADHD (N = 14), CTD (N = 20), ADHD+CTD (N = 20) and healthy Controls (N = 15) and compared in cognitive functioning in a 2×2-factorial statistical model. Results: Difficulties with colour discrimination were associated with both ADHD and CTD factors following an additive model, but in ADHD these difficulties tended to be more pronounced on the blue-yellow axis. Attention problems were characteristic for ADHD but not CTD. Interference load was significant in both Colour- and Counting-Stroop-Tests and unrelated to colour discrimination. Compared to Controls, interference load in the Colour-Stroop was higher in pure ADHD and in pure CTD, but not in ADHD+CTD, following a sub-additive model. In contrast, interference load in the Counting-Stroop did not reveal ADHD or CTD effects. Conclusion: The co-existence of ADHD and CTD is characterized by additive as well as sub-additive performance impairments, suggesting that their co-existence may show simple additive characteristics of both disorders or a more complex interaction, depending on demand. The equivocal findings on interference control may indicate limited validity of the Stroop-Paradigm for clinical assessments. PMID:28594866

  20. A Survey of UML Based Regression Testing

    NASA Astrophysics Data System (ADS)

    Fahad, Muhammad; Nadeem, Aamer

    Regression testing is the process of ensuring software quality by analyzing whether changed parts behave as intended and unchanged parts are not affected by the modifications. Since it is a costly process, many techniques have been proposed in the research literature to help testers build a regression test suite from an existing test suite at minimum cost. In this paper, we discuss the advantages and drawbacks of using UML diagrams for regression testing and show that UML models help in identifying changes for regression test selection effectively. We survey the existing UML based regression testing techniques and provide an analysis matrix to give a quick insight into prominent features of the literature. We also discuss open research issues that remain to be addressed for UML based regression testing, such as managing and reducing the size of the regression test suite and prioritizing test cases under tight schedules and limited resources.

  1. The Case of Effort Variables in Student Performance.

    ERIC Educational Resources Information Center

    Borg, Mary O.; And Others

    1989-01-01

    Tests the existence of a structural shift between above- and below-average students in the econometric models that explain students' grades in principles of economics classes. Identifies a structural shift and estimates separate models for above- and below-average students. Concludes that separate models as well as educational policies are…

  2. Human sleep and circadian rhythms: a simple model based on two coupled oscillators.

    PubMed

    Strogatz, S H

    1987-01-01

    We propose a model of the human circadian system. The sleep-wake and body temperature rhythms are assumed to be driven by a pair of coupled nonlinear oscillators described by phase variables alone. The novel aspect of the model is that its equations may be solved analytically. Computer simulations are used to test the model against sleep-wake data pooled from 15 studies of subjects living for weeks in unscheduled, time-free environments. On these tests the model performs about as well as the existing models, although its mathematical structure is far simpler.
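
    A sketch of a phase-only pair of coupled oscillators of the kind described: dθ1/dt = ω1 + C1 sin 2π(θ2 - θ1) and dθ2/dt = ω2 + C2 sin 2π(θ1 - θ2), with phases in cycles. Whether the two rhythms phase-lock (entrained sleep-wake and temperature) or drift apart (internal desynchrony) depends on |ω1 - ω2| relative to C1 + C2. The frequencies and couplings below are illustrative, not the paper's fitted values.

    ```python
    import numpy as np

    # Phase-only sketch of two coupled circadian oscillators (values made up):
    # theta1 ~ sleep-wake cycle, theta2 ~ body-temperature pacemaker (cycles).
    W1, W2 = 1.0 / 1.08, 1.0 / 0.99   # intrinsic frequencies (cycles/day)
    C1, C2 = 0.12, 0.04               # asymmetric coupling strengths
    DT = 0.01                         # integration step (days)

    def phase_difference(days=60):
        th1 = th2 = 0.0
        diffs = []
        for _ in range(int(days / DT)):
            d1 = W1 + C1 * np.sin(2 * np.pi * (th2 - th1))
            d2 = W2 + C2 * np.sin(2 * np.pi * (th1 - th2))
            th1 += DT * d1
            th2 += DT * d2
            diffs.append(th1 - th2)
        return np.array(diffs)

    d = phase_difference()
    # Entrained if the phase difference settles; a drifting difference would
    # mean internal desynchrony (|W1 - W2| > C1 + C2 in this sketch).
    print("phase-locked" if np.ptp(d[-1000:]) < 1e-2 else "desynchronized")
    ```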

  3. Testing space weather connections in the solar system

    NASA Astrophysics Data System (ADS)

    Grison, B.; Souček, J.; Krupař, V.; Píša, D.; Santolík, O.; Taubenschuss, U.; Němec, F.

    2017-09-01

    This study aims at testing and validating tools for predicting the impact of solar events in the vicinity of the inner and outer solar system planets using in-situ spacecraft data (primarily MESSENGER, STEREO and ACE, but also VEX and Cassini), remote Jovian observations (Hubble telescope, Nançay decametric array), existing catalogues (HELCATS and Tao et al. (2005)), and the propagation models under test (the ICME radial propagation tool of the CDPP and the 1-D MHD propagation model presented in Tao et al. (2005)).
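
    The crudest baseline that such propagation tools improve upon is ballistic (constant-speed) radial propagation, sketched below; MHD-based tools add solar-wind interaction, acceleration or deceleration, and shock physics. The launch date and speed are invented.

    ```python
    from datetime import datetime, timedelta

    AU_KM = 1.496e8   # astronomical unit in km

    def ballistic_arrival(launch, speed_km_s, r_target_au):
        """Constant-speed radial propagation from the Sun to r_target_au."""
        return launch + timedelta(seconds=r_target_au * AU_KM / speed_km_s)

    t0 = datetime(2017, 9, 10, 16, 0)        # hypothetical CME liftoff time
    for name, r_au in [("Mercury", 0.39), ("Earth", 1.0), ("Jupiter", 5.2)]:
        print(name, ballistic_arrival(t0, 1000.0, r_au))
    ```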

  4. Construction of mathematical model for measuring material concentration by colorimetric method

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Gao, Lingceng; Yu, Kairong; Tan, Xianghua

    2018-06-01

    This paper uses multiple linear regression to analyze the data of problem C of the 2017 mathematical modeling contest. First, we established regression models for the concentrations of five substances, but only the regression model for the concentration of urea in milk passed the significance test. The regression model established with the second set of data passed the significance test, but it suffered from serious multicollinearity. We improved the model by principal component analysis. The improved model is used to control the system so that it is possible to measure material concentration by a direct colorimetric method.
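
    A sketch of the multicollinearity remedy described, principal component regression: project the predictors onto leading principal components, then regress on the components. The data below are synthetic, with a nearly collinear third column; this is not the paper's data set.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic colorimetric features with a nearly collinear third column
    rng = np.random.default_rng(0)
    base = rng.normal(size=(40, 2))
    third = base @ np.array([0.7, 0.3]) + 1e-3 * rng.normal(size=40)
    X = np.column_stack([base, third])
    y = X @ np.array([1.0, -0.5, 0.8]) + 0.05 * rng.normal(size=40)

    # Principal component regression: PCA to decorrelate, then ordinary OLS
    pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
    pcr.fit(X, y)
    print("in-sample R^2 =", round(pcr.score(X, y), 4))
    ```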

  5. Simple algorithms for remote determination of mineral abundances and particle sizes from reflectance spectra

    NASA Technical Reports Server (NTRS)

    Johnson, Paul E.; Smith, Milton O.; Adams, John B.

    1992-01-01

    Algorithms were developed, based on Hapke's (1981) equations, for remote determinations of mineral abundances and particle sizes from reflectance spectra. In this method, spectra are modeled as a function of end-member abundances and illumination/viewing geometry. The method was tested on a laboratory data set. It is emphasized that, although more sophisticated models exist, the present algorithms are particularly suited for remotely sensed data, where little opportunity exists to independently measure reflectance versus particle size and phase function.
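
    A sketch of the linear-mixing step that makes such algorithms tractable: in Hapke-style modeling, intimate mixtures combine approximately linearly in single-scattering albedo, so end-member weights can be recovered with nonnegative least squares. This assumes reflectance has already been converted to single-scattering albedo via Hapke's (1981) equations; the spectra below are invented.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix(ssa_mix, ssa_end):
        """Nonnegative least-squares unmixing in single-scattering-albedo
        space.  ssa_end: (n_bands, n_endmembers).  Returns weights summing
        to 1; converting them to mass abundances would further require
        densities and particle sizes (the cross-section weighting in
        Hapke-style mixing)."""
        w, _ = nnls(ssa_end, ssa_mix)
        return w / w.sum()

    # Invented two-endmember albedo spectra over five bands
    E = np.array([[0.9, 0.2], [0.8, 0.3], [0.7, 0.5], [0.6, 0.6], [0.5, 0.8]])
    mix = 0.7 * E[:, 0] + 0.3 * E[:, 1]
    print(unmix(mix, E))   # ~[0.7, 0.3]
    ```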

  6. Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Malsbury, T.; Atencio, A., Jr.

    1992-01-01

    A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.

  7. Yield Model Development (YMD) implementation plan for fiscal years 1981 and 1982

    NASA Technical Reports Server (NTRS)

    Ambroziak, R. A. (Principal Investigator)

    1981-01-01

    A plan is described for supporting USDA crop production forecasting and estimation by (1) testing, evaluating, and selecting crop yield models for application testing; (2) identifying areas of feasible research for improvement of models; and (3) conducting research to modify existing models and to develop new crop yield assessment methods. Tasks to be performed for each of these efforts are described as well as for project management and support. The responsibilities of USDA, USDC, USDI, and NASA are delineated as well as problem areas to be addressed.

  8. Plasticity in Memorial Networks

    ERIC Educational Resources Information Center

    Hayes-Roth, Barbara; Hayes-Roth, Frederick

    1975-01-01

    An adaptive network model is proposed to represent the structure and processing of knowledge. Accessibility of subjects' stored information was measured. Relationships exist among (a) the frequency of verifying a test relation, (b) other relations involving the concepts used to evaluate the test relation, and (c) the frequency of verifying those relations. (CHK)

  9. Nested ocean models: Work in progress

    NASA Technical Reports Server (NTRS)

    Perkins, A. Louise

    1991-01-01

    The ongoing work of combining three existing software programs into a nested grid oceanography model is detailed. The HYPER domain decomposition program, the SPEM ocean modeling program, and a quasi-geostrophic model written in England are being combined into a general ocean modeling facility. This facility will be used to test the viability and the capability of two-way nested grids in the North Atlantic.

  10. Model predictions of wind and turbulence profiles associated with an ensemble of aircraft accidents

    NASA Technical Reports Server (NTRS)

    Williamson, G. G.; Lewellen, W. S.; Teske, M. E.

    1977-01-01

    The feasibility of predicting conditions under which wind/turbulence environments hazardous to aviation operations exist is studied by examining a number of different accidents in detail. A model of turbulent flow in the atmospheric boundary layer is used to reconstruct wind and turbulence profiles which may have existed at low altitudes at the time of the accidents. The predictions are consistent with available flight recorder data, but neither the input boundary conditions nor the flight recorder observations are sufficiently precise for these studies to be interpreted as verification tests of the model predictions.

  11. Behavioral momentum and resurgence: Effects of time in extinction and repeated resurgence tests

    PubMed Central

    Shahan, Timothy A.

    2014-01-01

    Resurgence is an increase in a previously extinguished operant response that occurs if an alternative reinforcement introduced during extinction is removed. Shahan and Sweeney (2011) developed a quantitative model of resurgence based on behavioral momentum theory that captures existing data well and predicts that resurgence should decrease as time in extinction and exposure to the alternative reinforcement increases. Two experiments tested this prediction. The data from Experiment 1 suggested that without a return to baseline, resurgence decreases with increased exposure to alternative reinforcement and to extinction of the target response. Experiment 2 tested the predictions of the model across two conditions, one with constant alternative reinforcement for five sessions, and the other with alternative reinforcement removed three times. In both conditions, the alternative reinforcement was removed for the final test session. Experiment 2 again demonstrated a decrease in relapse across repeated resurgence tests. Furthermore, comparably little resurgence was observed at the same time point in extinction in the final test, despite dissimilar previous exposures to alternative reinforcement removal. The quantitative model provided a good description of the observed data in both experiments. More broadly, these data suggest that increased exposure to extinction may be a successful strategy to reduce resurgence. The relationship between these data and existing tests of the effect of time in extinction on resurgence is discussed. PMID:23982985

  12. Parental Style and Child Bullying and Victimization Experiences at School

    ERIC Educational Resources Information Center

    Georgiou, Stelios N.

    2008-01-01

    The aim of this study was to propose and test a theory-driven model describing the network of effects existing between parental style and child involvement in bullying incidents at school. The participants were 377 Greek Cypriot children (mean age 11.6) and their mothers. It was found that a line of influence exists between maternal…

  13. A preliminary study of crack initiation and growth at stress concentration sites

    NASA Technical Reports Server (NTRS)

    Dawicke, D. S.; Gallagher, J. P.; Hartman, G. A.; Rajendran, A. M.

    1982-01-01

    Crack initiation and propagation models for notches are examined. The Dowling crack initiation model and the El Haddad et al. crack propagation model were chosen for additional study. Existing data were used to make a preliminary evaluation of the crack propagation model. The results indicate that for the crack sizes in the test, the elastic parameter K gave good correlation for the crack growth rate data. Additional testing, directed specifically at the problem of small cracks initiating and propagating from notches, is necessary to make a full evaluation of these initiation and propagation models.

  14. Test code for the assessment and improvement of Reynolds stress models

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA

    1987-01-01

    An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, comparisons of the code's results with simulations of channel flow and of flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  15. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  16. Proportional Reasoning of Preservice Elementary Education Majors: An Epistemic Model of the Proportional Reasoning Construct.

    ERIC Educational Resources Information Center

    Fleener, M. Jayne

    Current research and learning theory suggest that a hierarchy of proportional reasoning exists that can be tested. Using G. Vergnaud's four complexity variables (structure, content, numerical characteristics, and presentation) and T. E. Kieren's model of rational number knowledge building, an epistemic model of proportional reasoning was…

  17. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model based on a nonhomogeneous Poisson process (NHPP) that considers not only error generation but also fault removal efficiency combined with testing coverage information. During the past four decades, many NHPP-based software reliability growth models (SRGMs) have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) the fault detection rate changes during the testing phase; and 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. Few SRGMs in the literature, however, differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might persist, and new faults might be introduced in the process, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
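
    For orientation, the sketch below fits the simplest NHPP mean value function, the Goel-Okumoto form m(t) = a*(1 - exp(-b*t)), to cumulative fault counts. The paper's model additionally folds in testing coverage and imperfect removal efficiency, which this sketch does not reproduce; the fault data are invented.

    ```python
    # Sketch of fitting a basic NHPP software reliability growth model
    # (Goel-Okumoto mean value function m(t) = a*(1 - exp(-b t))) to cumulative
    # fault counts. The paper's model additionally folds in testing coverage and
    # imperfect fault removal; this is only the underlying NHPP machinery.
    import numpy as np
    from scipy.optimize import curve_fit

    def mean_value(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    weeks = np.arange(1, 11)
    cum_faults = np.array([12, 21, 28, 34, 38, 41, 43, 45, 46, 47])  # synthetic

    (a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_faults, p0=(50, 0.1))
    print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
    print("predicted faults by week 15:", round(mean_value(15, a_hat, b_hat), 1))
    ```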

  18. An objective Bayesian analysis of a crossover design via model selection and model averaging.

    PubMed

    Li, Dandan; Sivaganesan, Siva

    2016-11-10

    Inference about the treatment effect in a crossover design has received much attention over time owing to the uncertainty in the existence of the carryover effect and its impact on the estimation of the treatment effect. Adding to this uncertainty is that the existence of the carryover effect and its size may depend on the presence of the treatment effect and its size. We consider estimation and testing of hypotheses about the treatment effect in a two-period crossover design, assuming a normally distributed response variable, and use an objective Bayesian approach to test the hypothesis about the treatment effect and to estimate its size when it exists, while accounting for the uncertainty about the presence of the carryover effect as well as the treatment and period effects. We evaluate and compare the performance of the proposed approach with a standard frequentist approach using simulated and real data. Copyright © 2016 John Wiley & Sons, Ltd.

  19. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves the problems addressed by the other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
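
    A minimal sketch of the double-penalized estimator the abstract describes, combining Firth's bias-reducing correction with a ridge term in plain Newton iterations. The penalty weight, the convergence tolerance, and the synthetic data are illustrative choices, not the authors' settings.

    ```python
    # Sketch of a double-penalized logistic estimator: Firth's correction plus a
    # ridge penalty, solved by Newton iterations on synthetic, separable-ish data.
    import numpy as np

    def double_penalized_logistic(X, y, lam=0.5, tol=1e-8, max_iter=100):
        n, p = X.shape
        beta = np.zeros(p)
        for _ in range(max_iter):
            prob = 1.0 / (1.0 + np.exp(-(X @ beta)))
            W = prob * (1.0 - prob)                      # logistic weights
            XtWX = X.T @ (X * W[:, None])
            info = XtWX + 2.0 * lam * np.eye(p)          # ridge-augmented information
            # Diagonal of the hat matrix for Firth's correction.
            A = X * np.sqrt(W)[:, None]
            h = np.einsum("ij,jk,ik->i", A, np.linalg.inv(XtWX), A)
            score = X.T @ (y - prob + h * (0.5 - prob)) - 2.0 * lam * beta
            step = np.linalg.solve(info, score)
            beta += step
            if np.max(np.abs(step)) < tol:
                break
        return beta

    rng = np.random.default_rng(2)
    X = rng.normal(size=(60, 3))
    y = (X[:, 0] + rng.normal(size=60) > 0).astype(float)
    print("estimates:", np.round(double_penalized_logistic(X, y), 3))
    ```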

  20. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves the problems addressed by the other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.

  1. Technical manual for basic version of the Markov chain nest productivity model (MCnest)

    EPA Science Inventory

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  2. User’s manual for basic version of MCnest Markov chain nest productivity model

    EPA Science Inventory

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  3. Performance of a Deep-Learning Neural Network Model in Assessing Skeletal Maturity on Pediatric Hand Radiographs.

    PubMed

    Larson, David B; Chen, Matthew C; Lungren, Matthew P; Halabi, Safwan S; Stence, Nicholas V; Langlotz, Curtis P

    2018-04-01

    Purpose To compare the performance of a deep-learning bone age assessment model based on hand radiographs with that of expert radiologists and that of existing automated models. Materials and Methods The institutional review board approved the study. A total of 14 036 clinical hand radiographs and corresponding reports were obtained from two children's hospitals to train and validate the model. For the first test set, composed of 200 examinations, the mean of bone age estimates from the clinical report and three additional human reviewers was used as the reference standard. Overall model performance was assessed by comparing the root mean square (RMS) and mean absolute difference (MAD) between the model estimates and the reference standard bone ages. Ninety-five percent limits of agreement were calculated in a pairwise fashion for all reviewers and the model. The RMS of a second test set composed of 913 examinations from the publicly available Digital Hand Atlas was compared with published reports of an existing automated model. Results The mean difference between bone age estimates of the model and of the reviewers was 0 years, with a mean RMS and MAD of 0.63 and 0.50 years, respectively. The estimates of the model, the clinical report, and the three reviewers were within the 95% limits of agreement. RMS for the Digital Hand Atlas data set was 0.73 years, compared with 0.61 years of a previously reported model. Conclusion A deep-learning convolutional neural network model can estimate skeletal maturity with accuracy similar to that of an expert radiologist and to that of existing automated models. © RSNA, 2017 An earlier incorrect version of this article appeared online. This article was corrected on January 19, 2018.

  4. Modeling Aromatic Liquids: Toluene, Phenol, and Pyridine.

    PubMed

    Baker, Christopher M; Grant, Guy H

    2007-03-01

    Aromatic groups are now acknowledged to play an important role in many systems of interest. However, existing molecular mechanics methods provide a poor representation of these groups. In a previous paper, we have shown that the molecular mechanics treatment of benzene can be improved by the incorporation of an explicit representation of the aromatic π electrons. Here, we develop this concept further, developing charge-separation models for toluene, phenol, and pyridine. Monte Carlo simulations are used to parametrize the models, via the reproduction of experimental thermodynamic data, and our models are shown to outperform an existing atom-centered model. The models are then used to make predictions about the structures of the liquids at the molecular level and are tested further through their application to the modeling of gas-phase dimers and cation-π interactions.

  5. Cultural Resource Predictive Modeling

    DTIC Science & Technology

    2017-10-01

    Survey excerpts: 1) Do you have cultural property to manage? a. Yes. 2) Do you use CRPM (Cultural Resource Predictive Modeling)? No, but I use predictive modelling informally. For example… …resource program and provide support to the test ranges for their missions. This document will provide information such as lessons learned, points of contact, and resources to the range cultural resource managers. Objective/Scope: identify existing cultural resource predictive models and…

  6. An integrated production-inventory model for food products adopting a general raw material procurement policy

    NASA Astrophysics Data System (ADS)

    Fauza, G.; Prasetyo, H.; Amanto, B. S.

    2018-05-01

    Integrated production-inventory models for deteriorating items have been studied extensively. Most of these studies define deterioration as the physical depletion of some inventory over time. This definition may not represent the deterioration characteristics of food products: the quality of food products decreases over time while the quantity remains the same. Further, in the existing models, the raw material is replenished several times (or at least once) within one production cycle. In the food industry, however, a company will sometimes benefit more from ordering raw materials in a large quantity, for several reasons (e.g., seasonal raw materials, discounted prices, etc.). Considering this fact, this research is therefore aimed at developing a more representative inventory model by (i) considering the quality losses in food and (ii) adopting a general raw material procurement policy. A mathematical model is established to represent the proposed policy, with the total profit of the system as the objective function. To evaluate the performance of the model, a numerical test was conducted. The test indicates that the developed model performs better: its total profit is 2.3% higher than that of the existing model.

  7. Evaluation of new collision-pair selection models in DSMC

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Hassan; Roohi, Ehsan

    2017-10-01

    The current paper investigates new collision-pair selection procedures in a direct simulation Monte Carlo (DSMC) method. Collision partner selection based on the random procedure from nearest neighbor particles and deterministic selection of nearest neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in homogeneous gas, 2D equilibrium flow, and Fourier flow problem. Distribution functions for number of particles and collisions in cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
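
    One of the pair-selection ideas mentioned above can be sketched compactly. The snippet below implements only the nearest-neighbor partner choice within a cell, on synthetic particle positions; it is not a full DSMC collision kernel, and the cell geometry is an assumption.

    ```python
    # Sketch of one collision-pair selection scheme discussed in the abstract:
    # pick a first particle at random in the cell and take its nearest neighbor
    # as the collision partner. Positions are synthetic; this is only the
    # pair-selection step, not a complete DSMC collision routine.
    import numpy as np

    rng = np.random.default_rng(5)
    positions = rng.uniform(size=(30, 2))          # particles in one 2-D cell

    def nearest_neighbor_pair(positions):
        i = rng.integers(len(positions))
        dists = np.linalg.norm(positions - positions[i], axis=1)
        dists[i] = np.inf                          # exclude self-pairing
        return i, int(np.argmin(dists))

    print("collision pair:", nearest_neighbor_pair(positions))
    ```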

  8. Testing a Conceptual Change Model Framework for Visual Data

    ERIC Educational Resources Information Center

    Finson, Kevin D.; Pedersen, Jon E.

    2015-01-01

    An emergent data analysis technique was employed to test the veracity of a conceptual framework constructed around visual data use and instruction in science classrooms. The framework incorporated all five key components Vosniadou (2007a, 2007b) described as existing in a learner's schema: framework theory, presuppositions, conceptual domains,…

  9. Exploring the Role of Goal Theory in Understanding Training Motivation

    ERIC Educational Resources Information Center

    Smith, Rebecca; Jayasuriya, Rohan; Caputi, Peter; Hammer, David

    2008-01-01

    A model to test conceptions from goal theory within an existing framework of training motivation was developed and tested with employees participating in training in a non-profit organization. It was hypothesized that goal orientation ("distal factors") along with self-efficacy, expectancy and valence ("proximal factors") would predict goal…

  10. Experimental and Numerical Study on Tensile Strength of Concrete under Different Strain Rates

    PubMed Central

    Min, Fanlu; Yao, Zhanhu; Jiang, Teng

    2014-01-01

    The dynamic characterization of concrete is fundamental to understanding the material behavior in case of heavy earthquakes and dynamic events. The implementation of a material constitutive law is of capital importance for the numerical simulation of dynamic processes such as those caused by earthquakes. Splitting tensile concrete specimens were tested at strain rates of 10⁻⁷ s⁻¹ to 10⁻⁴ s⁻¹ in an MTS material test machine. Results of tensile strength versus strain rate are presented and compared with compressive strength and existing models at similar strain rates. Dynamic increase factor versus strain rate curves for tensile strength were also evaluated and discussed. The same tensile data are compared with strength data using a thermodynamic model. Results of the tests show significantly strain-rate-sensitive behavior, with dynamic tensile strength increasing with strain rate. In the quasistatic strain rate regime, the existing models often underestimate the experimental results. The thermodynamic theory for the splitting tensile strength of concrete satisfactorily describes the experimental findings of strength as an effect of strain rate. PMID:24883355
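
    The dynamic increase factor discussed above is commonly expressed as a power law in strain rate. The sketch below uses that generic form with an illustrative exponent and static reference rate; these are not the paper's fitted values.

    ```python
    # Sketch of a dynamic increase factor (DIF) computation for tensile strength,
    # using the common power-law form DIF = (rate / static_rate) ** alpha. The
    # exponent and static reference rate below are illustrative placeholders.
    import numpy as np

    def dif(strain_rate, static_rate=1e-6, alpha=0.02):
        return (strain_rate / static_rate) ** alpha

    for rate in (1e-7, 1e-6, 1e-5, 1e-4):
        print(f"strain rate {rate:.0e} 1/s -> DIF = {dif(rate):.3f}")
    ```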

  11. AM2 Mat End Connector Modeling and Performance Validation

    DTIC Science & Technology

    2015-08-01

    3.2.1 Subgrade construction and posttest forensics: The test section subgrade was built using in-place material from a previous AM2 test… An area 50 ft wide by 42 ft long of the existing test bed was removed and replaced with newly processed material. Posttest values from the previous…

  12. Reader Reaction On the generalized Kruskal-Wallis test for genetic association studies incorporating group uncertainty

    PubMed Central

    Wu, Baolin; Guan, Weihua

    2015-01-01

    Acar and Sun (2013, Biometrics, 69, 427-435) presented a generalized Kruskal-Wallis (GKW) test for genetic association studies that incorporated the genotype uncertainty and showed its robust and competitive performance compared to existing methods. We present another interesting way to derive the GKW test via a rank linear model. PMID:25351417
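
    The rank-linear-model derivation can be illustrated in the classical case without genotype uncertainty, where the Kruskal-Wallis statistic falls out of a one-way analysis of the ranks. The groups and data below are synthetic.

    ```python
    # Sketch of the rank-linear-model view of the Kruskal-Wallis test: rank the
    # responses, then recover the KW statistic from the between-group and total
    # sums of squares of the ranks, H = (N-1) * SSB / SST.
    import numpy as np
    from scipy.stats import kruskal, rankdata

    rng = np.random.default_rng(3)
    groups = [rng.normal(loc=m, size=20) for m in (0.0, 0.3, 0.6)]

    y = np.concatenate(groups)
    labels = np.repeat([0, 1, 2], 20)
    r = rankdata(y)                                  # mid-ranks handle ties

    grand = r.mean()
    ss_between = sum(len(r[labels == g]) * (r[labels == g].mean() - grand) ** 2
                     for g in (0, 1, 2))
    ss_total = ((r - grand) ** 2).sum()
    h_via_lm = (len(y) - 1) * ss_between / ss_total

    print("H via rank linear model:", round(h_via_lm, 4))
    print("H via scipy.kruskal:   ", round(kruskal(*groups).statistic, 4))
    ```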

  13. Reader reaction on the generalized Kruskal-Wallis test for genetic association studies incorporating group uncertainty.

    PubMed

    Wu, Baolin; Guan, Weihua

    2015-06-01

    Acar and Sun (2013, Biometrics 69, 427-435) presented a generalized Kruskal-Wallis (GKW) test for genetic association studies that incorporated the genotype uncertainty and showed its robust and competitive performance compared to existing methods. We present another interesting way to derive the GKW test via a rank linear model. © 2014, The International Biometric Society.

  14. Consolidation of data base for Army generalized missile model

    NASA Technical Reports Server (NTRS)

    Klenke, D. J.; Hemsch, M. J.

    1980-01-01

    Data from plume interaction tests, nose mounted canard configuration tests, and high angle of attack tests on the Army Generalized Missile model are consolidated in a computer program which makes them readily accessible for plotting, listing, and evaluation. The program is written in FORTRAN and will run on an ordinary minicomputer. It has the capability of retrieving any coefficient from the existing DATAMAN tapes and displaying it in tabular or plotted form. Comparisons of data taken in several wind tunnels and of data with the predictions of Program MISSILE2 are also presented.

  15. Development of a digital clinical pathway for emergency medicine: Lessons from usability testing and implementation failure.

    PubMed

    Gutenstein, Marc; Pickering, John W; Than, Martin

    2018-06-01

    Clinical pathways are used to support the management of patients in emergency departments. An existing document-based clinical pathway was used as the foundation on which to design and build a digital clinical pathway for acute chest pain, with the aim of improving clinical calculations, clinician decision-making, documentation, and data collection. Established principles of decision support system design were used to build an application within the existing electronic health record, before testing with a multidisciplinary team of doctors using a think-aloud protocol. Technical authoring was successful, however, usability testing revealed that the user experience and the flexibility of workflow within the application were critical barriers to implementation. Emergency medicine and acute care decision support systems face particular challenges to existing models of linear workflow that should be deliberately addressed in digital pathway design. We make key recommendations regarding digital pathway design in emergency medicine.

  16. Comparison of NASTRAN analysis with ground vibration results of UH-60A NASA/AEFA test configuration

    NASA Technical Reports Server (NTRS)

    Idosor, Florentino; Seible, Frieder

    1990-01-01

    Preceding program flight tests, a ground vibration test and modal test analysis of a UH-60A Black Hawk helicopter was conducted by Sikorsky Aircraft to complement the UH-60A test plan and NASA/ARMY Modern Technology Rotor Airloads Program. The 'NASA/AEFA' shake test configuration was tested for modal frequencies and shapes and compared with its NASTRAN finite element model counterpart to give correlative results. Based upon previous findings, significant differences in modal data existed and were attributed to assumptions regarding the influence of secondary structure contributions in the preliminary NASTRAN modeling. An analysis of an updated finite element model including several secondary structural additions has confirmed that the inclusion of specific secondary components produces a significant effect on modal frequency and free-response shapes and improves correlations at lower frequencies with shake test data.

  17. Not just a theory--the utility of mathematical models in evolutionary biology.

    PubMed

    Servedio, Maria R; Brandvain, Yaniv; Dhole, Sumit; Fitzpatrick, Courtney L; Goldberg, Emma E; Stern, Caitlin A; Van Cleve, Jeremy; Yeh, D Justin

    2014-12-01

    Progress in science often begins with verbal hypotheses meant to explain why certain biological phenomena exist. An important purpose of mathematical models in evolutionary research, as in many other fields, is to act as “proof-of-concept” tests of the logic in verbal explanations, paralleling the way in which empirical data are used to test hypotheses. Because not all subfields of biology use mathematics for this purpose, misunderstandings of the function of proof-of-concept modeling are common. In the hope of facilitating communication, we discuss the role of proof-of-concept modeling in evolutionary biology.

  18. A robust method using propensity score stratification for correcting verification bias for binary tests

    PubMed Central

    He, Hua; McDermott, Michael P.

    2012-01-01

    Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
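
    A hedged sketch of the stratification recipe: model the verification probability given the test result and covariates, form propensity strata within each test arm, and impute each stratum's verified-subject disease prevalence to all of its members. The data-generating model, the logistic specification, and the number of strata below are illustrative, not the authors' simulation design.

    ```python
    # Sketch of propensity-score stratification for verification bias under MAR:
    # verification depends on the test result and a covariate; prevalence within
    # each propensity stratum is estimated from the verified subjects only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 5000
    x = rng.normal(size=n)                               # covariate
    d = rng.binomial(1, 1 / (1 + np.exp(-x)))            # true disease status
    t = rng.binomial(1, np.where(d == 1, 0.85, 0.10))    # diagnostic test result
    v = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 2 * t + 0.5 * x)))).astype(bool)

    ps = LogisticRegression().fit(np.c_[t, x], v).predict_proba(np.c_[t, x])[:, 1]

    def expected_diseased(test_value, n_strata=5):
        """Impute per-stratum prevalence from verified subjects to all subjects."""
        mask = t == test_value
        edges = np.quantile(ps[mask], np.linspace(0, 1, n_strata + 1))
        stratum = np.clip(np.searchsorted(edges, ps[mask], side="right") - 1,
                          0, n_strata - 1)
        total = 0.0
        for s in range(n_strata):
            in_s = stratum == s
            ver = in_s & v[mask]
            total += in_s.sum() * d[mask][ver].mean()
        return total

    tp, fn = expected_diseased(1), expected_diseased(0)
    print("bias-corrected sensitivity ~", round(tp / (tp + fn), 3))
    print("naive (verified-only) sensitivity ~", round(t[v & (d == 1)].mean(), 3))
    ```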

  19. Evaluating the Usability of a Professional Modeling Tool Repurposed for Middle School Learning

    ERIC Educational Resources Information Center

    Peters, Vanessa L.; Songer, Nancy Butler

    2013-01-01

    This paper reports the results of a three-stage usability test of a modeling tool designed to support learners' deep understanding of the impacts of climate change on ecosystems. The design process involved repurposing an existing modeling technology used by professional scientists into a learning tool specifically designed for middle school…

  20. Status of DSMT research program

    NASA Technical Reports Server (NTRS)

    Mcgowan, Paul E.; Javeed, Mehzad; Edighoffer, Harold H.

    1991-01-01

    The status of the Dynamic Scale Model Technology (DSMT) research program is presented. DSMT is developing scale model technology for large space structures as part of the Control Structure Interaction (CSI) program at NASA Langley Research Center (LaRC). Under DSMT a hybrid-scale structural dynamics model of Space Station Freedom was developed. Space Station Freedom was selected as the focus structure for DSMT since the station represents the first opportunity to obtain flight data on a complex, three-dimensional space structure. Included is an overview of DSMT, covering the development of the space station scale model and the resulting hardware. Scaling technology was developed for this model to achieve a ground test article which existing test facilities can accommodate while employing realistically scaled hardware. The model was designed and fabricated by the Lockheed Missile and Space Co., and is assembled at LaRC for dynamic testing. Also, results from ground tests and analyses of the various model components are presented along with plans for future subassembly and mated-model tests. Finally, utilization of the scale model for enhancing analysis verification of the full-scale space station is also considered.

  1. A procedure for the significance testing of unmodeled errors in GNSS observations

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. The first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.

  2. Simulation of pump-turbine prototype fast mode transition for grid stability support

    NASA Astrophysics Data System (ADS)

    Nicolet, C.; Braun, O.; Ruchonnet, N.; Hell, J.; Béguin, A.; Avellan, F.

    2017-04-01

    The paper explores the additional services that the Full Size Frequency Converter (FSFC) solution can provide for the case of an existing pumped storage power plant of 2x210 MW, for which conversion from fixed speed to variable speed is investigated with a focus on fast mode transition. First, reduced-scale model experiments on fast transitions of a Francis pump-turbine, performed at the ANDRITZ HYDRO Hydraulic Laboratory in Linz, Austria, are presented. The tests consist of linear speed transitions from pump to turbine and vice versa performed at constant guide vane opening. Then the existing pumped storage power plant, with a pump-turbine quasi-homologous to the reduced-scale model, is modelled using the simulation software SIMSEN, considering the reservoirs, penstocks, the two Francis pump-turbines, the two downstream surge tanks, and the tailrace tunnel. For the electrical part, an FSFC configuration is considered with a detailed electrical model. The transitions from turbine to pump and vice versa are simulated, and similarities between the prototype simulation results and the reduced-scale model experiments are highlighted.

  3. Path generation algorithm for UML graphic modeling of aerospace test software

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditionally, aerospace software test engineers describe the software under test based on their own work experience and on communication with the development team, writing test cases by hand, which is time-consuming, inefficient, and error-prone. Using the high-reliability MBT tool developed by our company, a single modeling pass can automatically generate the test case documents, which is efficient and accurate. Describing the process accurately with a UML model depends on path generation, but existing path generation algorithms are either too simple, unable to combine paths or to handle branch paths containing loops, or so cumbersome and complicated that the generated path arrangements are meaningless and superfluous for aerospace software testing. Drawing on our aerospace engineering experience, we developed a tailored path generation algorithm for UML graphical models of aerospace test software.

  4. A Review of the Application of Body-on-a-Chip for Drug Test and Its Latest Trend of Incorporating Barrier Tissue.

    PubMed

    Jin, Haoyi; Yu, Yanqiu

    2016-10-01

    High-quality preclinical bioassay models are essential for drug research and development. We reviewed the emerging body-on-a-chip technology, which serves as a promising model to overcome the limitations of traditional bioassay models, and introduced existing models of body-on-a-chip, their constitutional details, application for drug testing, and individual features of these models. We put special emphasis on the latest trend in this field of incorporating barrier tissue into body-on-a-chip and discussed several remaining challenges of current body-on-a-chip. © 2015 Society for Laboratory Automation and Screening.

  5. A quantitative investigation of the fracture pump-in/flowback test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plahn, S.V.; Nolte, K.G.; Miska, S.

    1995-12-31

    Fracture closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test where strong indications of fracture closure are rarely seen. Various techniques exist for extracting closure pressure from the flowback pressure response. Unfortunately, these procedures give different estimates for closure pressure and their theoretical bases are not well established. We present results that place the PIFB test on a more solid foundation. A numerical model is used to simulate the PIFB test and glean physical mechanisms contributing to the response. Based on our simulation results, we propose an interpretation procedure which gives better estimates for closure pressure than existing techniques.

  6. Yield and Depth of Burial Hydrodynamic Calculations in Granodiorite: Implications for the North Korean Test Site

    DTIC Science & Technology

    2011-09-01

    …the existence of a test site body wave magnitude (mb) bias between U.S. and former Soviet Union test sites in Nevada and Semipalatinsk. The use… …the North Korean test site and the May 2009 test. When compared to the Denny and Johnson (1991) and to the Heard and Ackerman (1967) cavity radius scaling models…

  7. Bond–Slip Relationship for CFRP Sheets Externally Bonded to Concrete under Cyclic Loading

    PubMed Central

    Li, Ke; Cao, Shuangyin; Yang, Yue; Zhu, Juntao

    2018-01-01

    The objective of this paper was to explore the bond–slip relationship between carbon fiber-reinforced polymer (CFRP) sheets and concrete under cyclic loading through experimental and analytical approaches. Modified beam tests were performed in order to gain insight into the bond–slip relationship under static and cyclic loading. The test variables are the CFRP-to-concrete width ratio and the bond length of the CFRP sheets. An analysis of the test results in this paper and existing test results indicated that the slope of the ascending segment of the bond–slip curve decreased with an increase in the number of load cycles, but the slip corresponding to the maximum shear stress was almost invariable as the number of load cycles increased. In addition, the rate of reduction in the slope of the ascending range of the bond–slip curve during cyclic loading decreased as the concrete strength increased, and increased as the load level or CFRP-to-concrete width ratio increased. However, these were not affected by variations in bond length if the residual bond length was longer than the effective bond length. A bilinear bond–slip model for CFRP sheets that are externally bonded to concrete under cyclic loading, which considered the effects of the cyclic load level, concrete strength, and CFRP-to-concrete width ratio, was developed based on the existing static bond–slip model. The accuracy of this proposed model was verified by a comparison between this proposed model and test results. PMID:29495383
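
    A minimal sketch of a bilinear bond–slip law with a cycle-dependent ascending slope, mirroring the reported trend (the slope degrades with load cycles while the peak-stress slip stays fixed). All parameter values and the degradation form are placeholders, not the paper's calibration.

    ```python
    # Sketch of a bilinear bond-slip law tau(s) whose ascending slope degrades
    # with the number of load cycles while the slip at peak stress (s0) is fixed,
    # consistent with the experimental trend described in the abstract.
    def bond_stress(s, n_cycles=0, tau_max=6.0, s0=0.06, sf=0.25, k_deg=0.002):
        """Bond stress (MPa) at slip s (mm) after n_cycles load cycles."""
        slope = (tau_max / s0) * (1.0 - k_deg * n_cycles)   # degraded ascending slope
        peak = slope * s0                                   # peak falls, s0 fixed
        if s <= s0:
            return slope * s
        if s <= sf:
            return peak * (sf - s) / (sf - s0)              # linear softening branch
        return 0.0

    for cycles in (0, 50, 100):
        print(f"after {cycles:3d} cycles: tau(s0) = {bond_stress(0.06, cycles):.2f} MPa")
    ```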

  8. Bond-Slip Relationship for CFRP Sheets Externally Bonded to Concrete under Cyclic Loading.

    PubMed

    Li, Ke; Cao, Shuangyin; Yang, Yue; Zhu, Juntao

    2018-02-26

    The objective of this paper was to explore the bond-slip relationship between carbon fiber-reinforced polymer (CFRP) sheets and concrete under cyclic loading through experimental and analytical approaches. Modified beam tests were performed in order to gain insight into the bond-slip relationship under static and cyclic loading. The test variables are the CFRP-to-concrete width ratio and the bond length of the CFRP sheets. An analysis of the test results in this paper and existing test results indicated that the slope of the ascending segment of the bond-slip curve decreased with an increase in the number of load cycles, but the slip corresponding to the maximum shear stress was almost invariable as the number of load cycles increased. In addition, the rate of reduction in the slope of the ascending range of the bond-slip curve during cyclic loading decreased as the concrete strength increased, and increased as the load level or CFRP-to-concrete width ratio increased. However, these were not affected by variations in bond length if the residual bond length was longer than the effective bond length. A bilinear bond-slip model for CFRP sheets that are externally bonded to concrete under cyclic loading, which considered the effects of the cyclic load level, concrete strength, and CFRP-to-concrete width ratio, was developed based on the existing static bond-slip model. The accuracy of this proposed model was verified by a comparison between this proposed model and test results.

  9. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    PubMed

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used, its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study, in which we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
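
    The generic Hausman statistic underlying the proposed test compares two estimates of the same parameters: H = d' (Va - Vb)^(-1) d, with d the difference of the estimates, referred to a chi-square distribution. The numbers below are placeholders, not output from any fitted diffusion model.

    ```python
    # Sketch of the generic Hausman misspecification statistic: two estimators of
    # the same parameters are compared; under a correctly specified model their
    # difference is centered at zero with covariance Va - Vb.
    import numpy as np
    from scipy.stats import chi2

    def hausman(theta_a, theta_b, cov_a, cov_b):
        """H = d' (V_a - V_b)^{-1} d with d = theta_a - theta_b; returns (H, p)."""
        d = theta_a - theta_b
        h = float(d @ np.linalg.pinv(cov_a - cov_b) @ d)   # pinv guards near-singularity
        return h, chi2.sf(h, df=len(d))

    theta_a = np.array([0.52, 1.10])   # e.g., item estimates from one data split
    theta_b = np.array([0.48, 1.02])   # and from another (placeholder values)
    cov_a = np.diag([0.010, 0.015])
    cov_b = np.diag([0.006, 0.009])
    h, p = hausman(theta_a, theta_b, cov_a, cov_b)
    print(f"Hausman H = {h:.2f}, p = {p:.3f}")
    ```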

  10. A Model for Space Shuttle Orbiter Tire Side Forces Based on NASA Landing Systems Research Aircraft Test Results

    NASA Technical Reports Server (NTRS)

    Carter, John F.; Nagy, Christopher J.; Barnicki, Joseph S.

    1997-01-01

    Forces generated by the Space Shuttle orbiter tire under varying vertical load, slip angle, speed, and surface conditions were measured using the Landing System Research Aircraft (LSRA). Resulting data were used to calculate a mathematical model for predicting tire forces in orbiter simulations. Tire side and drag forces experienced by an orbiter tire are cataloged as a function of vertical load and slip angle. The mathematical model is compared to existing tire force models for the Space Shuttle orbiter. This report describes the LSRA and a typical test sequence. Testing methods, data reduction, and error analysis are presented. The LSRA testing was conducted on concrete and lakebed runways at the Edwards Air Force Flight Test Center and on concrete runways at the Kennedy Space Center (KSC). Wet runway tire force tests were performed on test strips made at the KSC using different surfacing techniques. Data were corrected for ply steer forces and conicity.

  11. Formative Constructs Implemented via Common Factors

    ERIC Educational Resources Information Center

    Treiblmaier, Horst; Bentler, Peter M.; Mair, Patrick

    2011-01-01

    Recently there has been a renewed interest in formative measurement and its role in properly specified models. Formative measurement models are difficult to identify, and hence to estimate and test. Existing solutions to the identification problem are shown to not adequately represent the formative constructs of interest. We propose a new two-step…

  12. Rural Free Universities: Extending the UFM Model. Final Report.

    ERIC Educational Resources Information Center

    Maes, Sue C.

    Operating under a grant, the University for Man (UFM) in Manhattan, Kansas, tested the transferability of the UFM free university/community education model using four existing statewide delivery systems (public libraries, a private college consortium, a state cooperative extension service, an office of rural affairs) in five states: Kentucky,…

  13. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.; Falk, Carl F.; Savalei, Victoria

    2010-01-01

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years…

  14. Adult Human Stem Cell-Derived Cardiomyocytes: An Alternative Model for Evaluating Chemical and Environmental Pollutant Cardiotoxicity

    EPA Science Inventory

    Heart disease is increasing globally with a significant percentage of the increase being attributed to chemical and pollution exposures. Currently, no alternative or in vitro testing models exist to rapidly and accurately determine the cardiac effects of chemicals and/or pollutan...

  15. A Comparative Test of Work-Family Conflict Models and Critical Examination of Work-Family Linkages

    ERIC Educational Resources Information Center

    Michel, Jesse S.; Mitchelson, Jacqueline K.; Kotrba, Lindsey M.; LeBreton, James M.; Baltes, Boris B.

    2009-01-01

    This paper is a comprehensive meta-analysis of over 20 years of work-family conflict research. A series of path analyses were conducted to compare and contrast existing work-family conflict models, as well as a new model we developed which integrates and synthesizes current work-family theory and research. This new model accounted for 40% of the…

  16. CLIMACS: a computer model of forest stand development for western Oregon and Washington.

    Treesearch

    Virginia H. Dale; Miles Hemstrom

    1984-01-01

    A simulation model for the development of timber stands in the Pacific Northwest is described. The model grows individual trees of 21 species in a 0.20-hectare (0.08-acre) forest gap. The model provides a means of assimilating existing information, indicates where knowledge is deficient, suggests where the forest system is most sensitive, and provides a first testing...

  17. Assessing models of arsenic occurrence in drinking water from bedrock aquifers in New Hampshire

    USGS Publications Warehouse

    Andy, Caroline; Fahnestock, Maria Florencia; Lombard, Melissa; Hayes, Laura; Bryce, Julie; Ayotte, Joseph

    2017-01-01

    Three existing multivariate logistic regression models were assessed using new data to evaluate the capacity of the models to correctly predict the probability of groundwater arsenic concentrations exceeding the threshold values of 1, 5, and 10 micrograms per liter (µg/L) in New Hampshire, USA. A recently released testing dataset includes arsenic concentrations from groundwater samples collected in 2004–2005 from a mix of 367 public-supply and private domestic wells. The use of this dataset to test three existing logistic regression models demonstrated enhanced overall predictive accuracy for the 5 and 10 μg/L models. Overall accuracies of 54.8, 76.3, and 86.4 percent were reported for the 1, 5, and 10 μg/L models, respectively. The state was divided by counties into northwest and southeast regions. Regional differences in accuracy were identified; models had an average accuracy of 83.1 percent for the counties in the northwest and 63.7 percent in the southeast. This is most likely due to high model specificity in the northwest and regional differences in arsenic occurrence. Though these models have limitations, they allow for arsenic hazard assessment across the region. The introduction of well-type (public or private), well depth, and casing length as explanatory variables may be appropriate measures to improve model performance. Our findings indicate that the original models generalize to the testing dataset, and should continue to serve as an important vehicle of preventative public health that may be applied to other groundwater contaminants in New Hampshire.

  18. Measuring change for a multidimensional test using a generalized explanatory longitudinal item response model.

    PubMed

    Cho, Sun-Joo; Athay, Michele; Preacher, Kristopher J

    2013-05-01

    Even though many educational and psychological tests are known to be multidimensional, little research has been done to address how to measure individual differences in change within an item response theory framework. In this paper, we suggest a generalized explanatory longitudinal item response model to measure individual differences in change. New longitudinal models for multidimensional tests and existing models for unidimensional tests are presented within this framework and implemented with software developed for generalized linear models. In addition to the measurement of change, the longitudinal models we present can also be used to explain individual differences in change scores for person groups (e.g., learning disabled students versus non-learning disabled students) and to model differences in item difficulties across item groups (e.g., number operation, measurement, and representation item groups in a mathematics test). An empirical example illustrates the use of the various models for measuring individual differences in change when there are person groups and multiple skill domains which lead to multidimensionality at a time point. © 2012 The British Psychological Society.

  19. A sup-score test for the cure fraction in mixture models for long-term survivors.

    PubMed

    Hsu, Wei-Wen; Todem, David; Kim, KyungMann

    2016-12-01

    The evaluation of cure fractions in oncology research under the well known cure rate model has attracted considerable attention in the literature, but most of the existing testing procedures have relied on restrictive assumptions. A common assumption has been to restrict the cure fraction to a constant under alternatives to homogeneity, thereby neglecting any information from covariates. This article extends the literature by developing a score-based statistic that incorporates covariate information to detect cure fractions, with the existing testing procedure serving as a special case. A complication of this extension, however, is that the implied hypotheses are not typical and standard regularity conditions to conduct the test may not even hold. Using empirical processes arguments, we construct a sup-score test statistic for cure fractions and establish its limiting null distribution as a functional of mixtures of chi-square processes. In practice, we suggest a simple resampling procedure to approximate this limiting distribution. Our simulation results show that the proposed test can greatly improve efficiency over tests that neglect the heterogeneity of the cure fraction under the alternative. The practical utility of the methodology is illustrated using ovarian cancer survival data with long-term follow-up from the surveillance, epidemiology, and end results registry. © 2016, The International Biometric Society.

  20. Advanced Turbine Technology Applications Project (ATTAP) 1993 annual report

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes work performed by AlliedSignal Engines, a unit of AlliedSignal Aerospace Company, during calendar year 1993, toward development and demonstration of structural ceramic technology for automotive gas turbine engines. This work was performed for the U.S. Department of Energy (DOE) under National Aeronautics and Space Administration (NASA) Contract DEN3-335, Advanced Turbine Technology Applications Project (ATTAP). During 1993, the test bed used to demonstrate ceramic technology was changed from the AlliedSignal Engines/Garrett Model AGT101 regenerated gas turbine engine to the Model 331-200(CT) engine. The 331-200(CT) ceramic demonstrator is a fully-developed test platform based on the existing production AlliedSignal 331-200(ER) gas turbine auxiliary power unit (APU), and is well suited to evaluating ceramic turbine blades and nozzles. In addition, commonality of the 331-200(CT) engine with existing gas turbine APU's in commercial service provides the potential for field testing of ceramic components. The 1993 ATTAP activities emphasized design modifications of the 331-200 engine test bed to accommodate ceramic first-stage turbine nozzles and blades, fabrication of the ceramic components, ceramic component proof and rig tests, operational tests of the test bed equipped with the ceramic components, and refinement of critical ceramic design technologies.

  1. NETEX Task 1: a study of the effect of ultrawideband (UWB) emitters on existing narrowband military receivers

    NASA Astrophysics Data System (ADS)

    Light, Arthur H.; Griggs, Stephen

    2003-07-01

    The goal of the DARPA NETEX program is to create a wireless networking technology for the military user that enables robust connectivity in harsh environments and supports its integration into new and emerging sensor and communication systems. Phase 1 resulted in a thorough understanding of the effects of UWB system operation on existing military spectrum users, based on modeling, simulation, and measurements. DARPA procured UWB emitters and broadband antennas to use as interference sources and contracted with the NAWC AD to provide candidate victim systems from the existing US inventory for testing. Testing was conducted on thirteen systems from October 2002 through March 2003. The purpose of this paper is to describe the results of these tests. It provides a brief definition of UWB emissions as described by the US FCC, describes the generic UWB emitter used for these tests, gives an overview of the general test plan and how it was adapted to the various systems tested, and then discusses the results as they apply to the purpose of the NETEX program. Finally, the paper looks at where NETEX is going after Task 1.

  2. The effect of testing versus restudy on retention: a meta-analytic review of the testing effect.

    PubMed

    Rowland, Christopher A

    2014-11-01

    Engaging in a test over previously studied information can serve as a potent learning event, a phenomenon referred to as the testing effect. Despite a surge of research in the past decade, existing theories have not yet provided a cohesive account of testing phenomena. The present study uses meta-analysis to examine the effects of testing versus restudy on retention. Key results indicate support for the role of effortful processing as a contributor to the testing effect, with initial recall tests yielding larger testing benefits than recognition tests. Limited support was found for existing theoretical accounts attributing the testing effect to enhanced semantic elaboration, indicating that consideration of alternative mechanisms is warranted in explaining testing effects. Future theoretical accounts of the testing effect may benefit from consideration of episodic and contextually derived contributions to retention resulting from memory retrieval. Additionally, the bifurcation model of the testing effect is considered as a viable framework from which to characterize the patterns of results present across the literature. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  3. Automated Simultaneous Assembly of Multistage Testlets for a High-Stakes Licensing Examination

    ERIC Educational Resources Information Center

    Breithaupt, Krista; Hare, Donovan R.

    2007-01-01

    Many challenges exist for high-stakes testing programs offering continuous computerized administration. The automated assembly of test questions to exactly meet content and other requirements, provide uniformity, and control item exposure can be modeled and solved by mixed-integer programming (MIP) methods. A case study of the computerized…

  4. 76 FR 53326 - Airworthiness Directives; Eurocopter France (ECF) Model EC120B Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... also requires modifying the emergency switch electrical wiring and performing tests to ensure correct... the RFM after modifying the emergency switch electrical wiring and performing tests to ensure correct... likely to exist or develop on other helicopters of the same type design. Differences Between This AD and...

  5. 76 FR 31453 - Special Conditions: Gulfstream Model GVI Airplane; Single-Occupant Side-Facing Seats

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-01

    .... SID TTI data must be processed as defined in Federal Motor Vehicle Safety Standard (FMVSS) part 571...). Pass/fail injury assessments: TTI and pelvic acceleration. 2. One longitudinal test with the Hybrid II... pelvic acceleration. 3. Vertical (14g) test with modified Hybrid II ATDs using existing pass/fail...

  6. A Feedback Control Strategy for Enhancing Item Selection Efficiency in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Weissman, Alexander

    2006-01-01

    A computerized adaptive test (CAT) may be modeled as a closed-loop system, where item selection is influenced by trait level ([theta]) estimation and vice versa. When discrepancies exist between an examinee's estimated and true [theta] levels, nonoptimal item selection is a likely result. Nevertheless, examinee response behavior consistent with…

  7. 40 CFR 91.801 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... EMISSIONS FROM MARINE SPARK-IGNITION ENGINES In-Use Testing and Recall Regulations § 91.801 Applicability... repairs. (f) The requirements of the Manufacturer In-use testing program set forth in §§ 91.803 through 91.805 are waived for existing technology OB/PWC as defined in § 91.3 through model year 2003. (1) The...

  8. The effect of intra-wellbore head losses in a vertical well

    NASA Astrophysics Data System (ADS)

    Wang, Quanrong; Zhan, Hongbin

    2017-05-01

    Flow to a partially penetrating vertical well is made more complex by intra-wellbore losses. These are caused not only by the frictional effect, but also by the kinematic effect, which consists of the accelerational and fluid inflow effects inside a wellbore. Existing models of flow to a partially penetrating vertical well assume either a uniform-flux boundary condition (UFBC) or a uniform-head boundary condition (UHBC) for treating the flow into the wellbore. Neither approach considers intra-wellbore losses. In this study a new general solution, named the mixed-type boundary condition (MTBC) solution, is introduced to include intra-wellbore losses. It is developed from the existing solutions using a hybrid analytical-numerical method. The MTBC solution is capable of modeling various types of aquifer tests (constant-head tests, constant-rate tests, and slug tests) for partially or fully penetrating vertical wells in confined aquifers. Results show that intra-wellbore losses (both frictional and kinematic) can be significant in the early pumping stage. At later pumping times the UHBC solution is adequate because the difference between the MTBC and UHBC solutions becomes negligible.

  9. Modeling the rate of HIV testing from repeated binary data amidst potential never-testers.

    PubMed

    Rice, John D; Johnson, Brent A; Strawderman, Robert L

    2018-01-04

    Many longitudinal studies with a binary outcome measure involve a fraction of subjects with a homogeneous response profile. In our motivating data set, a study on the rate of human immunodeficiency virus (HIV) self-testing in a population of men who have sex with men (MSM), a substantial proportion of the subjects did not self-test during the follow-up study. The observed data in this context consist of a binary sequence for each subject indicating whether or not that subject experienced any events between consecutive observation time points, so subjects who never self-tested were observed to have a response vector consisting entirely of zeros. Conventional longitudinal analysis is not equipped to handle questions regarding the rate of events (as opposed to the odds, as in the classical logistic regression model). With the exception of discrete mixture models, such methods are also not equipped to handle settings in which there may exist a group of subjects for whom no events will ever occur, i.e. a so-called "never-responder" group. In this article, we model the observed data assuming that events occur according to some unobserved continuous-time stochastic process. In particular, we consider the underlying subject-specific processes to be Poisson conditional on some unobserved frailty, leading to a natural focus on modeling event rates. Specifically, we propose to use the power variance function (PVF) family of frailty distributions, which contains both the gamma and inverse Gaussian distributions as special cases and allows for the existence of a class of subjects having zero frailty. We generalize a computational algorithm developed for a log-gamma random intercept model (Conaway, 1990. A random effects model for binary data. Biometrics 46, 317-328) to compute the exact marginal likelihood, which is then maximized to obtain estimates of model parameters. We conduct simulation studies, exploring the performance of the proposed method in comparison with competitors. Applying the PVF as well as a Gaussian random intercept model and a corresponding discrete mixture model to our motivating data set, we conclude that the group assigned to receive follow-up messages via SMS was self-testing at a significantly lower rate than the control group, but that there is no evidence to support the existence of a group of never-testers. © The Author 2017. Published by Oxford University Press. All rights reserved.
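
    To see why a frailty family with positive mass at zero matters here, compare the probability of an all-zero response vector under a conventional gamma frailty with that under a mixture containing a zero-frailty ("never-tester") class. The Python sketch below uses purely illustrative parameter values and is not the authors' PVF implementation:

        import numpy as np

        # Marginal P(no events over follow-up length T) for a Poisson(Z * mu * T)
        # process, where Z is an unobserved frailty with mean 1.
        mu, theta = 0.5, 1.2      # illustrative event rate and frailty variance

        # Gamma frailty: the Laplace transform gives
        # P(N = 0) = (1 + theta*mu*T)^(-1/theta), so every subject eventually tests.
        # Zero-frailty mixture: a fraction pi has frailty exactly 0 and never tests.
        pi = 0.15
        for T in (4, 16, 64):
            p0_gamma = (1 + theta * mu * T) ** (-1 / theta)
            p0_mix = pi + (1 - pi) * p0_gamma
            print(f"T={T:3d}  gamma frailty: {p0_gamma:.3f}  zero-frailty mix: {p0_mix:.3f}")

        # As T grows, the gamma-frailty probability tends to 0 while the mixture
        # probability tends to pi: long follow-up is what lets the data separate
        # slow testers from structural never-testers.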

  10. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
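
    The bmem package itself is written in R; the Python sketch below re-implements the basic idea for a simple one-mediator model under illustrative assumptions: simulate data from X -> M -> Y, form a bootstrap percentile confidence interval for the indirect effect a*b, and estimate power as the proportion of replications whose interval excludes zero.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(n, a=0.3, b=0.3, cp=0.1):
            # Simple mediation model: X -> M -> Y with direct path cp.
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)
            y = cp * x + b * m + rng.normal(size=n)
            return x, m, y

        def ab_hat(x, m, y):
            # OLS slope of M on X, times slope of Y on M controlling for X.
            a = np.polyfit(x, m, 1)[0]
            X = np.column_stack([np.ones_like(x), x, m])
            return a * np.linalg.lstsq(X, y, rcond=None)[0][2]

        def power(n, reps=200, boots=500, alpha=0.05):
            hits = 0
            for _ in range(reps):
                x, m, y = simulate(n)
                idx = rng.integers(0, n, size=(boots, n))   # bootstrap resamples
                est = np.array([ab_hat(x[i], m[i], y[i]) for i in idx])
                lo, hi = np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])
                hits += (lo > 0) or (hi < 0)                # CI excludes zero
            return hits / reps

        print(power(n=100))   # Monte Carlo power estimate at n = 100

    Nonnormal errors, as allowed by the paper, would simply replace the rng.normal draws in simulate with a skewed or heavy-tailed generator.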

  11. Mechanical and thermomechanical calculations related to the storage of spent nuclear-fuel assemblies in granite

    NASA Astrophysics Data System (ADS)

    Butkovich, T. R.

    1981-08-01

    A generic test of the geologic storage of spent-fuel assemblies from an operating nuclear reactor is being made by the Lawrence Livermore National Laboratory at the US Department of Energy's Nevada Test Site. The spent-fuel assemblies were emplaced at a depth of 420 m (1370 ft) below the surface in a typical granite and will be retrieved at a later time. The early time, close-in thermal history of this type of repository is being simulated with spent-fuel and electrically heated canisters in a central drift, with auxiliary heaters in two parallel side drifts. Prior to emplacement of the spent-fuel canister, preliminary calculations were made using a pair of existing finite-element codes. Calculational modeling of a spent-fuel repository requires a code with a multiple capability. The effects of both the mining operation and the thermal load on the existing stress fields and the resultant displacements of the rock around the repository must be calculated. The thermal loading for each point in the rock is affected by heat transfer through conduction, radiation, and normal convection, as well as by ventilation of the drifts. Both the ADINA stress code and the compatible ADINAT heat-flow code were used to perform the calculations because they satisfied the requirements of this project. ADINAT was adapted to calculate radiative and convective heat transfer across the drifts and to model the effects of ventilation in the drifts, while the existing isotropic elastic model was used with the ADINA code. The results of the calculation are intended to provide a base with which to compare temperature, stress, and displacement data taken during the planned 5-y duration of the test. In this way, it will be possible to determine how the existing jointing in the rock influences the results as compared with a homogeneous, isotropic rock mass. Later, new models will be introduced into ADINA to account for the effects of jointing.

  12. Greased Lightning (GL-10) Performance Flight Research: Flight Data Report

    NASA Technical Reports Server (NTRS)

    McSwain, Robert G.; Glaab, Louis J.; Theodore, Colin R.; Rhew, Ray D. (Editor); North, David D. (Editor)

    2017-01-01

    Modern aircraft design methods have produced acceptable designs for large conventional aircraft performance. With revolutionary electric propulsion technologies fueled by the growth in the small UAS (Unmanned Aerial Systems) industry, these same prediction models are being applied to new smaller and experimental design concepts requiring a VTOL (Vertical Take Off and Landing) capability for ODM (On Demand Mobility). A 50% sub-scale GL-10 flight model was built and tested to demonstrate the transition from hover to forward flight utilizing DEP (Distributed Electric Propulsion)[1][2]. In 2016 plans were put in place to conduct performance flight testing on the 50% sub-scale GL-10 flight model to support a NASA project called DELIVER (Design Environment for Novel Vertical Lift Vehicles). DELIVER was investigating the feasibility of including smaller and more experimental aircraft configurations in a NASA design tool called NDARC (NASA Design and Analysis of Rotorcraft)[3]. This report covers the performance flight data collected during flight testing of the GL-10 50% sub-scale flight model conducted at Beaver Dam Airpark, VA. Overall, the flight test data provide great insight into how well our existing conceptual design tools predict the performance of small-scale experimental DEP concepts. Low-fidelity conceptual design tools estimated the (L/D)max of the GL-10 50% sub-scale flight model to be 16; the experimentally measured (L/D)max was 7.2. The gap between predicted and measured aerodynamic performance highlights the complexity of wing and nacelle interactions, which is not currently accounted for in existing low-fidelity tools.

  13. The Anatomy of a Likely Donor: Econometric Evidence on Philanthropy to Higher Education

    ERIC Educational Resources Information Center

    Lara, Christen; Johnson, Daniel

    2014-01-01

    In 2011, philanthropic giving to higher education institutions totaled $30.3 billion, an 8.2% increase over the previous year. Roughly 26% of those funds came from alumni donations. This article builds upon existing economic models to create an econometric model to explain and predict the pattern of alumni giving. We test the model using data…

  14. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  15. Assessment and improvement of biotransfer models to cow's milk and beef used in exposure assessment tools for organic pollutants.

    PubMed

    Takaki, Koki; Wade, Andrew J; Collins, Chris D

    2015-11-01

    The aim of this study was to assess and improve the accuracy of biotransfer models for organic pollutants (PCBs, PCDD/Fs, PBDEs, PFCAs, and pesticides) into cow's milk and beef used in human exposure assessment. The metabolic rate in cattle is known to be a key parameter for this biotransfer; however, few experimental data and no simulation methods are currently available. In this research, the metabolic rate was estimated using existing QSAR biodegradation models for microorganisms (BioWIN) and fish (EPI-HL and IFS-HL). This simulated metabolic rate was then incorporated into the mechanistic cattle biotransfer models (RAIDAR, ACC-HUMAN, OMEGA, and CKow). The goodness-of-fit tests showed that the RAIDAR, ACC-HUMAN, and OMEGA model performances were significantly improved using either of the QSARs when comparing the new model outputs to observed data. The CKow model is the only one that separates the processes in the gut and liver; it showed the lowest residual error of all the models tested when the BioWIN model was used to represent the ruminant metabolic process in the gut and the two fish QSARs were used to represent the metabolic process in the liver. Our testing included EUSES and CalTOX, which are KOW-regression models widely used in regulatory assessment. New regressions based on the simulated rates of the two metabolic processes are also proposed as an alternative to KOW-regression models for a screening risk assessment. The modified CKow model is more physiologically realistic, yet has equivalent usability to existing KOW-regression models for estimating cattle biotransfer of organic pollutants. Copyright © 2015. Published by Elsevier Ltd.

  16. Wind tunnel measurements for dispersion modelling of vehicle wakes

    NASA Astrophysics Data System (ADS)

    Carpentieri, Matteo; Kumar, Prashant; Robins, Alan

    2012-12-01

    Wind tunnel measurements downwind of reduced-scale car models have been made to study the wake regions in detail, test the usefulness of existing vehicle wake models, and draw key information needed for dispersion modelling in vehicle wakes. The experiments simulated a car moving in still air. This was achieved by (i) the experimental characterisation of the flow, turbulence and concentration fields in both the near- and far-wake regions, (ii) the preliminary assessment of existing wake models using the experimental database, and (iii) the comparison of previous field measurements in the wake of a real diesel car with the wind tunnel measurements. The experiments highlighted the very large gradients of velocity and concentration that exist, in particular, in the near-wake. Of course, the measured fields are strongly dependent on the geometry of the modelled vehicle, and a generalisation to other vehicles may prove difficult. The methodology applied in the present study, although open to improvement, could constitute a first step towards the development of mathematical parameterisations. Experimental results were also compared with the estimates from two wake models. It was found that they can adequately describe the far-wake of a vehicle in terms of velocities, but that a better characterisation in terms of turbulence and pollutant dispersion is needed. Parameterised models able to predict velocities and concentrations in fine enough detail at the near-wake scale do not yet exist.

  17. Score tests for independence in semiparametric competing risks models.

    PubMed

    Saïd, Mériem; Ghazzali, Nadia; Rivest, Louis-Paul

    2009-12-01

    A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.

  18. Pressure distributions obtained on a 0.10-scale model of the space shuttle Orbiter's forebody in the AEDC 16T propulsion wind tunnel

    NASA Technical Reports Server (NTRS)

    Siemers, P. M., III; Henry, M. W.

    1986-01-01

    Pressure distribution test data obtained on a 0.10-scale model of the forward fuselage of the Space Shuttle Orbiter are presented without analysis. The tests were completed in the AEDC 16T Propulsion Wind Tunnel. The 0.10-scale model was tested at angles of attack from -2 deg to 18 deg and angles of sideslip from -6 to 6 deg at Mach numbers from 0.25 to 1.5. The tests were conducted in support of the development of the Shuttle Entry Air Data System (SEADS). In addition to modeling the 20 SEADS orifices, the wind-tunnel model was also instrumented with orifices to match Development Flight Instrumentation (DFI) port locations that existed on the Space Shuttle Orbiter Columbia (OV-102) during the Orbiter Flight Test program. This DFI simulation has provided a means of comparison between reentry flight pressure data and wind-tunnel and computational data.

  19. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    NASA Technical Reports Server (NTRS)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed for this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried out reliably with such existing capabilities and (3) the currently unavailable modeling capabilities that should receive high priority for near-term research and development. It should be emphasized that the study is concerned only with the class of 'fast time' analytical and simulation models. 'Real time' models, which typically involve humans-in-the-loop, comprise another extensive class that is not addressed in this report. However, the relationship between some of the fast-time models reviewed and a few well-known real-time models is identified in several parts of this report, and the potential benefits from the combined use of these two classes of models, a very important subject, are discussed in chapters 4 and 7.

  20. Aerodynamic stability analysis of NASA J85-13/planar pressure pulse generator installation

    NASA Technical Reports Server (NTRS)

    Chung, K.; Hosny, W. M.; Steenken, W. G.

    1980-01-01

    A digital computer simulation model for the J85-13/Planar Pressure Pulse Generator (P3G) test installation was developed by modifying an existing General Electric compression system model. This modification included the incorporation of a novel method for describing the unsteady blade lift force, an approach that significantly enhanced the capability of the model to handle unsteady flows. In addition, the frequency response characteristics of the J85-13/P3G test installation were analyzed in support of selecting instrumentation locations that avoid standing wave nodes within the test apparatus and, thus, low signal levels. The feasibility of employing explicit analytical expressions for surge prediction was also studied.

  1. Validation of the filament winding process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilo P.; Springer, George S.; Wilson, Brian A.; Hanson, R. Scott

    1987-01-01

    Tests were performed toward validating the WIND model developed previously for simulating the filament winding of composite cylinders. In these tests two 24 in. long, 8 in. diam and 0.285 in. thick cylinders, made of IM-6G fibers and HBRF-55 resin, were wound at a ±45 deg angle on steel mandrels. The temperatures on the inner and outer surfaces and inside the composite cylinders were recorded during oven cure. The temperatures inside the cylinders were also calculated by the WIND model. The measured and calculated temperatures were then compared. In addition, the degree of cure and resin viscosity distributions inside the cylinders were calculated for the conditions which existed in the tests.

  2. Study of tethered satellite active attitude control

    NASA Technical Reports Server (NTRS)

    Colombo, G.

    1982-01-01

    Existing software was adapted for the study of tethered subsatellite rotational dynamics. An analytic solution for a stable configuration of a tethered subsatellite was developed; the analytic and numerical integrator (computer) solutions for this "test case" were compared in a two-mass tether model program (DUMBEL); the existing multiple-mass tether model (SKYHOOK) was modified to include subsatellite rotational dynamics; the analytic "test case" was verified; and the use of the SKYHOOK rotational dynamics capability was demonstrated with a computer run showing the effect of a single off-axis thruster on the behavior of the subsatellite. Subroutines for specific attitude control systems are developed and applied to the study of the behavior of the tethered subsatellite under realistic on-orbit conditions. The effects of all tether "inputs," including pendular oscillations, air drag, and electrodynamic interactions, on the dynamic behavior of the tether are included.

  3. An evaluation of computerized adaptive testing for general psychological distress: combining GHQ-12 and Affectometer-2 in an item bank for public mental health research.

    PubMed

    Stochl, Jan; Böhnke, Jan R; Pickett, Kate E; Croudace, Tim J

    2016-05-20

    Recent developments in psychometric modeling and technology allow pooling well-validated items from existing instruments into larger item banks and their deployment through methods of computerized adaptive testing (CAT). Use of item response theory-based bifactor methods and integrative data analysis overcomes barriers in cross-instrument comparison. This paper presents the joint calibration of an item bank for researchers keen to investigate population variations in general psychological distress (GPD). Multidimensional item response theory was used on existing health survey data from the Scottish Health Education Population Survey (n = 766) to calibrate an item bank consisting of pooled items from the short common mental disorder screen (GHQ-12) and the Affectometer-2 (a measure of "general happiness"). Computer simulation was used to evaluate usefulness and efficacy of its adaptive administration. A bifactor model capturing variation across a continuum of population distress (while controlling for artefacts due to item wording) was supported. The numbers of items for different required reliabilities in adaptive administration demonstrated promising efficacy of the proposed item bank. Psychometric modeling of the common dimension captured by more than one instrument offers the potential of adaptive testing for GPD using individually sequenced combinations of existing survey items. The potential for linking other item sets with alternative candidate measures of positive mental health is discussed since an optimal item bank may require even more items than these.
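
    As an illustration of the kind of adaptive administration being simulated, the sketch below runs a maximum-information CAT loop for a synthetic 2PL item bank with EAP scoring. The item parameters are invented, not the calibrated GHQ-12/Affectometer-2 bank, and the study's bifactor structure is not modeled:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic 2PL bank: discrimination a, difficulty b.
        n_items = 50
        a = rng.uniform(0.8, 2.0, n_items)
        b = rng.normal(0.0, 1.0, n_items)

        grid = np.linspace(-4, 4, 161)        # theta grid for EAP scoring
        prior = np.exp(-0.5 * grid ** 2)      # standard normal prior (unnormalized)

        def p_correct(theta, j):
            return 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))

        def eap(responses, items):
            like = prior.copy()
            for r, j in zip(responses, items):
                p = p_correct(grid, j)
                like = like * (p if r else 1.0 - p)
            return np.sum(grid * like) / np.sum(like)

        true_theta, theta_hat = 1.0, 0.0
        administered, responses = [], []
        for step in range(15):
            # Fisher information of every item at the current estimate.
            probs = p_correct(theta_hat, np.arange(n_items))
            info = a ** 2 * probs * (1 - probs)
            info[administered] = -np.inf                       # no item reuse
            j = int(np.argmax(info))
            r = bool(rng.random() < p_correct(true_theta, j))  # simulated examinee
            administered.append(j); responses.append(r)
            theta_hat = eap(responses, administered)

        print(f"true theta = {true_theta}, EAP after 15 items = {theta_hat:.2f}")

    Stopping once the posterior standard deviation falls below a target value, rather than after a fixed number of items, is what yields the "numbers of items for different required reliabilities" reported above.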

  4. Practical Applications of a Building Method to Construct Aerodynamic Database of Guided Missile Using Wind Tunnel Test Data

    NASA Astrophysics Data System (ADS)

    Kim, Duk-hyun; Lee, Hyoung-Jin

    2018-04-01

    A study of an efficient aerodynamic database modeling method was conducted. Creating the database by exploiting the periodicity and symmetry characteristics of missile aerodynamic coefficients was investigated as a way to minimize the number of wind tunnel test cases. In addition, the study examined how to generate the aerodynamic database when the periodicity changes due to the installation of a protuberance, and how to conduct a zero calibration. Depending on the missile configuration, the required number of test cases changes, and some tests can be omitted. A database of aerodynamic coefficients over control surface deflection angles can be constructed using a phase shift. The validity of the modeling method was demonstrated by confirming that aerodynamic coefficients calculated with the method agreed with the wind tunnel test results.
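
    A minimal sketch of the phase-shift idea, assuming a cruciform airframe whose coefficients repeat every 90 deg of roll; the angles, coefficient values, and 90-deg period are illustrative, not taken from the paper:

        import numpy as np

        # Normal-force coefficient measured in the tunnel at roll angles 0-90 deg only.
        phi_tested = np.array([0.0, 22.5, 45.0, 67.5, 90.0])   # test cases actually run
        cn_tested = np.array([1.00, 1.08, 1.15, 1.08, 1.00])   # illustrative values

        def cn(phi):
            """Fill the full 0-360 deg database from the 0-90 deg tests,
            assuming the coefficient repeats every 90 deg of roll."""
            return np.interp(np.asarray(phi, float) % 90.0, phi_tested, cn_tested)

        for phi in (10, 100, 190, 280):          # one query per quadrant
            print(phi, float(cn(phi)))           # identical values by periodicity

    For antisymmetric coefficients such as side force, a sign rule would accompany the phase shift rather than pure repetition, and a protuberance that breaks the symmetry enlarges the set of roll angles that must actually be tested, as the abstract notes.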

  5. Higher-Order Factors of Personality: Do They Exist?

    PubMed Central

    Ashton, Michael C.; Lee, Kibeom; Goldberg, Lewis R.; de Vries, Reinout E.

    2010-01-01

    Scales that measure the Big Five personality factors are often substantially intercorrelated. These correlations are sometimes interpreted as implying the existence of two higher-order factors of personality. We show that correlations between measures of broad personality factors do not necessarily imply the existence of higher-order factors, and might instead be due to variables that represent same-signed blends of orthogonal factors. Therefore, the hypotheses of higher-order factors and blended variables can only be tested with data on lower-level personality variables that define the personality factors. We compared the higher-order factor model and the blended variable model in three participant samples using the Big Five Aspect Scales, and found better fit for the latter model. In other analyses using the HEXACO Personality Inventory, we identified mutually uncorrelated markers of six personality factors. We conclude that correlations between personality factor scales can be explained without postulating any higher-order dimensions of personality. PMID:19458345

  6. Testing goodness of fit in regression: a general approach for specified alternatives.

    PubMed

    Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J

    2012-12-10

    When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against, so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper allows different types of lack of fit to be treated within a unified general framework and considers many existing tests as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest. Copyright © 2012 John Wiley & Sons, Ltd.

  7. Goodness-Of-Fit Test for Nonparametric Regression Models: Smoothing Spline ANOVA Models as Example.

    PubMed

    Teran Hidalgo, Sebastian J; Wu, Michael C; Engel, Stephanie M; Kosorok, Michael R

    2018-06-01

    Nonparametric regression models do not require the specification of the functional form between the outcome and the covariates. Despite their popularity, the number of diagnostic statistics available for them, in comparison to their parametric counterparts, is small. We propose a goodness-of-fit test for nonparametric regression models with a linear smoother form. In particular, we apply this testing framework to smoothing spline ANOVA models. The test can consider two sources of lack of fit: whether covariates that are not currently in the model need to be included, and whether the current model fits the data well. The proposed method derives estimated residuals from the model. Then, statistical dependence is assessed between the estimated residuals and the covariates using the Hilbert-Schmidt independence criterion (HSIC). If dependence exists, the model does not capture all the variability in the outcome associated with the covariates; otherwise, the model fits the data well. The bootstrap is used to obtain p-values. Application of the method is demonstrated with a neonatal mental development data analysis. We demonstrate correct type I error as well as power performance through simulations.
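
    A minimal sketch of the residual-dependence idea, assuming RBF kernels with a median-heuristic bandwidth and a permutation (rather than bootstrap) approximation to the null distribution; it is not the authors' implementation:

        import numpy as np

        def rbf_gram(x, sigma=None):
            # RBF kernel Gram matrix with a median-heuristic bandwidth.
            d2 = (x[:, None] - x[None, :]) ** 2
            if sigma is None:
                sigma = np.sqrt(np.median(d2[d2 > 0]) / 2)
            return np.exp(-d2 / (2 * sigma ** 2))

        def hsic(x, y):
            n = len(x)
            H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
            return np.trace(rbf_gram(x) @ H @ rbf_gram(y) @ H) / (n - 1) ** 2

        rng = np.random.default_rng(0)
        n = 200
        x = rng.uniform(-2, 2, n)
        y = np.sin(x) + 0.3 * rng.normal(size=n)

        # Residuals from a deliberately misspecified linear fit of y on x.
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)

        # Under the null (model fits), residuals carry no dependence on x,
        # so shuffling them should not systematically shrink the statistic.
        stat = hsic(x, resid)
        perm = np.array([hsic(x, rng.permutation(resid)) for _ in range(500)])
        print("p-value:", float(np.mean(perm >= stat)))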

  8. Accuracy of p53 Codon 72 Polymorphism Status Determined by Multiple Laboratory Methods: A Latent Class Model Analysis

    PubMed Central

    Walter, Stephen D.; Riddell, Corinne A.; Rabachini, Tatiana; Villa, Luisa L.; Franco, Eduardo L.

    2013-01-01

    Introduction: Studies on the association of a polymorphism in codon 72 of the p53 tumour suppressor gene (rs1042522) with cervical neoplasia have yielded inconsistent results. While several methods for genotyping p53 exist, they vary in accuracy and are often discrepant. Methods: We used latent class models (LCM) to examine the accuracy of six methods for p53 determination, all conducted by the same laboratory. We also examined the association of p53 with cytological cervical abnormalities, recognising potential test inaccuracy. Results: Pairwise disagreement between laboratory methods occurred approximately 10% of the time. Given the estimated true p53 status of each woman, we found that each laboratory method is most likely to classify a woman to her correct status. Arg/Arg women had the highest risk of squamous intraepithelial lesions (SIL). Test accuracy was independent of cytology. There was no strong evidence for correlations of test errors. Discussion: Empirical analyses ignore possible laboratory errors, and so are inherently biased, but test accuracy estimated by the LCM approach is unbiased when model assumptions are met. LCM analysis avoids ambiguities arising from empirical test discrepancies, obviating the need to regard any of the methods as a “gold” standard measurement. The methods presented here to analyse the p53 data can be applied in many other situations where multiple tests exist, but none of them is a gold standard. PMID:23441193
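
    A toy version of the latent class machinery, assuming two classes and conditional independence of the assays given true status; the data are simulated, not the p53 study's six laboratory methods:

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulate J imperfect binary assays on n subjects with no gold standard.
        n, J, prev = 1000, 6, 0.4
        sens_true, spec_true = np.full(J, 0.90), np.full(J, 0.85)
        truth = rng.random(n) < prev
        p_obs = np.where(truth[:, None], sens_true, 1 - spec_true)
        y = (rng.random((n, J)) < p_obs).astype(float)

        # EM for the 2-class latent class model.
        pi, se, sp = 0.5, np.full(J, 0.8), np.full(J, 0.8)
        for _ in range(200):
            # E-step: posterior probability that each subject is truly positive.
            l1 = np.prod(np.where(y == 1, se, 1 - se), axis=1) * pi
            l0 = np.prod(np.where(y == 1, 1 - sp, sp), axis=1) * (1 - pi)
            w = l1 / (l1 + l0)
            # M-step: update prevalence, sensitivities, specificities.
            pi = w.mean()
            se = (w[:, None] * y).sum(axis=0) / w.sum()
            sp = ((1 - w)[:, None] * (1 - y)).sum(axis=0) / (1 - w).sum()

        print(f"estimated prevalence {pi:.2f} (true {prev})")
        print("estimated sensitivities:", np.round(se, 2))
        print("estimated specificities:", np.round(sp, 2))

    With two latent classes and several conditionally independent tests the model is identified, which is what lets accuracy be estimated without declaring any single method a gold standard.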

  9. A Stirling engine computer model for performance calculations

    NASA Technical Reports Server (NTRS)

    Tew, R.; Jefferies, K.; Miao, D.

    1978-01-01

    To support the development of the Stirling engine as a possible alternative to the automobile spark-ignition engine, the thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer. The modeling techniques used are presented. The performance of an existing rhombic-drive Stirling engine was simulated by use of this computer program, and some typical results are presented. Engine tests are planned in order to evaluate this model.

  10. Novel approach for dam break flow modeling using computational intelligence

    NASA Astrophysics Data System (ADS)

    Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar

    2018-04-01

    A new methodology based on computational intelligence (CI) systems is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models, including the difficulty of using the exact solutions and the unwanted fluctuations that arise in the numerical results. In this research, the application of radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical solution and the Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best-fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.
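
    The sketch below illustrates the general approach with scikit-learn's MLPRegressor, using Ritter's classical dry-bed analytic dam-break solution to generate training data; the network architecture, input set, and scenario ranges are illustrative and differ from the paper's seven-input models:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        g = 9.81

        def ritter(x, t, h0):
            """Ritter's dry-bed dam-break depth (dam at x = 0, upstream depth h0)."""
            c0 = np.sqrt(g * h0)
            return np.where(x <= -c0 * t, h0,
                   np.where(x >= 2 * c0 * t, 0.0,
                            (2 * c0 - x / t) ** 2 / (9 * g)))

        rng = np.random.default_rng(0)
        n = 5000
        x = rng.uniform(-50.0, 100.0, n)     # position
        t = rng.uniform(0.5, 5.0, n)         # time after break
        h0 = rng.uniform(1.0, 10.0, n)       # upstream depth
        X, y = np.column_stack([x, t, h0]), ritter(x, t, h0)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64, 64),
                                           max_iter=2000, random_state=0)).fit(X, y)

        xq = rng.uniform(-50, 100, 1000)
        tq = rng.uniform(0.5, 5.0, 1000)
        hq = rng.uniform(1.0, 10.0, 1000)
        print("R^2 on fresh samples:",
              model.score(np.column_stack([xq, tq, hq]), ritter(xq, tq, hq)))

    Because the network output is smooth by construction, it avoids the spurious oscillations that shock-capturing finite-difference schemes can produce near the wave front, which is the motivation given above.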

  11. Aerothermodynamics of Blunt Body Entry Vehicles. Chapter 3

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Borrelli, Salvatore

    2011-01-01

    In this chapter, the aerothermodynamic phenomena of blunt body entry vehicles are discussed. Four topics will be considered that present challenges to current computational modeling techniques for blunt body environments: turbulent flow, non-equilibrium flow, rarefied flow, and radiation transport. Examples of comparisons between computational tools to ground and flight-test data will be presented in order to illustrate the challenges existing in the numerical modeling of each of these phenomena and to provide test cases for evaluation of Computational Fluid Dynamics (CFD) code predictions.

  12. Aerothermodynamics of blunt body entry vehicles

    NASA Astrophysics Data System (ADS)

    Hollis, Brian R.; Borrelli, Salvatore

    2012-01-01

    In this chapter, the aerothermodynamic phenomena of blunt body entry vehicles are discussed. Four topics will be considered that present challenges to current computational modeling techniques for blunt body environments: turbulent flow, non-equilibrium flow, rarefied flow, and radiation transport. Examples of comparisons between computational tools to ground and flight-test data will be presented in order to illustrate the challenges existing in the numerical modeling of each of these phenomena and to provide test cases for evaluation of computational fluid dynamics (CFD) code predictions.

  13. Computer tomography of flows external to test models

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1982-01-01

    Computer tomographic techniques for the reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object, such as a test model in a wind tunnel, obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and the analysis of real experimental data in the form of aerodynamic interferograms are discussed.

  14. Efficient design of CMOS TSC checkers

    NASA Technical Reports Server (NTRS)

    Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling

    1990-01-01

    This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made on the proposed implementation and Kundu's implementation and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.

  15. Computational studies of horizontal axis wind turbines in high wind speed condition using advanced turbulence models

    NASA Astrophysics Data System (ADS)

    Benjanirat, Sarun

    Next generation horizontal-axis wind turbines (HAWTs) will operate at very high wind speeds. Existing engineering approaches for modeling the flow phenomena are based on blade element theory and cannot adequately account for 3-D separated, unsteady flow effects. Therefore, researchers around the world are beginning to model these flows using first principles-based computational fluid dynamics (CFD) approaches. In this study, an existing first principles-based Navier-Stokes approach is enhanced to model HAWTs at high wind speeds. The enhancements include improved grid topology, implicit time-marching algorithms, and advanced turbulence models. The advanced turbulence models include the Spalart-Allmaras one-equation model and the k-epsilon, k-omega, and Shear Stress Transport (k-omega SST) models. These models are also integrated with detached eddy simulation (DES) models. Results are presented for a range of wind speeds for a configuration termed the National Renewable Energy Laboratory Phase VI rotor, tested at NASA Ames Research Center. Grid sensitivity studies are also presented. Additionally, the effects of existing transition models on the predictions are assessed. Data presented include power/torque production, radial distribution of normal and tangential pressure forces, root bending moments, and surface pressure fields. Good agreement was obtained between the predictions and experiments for most of the conditions, particularly with the Spalart-Allmaras DES model.

  16. The Cognitive Processes Associated with Occupational/Career Indecision: A Model for Gifted Adolescents

    ERIC Educational Resources Information Center

    Jung, Jae Yup

    2013-01-01

    This study developed and tested a new model of the cognitive processes associated with occupational/career indecision for gifted adolescents. A survey instrument with rigorous psychometric properties, developed from a number of existing instruments, was administered to a sample of 687 adolescents attending three academically selective high schools…

  17. Geochemical Models of Water-Quality Changes During Aquifer Storage Recovery (ASR) Cycle Tests, Phase 1: Geochemical Models Using Existing Data

    DTIC Science & Technology

    2006-09-01

    Richardson, in review). Figure 1 shows the lithostratigraphic setting for Eocene through Miocene strata, and the occurrence of hydrostratigraphic units of...basal Hawthorn unit lies unconformably on lithologies informally called “Eocene limestones,” which consist of Suwannee Limestone, Ocala Limestone

  18. Predicting fire spread in Arizona's oak chaparral

    Treesearch

    A. W. Lindenmuth; James R. Davis

    1973-01-01

    Five existing fire models, both experimental and theoretical, did not adequately predict rate-of-spread (ROS) when tested on single- and multiclump fires in oak chaparral in Arizona. A statistical model developed using essentially the same input variables but weighted differently accounted for 81 percent of the variation in ROS. A chemical coefficient that accounts for...

  19. Classifying Correlation Matrices into Relatively Homogeneous Subgroups: A Cluster Analytic Approach

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Chan, Wai

    2005-01-01

    Researchers are becoming interested in combining meta-analytic techniques and structural equation modeling to test theoretical models from a pool of studies. Most existing procedures are based on the assumption that all correlation matrices are homogeneous. Few studies have addressed what the next step should be when studies being analyzed are…

  20. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2014-08-20

    many different synthetic series can be generated at once. If the series already exists in the dataset, it is updated to reflect the new values. The...Testing for causality: a personal viewpoint. Journal of Economic Dynamics and Control, 2, 329-352. Manning, C., Raghavan, R., and Schütze, H. (2008

  1. Digital control of wind tunnel magnetic suspension and balance systems

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.; Goodyer, Michael J.; Eskins, Jonathan; Parker, David; Halford, Robert J.

    1987-01-01

    Digital controllers are being developed for wind tunnel magnetic suspension and balance systems, which in turn permit wind tunnel testing of aircraft models free from support interference. Hardware and software features of two existing digital control systems are reviewed. Some aspects of model position sensing and system calibration are also discussed.

  2. Pedagogical Catalysts of Civic Competence: The Development of a Critical Epistemological Model for Community-Based Learning

    ERIC Educational Resources Information Center

    Stokamer, Stephanie

    2013-01-01

    Democratic problem-solving necessitates an active and informed citizenry, but existing research on service-learning has shed little light on the relationship between pedagogical practices and civic competence outcomes. This study developed and tested a model to represent that relationship and identified pedagogical catalysts of civic competence…

  3. Religion as a Resource for Positive Youth Development: Religion, Social Capital, and Moral Outcomes

    ERIC Educational Resources Information Center

    King, Pamela Ebstyne; Furrow, James L.

    2004-01-01

    Although existing literature demonstrates that developmental benefits are associated with religion for adolescents, little is understood about the dynamics of this relationship. Drawing on social capital theory, this study tested a conceptual model exploring socially embedded religious influences on moral outcomes. A three-dimensional model of…

  4. Plans for Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Ballmann, Josef; Bhatia, Kumar; Blades, Eric; Boucke, Alexander; Chwalowski, Pawel; Dietz, Guido; Dowell, Earl; Florance, Jennifer P.; Hansen, Thorsten

    2011-01-01

    This paper summarizes the plans for the first Aeroelastic Prediction Workshop. The workshop is designed to assess the state of the art of computational methods for predicting unsteady flow fields and aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify computational and experimental areas needing additional research and development. Three subject configurations have been chosen from existing wind tunnel data sets where there is pertinent experimental data available for comparison. For each case chosen, the wind tunnel testing was conducted using forced oscillation of the model at specified frequencies.

  5. Dealing with Omitted and Not-Reached Items in Competence Tests: Evaluating Approaches Accounting for Missing Responses in Item Response Theory Models

    ERIC Educational Resources Information Center

    Pohl, Steffi; Gräfe, Linda; Rose, Norman

    2014-01-01

    Data from competence tests usually show a number of missing responses on test items due to both omitted and not-reached items. Different approaches for dealing with missing responses exist, and there are no clear guidelines on which of those to use. While classical approaches rely on an ignorable missing data mechanism, the most recently developed…

  6. Predicting novel substrates for enzymes with minimal experimental effort with active learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pertusi, Dante A.; Moura, Matthew E.; Jeffryes, James G.

    Enzymatic substrate promiscuity is more ubiquitous than previously thought, with significant consequences for understanding metabolism and its application to biocatalysis. This realization has given rise to the need for efficient characterization of enzyme promiscuity. Enzyme promiscuity is currently characterized with a limited number of human-selected compounds that may not be representative of the enzyme's versatility. While testing large numbers of compounds may be impractical, computational approaches can exploit existing data to determine the most informative substrates to test next, thereby more thoroughly exploring an enzyme's versatility. To demonstrate this, we used existing studies and tested compounds for four different enzymes, developed support vector machine (SVM) models using these datasets, and selected additional compounds for experiments using an active learning approach. SVMs trained on a chemically diverse set of compounds were discovered to achieve maximum accuracies of ~80% using ~33% fewer compounds than datasets based on all compounds tested in existing studies. Active learning-selected compounds for testing resolved apparent conflicts in the existing training data, while adding diversity to the dataset. The application of these algorithms to wide arrays of metabolic enzymes would result in a library of SVMs that can predict high-probability promiscuous enzymatic reactions and could prove a valuable resource for the design of novel metabolic pathways.

  7. Predicting novel substrates for enzymes with minimal experimental effort with active learning.

    PubMed

    Pertusi, Dante A; Moura, Matthew E; Jeffryes, James G; Prabhu, Siddhant; Walters Biggs, Bradley; Tyo, Keith E J

    2017-11-01

    Enzymatic substrate promiscuity is more ubiquitous than previously thought, with significant consequences for understanding metabolism and its application to biocatalysis. This realization has given rise to the need for efficient characterization of enzyme promiscuity. Enzyme promiscuity is currently characterized with a limited number of human-selected compounds that may not be representative of the enzyme's versatility. While testing large numbers of compounds may be impractical, computational approaches can exploit existing data to determine the most informative substrates to test next, thereby more thoroughly exploring an enzyme's versatility. To demonstrate this, we used existing studies and tested compounds for four different enzymes, developed support vector machine (SVM) models using these datasets, and selected additional compounds for experiments using an active learning approach. SVMs trained on a chemically diverse set of compounds were discovered to achieve maximum accuracies of ~80% using ~33% fewer compounds than datasets based on all compounds tested in existing studies. Active learning-selected compounds for testing resolved apparent conflicts in the existing training data, while adding diversity to the dataset. The application of these algorithms to wide arrays of metabolic enzymes would result in a library of SVMs that can predict high-probability promiscuous enzymatic reactions and could prove a valuable resource for the design of novel metabolic pathways. Copyright © 2017 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
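
    A minimal uncertainty-sampling sketch of the active learning loop, using scikit-learn's SVC on synthetic binary "fingerprints"; the compound representations, kernel, and selection batch sizes used in the study may differ:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Stand-in for compound fingerprints: 500 compounds, 32 bits, with a
        # hidden linear rule defining which compounds are substrates.
        X = rng.integers(0, 2, size=(500, 32)).astype(float)
        y = (X @ rng.normal(size=32) + 0.5 * rng.normal(size=500) > 0).astype(int)

        labeled = list(rng.choice(500, size=20, replace=False))   # initial assays
        pool = [i for i in range(500) if i not in labeled]

        for rnd in range(10):
            clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
            # Uncertainty sampling: assay the pool compound closest to the margin.
            pick = pool.pop(int(np.argmin(np.abs(clf.decision_function(X[pool])))))
            labeled.append(pick)                    # "run the experiment"
            acc = float((clf.predict(X[pool]) == y[pool]).mean())
            print(f"round {rnd}: {len(labeled)} labeled, pool accuracy {acc:.2f}")

    Selecting the most ambiguous compound each round is what resolves conflicts in the training data while adding diversity to it, mirroring the behavior described above.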

  8. Decision analytic models for Alzheimer's disease: state of the art and future directions.

    PubMed

    Cohen, Joshua T; Neumann, Peter J

    2008-05-01

    Decision analytic policy models for Alzheimer's disease (AD) enable researchers and policy makers to investigate questions about the costs and benefits of a wide range of existing and potential screening, testing, and treatment strategies. Such models permit analysts to compare existing alternatives, explore hypothetical scenarios, and test the strength of underlying assumptions in an explicit, quantitative, and systematic way. Decision analytic models can best be viewed as complementing clinical trials both by filling knowledge gaps not readily addressed by empirical research and by extrapolating beyond the surrogate markers recorded in a trial. We identified and critiqued 13 distinct AD decision analytic policy models published since 1997. Although existing models provide useful insights, they also have a variety of limitations. (1) They generally characterize disease progression in terms of cognitive function and do not account for other distinguishing features, such as behavioral symptoms, functional performance, and the emotional well-being of AD patients and caregivers. (2) Many describe disease progression in terms of a limited number of discrete states, thus constraining the level of detail that can be used to characterize both changes in patient status and the relationships between disease progression and other factors, such as residential status, that influence outcomes of interest. (3) They have focused almost exclusively on evaluating drug treatments, thus neglecting other disease management strategies and combinations of pharmacologic and nonpharmacologic interventions. Future AD models should facilitate more realistic and compelling evaluations of various interventions to address the disease. An improved model will allow decision makers to better characterize the disease, to better assess the costs and benefits of a wide range of potential interventions, and to better evaluate the incremental costs and benefits of specific interventions used in conjunction with other disease management strategies.

  9. Multilevel structural equation models for assessing moderation within and across levels of analysis.

    PubMed

    Preacher, Kristopher J; Zhang, Zhen; Zyphur, Michael J

    2016-06-01

    Social scientists are increasingly interested in multilevel hypotheses, data, and statistical models as well as moderation or interactions among predictors. The result is a focus on hypotheses and tests of multilevel moderation within and across levels of analysis. Unfortunately, existing approaches to multilevel moderation have a variety of shortcomings, including conflated effects across levels of analysis and bias due to using observed cluster averages instead of latent variables (i.e., "random intercepts") to represent higher-level constructs. To overcome these problems and elucidate the nature of multilevel moderation effects, we introduce a multilevel structural equation modeling (MSEM) logic that clarifies the nature of the problems with existing practices and remedies them with latent variable interactions. This remedy uses random coefficients and/or latent moderated structural equations (LMS) for unbiased tests of multilevel moderation. We describe our approach and provide an example using the publicly available High School and Beyond data, with Mplus syntax in the Appendix. Our MSEM method eliminates problems of conflated multilevel effects and reduces bias in parameter estimates while offering a coherent framework for conceptualizing and testing multilevel moderation effects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
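
    For orientation, a standard two-level random-slope formulation of cross-level moderation, in illustrative notation (the paper's MSEM/LMS machinery additionally replaces observed cluster means with latent group-level variables):

        Level 1:  y_{ij} = \beta_{0j} + \beta_{1j} x_{ij} + r_{ij}
        Level 2:  \beta_{0j} = \gamma_{00} + \gamma_{01} w_j + u_{0j}
                  \beta_{1j} = \gamma_{10} + \gamma_{11} w_j + u_{1j}

    The cross-level interaction \gamma_{11} captures how the group-level moderator w_j changes the within-group slope; modeling the group-level standing on x_{ij} with a latent intercept rather than the observed cluster mean is what removes the bias the authors describe.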

  10. 40 CFR 1054.245 - How do I determine deterioration factors from exhaust durability testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., either with pre-existing test data or with new emission measurements. (a) You may ask us to approve... already given us these data for certifying other engines in the same or earlier model years. Use good... type or mixture of fuel expected to have the highest combustion and exhaust temperatures. For dual-fuel...

  11. 40 CFR 1054.245 - How do I determine deterioration factors from exhaust durability testing?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., either with pre-existing test data or with new emission measurements. (a) You may ask us to approve... already given us these data for certifying other engines in the same or earlier model years. Use good... type or mixture of fuel expected to have the highest combustion and exhaust temperatures. For dual-fuel...

  12. 40 CFR 1054.245 - How do I determine deterioration factors from exhaust durability testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., either with pre-existing test data or with new emission measurements. (a) You may ask us to approve... already given us these data for certifying other engines in the same or earlier model years. Use good... type or mixture of fuel expected to have the highest combustion and exhaust temperatures. For dual-fuel...

  13. A toolbox and record for scientific models

    NASA Technical Reports Server (NTRS)

    Ellman, Thomas

    1994-01-01

    Computational science presents a host of challenges for the field of knowledge-based software design. Scientific computation models are difficult to construct. Models constructed by one scientist are easily misapplied by other scientists to problems for which they are not well-suited. Finally, models constructed by one scientist are difficult for others to modify or extend to handle new types of problems. Construction of scientific models actually involves much more than the mechanics of building a single computational model. In the course of developing a model, a scientist will often test a candidate model against experimental data or against a priori expectations. Test results often lead to revisions of the model and a consequent need for additional testing. During a single model development session, a scientist typically examines a whole series of alternative models, each using different simplifying assumptions or modeling techniques. A useful scientific software design tool must support these aspects of the model development process as well. In particular, it should propose and carry out tests of candidate models. It should analyze test results and identify models and parts of models that must be changed. It should determine what types of changes can potentially cure a given negative test result. It should organize candidate models, test data, and test results into a coherent record of the development process. Finally, it should exploit the development record for two purposes: (1) automatically determining the applicability of a scientific model to a given problem; (2) supporting revision of a scientific model to handle a new type of problem. Existing knowledge-based software design tools must be extended in order to provide these facilities.

  14. Diagnostic utility of appetite loss in addition to existing prediction models for community-acquired pneumonia in the elderly: a prospective diagnostic study in acute care hospitals in Japan

    PubMed Central

    Yamamoto, Yosuke; Terada, Kazuhiko; Ohta, Mitsuyasu; Mikami, Wakako; Yokota, Hajime; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuma, Shingo; Fukuhara, Shunichi

    2017-01-01

    Objective Diagnosis of community-acquired pneumonia (CAP) in the elderly is often delayed because of atypical presentation and non-specific symptoms, such as appetite loss, falls and disturbance in consciousness. The aim of this study was to investigate the external validity of existing prediction models and the added value of the non-specific symptoms for the diagnosis of CAP in elderly patients. Design Prospective cohort study. Setting General medicine departments of three teaching hospitals in Japan. Participants A total of 109 elderly patients who consulted for upper respiratory symptoms between 1 October 2014 and 30 September 2016. Main outcome measures The reference standard for CAP was chest radiograph evaluated by two certified radiologists. The existing models were externally validated for diagnostic performance by calibration plot and discrimination. To evaluate the additional value of the non-specific symptoms to the existing prediction models, we developed an extended logistic regression model. Calibration, discrimination, category-free net reclassification improvement (NRI) and decision curve analysis (DCA) were investigated in the extended model. Results Among the existing models, the model by van Vugt demonstrated the best performance, with an area under the curve of 0.75 (95% CI 0.63 to 0.88); calibration plot showed good fit despite a significant Hosmer-Lemeshow test (p=0.017). Among the non-specific symptoms, appetite loss had positive likelihood ratio of 3.2 (2.0–5.3), negative likelihood ratio of 0.4 (0.2–0.7) and OR of 7.7 (3.0–19.7). Addition of appetite loss to the model by van Vugt led to improved calibration at p=0.48, NRI of 0.53 (p=0.019) and higher net benefit by DCA. Conclusions Information on appetite loss improved the performance of an existing model for the diagnosis of CAP in the elderly. PMID:29122806
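
    To make the reported likelihood ratios concrete, consider a hedged worked example (the 20% pre-test probability is assumed for illustration, not taken from the study):

        \text{pre-test odds} = 0.20/0.80 = 0.25
        \text{post-test odds} = 0.25 \times LR^{+} = 0.25 \times 3.2 = 0.80
        \text{post-test probability} = 0.80/1.80 \approx 0.44

    That is, a positive finding of appetite loss would raise a 20% prior probability of CAP to roughly 44%.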

  15. Solar-System Tests of Gravitational Theories

    NASA Technical Reports Server (NTRS)

    Shapiro, Irwin I.

    2005-01-01

    This research is aimed at testing gravitational theory, primarily on an interplanetary scale and using mainly observations of objects in the solar system. Our goal is either to detect departures from the standard model (general relativity) - if any exist within the level of sensitivity of our data - or to support this model by placing tighter bounds on any departure from it. For this project, we have analyzed a combination of observational data with our model of the solar system, including planetary radar ranging, lunar laser ranging, and spacecraft tracking, as well as pulsar timing and pulsar VLBI measurements.
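
    One standard observable in such solar-system tests is the Shapiro time delay of a radar signal passing near the Sun; in parametrized post-Newtonian form (a textbook expression quoted for context, not taken from the report):

        \Delta t \approx (1+\gamma)\,\frac{2GM_\odot}{c^3}\,\ln\!\frac{4\,r_e\,r_p}{d^2}

    where \gamma = 1 in general relativity, r_e and r_p are the heliocentric distances of Earth and the target planet or spacecraft, and d is the ray's closest approach to the Sun; tightening the bound on |\gamma - 1| is one way such analyses support the standard model.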

  16. Field investigation of the drift shadow

    USGS Publications Warehouse

    Su, G.W.; Kneafsey, T.J.; Ghezzehei, T.A.; Cook, P.J.; Marshall, B.D.

    2006-01-01

    The "Drift Shadow" is defined as the relatively drier region that forms below subsurface cavities or drifts in unsaturated rock. Its existence has been predicted through analytical and numerical models of unsaturated flow. However, these theoretical predictions have not been demonstrated empirically to date. In this project we plan to test the drift shadow concept through field investigations and compare our observations to simulations. Based on modeling studies we have an identified a suitable site to perform the study at an inactive mine in a sandstone formation. Pretest modeling studies and preliminary characterization of the site are being used to develop the field scale tests.

  17. An optimum organizational structure for a large earth-orbiting multidisciplinary space base. Ph.D. Thesis - Fla. State Univ., 1973

    NASA Technical Reports Server (NTRS)

    Ragusa, J. M.

    1975-01-01

    An optimum hypothetical organizational structure was studied for a large earth-orbiting, multidisciplinary research and applications space base manned by a crew of technologists. Because such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than with the empirical testing of the model. The essential finding of this research was that a four-level project type total matrix model will optimize the efficiency and effectiveness of space base technologists.

  18. Testing the efficacy of existing force-endurance models to account for the prevalence of obesity in the workforce.

    PubMed

    Pajoutan, Mojdeh; Cavuoto, Lora A; Mehta, Ranjana K

    2017-10-01

    This study evaluates whether the existing force-endurance relationship models are predictive of endurance time for overweight and obese individuals, and if not, provide revised models that can be applied for ergonomics practice. Data was collected from 141 participants (49 normal weight, 50 overweight, 42 obese) who each performed isometric endurance tasks of hand grip, shoulder flexion, and trunk extension at four levels of relative workload. Subject-specific fatigue rates and a general model of the force-endurance relationship were determined and compared to two fatigue models from the literature. There was a lack of fit between previous models and the current data for the grip (ICC = 0.8), with a shift toward lower endurance times for the new data. Application of the revised models can facilitate improved workplace design and job evaluation to accommodate the capacities of the current workforce.
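
    Force-endurance models of the kind being revised are commonly power laws; a hedged generic form (constants illustrative, not the paper's fitted values) is

        ET = a\,f^{-b}

    where ET is endurance time, f is relative workload as a fraction of maximum voluntary contraction, and a and b are muscle-group-specific constants; the shorter grip endurance times observed here would correspond to a smaller a and/or larger b for the current workforce than in the legacy models.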

  19. Enhanced model of photovoltaic cell/panel/array considering the direct and reverse modes

    NASA Astrophysics Data System (ADS)

    Zegaoui, Abdallah; Boutoubat, Mohamed; Sawicki, Jean-Paul; Kessaissia, Fatma Zohra; Djahbar, Abdelkader; Aillerie, Michel

    2018-05-01

    This paper presents an improved generalized physical model for photovoltaic (PV) cells, panels, and arrays that accounts for the behavior of these devices under both direct and reverse bias. Existing PV physical models are generally very efficient at simulating the influence of irradiation changes on the short-circuit current, but they cannot capture the influence of temperature changes. The Enhanced Direct and Reverse Mode model, named the EDRM model, captures the influence of both temperature and irradiation on the short-circuit current in the reverse mode of the considered PV devices. Owing to its easy implementation, the proposed model can be a useful tool for the development of new photovoltaic systems that accounts for environmental conditions in a more exhaustive manner. The developed model was tested on a commercial PV panel and gives satisfactory results compared with the parameters given in the manufacturer's datasheet.
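
    For readers who want to experiment, the sketch below implements a single-diode PV cell model extended with Bishop's reverse-breakdown term, which is one common way to capture reverse-mode behavior of the sort the EDRM model addresses; all parameter values are illustrative and are not taken from the paper.

        import numpy as np
        from scipy.optimize import brentq

        # Single-diode model with a Bishop-style reverse-bias term (illustrative values).
        Iph, I0, n, Vt = 5.0, 1e-9, 1.3, 0.0257  # photocurrent, sat. current, ideality, thermal voltage
        Rs, Rsh = 0.01, 100.0                    # series / shunt resistance (ohm)
        a, m, Vbr = 0.1, 3.0, -15.0              # breakdown fraction, exponent, breakdown voltage (V)

        def residual(I, V):
            Vj = V + I * Rs                                       # junction voltage
            shunt = (Vj / Rsh) * (1 + a * (1 - Vj / Vbr) ** -m)   # reverse-mode enhancement
            return Iph - I0 * (np.exp(Vj / (n * Vt)) - 1) - shunt - I

        def cell_current(V):
            # Solve the implicit equation for the cell current at a given voltage.
            return brentq(residual, -50.0, 50.0, args=(V,))

        for V in (-10.0, 0.0, 0.5):
            print(f"V = {V:6.1f} V  ->  I = {cell_current(V):6.3f} A")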

  20. High-resolution heat-transfer-coefficient maps applicable to compound-curve surfaces using liquid crystals in a transient wind tunnel

    NASA Technical Reports Server (NTRS)

    Jones, Terry V.; Hippensteele, Steven A.

    1988-01-01

    Tests were performed in a transient heat transfer tunnel in which the model under test was preheated prior to allowing room temperature air to be suddenly drawn over the model. The resulting movement of isothermal contours on the model is revealed using a surface coating of thermochromic liquid crystals that display distinctive colors at particular temperatures. A video record is obtained of a temperature and time data pair for all points on the model during a single test. Experiments on a duct model are reported in which the model was preheated using a hot air stream. A manner in which initial model temperature nonuniformities could be taken into account was investigated. The duct model was also tested with a steady-state measurement technique and results were compared with the transient measurements, but recognizing that differences existed between the upstream thermal boundary conditions. The steady-state and transient measurements were shown to be consistent with predicted values. The main advantage of this transient heat transfer technique using liquid crystals is that since the test model need not be actively heated, high-resolution measurements on surfaces with complex shapes may be obtained.
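
    The data reduction for this transient technique rests on the one-dimensional semi-infinite-solid conduction solution: when the liquid crystal shows that the surface reaches a known temperature T_s at time t after the flow starts, the heat transfer coefficient h follows from (standard form, stated for context)

        \frac{T_s(t) - T_i}{T_{aw} - T_i} = 1 - e^{\beta^2}\,\mathrm{erfc}(\beta),
        \qquad \beta = \frac{h\sqrt{t}}{\sqrt{\rho c k}}

    where T_i is the initial model temperature, T_{aw} the adiabatic wall (driving) temperature, and \rho c k the substrate's thermal product; h is obtained by inverting this relation for each surface point.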

  1. Characterization of Orbital Debris via Hyper-Velocity Ground-Based Tests

    NASA Technical Reports Server (NTRS)

    Cowardin, Heather

    2016-01-01

    The purpose of the DebriSat project is to replicate a hyper-velocity fragmentation event using modern-day spacecraft materials and construction techniques to improve the existing DoD and NASA breakup models.

  2. Internet-Based HIV and Sexually Transmitted Infection Testing in British Columbia, Canada: Opinions and Expectations of Prospective Clients

    PubMed Central

    Hottes, Travis Salway; Farrell, Janine; Bondyra, Mark; Haag, Devon; Shoveller, Jean

    2012-01-01

    Background The feasibility and acceptability of Internet-based sexually transmitted infection (STI) testing have been demonstrated; however, few programs have included testing for human immunodeficiency virus (HIV). In British Columbia, Canada, a new initiative will offer online access to chlamydia, gonorrhea, syphilis, and HIV testing, integrated with existing clinic-based services. We presented the model to gay men and other men who have sex with men (MSM) and existing clinic clients through a series of focus groups. Objective To identify perceived benefits, concerns, and expectations of a new model for Internet-based STI and HIV testing among potential end users. Methods Participants were recruited through email invitations, online classifieds, and flyers in STI clinics. A structured interview guide was used. Focus groups were audio recorded, and an observer took detailed field notes. Analysts then listened to audio recordings to validate field notes. Data were coded and analyzed using a scissor-and-sort technique. Results In total, 39 people participated in six focus groups. Most were MSM, and all were active Internet users and experienced with STI/HIV testing. Perceived benefits of Internet-based STI testing included anonymity, convenience, and client-centered control. Salient concerns were reluctance to provide personal information online, distrust of security of data provided online, and the need for comprehensive pretest information and support for those receiving positive results, particularly for HIV. Suggestions emerged for mitigation of these concerns: provide up-front and detailed information about the model, ask only the minimal information required for testing, give positive results only by phone or in person, and ensure that those testing positive are referred for counseling and support. End users expected Internet testing to offer continuous online service delivery, from booking appointments, to transmitting information to the laboratory, to getting prescriptions. Most participants said they would use the service or recommend it to others. Those who indicated they would be unlikely to use it generally either lived near an STI clinic or routinely saw a family doctor with whom they were comfortable testing. Participants expected that the service would provide the greatest benefit to individuals who do not already have access to sensitive sexual health services, are reluctant to test due to stigma, or want to take immediate action (eg, because of a recent potential STI/HIV exposure). Conclusions Internet-based STI/HIV testing has the potential to reduce barriers to testing, as a complement to existing clinic-based services. Trust in the new online service, however, is a prerequisite to client uptake and may be engendered by transparency of information about the model, and by accounting for concerns related to confidentiality, data usage, and provision of positive (especially HIV) results. Ongoing evaluation of this new model will be essential to its success and to the confidence of its users. PMID:22394997

  3. Internet-based HIV and sexually transmitted infection testing in British Columbia, Canada: opinions and expectations of prospective clients.

    PubMed

    Hottes, Travis Salway; Farrell, Janine; Bondyra, Mark; Haag, Devon; Shoveller, Jean; Gilbert, Mark

    2012-03-06

    The feasibility and acceptability of Internet-based sexually transmitted infection (STI) testing have been demonstrated; however, few programs have included testing for human immunodeficiency virus (HIV). In British Columbia, Canada, a new initiative will offer online access to chlamydia, gonorrhea, syphilis, and HIV testing, integrated with existing clinic-based services. We presented the model to gay men and other men who have sex with men (MSM) and existing clinic clients through a series of focus groups. To identify perceived benefits, concerns, and expectations of a new model for Internet-based STI and HIV testing among potential end users. Participants were recruited through email invitations, online classifieds, and flyers in STI clinics. A structured interview guide was used. Focus groups were audio recorded, and an observer took detailed field notes. Analysts then listened to audio recordings to validate field notes. Data were coded and analyzed using a scissor-and-sort technique. In total, 39 people participated in six focus groups. Most were MSM, and all were active Internet users and experienced with STI/HIV testing. Perceived benefits of Internet-based STI testing included anonymity, convenience, and client-centered control. Salient concerns were reluctance to provide personal information online, distrust of security of data provided online, and the need for comprehensive pretest information and support for those receiving positive results, particularly for HIV. Suggestions emerged for mitigation of these concerns: provide up-front and detailed information about the model, ask only the minimal information required for testing, give positive results only by phone or in person, and ensure that those testing positive are referred for counseling and support. End users expected Internet testing to offer continuous online service delivery, from booking appointments, to transmitting information to the laboratory, to getting prescriptions. Most participants said they would use the service or recommend it to others. Those who indicated they would be unlikely to use it generally either lived near an STI clinic or routinely saw a family doctor with whom they were comfortable testing. Participants expected that the service would provide the greatest benefit to individuals who do not already have access to sensitive sexual health services, are reluctant to test due to stigma, or want to take immediate action (eg, because of a recent potential STI/HIV exposure). Internet-based STI/HIV testing has the potential to reduce barriers to testing, as a complement to existing clinic-based services. Trust in the new online service, however, is a prerequisite to client uptake and may be engendered by transparency of information about the model, and by accounting for concerns related to confidentiality, data usage, and provision of positive (especially HIV) results. Ongoing evaluation of this new model will be essential to its success and to the confidence of its users.

  4. Use of Data Libraries for IAEA Nuclear Security Assessment Methodologies (NUSAM) [section 5.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shull, D.; Lane, M.

    2015-06-23

    Data libraries are essential for the characterization of the facility and provide the documented input which enables the facility assessment results and subsequent conclusions. Data libraries are historical, verifiable, quantified, and applicable collections of testing data on different types of barriers, sensors, cameras, procedures, and/or personnel. Data libraries are developed and maintained as part of any assessment program or process. Data is collected during the initial stages of facility characterization to aid in the model and/or simulation development process. Data library values may also be developed through the use of state testing centers and/or site resources by testing different types of barriers, sensors, cameras, procedures, and/or personnel. If no data exists, subject matter expert opinion and manufacturer's specifications/testing values can be the basis for initially assigning values, but are generally less reliable and lack appropriate confidence measures. The use of existing data libraries that have been developed by a state testing organization reduces the assessment costs by establishing standard delay, detection, and assessment values for use by multiple sites or facilities where common barriers and alarm systems exist.

  5. Simulation of a Cold Gas Thruster System and Test Data Correlation

    NASA Technical Reports Server (NTRS)

    Hauser, Daniel M.; Quinn, Frank D.

    2012-01-01

    During developmental testing of the Ascent Abort 1 (AA-1) cold gas thruster system, unexpected behavior was detected. Upon further review, the design as it existed might not have met the requirements. To determine the best approach for modifying the design, the system was modeled with a dynamic fluid analysis tool (EASY5). The system model consisted of the nitrogen storage tank, pressure regulator, thruster valve, nozzle, and the associated interconnecting line lengths. The regulator and thruster valves were modeled using a combination of the fluid and mechanical modules available in EASY5. The simulation results were then compared against actual system test data. The simulation results exhibited behaviors similar to the test results, such as the pressure regulator's response to thruster firings. Potential design solutions were investigated using the analytical model parameters, including increasing the volume downstream of the regulator and increasing the orifice area. Both were shown to improve the regulator response.

  6. Software Testing and Verification in Climate Model Development

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to a complex multi-disciplinary system. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated, and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
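
    As a sketch of the fine-grained testing being advocated, the example below unit-tests a single numerical kernel against a reference value and a physical invariant rather than a full model run; `saturation_vapor_pressure` is an illustrative stand-in, not a routine from any particular climate model.

        import numpy as np

        def saturation_vapor_pressure(T_kelvin):
            """Bolton (1980) approximation, in Pa, for temperature in K."""
            T_c = T_kelvin - 273.15
            return 611.2 * np.exp(17.67 * T_c / (T_c + 243.5))

        def test_reference_value():
            # Pin the kernel to a known value at 0 degrees C.
            assert abs(saturation_vapor_pressure(273.15) - 611.2) < 1e-6

        def test_monotonicity():
            # Physical invariant: saturation pressure must increase with temperature.
            T = np.linspace(230.0, 320.0, 500)
            assert np.all(np.diff(saturation_vapor_pressure(T)) > 0)

    Run under pytest, such tests isolate defects to a single routine in seconds, in contrast to the system-level regression runs described above.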

  7. Community Sediment Transport Modeling, National Ocean Partnership Program

    DTIC Science & Technology

    2009-12-01

    delta. A high-resolution, one-dimensional model that resolves the phase of the forcing gravity waves is being used to test the hypothesized mechanisms... dimensional process models to operational elements in the CSTMS framework. Sherwood and Ferre modified the existing algorithms for tracking stratigraphy... Verdes shelf, California. Continental Shelf Research (revised manuscript submitted), [refereed] Frank, D. P., D. L. Foster, and C. R. Sherwood

  8. Quantitative assessment of computational models for retinotopic map formation

    PubMed Central

    Sterratt, David C; Cutts, Catherine S; Willshaw, David J; Eglen, Stephen J

    2014-01-01

    Molecular and activity-based cues acting together are thought to guide retinal axons to their terminal sites in vertebrate optic tectum or superior colliculus (SC) to form an ordered map of connections. The details of the mechanisms involved, and the degree to which they might interact, are still not well understood. We have developed a framework within which existing computational models can be assessed in an unbiased and quantitative manner against a set of experimental data curated from the mouse retinocollicular system. Our framework facilitates comparison between models, testing new models against known phenotypes and simulating new phenotypes in existing models. We have used this framework to assess four representative models that combine Eph/ephrin gradients and/or activity-based mechanisms and competition. Two of the models were updated from their original form to fit into our framework. The models were tested against five different phenotypes: wild type, Isl2-EphA3 ki/ki, Isl2-EphA3 ki/+, ephrin-A2,A3,A5 triple knock-out (TKO), and Math5 -/- (Atoh7). Two models successfully reproduced the extent of the Math5 -/- anteromedial projection, but only one of those could account for the collapse point in Isl2-EphA3 ki/+. The models needed a weak anteroposterior gradient in the SC to reproduce the residual order in the ephrin-A2,A3,A5 TKO phenotype, suggesting either an incomplete knock-out or the presence of another guidance molecule. Our article demonstrates the importance of testing retinotopic models against as full a range of phenotypes as possible, and we have made available the MATLAB software we wrote to facilitate this process. © 2014 Wiley Periodicals, Inc. Develop Neurobiol 75: 641–666, 2015 PMID:25367067

  9. Validation of the Nickel Biotic Ligand Model for Locally Relevant Species in Australian Freshwaters.

    PubMed

    Peters, Adam; Merrington, Graham; Schlekat, Christian; De Schamphelaere, Karel; Stauber, Jennifer; Batley, Graeme; Harford, Andrew; van Dam, Rick; Pease, Ceiwen; Mooney, Tom; Warne, Michael; Hickey, Chris; Glazebrook, Peter; Chapman, John; Smith, Ross; Krassoi, Rick

    2018-06-20

    Australian freshwaters have relatively low water hardness and different calcium to magnesium ratios compared with those in Europe. The hardness values of a substantial proportion of Australian freshwaters fall below the application boundary of the existing European nickel Biotic Ligand Models (NiBLMs) of 2 mg Ca/L. Toxicity testing was undertaken using Hydra viridissima to assess the predictive ability of the existing NiBLM for this species in extremely soft waters. This testing revealed a greater competitive effect of calcium and magnesium with nickel for binding to the biotic ligand in soft water (<10 mg CaCO3/L) than at higher water hardness. Modifications were made to the NiBLM by increasing the binding constants for Ca and Mg at the biotic ligand to account for the softer waters encountered in Australia and the more important competitive effect of calcium and magnesium on nickel toxicity. To validate the modified NiBLM, ecotoxicity testing was performed on five Australian test species in five different natural Australian waters. Overall, no single water chemistry parameter was able to indicate the trends in toxicity to all of the test species. The modified NiBLMs were able to predict the toxicity of nickel to the test species in the validation studies in natural waters better than the existing NiBLMs. This work suggests that the overarching mechanisms defining nickel bioavailability to freshwater species are globally similar, and that NiBLMs can be used in all freshwater systems with minor modifications. This article is protected by copyright. All rights reserved.
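
    In generic BLM form (notation for context, not the paper's calibrated constants), the fraction of biotic ligand sites occupied by nickel is

        f_{\mathrm{NiBL}} = \frac{K_{\mathrm{NiBL}}[\mathrm{Ni}^{2+}]}
        {1 + K_{\mathrm{NiBL}}[\mathrm{Ni}^{2+}] + K_{\mathrm{CaBL}}[\mathrm{Ca}^{2+}] + K_{\mathrm{MgBL}}[\mathrm{Mg}^{2+}]}

    so increasing the Ca and Mg binding constants, as done in this work, strengthens the predicted protective (competitive) effect of hardness cations, which matters most in very soft waters where those terms would otherwise be negligible.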

  10. Estimating daily forest carbon fluxes using a combination of ground and remotely sensed data

    NASA Astrophysics Data System (ADS)

    Chirici, Gherardo; Chiesi, Marta; Corona, Piermaria; Salvati, Riccardo; Papale, Dario; Fibbi, Luca; Sirca, Costantino; Spano, Donatella; Duce, Pierpaolo; Marras, Serena; Matteucci, Giorgio; Cescatti, Alessandro; Maselli, Fabio

    2016-02-01

    Several studies have demonstrated that Monteith's approach can efficiently predict forest gross primary production (GPP), while the modeling of net ecosystem production (NEP) is more critical, requiring the additional simulation of forest respirations. The NEP of different forest ecosystems in Italy is currently simulated by the use of a remote sensing driven parametric model (modified C-Fix) and a biogeochemical model (BIOME-BGC). The outputs of the two models, which simulate forests in quasi-equilibrium conditions, are combined to estimate the carbon fluxes of actual conditions using information regarding the existing woody biomass. The estimates derived from the methodology have been tested against daily reference GPP and NEP data collected through the eddy correlation technique at five study sites in Italy. The first test concerned the theoretical validity of the simulation approach at both annual and daily time scales and was performed using optimal model drivers (i.e., collected or calibrated over the site measurements). Next, the test was repeated to assess the operational applicability of the methodology, which was driven by spatially extended data sets (i.e., data derived from existing wall-to-wall digital maps). A good estimation accuracy was generally obtained for GPP and NEP when using optimal model drivers. The use of spatially extended data sets worsens the accuracy to a varying degree, which is properly characterized. The model drivers with the most influence on the flux modeling strategy are, in increasing order of importance, forest type, soil features, meteorology, and forest woody biomass (growing stock volume).
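
    Monteith's light-use-efficiency logic, on which modified C-Fix rests, can be summarized generically as (the operational model adds further scalars not reproduced here):

        GPP = \varepsilon_{\max}\, f(T)\, f(W)\, fAPAR \cdot PAR,
        \qquad NEP = GPP - R_a - R_h

    where fAPAR is the remotely sensed fraction of absorbed photosynthetically active radiation, f(T) and f(W) are temperature and water-stress scalars, and the autotrophic and heterotrophic respiration terms are taken from BIOME-BGC rescaled by the actual woody biomass, as described above.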

  11. The seismic response of the Los Angeles basin, California

    USGS Publications Warehouse

    Wald, D.J.; Graves, R.W.

    1998-01-01

    Using strong-motion data recorded in the Los Angeles region from the 1992 (Mw 7.3) Landers earthquake, we have tested the accuracy of existing three-dimensional (3D) velocity models on the simulation of long-period (≥2 sec) ground motions in the Los Angeles basin and surrounding San Fernando and San Gabriel Valleys. First, the overall pattern and degree of long-period excitation of the basins were identified in the observations. Within the Los Angeles basin, the recorded amplitudes are about three to four times larger than at sites outside the basins; amplitudes within the San Fernando and San Gabriel Valleys are nearly a factor of 3 greater than surrounding bedrock sites. Then, using a 3D finite-difference numerical modeling approach, we analyzed how variations in 3D earth structure affect simulated waveforms, amplitudes, and the fit to the observed patterns of amplification. Significant differences exist in the 3D velocity models of southern California that we tested (Magistrale et al., 1996; Graves, 1996a; Hauksson and Haase, 1997). Major differences in the models include the velocity of the assumed background models; the depth of the Los Angeles basin; and the depth, location, and geometry of smaller basins. The largest disparities in the response of the models are seen for the San Fernando Valley and the deepest portion of the Los Angeles basin. These arise in large part from variations in the structure of the basins, particularly the effective depth extent, which is mainly due to alternative assumptions about the nature of the basin sediment fill. The general ground-motion characteristics are matched by the 3D model simulations, validating the use of 3D modeling with geologically based velocity-structure models. However, significant shortcomings exist in the overall patterns of amplification and the duration of the long-period response. The successes and limitations of the models for reproducing the recorded ground motions as discussed provide the basis and direction for necessary improvements to earth structure models, whether geologically or tomographically derived. The differences in the response of the earth models tested also translate to variable success in the ability to successfully model the data and add uncertainty to estimates of the basin response given input "scenario" earthquake source models.

  12. Evaluation of machine learning algorithms for improved risk assessment for Down's syndrome.

    PubMed

    Koivu, Aki; Korpimäki, Teemu; Kivelä, Petri; Pahikkala, Tapio; Sairanen, Mikko

    2018-05-04

    Prenatal screening generates a great amount of data that is used for predicting risk of various disorders. Prenatal risk assessment is based on multiple clinical variables and overall performance is defined by how well the risk algorithm is optimized for the population in question. This article evaluates machine learning algorithms to improve the performance of first trimester screening for Down syndrome. Machine learning algorithms pose an adaptive alternative to develop better risk assessment models using the existing clinical variables. Two real-world data sets were used to experiment with multiple classification algorithms. Implemented models were tested with a third, real-world, data set and performance was compared to a predicate method, a commercial risk assessment software. The best-performing deep neural network model gave an area under the curve of 0.96 and a detection rate of 78% at a 1% false positive rate with the test data. The support vector machine model gave an area under the curve of 0.95 and a detection rate of 61% at a 1% false positive rate with the same test data. When compared with the predicate method, the best support vector machine model was slightly inferior, but an optimized deep neural network model was able to give higher detection rates at the same false positive rate, or a similar detection rate but with a markedly lower false positive rate. This finding could further improve first trimester screening for Down syndrome, by using existing clinical variables and a large training data set derived from a specific population. Copyright © 2018 Elsevier Ltd. All rights reserved.
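
    The headline metric, detection rate at a fixed 1% false-positive rate, can be computed from any classifier's scores as sketched below; the labels and scores are synthetic placeholders, not screening data.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        # Synthetic stand-ins for true outcomes and model risk scores.
        rng = np.random.default_rng(0)
        y_true = rng.integers(0, 2, 2000)
        y_score = y_true * 0.8 + rng.normal(0.0, 0.6, 2000)

        fpr, tpr, _ = roc_curve(y_true, y_score)
        idx = np.searchsorted(fpr, 0.01, side="right") - 1  # largest FPR not exceeding 1%
        print("AUC:", round(roc_auc_score(y_true, y_score), 3),
              " DR@1%FPR:", round(tpr[idx], 3))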

  13. Predicate Argument Structure Analysis for Use Case Description Modeling

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hironori; Nakamura, Taiga; Yamaguchi, Takahira

    In a large software system development project, many documents are prepared and updated frequently. In such a situation, support is needed for looking through these documents easily to identify inconsistencies and to maintain traceability. In this research, we focus on requirements documents such as use cases and consider how to create models from use case descriptions in unformatted text. In the model construction, we propose a few semantic constraints based on the features of the use cases and use them in a predicate argument structure analysis to assign semantic labels to actors and actions. With this approach, we show that we can assign semantic labels without enhancing any existing general lexical resources, such as case frame dictionaries, and can design a less language-dependent model construction architecture. Using the constructed model, we consider a system for quality analysis of the use cases and automated test case generation to keep the traceability between document sets. We evaluated the reuse of existing use cases and automatically generated test case steps with the proposed prototype system, using real-world use cases from the development of a system based on a packaged application. Based on the evaluation, we show how to construct models with high precision from English and Japanese use case data. Also, we could generate good test cases for about 90% of the real use cases through manual improvement of the descriptions based on feedback from the quality analysis system.

  14. Testing the effects of in-stream sediment sources and sinks on simulated watershed sediment yield using the coupled U.S. Army Corps of Engineers GSSHA Model and SEDLIB Sediment Transport Library

    NASA Astrophysics Data System (ADS)

    Floyd, I. E.; Downer, C. W.; Brown, G.; Pradhan, N. R.

    2017-12-01

    The Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model is the only fully coupled overland/in-stream sediment transport model of the US Army Corps of Engineers (USACE). While the overland sediment transport formulation in GSSHA is considered state of the art, the existing in-stream sediment transport formulation is less robust. A major omission in the formulation of the existing GSSHA in-stream model is the lack of in-stream sources of fine materials. In this effort, we enhanced the in-stream sediment transport capacity of GSSHA by linking GSSHA to the SEDLIB sediment transport library. SEDLIB was developed at the Coastal and Hydraulics Laboratory (CHL) under the System Wide Water Resources Program (SWWRP) and Flood and Coastal (F&C) research program. It is designed to provide a library of sediment flux formulations for hydraulic and hydrologic models, such as GSSHA. This new version of GSSHA, with the updated in-stream sediment transport simulation capability afforded by the linkage to SEDLIB, was tested against observations in an experimental watershed that had previously been used as a test bed for GSSHA. The results show a significant improvement in the ability to model in-stream sources of fine sediment. This improved capability will broaden the applicability of GSSHA to larger watersheds and watersheds with complex sediment dynamics, such as those subjected to fire hydrology.
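
    Whatever flux formulae SEDLIB supplies, an in-stream source/sink formulation must close the sediment mass balance; in one dimension this is the Exner equation (generic form, not a statement of GSSHA's internal discretization):

        (1 - \lambda_p)\,\frac{\partial \eta}{\partial t} = -\frac{\partial q_s}{\partial x} + S

    where \eta is bed elevation, \lambda_p bed porosity, q_s the volumetric sediment flux per unit width, and S lateral source terms such as the in-stream supplies of fine material added in this effort.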

  15. Influence of free-stream disturbances on boundary-layer transition

    NASA Technical Reports Server (NTRS)

    Harvey, W. D.

    1978-01-01

    Considerable experimental evidence exists which shows that free stream disturbances (the ratio of root-mean-square pressure fluctuations to mean values) in conventional wind tunnels increase with increasing Mach number at low supersonic to moderate hypersonic speeds. In addition to local conditions, the free stream disturbance level influences transition behavior on simple test models. Based on this observation, existing noise transition data obtained in the same test facility were correlated for a large number of reference sharp cones and flat plates and are shown to collapse along a single curve. This result is a significant improvement over previous attempts to correlate noise transition data.

  16. Propulsion system/flight control integration for supersonic aircraft

    NASA Technical Reports Server (NTRS)

    Reukauf, P. J.; Burcham, F. W., Jr.

    1976-01-01

    Digital integrated control systems are studied. Such systems allow minimization of undesirable interactions while maximizing performance at all flight conditions. One such program is the YF-12 cooperative control program. The existing analog air data computer, autothrottle, autopilot, and inlet control systems are converted to digital systems by using a general purpose airborne computer and interface unit. Existing control laws are programed and tested in flight. Integrated control laws, derived using accurate mathematical models of the airplane and propulsion system in conjunction with modern control techniques, are tested in flight. Analysis indicates that an integrated autothrottle/autopilot gives good flight path control and that observers can be used to replace failed sensors.

  17. Effect law of Damage Characteristics of Rock Similar Material with Pre-Existing Cracks

    NASA Astrophysics Data System (ADS)

    Li, S. G.; Cheng, X. Y.; Liu, C.

    2017-11-01

    To further study the failure mechanism of rock similar materials, this study established a damage model based on accumulated AE events and investigated the damage characteristics of rock similar material samples with pre-existing cracks of varying width under uniaxial compression load. The equipment used in this study is the self-developed YYW-II strain controlled unconfined compression apparatus and the PCIE-8 acoustic emission (AE) monitoring system. The influences of the width of the pre-existing cracks on the damage characteristics of rock similar materials are analyzed. Results show that: (1) the damage model can better describe the damage characteristics of rock similar materials; (2) the tested samples have three stages during failure: an initial damage stage, a stable damage development stage, and an accelerated damage development stage; (3) as the width of the pre-existing cracks varies from 3 mm to 5 mm, the damage to the rock similar materials increases gradually. The outcomes of this study provide additional value to the research of failure mechanisms for geotechnical similar material models.

  18. The flow of power law fluids in elastic networks and porous media.

    PubMed

    Sochi, Taha

    2016-02-01

    The flow of power law fluids, which include shear thinning and shear thickening as well as Newtonian as a special case, in networks of interconnected elastic tubes is investigated using a residual-based pore scale network modeling method with the employment of newly derived formulae. Two relations describing the mechanical interaction between the local pressure and local cross-sectional area in distensible tubes of elastic nature are considered in the derivation of these formulae. The model can be used to describe shear dependent flows of mainly viscous nature. The behavior of the proposed model is vindicated by several tests in a number of special and limiting cases where the results can be verified quantitatively or qualitatively. The model, which is the first of its kind, incorporates more than one major nonlinearity corresponding to the fluid rheology and conduit mechanical properties, that is, non-Newtonian effects and tube distensibility. The formulation, implementation, and performance indicate that the model enjoys certain advantages over the existing models, such as being exact within the restricting assumptions on which the model is based, easy implementation, low computational costs, reliability, and smooth convergence. The proposed model can, therefore, be used as an alternative to the existing Newtonian distensible models; moreover, it stretches the capabilities of the existing modeling approaches to reach non-Newtonian rheologies.
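
    For orientation, the rigid-tube building block underlying such network models is the classical power-law (Ostwald-de Waele) flow formula (a standard result; the paper's contribution is coupling it to pressure-dependent elastic cross sections):

        Q = \frac{\pi n R^3}{3n+1}\left(\frac{R\,\Delta p}{2kL}\right)^{1/n}

    with consistency index k and flow-behavior index n (n < 1 shear thinning, n > 1 shear thickening; n = 1 recovers Poiseuille flow, Q = \pi R^4 \Delta p / 8kL). In the elastic network, R becomes a function of local pressure and the nodal pressures are found by driving the mass-balance residuals to zero.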

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Schweiger, Michael J.; Bonham, Charles C.

    Roughly half of the projected Hanford high-level waste batches will have waste loadings limited by relatively high concentrations of Al2O3. Individual glasses have been formulated and tested to demonstrate that it is possible to increase the loading of these high-Al2O3 wastes in glass by as much as 50%. To implement such increases in waste loading in the Hanford Tank Waste Treatment and Immobilization Plant, the impact of composition on the properties of high-Al2O3 waste glasses must be quantified in the form of validated glass property-composition models. To collect the data necessary for glass property-composition models, a multi-phase experimental approach was developed. In the first phase of the study, a set of 46 glass compositions was statistically designed to most efficiently backfill existing data in the composition region for high-Al2O3 (15 to 30 wt%) waste glasses. The glasses were fabricated and key glass properties were tested:
    • Product Consistency Test (PCT) on quenched (Q) and canister centerline cooled (CCC) samples
    • Toxicity Characteristic Leaching Procedure (TCLP) on Q and CCC samples
    • Crystallinity as a function of temperature (T) at equilibrium and of CCC samples
    • Viscosity and electrical conductivity as a function of T
    The measured properties of these glasses were compared to predictions from previously existing models developed over lower Al2O3 concentration ranges. Areas requiring additional testing and modeling were highlighted.
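
    Glass property-composition models of the kind being validated are commonly first-order mixture models (generic form; the coefficients developed from these data are not reproduced here):

        P = \sum_i b_i\, g_i

    where P is a property (for example, the logarithm of viscosity at a fixed temperature or of PCT release), g_i are component mass fractions, and b_i are coefficients fitted by least squares; the 46-glass design backfills the high-Al2O3 region so that such coefficients remain valid there.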

  20. Usability evaluation of mobile applications; where do we stand?

    NASA Astrophysics Data System (ADS)

    Zahra, Fatima; Hussain, Azham; Mohd, Haslina

    2017-10-01

    The range and availability of mobile applications is expanding rapidly. With the increased processing power available on portable devices, developers are broadening their services by embracing smartphones in their extensive and diverse practices. However, usability testing and evaluation of mobile applications have not yet reached the level of rigor achieved for web-based applications, and existing usability models do not adequately capture the complexities of interacting with applications on a mobile platform. Therefore, this study presents a review of existing usability models for mobile applications. These models are in their infancy, but with time and more research they may eventually be adopted. Moreover, different categories of mobile apps (medical, entertainment, education) have different functional and non-functional requirements; thus customized models are required for diverse mobile applications.

  1. Stem cell-derived organoids to model gastrointestinal facets of cystic fibrosis

    PubMed Central

    Hohwieler, Meike; Perkhofer, Lukas; Liebau, Stefan; Seufferlein, Thomas; Müller, Martin

    2016-01-01

    Cystic fibrosis (CF) is one of the most frequently occurring inherited human diseases, caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) which lead to ample defects in anion transport and epithelial fluid secretion. Existing models lack both access to early stages of CF development and a coeval focus on the gastrointestinal CF phenotypes, which become increasingly important due to the increased life span of the affected individuals. Here, we provide a comprehensive overview of gastrointestinal facets of CF and the opportunity to model these in various systems in an attempt to understand and treat CF. A particular focus is given to forward-leading organoid cultures, which may circumvent current limitations of existing models and thereby provide a platform for drug testing and understanding of disease pathophysiology in gastrointestinal organs. PMID:28815024

  2. Extending Maxwell's equations for dielectric materials using analytical principles from viscoelasticity based on the fractional calculus

    NASA Astrophysics Data System (ADS)

    Wharmby, Andrew William

    Existing fractional calculus models having a non-empirical basis used to describe constitutive relationships between stress and strain in viscoelastic materials are modified to employ all orders of fractional derivatives between zero and one. Parallels between viscoelastic and dielectric theory are drawn so that these modified fractional calculus based models for viscoelastic materials may be used to describe relationships between electric flux density and electric field intensity in dielectric materials. The resulting fractional calculus based dielectric relaxation model is tested using existing complex permittivity data in the radio-frequency bandwidth of a wide variety of homogeneous materials. The consequences that the application of this newly developed fractional calculus based dielectric relaxation model has on Maxwell's equations are also examined through the effects of dielectric dissipation and dispersion.
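
    A representative fractional-order relaxation law of the type being tested is the Cole-Cole form (quoted for context; the work described here generalizes across all orders between zero and one):

        \varepsilon^{*}(\omega) = \varepsilon_\infty +
        \frac{\varepsilon_s - \varepsilon_\infty}{1 + (i\omega\tau)^{\alpha}},
        \qquad 0 < \alpha \le 1

    which reduces to single-order Debye relaxation at \alpha = 1 and corresponds in the time domain to a fractional derivative of order \alpha acting on the polarization.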

  3. Lessons Learned from the Wide Field Camera 3 TV1 Test Campaign and Correlation Effort

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Stavley, Richard; Bast, William

    2007-01-01

    In January 2004, shortly after the Columbia accident, future servicing missions to the Hubble Space Telescope (HST) were cancelled. In response to this, further work on the Wide Field Camera 3 instrument was ceased. Given the maturity level of the design, a characterization thermal test (TV1) was completed in case the mission was re-instated or an alternate mission found on which to fly the instrument. This thermal test yielded some valuable lessons learned with respect to testing configurations and modeling/correlation practices, including:
    1. Ensure that the thermal design can be tested.
    2. Ensure that the model has sufficient detail for accurate predictions.
    3. Ensure that the power associated with all active control devices is predicted.
    4. Avoid unit changes for existing models.
    This paper documents the difficulties presented when these recommendations were not followed.

  4. Capacitor Test, Evaluation. and Modeling Within NASA Electronic Parts and Packaging (NEPP) Program. "Why Ceramic Capacitors Fracture During Manual Soldering and How to Avoid Failures"

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2011-01-01

    Presentation discusses: (1) Why do Multi-Layer Ceramic Capacitors (MLCCs) crack during manual soldering? Workmanship and parts issues. (2) Do existing qualification requirements assure crack-free soldering? MIL-spec Thermal Shock (TS) testing; MIL-spec Resistance to Soldering Heat (RSH) test. (3) What test can assure reliable soldering? Mechanical characteristics of ceramics; comparison of three TS techniques: LND, TSD, and IWT. (4) Simulation of TS conditions.

  5. Noise reduction tests of large-scale-model externally blown flap using trailing-edge blowing and partial flap slot covering. [jet aircraft noise reduction

    NASA Technical Reports Server (NTRS)

    Mckinzie, D. J., Jr.; Burns, R. J.; Wagner, J. M.

    1976-01-01

    Noise data were obtained with a large-scale cold-flow model of a two-flap, under-the-wing, externally blown flap proposed for use on future STOL aircraft. The noise suppression effectiveness of locating a slot conical nozzle at the trailing edge of the second flap and of applying partial covers to the slots between the wing and flaps was evaluated. Overall-sound-pressure-level reductions of 5 dB occurred below the wing in the flyover plane. Existing models of several noise sources were applied to the test results. The resulting analytical relation compares favorably with the test data. The noise source mechanisms were analyzed and are discussed.

  6. Experimental measurement and calculation of losses in planar radial magnetic bearings

    NASA Technical Reports Server (NTRS)

    Kasarda, M. E. F.; Allaire, P. E.; Hope, R. W.; Humphris, R. R.

    1994-01-01

    The loss mechanisms associated with magnetic bearings have yet to be adequately characterized or modeled analytically and thus pose a problem for the designer of magnetic bearings. This problem is particularly important for aerospace applications where low power consumption of components is critical. Also, losses are expected to be large for high speed operation. The iron losses in magnetic bearings can be divided into eddy current losses and hysteresis losses. While theoretical models for these losses exist for transformer and electric motor applications, they have not been verified for magnetic bearings. This paper presents the results from a low-speed experimental test rig and compares them to calculated values from existing theory. Experimental data was taken over a range of 90 to 2,800 rpm for several bias currents and two different pole configurations. With certain assumptions, agreement between measured and calculated power losses was within 16 percent for a number of test configurations.
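
    The transformer/motor loss models referred to above are typically of the Steinmetz form (classical expressions, quoted for context):

        P_h = k_h\, f\, B_{\max}^{\,n}, \qquad P_e = k_e\, f^2\, B_{\max}^{2}

    per unit volume of laminated iron, with hysteresis exponent n typically between about 1.6 and 2; in a rotating magnetic bearing the effective frequency f scales with rotor speed and the number of pole passes per revolution, which is why losses grow rapidly at high speed.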

  7. A methodology for selecting optimum organizations for space communities

    NASA Technical Reports Server (NTRS)

    Ragusa, J. M.

    1978-01-01

    This paper suggests that a methodology exists for selecting optimum organizations for future space communities of various sizes and purposes. Results of an exploratory study to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists are presented. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The principal finding of this research was that a four-level project type 'total matrix' model will optimize the effectiveness of Space Base technologists. An overall conclusion which can be reached from the research is that application of this methodology, or portions of it, may provide planning insights for the formal organizations which will be needed during the Space Industrialization Age.

  8. The importance of explicitly mapping instructional analogies in science education

    NASA Astrophysics Data System (ADS)

    Asay, Loretta Johnson

    Analogies are ubiquitous during instruction in science classrooms, yet research about the effectiveness of using analogies has produced mixed results. An aspect seldom studied is a model of instruction when using analogies. The few existing models for instruction with analogies have not often been examined quantitatively. The Teaching With Analogies (TWA) model (Glynn, 1991) is one of the models frequently cited in the variety of research about analogies. The TWA model outlines steps for instruction, including the step of explicitly mapping the features of the source to the target. An experimental study was conducted to examine the effects of explicitly mapping the features of the source and target in an analogy during computer-based instruction about electrical circuits. Explicit mapping was compared to no mapping and to a control with no analogy. Participants were ninth- and tenth-grade biology students who were each randomly assigned to one of three conditions (no analogy module, analogy module, or explicitly mapped analogy module) for computer-based instruction. Subjects took a pre-test before the instruction, which was used to assign them to a level of previous knowledge about electrical circuits for analysis of any differential effects. After the instruction modules, students took a post-test about electrical circuits. Two weeks later, they took a delayed post-test. No advantage was found for explicitly mapping the analogy. Learning patterns were the same, regardless of the type of instruction. Those who knew the least about electrical circuits, based on the pre-test, made the most gains. After the two-week delay, this group maintained the largest amount of their gain. Implications exist for science education classrooms, as analogy use should be based on research about effective practices. Further studies are suggested to foster the building of research-based models for classroom instruction with analogies.

  9. Application of Strategic Institutional-Information Technology Alignment Model in Four-Year Institutions of Higher Education

    ERIC Educational Resources Information Center

    Lach-Smith, Barbara

    2010-01-01

    This study examined an existing corporate model of business-information technology alignment for application in higher education and tested the findings by surveying executive and technology leaders in higher education. The purpose of this study was to gain a better understanding of the factors that impact alignment between institutional strategic…

  10. Measuring Levels of End-Users' Acceptance and Use of Hybrid Library Services

    ERIC Educational Resources Information Center

    Tibenderana, Prisca; Ogao, Patrick; Ikoja-Odongo, J.; Wokadala, James

    2010-01-01

    This study concerns the adoption of Information Communication Technology (ICT) services in libraries. The study collected 445 usable data from university library end-users using a cross-sectional survey instrument. It develops, applies and tests a research model of acceptance and use of such services based on an existing UTAUT model by Venkatesh,…

  11. Orbiter/payload contamination control assessment support

    NASA Technical Reports Server (NTRS)

    Rantanen, R. O.; Strange, D. A.; Hetrick, M. A.

    1978-01-01

    The development and integration of 16 payload bay liner filters into the existing shuttle/payload contamination evaluation (SPACE) computer program are discussed, as well as an initial mission profile model. As part of the mission profile model, a thermal conversion program, a temperature cycling routine, a flexible plot routine, and a mission simulation of orbital flight test 3 are presented.

  12. The Black-White-Other Test Score Gap: Academic Achievement among Mixed Race Adolescents. Institute for Policy Research Working Paper.

    ERIC Educational Resources Information Center

    Herman, Melissa R.

    This paper describes the achievement patterns of a sample of 1,492 multiracial high school students and examines how their achievement fits into existing theoretical models that explain monoracial differences in achievement. These theoretical models include status attainment, parenting style, oppositional culture, and educational attitudes. The…

  13. Wind tunnel and ground static tests of a .094 scale powered model of a modified T-39 lift/cruise fan V/STOL research airplane

    NASA Technical Reports Server (NTRS)

    Hunt, D.; Clinglan, J.; Salemann, V.; Omar, E.

    1977-01-01

    Ground static and wind tunnel tests of a 0.094-scale powered model of a modified T-39 airplane are reported. The modified configuration involved changes to the nose and replacement of the existing nacelles with tilting lift/cruise fans. The model was powered with three 14 cm diameter tip-driven turbopowered simulators. Forces and moments were measured by an internal strain gauge balance. Engine simulator thrust and mass flow were measured by calibrated pressure and temperature instrumentation mounted downstream of the fans. The low speed handling qualities and general aerodynamic characteristics of the modified T-39 were defined. Test variables included thrust level and thrust balance, forward speed, model pitch and sideslip angle at forward speeds, model pitch, roll, and ground height during static tests, lift/cruise fan tilt angle, flap and aileron deflection angle, and horizontal stabilizer angle. The effects of removing the landing gear, the lift/cruise fans, and the tail surfaces were also investigated.

  14. An investigation of the effect of wind cooling on photovoltaic arrays

    NASA Technical Reports Server (NTRS)

    Wen, L.

    1982-01-01

    Convective cooling of photovoltaic modules under different wind conditions was investigated, including steady-state controlled testing in a solar simulator and natural outdoor test environments in the field. Analytical thermal models of different module designs were used to correlate the experimental data. The applicability of existing heat transfer correlations is confirmed, and reasonable agreement is obtained by applying a power-law wind profile.
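
    For reference, the power-law wind profile invoked above is the standard engineering correlation shown below; the reference height and the typical exponent value are generic assumptions, not values reported in this study.

        u(z) = u_{\mathrm{ref}} \left( \frac{z}{z_{\mathrm{ref}}} \right)^{\alpha}, \qquad \alpha \approx 1/7 \;\text{(typical open terrain)}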

  15. Visual abilities distinguish pitchers from hitters in professional baseball.

    PubMed

    Klemish, David; Ramger, Benjamin; Vittetoe, Kelly; Reiter, Jerome P; Tokdar, Surya T; Appelbaum, Lawrence Gregory

    2018-01-01

    This study aimed to evaluate the possibility that differences in sensorimotor abilities exist between hitters and pitchers in a large cohort of baseball players of varying levels of experience. Secondary data analysis was performed on 9 sensorimotor tasks comprising the Nike Sensory Station assessment battery. Bayesian hierarchical regression modelling was applied to test for differences between pitchers and hitters in data from 566 baseball players (112 high school, 85 college, 369 professional) collected at 20 testing centres. Explanatory variables including height, handedness, eye dominance, concussion history, and player position were modelled along with age curves using basis regression splines. Regression analyses revealed better performance for hitters relative to pitchers at the professional level in the visual clarity and depth perception tasks, but these differences did not exist at the high school or college levels. No significant differences were observed in the other 7 measures of sensorimotor capabilities included in the test battery, and no systematic biases were found between the testing centres. These findings, indicating that professional-level hitters have better visual acuity and depth perception than professional-level pitchers, affirm the notion that highly experienced athletes have differing perceptual skills. Findings are discussed in relation to deliberate practice theory.
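
    As a rough sketch of the kind of analysis described, the Python snippet below fits a position contrast alongside a spline age curve by ordinary least squares on synthetic data. The original study used Bayesian hierarchical regression, so this is only a simplified stand-in; every variable name and value here is hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in data: a visual test score vs. age and position.
        n = 300
        age = rng.uniform(15, 35, n)
        is_hitter = rng.integers(0, 2, n)      # 0 = pitcher, 1 = hitter
        score = 0.5 + 0.02 * is_hitter + 0.01 * np.sin(age / 5) \
            + rng.normal(0, 0.05, n)

        # Truncated-power cubic spline basis for the age curve.
        knots = np.array([20.0, 25.0, 30.0])
        def spline_basis(x):
            cols = [np.ones_like(x), x, x**2, x**3]
            cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
            return np.column_stack(cols)

        # Design matrix: age spline plus the position contrast of interest.
        X = np.column_stack([spline_basis(age), is_hitter])
        beta, *_ = np.linalg.lstsq(X, score, rcond=None)
        print("estimated hitter-vs-pitcher offset:", beta[-1])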

  16. Strong gravitational lensing statistics as a test of cosmogonic scenarios

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Gott, J. Richard, III; Ostriker, Jeremiah P.; Turner, Edwin L.

    1994-01-01

    Gravitational lensing statistics can provide a direct and powerful test of cosmic structure formation theories. Since lensing tests, directly, the magnitude of the nonlinear mass density fluctuations on lines of sight to distant objects, no issues of 'bias' (of mass fluctuations with respect to galaxy density fluctuations) exist here, although lensing observations provide their own ambiguities of interpretation. We develop numerical techniques for generating model density distributions with the very large spatial dynamic range required by lensing considerations and for identifying regions of the simulations capable of multiple image lensing in a conservative and computationally efficient way that should be accurate for splittings significantly larger than 3 arcseconds. Applying these techniques to existing standard cold dark matter (CDM) (Omega = 1) and Primeval Baryon Isocurvature (PBI) (Omega = 0.2) simulations (normalized to the Cosmic Background Explorer Satellite (COBE) amplitude), we find that the CDM model predicts large splitting (greater than 8 arcseconds) lensing events roughly an order of magnitude more frequently than the PBI model. Under the reasonable but idealized assumption that lensing structures can be modeled as singular isothermal spheres (SIS), the predictions can be directly compared to observations of lensing events in quasar samples. Several large splitting (Delta Theta greater than 8 arcseconds) cases are predicted in the standard CDM model (the exact number being dependent on the treatment of amplification bias), whereas none is observed. In a formal sense, the comparison excludes the CDM model at high confidence (essentially for the same reason that CDM predicts excessive small-scale cosmic velocity dispersions). A very rough assessment of a low-density but flat CDM model (Omega = 0.3, Lambda/(3 H_0^2) = 0.7) indicates a far lower and probably acceptable level of lensing. The PBI model is consistent with, but not strongly tested by, the available lensing data, and other open models would presumably do as well as PBI. These preliminary conclusions and the assumptions on which they are based can be tested and the analysis can be applied to other cosmogonic models by straightforward extension of the work presented here.
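
    For context, under the singular isothermal sphere (SIS) assumption named above, the image splitting follows directly from the Einstein radius; this is the standard relation rather than anything specific to these simulations:

        \Delta\theta \;=\; 2\,\theta_E \;=\; 8\pi \left( \frac{\sigma_v}{c} \right)^{2} \frac{D_{ls}}{D_{os}}

    where \sigma_v is the lens velocity dispersion and D_{ls} and D_{os} are the lens-source and observer-source angular-diameter distances.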

  17. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
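
    The response-surface shortcut described above can be illustrated with a minimal Python sketch: fit a cheap quadratic surrogate to a modest number of expensive runs, then estimate parameter importance on the surrogate. The model, parameters, and the crude one-at-a-time sensitivity measure below are illustrative assumptions, not the Ares I-X implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical expensive model: response vs. two stiffness parameters.
        def expensive_model(k1, k2):
            return 10 + 3 * k1 - 2 * k2 + 1.5 * k1 * k2 + rng.normal(0, 0.1)

        # Fit a quadratic response surface to a small design of runs.
        k1, k2 = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
        y = np.array([expensive_model(a, b) for a, b in zip(k1, k2)])
        X = np.column_stack([np.ones_like(k1), k1, k2, k1 * k2, k1**2, k2**2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)

        def surrogate(a, b):
            Z = np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])
            return Z @ coef

        # Crude first-order importance: variance when only k1 varies,
        # relative to the variance when both parameters vary.
        a = rng.uniform(-1, 1, 100_000)
        b = rng.uniform(-1, 1, 100_000)
        s1 = np.var(surrogate(a, np.zeros_like(a))) / np.var(surrogate(a, b))
        print("approximate first-order index for k1:", round(float(s1), 2))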

  18. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    NASA Astrophysics Data System (ADS)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
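
    The distinction between the two mixing rules can be made concrete with a small Python sketch using a van der Waals equation of state. The constants are approximate literature values and the conditions are invented; the study itself used the CTH code's equation-of-state treatment, so this is only a conceptual illustration.

        from scipy.optimize import brentq

        R = 8.314  # J/(mol K)
        # Approximate van der Waals constants: a [Pa m^6/mol^2], b [m^3/mol].
        VDW = {"He": (3.46e-3, 2.38e-5), "SF6": (7.86e-1, 8.79e-5)}

        def p_vdw(n, V, T, a, b):
            return n * R * T / (V - n * b) - a * n**2 / V**2

        def v_vdw(n, p, T, a, b):
            # Gas-phase volume of n moles at pressure p, by root bracketing.
            return brentq(lambda V: p_vdw(n, V, T, a, b) - p,
                          n * b * 1.0001, 10.0)

        T, V = 300.0, 1e-3            # K, m^3
        n = {"He": 0.5, "SF6": 0.5}   # 1:1 molar mixture

        # Dalton: species share the full volume; partial pressures add up.
        p_dalton = sum(p_vdw(n[s], V, T, *VDW[s]) for s in n)

        # Amagat: species share one pressure; partial volumes add up to V.
        p_amagat = brentq(
            lambda p: sum(v_vdw(n[s], p, T, *VDW[s]) for s in n) - V,
            1e4, 1e8)

        print(f"Dalton: {p_dalton/1e5:.2f} bar, Amagat: {p_amagat/1e5:.2f} bar")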

  19. Psychological Pathways Linking Social Support to Health Outcomes: A Visit with the “Ghosts” of Research Past, Present, and Future

    PubMed Central

    Uchino, Bert N.; Bowen, Kimberly; Carlisle, McKenzie; Birmingham, Wendy

    2012-01-01

    Contemporary models postulate the importance of psychological mechanisms linking perceived and received social support to physical health outcomes. In this review, we examine studies that directly tested the potential psychological mechanisms responsible for links between social support and health-relevant physiological processes (1980s to 2010). Inconsistent with existing theoretical models, no evidence was found that psychological mechanisms such as depression, perceived stress, and other affective processes are directly responsible for links between support and health. We discuss the importance of considering statistical/design issues, emerging conceptual perspectives, and limitations of our existing models for future research aimed at elucidating the psychological mechanisms responsible for links between social support and physical health outcomes. PMID:22326104

  20. Forecasting client transitions in British Columbia's Long-Term Care Program.

    PubMed Central

    Lane, D; Uyeno, D; Stark, A; Gutman, G; McCashin, B

    1987-01-01

    This article presents a model for the annual transitions of clients through various home and facility placements in a long-term care program. The model, an application of Markov chain analysis, is developed, tested, and applied to over 9,000 clients (N = 9,483) in British Columbia's Long Term Care Program (LTC) over the period 1978-1983. Results show that the model gives accurate forecasts of the progress of groups of clients from state to state in the long-term care system from time of admission until eventual death. Statistical methods are used to test the modeling hypothesis that clients' year-over-year transitions occur in constant proportions from state to state within the long-term care system. Tests are carried out by examining actual year-over-year transitions of each year's new admission cohort (1978-1983). Various subsets of the available data are analyzed and, after accounting for clear differences among annual cohorts, the most acceptable model of the actual client transition data occurred when clients were separated into male and female groups, i.e., the transition behavior of each group is describable by a different Markov model. To validate the model, we develop model estimates for the numbers of existing clients in each state of the long-term care system for the period (1981-1983) for which actual data are available. When these estimates are compared with the actual data, total weighted absolute deviations do not exceed 10 percent of actuals. Finally, we use the properties of the Markov chain probability transition matrix and simulation methods to develop three-year forecasts with prediction intervals for the distribution of the existing total clients into each state of the system. The tests, forecasts, and Markov model supplemental information are contained in a mechanized procedure suitable for a microcomputer. The procedure provides a powerful, efficient tool for decision makers planning facilities and services in response to the needs of long-term care clients. PMID:3121537
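
    The forecasting mechanics reduce to repeatedly multiplying a state-distribution vector by the transition matrix. The Python sketch below shows the idea with an invented three-state system; the study's actual states and estimated transition proportions are not reproduced here.

        import numpy as np

        # Illustrative transition proportions (rows sum to 1).
        P = np.array([
            [0.70, 0.20, 0.10],   # home care -> home, facility, deceased
            [0.05, 0.75, 0.20],   # facility care -> ...
            [0.00, 0.00, 1.00],   # deceased is an absorbing state
        ])

        x = np.array([8000.0, 1483.0, 0.0])  # admission cohort by state

        # Three-year forecast: propagate x_{t+1} = x_t P.
        for year in range(1, 4):
            x = x @ P
            print(f"year {year}:", np.round(x).astype(int))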

  1. An Integrated Tiered Service Delivery Model (ITSDM) Based on Local CD4 Testing Demands Can Improve Turn-Around Times and Save Costs whilst Ensuring Accessible and Scalable CD4 Services across a National Programme

    PubMed Central

    Glencross, Deborah K.; Coetzee, Lindi M.; Cassim, Naseem

    2014-01-01

    Background The South African National Health Laboratory Service (NHLS) responded to HIV treatment initiatives with two-tiered CD4 laboratory services in 2004. Increasing programmatic burden, as more patients access anti-retroviral therapy (ART), has demanded extending CD4 services to meet increasing clinical needs. The aim of this study was to review existing services and develop a service-model that integrated laboratory-based and point-of-care testing (POCT), to extend national coverage, improve local turn-around time (TAT) and contain programmatic costs. Methods NHLS Corporate Data Warehouse CD4 data, from 60–70 laboratories and 4756 referring health facilities, was reviewed for referral laboratory workload, respective referring facility volumes and related TAT, from 2009–2012. Results An integrated tiered service delivery model (ITSDM) is proposed. Tier-1/POCT delivers CD4 testing at single health-clinics providing ART in hard-to-reach areas (<5 samples/day). Laboratory-based testing is extended with Tier-2/POC-Hubs (processing ≤30–40 CD4 samples/day), consolidating POCT across 8–10 health-clinics with other HIV-related testing, and Tier-3/‘community’ laboratories, serving ≤40 health-clinics, processing ≤150 samples/day. Existing Tier-4/‘regional’ laboratories serve ≤100 facilities and process <350 samples/day; Tier-5 are high-volume ‘metro’/centralized laboratories (>350–1500 tests/day, serving ≥200 health-clinics). Tier-6 provides national support for standardisation, harmonization and quality across the organization. Conclusion The ITSDM offers improved local TAT by extending CD4 services into rural/remote areas with new Tier-3 or Tier-2/POC-Hub services installed in existing community laboratories, most with developed infrastructure. The advantage of lower laboratory CD4 costs and use of existing infrastructure enables subsidization of delivery of more expensive POC services into hard-to-reach districts without reasonable access to a local CD4 laboratory. Full ITSDM implementation across 5 service tiers (as opposed to widespread implementation of POC testing to extend service) can facilitate sustainable ‘full service coverage’ across South Africa, and save more than R125 million in HIV/AIDS programmatic costs. ITSDM hierarchical parental-support also assures laboratory/POC management, equipment maintenance, quality control and on-going training between tiers. PMID:25490718

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo

    Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  3. Consumer preference models: fuzzy theory approach

    NASA Astrophysics Data System (ADS)

    Turksen, I. B.; Wilson, I. A.

    1993-12-01

    Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
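
    A minimal Python sketch of the fuzzy-set ingredient is shown below: triangular membership functions for linguistic ratings, combined with min as the fuzzy conjunction. The breakpoints and attributes are invented for illustration and do not come from the article.

        # Triangular membership function with breakpoints a <= b <= c.
        def tri(x, a, b, c):
            return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        # Linguistic terms on a 0-10 attribute rating scale (illustrative).
        moderate = lambda x: tri(x, 3, 5, 7)
        high = lambda x: tri(x, 6, 8, 10)

        # Degree to which a product with price rating 6 and quality rating 9
        # satisfies "moderate price AND high quality".
        preference = min(moderate(6.0), high(9.0))
        print("preference membership:", preference)  # 0.5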

  4. Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES

    NASA Astrophysics Data System (ADS)

    Aniszewski, Wojciech

    2016-12-01

    In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented; we subsequently elaborate on the existing models for them and re-formulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. it introduces non-isotropic applicability criteria based on the resolved interface's principal curvature radii. Additionally, this paper introduces more thorough testing of the ADM-τ, in both simple and complex flows.

  5. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.

  6. Accuracy and equivalence testing of crown ratio models and assessment of their impact on diameter growth and basal area increment predictions of two variants of the Forest Vegetation Simulator

    Treesearch

    Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston

    2009-01-01

    Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...

  7. Statistical Considerations Concerning Dissimilar Regulatory Requirements for Dissolution Similarity Assessment. The Example of Immediate-Release Dosage Forms.

    PubMed

    Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria

    2017-05-01

    When performing in vitro dissolution testing, especially in the area of biowaivers, it is necessary to follow regulatory guidelines to minimize the risk of an unsafe or ineffective product being approved. The present study examines model-independent and model-dependent methods of comparing dissolution profiles, based on a comparison of various international guidelines. Dissolution profiles for immediate release solid oral dosage forms were generated. The test material comprised tablets containing several substances, with at least 85% of the labeled amount dissolved within 15 min, 20-30 min, or 45 min. Dissolution profile similarity can vary with regard to the following criteria: time point selection (including the last time point), coefficient of variation, and statistical method selection. Variation between regulatory guidance and statistical methods can raise methodological questions and potentially result in a different outcome when reporting dissolution profile testing. The harmonization of existing guidelines would address existing problems concerning the interpretation of regulatory recommendations and research findings. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
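
    Among the model-independent comparisons such guidelines describe, the f2 similarity factor is the most widely cited. A minimal Python implementation of the standard formula follows; the example profiles are invented.

        import numpy as np

        def f2(reference, test):
            """Similarity factor f2 for two dissolution profiles (% dissolved
            at matched time points); f2 >= 50 is conventionally read as
            similar."""
            r, t = np.asarray(reference, float), np.asarray(test, float)
            msd = np.mean((r - t) ** 2)  # mean squared difference
            return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

        # Illustrative profiles (% dissolved at 10, 15, 20, and 30 min).
        print(round(f2([42, 65, 81, 93], [38, 60, 78, 91]), 1))  # ~71.0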

  8. Measuring and Advancing Experimental Design Ability in an Introductory Course without Altering Existing Lab Curriculum†

    PubMed Central

    Shanks, Ryan A.; Robertson, Chuck L.; Haygood, Christian S.; Herdliksa, Anna M.; Herdliska, Heather R.; Lloyd, Steven A.

    2017-01-01

    Introductory biology courses provide an important opportunity to prepare students for future courses, yet existing cookbook labs, although important in their own way, fail to provide many of the advantages of semester-long research experiences. Engaging, authentic research experiences aid biology students in meeting many learning goals. Therefore, overlaying a research experience onto the existing lab structure allows faculty to overcome barriers involving curricular change. Here we propose a working model for this overlay design in an introductory biology course and detail a means to conduct this lab with minimal increases in student and faculty workloads. Furthermore, we conducted exploratory factor analysis of the Experimental Design Ability Test (EDAT) and uncovered two latent factors which provide valid means to assess this overlay model’s ability to increase advanced experimental design abilities. In a pre-test/post-test design, we demonstrate significant increases in both basic and advanced experimental design abilities in an experimental and comparison group. We measured significantly higher gains in advanced experimental design understanding in students in the experimental group. We believe this overlay model and EDAT factor analysis contribute a novel means to conduct and assess the effectiveness of authentic research experiences in an introductory course without major changes to the course curriculum and with minimal increases in faculty and student workloads. PMID:28904647
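
    The exploratory factor analysis step can be sketched generically in Python with scikit-learn. The synthetic item scores and the two planted factors below are stand-ins for the EDAT data, and scikit-learn's unrotated fit is an assumption; the authors' exact EFA settings are not reproduced here.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)

        # Synthetic stand-in: 200 students x 10 rubric items generated from
        # two latent abilities (basic and advanced design skill).
        latents = rng.normal(size=(200, 2))
        loadings = np.zeros((10, 2))
        loadings[:5, 0], loadings[5:, 1] = 0.8, 0.8
        scores = latents @ loadings.T + rng.normal(0, 0.4, (200, 10))

        # Extract two factors and inspect which items load on which factor.
        fa = FactorAnalysis(n_components=2, random_state=0).fit(scores)
        print(np.round(fa.components_, 2))  # rows = factors, cols = items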

  9. FIELD INVESTIGATIONS OF THE DRIFT SHADOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. W. Su, T. J. Kneafsey, T. A. Ghezzehei, B. D. Marshall, and P. J. Cook

    The "Drift Shadow" is defined as the relatively drier region that forms below subsurface cavities or drifts in unsaturated rock. Its existence has been predicted through analytical and numerical models of unsaturated flow. However, these theoretical predictions have not been demonstrated empirically to date. In this project, the investigators plan to test the drift shadow concept through field investigations and compare their observations to simulations. Based on modeling studies, they have identified a suitable site to perform the study at an inactive mine in a sandstone formation. Pretest modeling studies and preliminary characterization of the site are being used to develop the field scale tests.

  10. Mathematical modeling of the process of determining the standards for process losses in the transfer of thermal energy of the coolant

    NASA Astrophysics Data System (ADS)

    Akhmetova, I. G.; Chichirova, N. D.

    2017-11-01

    A precise determination of normative and actual heat losses is currently a pressing practical problem. Existing methods - experimental testing, metering devices, and mathematical modeling - all have drawbacks. The heat losses arising during heat-carrier transport affect the tariff structure of heat supply organizations. Determining this quantity also supports the proper selection of main and auxiliary equipment capacity, of the temperature schedule of the heat supply network, and of the heating system structure, including the degree of decentralization. Calculating actual heat losses and comparing them with normative values justifies work on improving heat networks by replacing piping or its insulation. To determine the causes of discrepancies between normative and actual heat losses, thermal tests of actual heat losses were carried out on 124 sections of the heat networks in Kazan. Based on the results, a mathematical model for the normative determination of heat losses was developed and tested. This model differs from existing ones in that it accounts for the type of piping insulation. Applying this factor brings the calculated normative heat energy losses closer to their actual values. This is of great importance for enterprises operating distribution networks that, because of their configuration and extent, do not have the technical ability to perform thermal testing.

  11. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, high-parameter dimension, etc. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion, and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. And, a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
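
    The greedy, minimax-flavored selection loop described above can be sketched in a few lines of Python. The "data worth" numbers and the variance-update rule are invented placeholders for the uncertainty-quantification machinery actually used, so this shows only the control flow of the heuristic.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in table: rows = posterior parameter samples, columns =
        # candidate observation sites; entries = predictive variance left
        # if that observation were collected under that parameter sample.
        variance = rng.uniform(0.5, 2.0, size=(30, 12))

        def greedy_minimax(variance, k):
            """Pick k sites, each time minimizing the worst-case variance."""
            chosen, remaining = [], variance.copy()
            for _ in range(k):
                worst = remaining.max(axis=0)   # worst case over samples
                best = int(np.argmin(worst))    # site with best worst case
                chosen.append(best)
                remaining = remaining * 0.8     # toy information update
                remaining[:, best] = np.inf     # never pick a site twice
            return chosen

        print("selected candidate sites:", greedy_minimax(variance, 3))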

  12. Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

    DOE PAGES

    Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...

    2017-08-29

    Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
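
    Comparing a model saliency map against eye-tracking data, as described, is commonly scored with metrics such as the Pearson correlation coefficient (CC). A minimal Python version follows, with random maps standing in for real model output and fixation densities; the paper's exact metric suite is not reproduced here.

        import numpy as np

        def correlation_coefficient(saliency, fixation_density):
            # Pearson CC between a saliency map and a fixation density map,
            # one common saliency evaluation metric (others: AUC, NSS).
            s = (saliency - saliency.mean()) / saliency.std()
            f = (fixation_density - fixation_density.mean()) \
                / fixation_density.std()
            return float(np.mean(s * f))

        rng = np.random.default_rng(0)
        model_map = rng.random((48, 64))   # stand-in model saliency map
        fix_map = 0.6 * model_map + 0.4 * rng.random((48, 64))
        print(round(correlation_coefficient(model_map, fix_map), 3))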

  13. Multiscale digital Arabidopsis predicts individual organ and whole-organism growth.

    PubMed

    Chew, Yin Hoon; Wenden, Bénédicte; Flis, Anna; Mengin, Virginie; Taylor, Jasper; Davey, Christopher L; Tindal, Christopher; Thomas, Howard; Ougham, Helen J; de Reffye, Philippe; Stitt, Mark; Williams, Mathew; Muetzelfeldt, Robert; Halliday, Karen J; Millar, Andrew J

    2014-09-30

    Understanding how dynamic molecular networks affect whole-organism physiology, analogous to mapping genotype to phenotype, remains a key challenge in biology. Quantitative models that represent processes at multiple scales and link understanding from several research domains can help to tackle this problem. Such integrated models are more common in crop science and ecophysiology than in the research communities that elucidate molecular networks. Several laboratories have modeled particular aspects of growth in Arabidopsis thaliana, but it was unclear whether these existing models could productively be combined. We test this approach by constructing a multiscale model of Arabidopsis rosette growth. Four existing models were integrated with minimal parameter modification (leaf water content and one flowering parameter used measured data). The resulting framework model links genetic regulation and biochemical dynamics to events at the organ and whole-plant levels, helping to understand the combined effects of endogenous and environmental regulators on Arabidopsis growth. The framework model was validated and tested with metabolic, physiological, and biomass data from two laboratories, for five photoperiods, three accessions, and a transgenic line, highlighting the plasticity of plant growth strategies. The model was extended to include stochastic development. Model simulations gave insight into the developmental control of leaf production and provided a quantitative explanation for the pleiotropic developmental phenotype caused by overexpression of miR156, which was an open question. Modular, multiscale models, assembling knowledge from systems biology to ecophysiology, will help to understand and to engineer plant behavior from the genome to the field.

  14. A design procedure for fan inflow control structures

    NASA Technical Reports Server (NTRS)

    Gedge, M. R.

    1980-01-01

    Significant differences exist between the noise generated by engines in flight and by engines operating on the test stand. It was observed that these differences can be reduced by use of an inflow control structure (ICS) in the static test configuration. The results of the second phase of a three-phase program are described, and a test program conducted to assess and modify various theoretical models, leading to the development of an ICS design system, is summarized.

  15. Adopting Internet Standards for Orbital Use

    NASA Technical Reports Server (NTRS)

    Wood, Lloyd; Ivancic, William; da Silva Curiel, Alex; Jackson, Chris; Stewart, Dave; Shell, Dave; Hodgson, Dave

    2005-01-01

    After a year of testing and demonstrating a Cisco mobile access router intended for terrestrial use onboard the low-Earth-orbiting UK-DMC satellite as part of a larger merged ground/space IP-based internetwork, we reflect on and discuss the benefits and drawbacks of integration and standards reuse for small satellite missions. Benefits include ease of operation and the ability to leverage existing systems and infrastructure designed for general use, as well as reuse of existing, known, and well-understood security and operational models. Drawbacks include cases where integration work was needed to bridge the gaps in assumptions between different systems, and where performance considerations outweighed the benefits of reuse of pre-existing file transfer protocols. We find similarities with the terrestrial IP networks whose technologies we have adopted and also some significant differences in operational models and assumptions that must be considered.

  16. Radiation Detection Computational Benchmark Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This is a report describing the details of the selected benchmarks and results from various transport codes.

  17. Multivariate normality

    NASA Technical Reports Server (NTRS)

    Crutcher, H. L.; Falls, L. W.

    1976-01-01

    Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
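
    One common route from the univariate check to higher dimensions uses squared Mahalanobis distances, which are approximately chi-square distributed with p degrees of freedom under multivariate normality. The Python sketch below applies a Kolmogorov-Smirnov test against that distribution on synthetic data; it is a close relative of, not a reproduction of, the chi-square procedure developed in the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Synthetic sample: n observations in p dimensions.
        n, p = 101, 3
        x = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)

        # Squared Mahalanobis distances ~ chi-square(p) under normality.
        mean, cov = x.mean(axis=0), np.cov(x, rowvar=False)
        diff = x - mean
        d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)

        stat, pval = stats.kstest(d2, "chi2", args=(p,))
        print(f"KS statistic {stat:.3f}, p-value {pval:.3f}")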

  18. Aerothermal Ground Testing of Flexible Thermal Protection Systems for Hypersonic Inflatable Aerodynamic Decelerators

    NASA Technical Reports Server (NTRS)

    Bruce, Walter E., III; Mesick, Nathaniel J.; Ferlemann, Paul G.; Siemers, Paul M., III; DelCorso, Joseph A.; Hughes, Stephen J.; Tobin, Steven A.; Kardell, Matthew P.

    2012-01-01

    Flexible TPS development involves ground testing and analysis necessary to characterize performance of the FTPS candidates prior to flight testing. This paper provides an overview of the analysis and ground testing efforts performed over the last year at the NASA Langley Research Center and in the Boeing Large-Core Arc Tunnel (LCAT). In the LCAT test series, material layups were subjected to aerothermal loads commensurate with peak re-entry conditions enveloping a range of HIAD mission trajectories. The FTPS layups were tested over a heat flux range from 20 to 50 W/cm2 with associated surface pressures of 3 to 8 kPa. To support the testing effort a significant redesign of the existing shear (wedge) model holder from previous testing efforts was undertaken to develop a new test technique for supporting and evaluating the FTPS in the high-temperature, arc jet flow. Since the FTPS test samples typically experience a geometry change during testing, computational fluid dynamic (CFD) models of the arc jet flow field and test model were developed to support the testing effort. The CFD results were used to help determine the test conditions experienced by the test samples as the surface geometry changes. This paper includes an overview of the Boeing LCAT facility, the general approach for testing FTPS, CFD analysis methodology and results, model holder design and test methodology, and selected thermal results of several FTPS layups.

  19. Long residence times of rapidly decomposable soil organic matter: application of a multi-phase, multi-component, and vertically-resolved model (TOUGHREACTv1) to soil carbon dynamics

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Maggi, F. M.; Kleber, M.; Torn, M. S.; Tang, J. Y.; Dwivedi, D.; Guerry, N.

    2014-01-01

    Accurate representation of soil organic matter (SOM) dynamics in Earth System Models is critical for future climate prediction, yet large uncertainties exist regarding how, and to what extent, the suite of proposed relevant mechanisms should be included. To investigate how various mechanisms interact to influence SOM storage and dynamics, we developed a SOM reaction network integrated in a one-dimensional, multi-phase, and multi-component reactive transport solver. The model includes representations of bacterial and fungal activity, multiple archetypal polymeric and monomeric carbon substrate groups, aqueous chemistry, aqueous advection and diffusion, gaseous diffusion, and adsorption (and protection) and desorption from the soil mineral phase. The model predictions reasonably matched observed depth-resolved SOM and dissolved organic carbon (DOC) stocks in grassland ecosystems as well as lignin content and fungi to aerobic bacteria ratios. We performed a suite of sensitivity analyses under equilibrium and dynamic conditions to examine the role of dynamic sorption, microbial assimilation rates, and carbon inputs. To our knowledge, observations do not exist to fully test such a complicated model structure or to test the hypotheses used to explain observations of substantial storage of very old SOM below the rooting depth. Nevertheless, we demonstrated that a reasonable combination of sorption parameters, microbial biomass and necromass dynamics, and advective transport can match observations without resorting to an arbitrary depth-dependent decline in SOM turnover rates, as is often done. We conclude that, contrary to assertions derived from existing turnover time based model formulations, observed carbon content and δ14C vertical profiles are consistent with a representation of SOM dynamics consisting of (1) carbon compounds without designated intrinsic turnover times, (2) vertical aqueous transport, and (3) dynamic protection on mineral surfaces.

  1. Long residence times of rapidly decomposable soil organic matter: application of a multi-phase, multi-component, and vertically resolved model (BAMS1) to soil carbon dynamics

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Maggi, F.; Kleber, M.; Torn, M. S.; Tang, J. Y.; Dwivedi, D.; Guerry, N.

    2014-07-01

    Accurate representation of soil organic matter (SOM) dynamics in Earth system models is critical for future climate prediction, yet large uncertainties exist regarding how, and to what extent, the suite of proposed relevant mechanisms should be included. To investigate how various mechanisms interact to influence SOM storage and dynamics, we developed an SOM reaction network integrated in a one-dimensional, multi-phase, and multi-component reactive transport solver. The model includes representations of bacterial and fungal activity, multiple archetypal polymeric and monomeric carbon substrate groups, aqueous chemistry, aqueous advection and diffusion, gaseous diffusion, and adsorption (and protection) and desorption from the soil mineral phase. The model predictions reasonably matched observed depth-resolved SOM and dissolved organic matter (DOM) stocks and fluxes, lignin content, and fungi to aerobic bacteria ratios. We performed a suite of sensitivity analyses under equilibrium and dynamic conditions to examine the role of dynamic sorption, microbial assimilation rates, and carbon inputs. To our knowledge, observations do not exist to fully test such a complicated model structure or to test the hypotheses used to explain observations of substantial storage of very old SOM below the rooting depth. Nevertheless, we demonstrated that a reasonable combination of sorption parameters, microbial biomass and necromass dynamics, and advective transport can match observations without resorting to an arbitrary depth-dependent decline in SOM turnover rates, as is often done. We conclude that, contrary to assertions derived from existing turnover time based model formulations, observed carbon content and Δ14C vertical profiles are consistent with a representation of SOM consisting of carbon compounds with relatively fast reaction rates, vertical aqueous transport, and dynamic protection on mineral surfaces.

  2. Statistical models for incorporating data from routine HIV testing of pregnant women at antenatal clinics into HIV/AIDS epidemic estimates.

    PubMed

    Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B; Gregson, Simon; Eaton, Jeffrey W; Bao, Le

    2017-04-01

    HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women and can be used to improve estimates of national and subnational HIV prevalence trends. We develop methods to incorporate this new data source into the Joint United Nations Programme on AIDS Estimation and Projection Package in Spectrum 2017. We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (site-level) or regionally (census-level), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends and should be tested as more data become available from national ANC-RT programs.

  3. Statistical Models for Incorporating Data from Routine HIV Testing of Pregnant Women at Antenatal Clinics into HIV/AIDS Epidemic Estimates

    PubMed Central

    Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B.; Gregson, Simon; Eaton, Jeffrey W.; Bao, Le

    2017-01-01

    Objective HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women, and can be used to improve estimates of national and sub-national HIV prevalence trends. We develop methods to incorporate this new data source into the UNAIDS Estimation and Projection Package (EPP) in Spectrum 2017. Methods We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (‘site-level’) or regionally (‘census-level’), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. Results We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. Conclusion We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends, and should be tested as more data become available from national ANC-RT programs. PMID:28296804

  4. Structural Model of the Relationships among Cognitive Processes, Visual Motor Integration, and Academic Achievement in Students with Mild Intellectual Disability (MID)

    ERIC Educational Resources Information Center

    Taha, Mohamed Mostafa

    2016-01-01

    This study aimed to test a proposed structural model of the relationships and existing paths among cognitive processes (attention and planning), visual motor integration, and academic achievement in reading, writing, and mathematics. The study sample consisted of 50 students with mild intellectual disability or MID. The average age of these…

  5. Burning rates of wood cribs with implications for wildland fires

    Treesearch

    Sara McAllister; Mark Finney

    2016-01-01

    Wood cribs are often used as ignition sources for room fire tests and the well characterized burning rates may also have applications to wildland fires. The burning rate of wildland fuel structures, whether the needle layer on the ground or trees and shrubs themselves, is not addressed in any operational fire model and no simple model exists. Several relations...

  6. Effect of Liquefaction on Lateral Response of Piles by Centrifuge Model Tests

    DOT National Transportation Integrated Search

    1995-01-01

    This article presents work conducted on the effect of liquefaction on lateral pile response. Many existing bridges are founded on piles driven through loose sand that may liquefy during earthquake shaking. Both lateral stiffness and lateral capacity ...

  7. Development of Interspecies Correlation Models for Petroleum Hydrocarbons

    EPA Science Inventory

    Estimating the consequences of petroleum products to water column organisms has commonly been hampered by limited acute toxicity data, which exists only for a relatively small number of test species. In this study, we developed petroleum-specific Interspecies Correlation Estimati...

  8. Towards a New Generation of Agricultural System Data, Models and Knowledge Products: Design and Improvement

    NASA Technical Reports Server (NTRS)

    Antle, John M.; Basso, Bruno; Conant, Richard T.; Godfray, H. Charles J.; Jones, James W.; Herrero, Mario; Howitt, Richard E.; Keating, Brian A.; Munoz-Carpena, Rafael; Rosenzweig, Cynthia

    2016-01-01

    This paper presents ideas for a new generation of agricultural system models that could meet the needs of a growing community of end-users exemplified by a set of Use Cases. We envision new data, models and knowledge products that could accelerate the innovation process needed to achieve sustainable local, regional and global food security. We identify desirable features for models, and describe some of the potential advances that we envisage for model components and their integration. We propose an implementation strategy that would link a "pre-competitive" space for model development to a "competitive space" for knowledge product development and through private-public partnerships for new data infrastructure. Specific model improvements would be based on further testing and evaluation of existing models, the development and testing of modular model components and integration, and linkages of model integration platforms to new data management and visualization tools.

  9. Towards a new generation of agricultural system data, models and knowledge products: Design and improvement.

    PubMed

    Antle, John M; Basso, Bruno; Conant, Richard T; Godfray, H Charles J; Jones, James W; Herrero, Mario; Howitt, Richard E; Keating, Brian A; Munoz-Carpena, Rafael; Rosenzweig, Cynthia; Tittonell, Pablo; Wheeler, Tim R

    2017-07-01

    This paper presents ideas for a new generation of agricultural system models that could meet the needs of a growing community of end-users exemplified by a set of Use Cases. We envision new data, models and knowledge products that could accelerate the innovation process needed to achieve sustainable local, regional and global food security. We identify desirable features for models, and describe some of the potential advances that we envisage for model components and their integration. We propose an implementation strategy that would link a "pre-competitive" space for model development to a "competitive space" for knowledge product development and through private-public partnerships for new data infrastructure. Specific model improvements would be based on further testing and evaluation of existing models, the development and testing of modular model components and integration, and linkages of model integration platforms to new data management and visualization tools.

  10. Mach Stability Improvements Using an Existing Second Throat Capability at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Chan, David T.; Balakrishna, Sundareswara; Walker, Eric L.; Goodliff, Scott L.

    2015-01-01

    Recent data quality improvements at the National Transonic Facility have an intended goal of reducing the Mach number variation in a data point to within plus or minus 0.0005, with the ultimate goal of reducing the data repeatability of the drag coefficient for full-span subsonic transport models at transonic speeds to within half a drag count. This paper will discuss the Mach stability improvements achieved through the use of an existing second throat capability at the NTF to create a minimum area at the end of the test section. These improvements were demonstrated using both the NASA Common Research Model and the NTF Pathfinder-I model in recent experiments. Sonic conditions at the throat were verified using sidewall static pressure data. The Mach variation levels from both experiments in the baseline tunnel configuration and the choked tunnel configuration will be presented and the correlation between Mach number and drag will also be examined. Finally, a brief discussion is given on the consequences of using the second throat in its location at the end of the test section.

  11. Mach Stability Improvements Using an Existing Second Throat Capability at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Chan, David T.

    2015-01-01

    Recent data quality improvements at the National Transonic Facility (NTF) have an intended goal of reducing the Mach number variation in a data point to within plus or minus 0.0005, with the ultimate goal of reducing the data repeatability of the drag coefficient for full-span subsonic transport models at transonic speeds to within half of a drag count. This paper will discuss the Mach stability improvements achieved through the use of an existing second throat capability at the NTF to create a minimum area at the end of the test section. These improvements were demonstrated using both the NASA Common Research Model and the NTF Pathfinder-I model in recent experiments. Sonic conditions at the throat were verified using sidewall static pressure data. The Mach variation levels from both experiments in the baseline tunnel configuration and the choked tunnel configuration will be presented. Finally, a brief discussion is given on the consequences of using the second throat in its location at the end of the test section.

  12. Technology Transferred to the Kirby Company

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA Lewis Research Center's Propulsion Systems Branch evaluated the structural and vibration characteristics of the Kirby Model G-4 fan. Modes of vibration and resonance potential were evaluated in the Holography Test Lab at Lewis. As a result of the Lewis tests and rotor structural evaluation, Kirby engineers gained new insights into their existing design, enabling them to develop a more robust fan for use in their vacuum cleaners.

  13. Two Tracks of Thought: A Structural Model of the Test for Creative Thinking-Drawing Production (TCT-DP)

    ERIC Educational Resources Information Center

    Nogueira, Sara Ibérico; Almeida, Leonor S.; Lima, Tiago Souza

    2017-01-01

    The Test for Creative Thinking-Drawing Production (TCT-DP, Urban & Jellen, 1986) is one of the most used instruments for the assessment of creative potential. Few studies exist regarding its factorial structure, and all of them were limited to using an exploratory approach. The aim of this research was to assess the factorial structure of the…

  14. Characterization of structural connections using free and forced response test data

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Huckelbridge, Arthur A.

    1989-01-01

    The accurate prediction of system dynamic response often has been limited by deficiencies in existing capabilities to characterize connections adequately. Connections between structural components often are mechanically complex and difficult to model accurately. Improved analytical models for connections are needed to improve system dynamic predictions. A procedure for identifying physical connection properties from free and forced response test data is developed, then verified utilizing a system having both a linear and nonlinear connection. Connection properties are computed in terms of physical parameters so that the physical characteristics of the connections can better be understood, in addition to providing improved input for the system model. The identification procedure is applicable to multi-degree of freedom systems, and does not require that the test data be measured directly at the connection locations.

  15. Community models for wildlife impact assessment: a review of concepts and approaches

    USGS Publications Warehouse

    Schroeder, Richard L.

    1987-01-01

    The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper describes input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat-related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with the data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application. This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.

  16. Kinetic model for multidimensional opinion formation

    NASA Astrophysics Data System (ADS)

    Boudin, Laurent; Monaco, Roberto; Salvarani, Francesco

    2010-03-01

    In this paper, we deal with a kinetic model to describe the evolution of opinion in a closed group with respect to a choice between multiple options (e.g., political parties), which takes into account two main mechanisms of opinion formation, namely, the interaction between individuals and the effect of the mass media. We numerically test the model in some relevant cases and finally provide an existence and uniqueness result for it.

  17. Performance Testing of a Trace Contaminant Control Subassembly for the International Space Station

    NASA Technical Reports Server (NTRS)

    Perry, J. L.; Curtis, R. E.; Alexandre, K. L.; Ruggiero, L. L.; Shtessel, N.

    1998-01-01

    As part of the International Space Station (ISS) Trace Contaminant Control Subassembly (TCCS) development, a performance test has been conducted to provide reference data for flight verification analyses. This test, which used the U.S. Habitation Module (U.S. Hab) TCCS as the test article, was designed to add to the existing database on TCCS performance. Included in this database are results obtained during ISS development testing; testing of functionally similar TCCS prototype units; and bench scale testing of activated charcoal, oxidation catalyst, and granular lithium hydroxide (LiOH). The present database has served as the basis for the development and validation of a computerized TCCS process simulation model. This model serves as the primary means for verifying the ISS TCCS performance. In order to mitigate risk associated with this verification approach, the U.S. Hab TCCS performance test provides an additional set of data which serve to anchor both the process model and previously-obtained development test data to flight hardware performance. The following discussion provides relevant background followed by a summary of the test hardware, objectives, requirements, and facilities. Facility and test article performance during the test is summarized, test results are presented, and the TCCS's performance relative to past test experience is discussed. Performance predictions made with the TCCS process model are compared with the U.S. Hab TCCS test results to demonstrate its validation.

  18. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: flexible granularity model, which allows a compromise between two extreme granularity models; communication model, which is capable of precisely describing the interprocessor communication timings and patterns; loop type detection strategy, which identifies different types of loops; critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with some associated communication costs; and loop allocation strategy, which realizes optimum overlapped operations between computation and communication of the system. Using these models, several sample routines of the AIR3D package are examined and tested. The automatically generated codes are highly parallelized to provide the maximum degree of parallelism, achieving speedup on systems of up to 28 to 32 processors. A comparison of parallel codes for both the existing and proposed communication models is performed and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.

  19. Behavioral testing in rodent models of orofacial neuropathic and inflammatory pain

    PubMed Central

    Krzyzanowska, Agnieszka; Avendaño, Carlos

    2012-01-01

    Orofacial pain conditions are often very debilitating to the patient and difficult to treat. While clinical interest is high, the proportion of studies performed in the orofacial region in laboratory animals is relatively low, compared with other body regions. This is partly due to difficulties in testing freely moving animals and therefore lack of reliable testing methods. Here we present a comprehensive review of the currently used rodent models of inflammatory and neuropathic pain adapted to the orofacial areas, taking into account the difficulties and drawbacks of the existing approaches. We examine the available testing methods and procedures used for assessing the behavioral responses in the face in both mice and rats and provide a summary of some pharmacological agents used in these paradigms to date. The use of these agents in animal models is also compared with outcomes observed in the clinic. PMID:23139912

  20. Advanced air revitalization system modeling and testing

    NASA Technical Reports Server (NTRS)

    Dall-Baumann, Liese; Jeng, Frank; Christian, Steve; Edeer, Marybeth; Lin, Chin

    1990-01-01

    To support manned lunar and Martian exploration, an extensive evaluation of air revitalization subsystems (ARS) is being conducted. The major operations under study include carbon dioxide removal and reduction; oxygen and nitrogen production, storage, and distribution; humidity and temperature control; and trace contaminant control. A comprehensive analysis program based on a generalized block flow model was developed to facilitate the evaluation of various processes and their interaction. ASPEN PLUS was used in modelling carbon dioxide removal and reduction. Several life support test stands were developed to test new and existing technologies for their potential applicability in space. The goal was to identify processes which use compact, lightweight equipment and maximize the recovery of oxygen and water. The carbon dioxide removal test stands include solid amine/vacuum desorption (SAVD), regenerative silver oxide chemisorption, and electrochemical carbon dioxide concentration (EDC). Membrane-based carbon dioxide removal and humidity control, catalytic reduction of carbon dioxide, and catalytic oxidation of trace contaminants were also investigated.

  1. Composite Overwrapped Pressure Vessels (COPV) Stress Rupture Test

    NASA Technical Reports Server (NTRS)

    Russell, Richard; Flynn, Howard; Forth, Scott; Greene, Nathanael; Kezian, Michael; Varanauski, Don; Yoder, Tommy; Woodworth, Warren

    2009-01-01

    One of the major concerns for the aging Space Shuttle fleet is the stress rupture life of composite overwrapped pressure vessels (COPVs). Stress rupture life of a COPV has been defined as the minimum time during which the composite maintains structural integrity considering the combined effects of stress levels and time. To assist in the evaluation of the aging COPVs in the Orbiter fleet, an analytical reliability model was developed. The actual data used to construct this model came from testing of COPVs constructed of similar, but not exactly the same, materials and pressure cycles as used on Orbiter vessels. Since no actual Orbiter COPV stress rupture data exists, the Space Shuttle Program decided to run a stress rupture test to compare to model predictions. Due to availability of spares, the testing was unfortunately limited to one 40" vessel. The stress rupture test was performed at maximum operating pressure at an elevated temperature to accelerate aging. The test was performed in two phases. The first phase, at 130 F, was a moderately accelerated test designed to achieve the midpoint of the model-predicted point reliability. The more aggressive second phase, performed at 160 F, was designed to determine whether the test article would exceed the 95% confidence interval of the model. This paper will discuss the results of this test, its implications and possible follow-on testing.

  2. Diagnostic utility of appetite loss in addition to existing prediction models for community-acquired pneumonia in the elderly: a prospective diagnostic study in acute care hospitals in Japan.

    PubMed

    Takada, Toshihiko; Yamamoto, Yosuke; Terada, Kazuhiko; Ohta, Mitsuyasu; Mikami, Wakako; Yokota, Hajime; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuma, Shingo; Fukuhara, Shunichi

    2017-11-08

    Diagnosis of community-acquired pneumonia (CAP) in the elderly is often delayed because of atypical presentation and non-specific symptoms, such as appetite loss, falls and disturbance in consciousness. The aim of this study was to investigate the external validity of existing prediction models and the added value of the non-specific symptoms for the diagnosis of CAP in elderly patients. Prospective cohort study. General medicine departments of three teaching hospitals in Japan. A total of 109 elderly patients who consulted for upper respiratory symptoms between 1 October 2014 and 30 September 2016. The reference standard for CAP was chest radiograph evaluated by two certified radiologists. The existing models were externally validated for diagnostic performance by calibration plot and discrimination. To evaluate the additional value of the non-specific symptoms to the existing prediction models, we developed an extended logistic regression model. Calibration, discrimination, category-free net reclassification improvement (NRI) and decision curve analysis (DCA) were investigated in the extended model. Among the existing models, the model by van Vugt demonstrated the best performance, with an area under the curve of 0.75 (95% CI 0.63 to 0.88); the calibration plot showed good fit despite a significant Hosmer-Lemeshow test (p=0.017). Among the non-specific symptoms, appetite loss had a positive likelihood ratio of 3.2 (2.0-5.3), a negative likelihood ratio of 0.4 (0.2-0.7) and an OR of 7.7 (3.0-19.7). Addition of appetite loss to the model by van Vugt led to improved calibration (p=0.48), an NRI of 0.53 (p=0.019) and higher net benefit by DCA. Information on appetite loss improved the performance of an existing model for the diagnosis of CAP in the elderly.
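
    The likelihood ratios reported above follow directly from a 2x2 table of symptom status against radiograph-confirmed CAP. A minimal Python sketch of that computation, using made-up counts rather than the study's data:

      def likelihood_ratios(tp, fp, fn, tn):
          """Positive/negative likelihood ratios from a 2x2 diagnostic table."""
          sensitivity = tp / (tp + fn)      # P(appetite loss | CAP)
          specificity = tn / (tn + fp)      # P(no appetite loss | no CAP)
          lr_pos = sensitivity / (1 - specificity)
          lr_neg = (1 - sensitivity) / specificity
          return lr_pos, lr_neg

      # Illustrative counts only, not the study data:
      print(likelihood_ratios(tp=24, fp=10, fn=12, tn=63))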

  3. Pressure distributions obtained on a 0.10-scale model of the Space Shuttle Orbiter's forebody in the Ames Unitary Plan Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Siemers, P. M., III; Henry, M. W.

    1986-01-01

    Pressure distribution test data obtained on a 0.10-scale model of the forward fuselage of the Space Shuttle Orbiter are presented without analysis. The tests were completed in the Ames Unitary Plan Wind Tunnel (UPWT). The UPWT tests were conducted in two different test sections operating in the continuous mode, the 8 x 7 foot and 9 x 7 foot test sections. Each test section has its own Mach number range, 1.6 to 2.5 and 2.5 to 3.5 for the 9 x 7 foot and 8 x 7 foot test sections, respectively. The test Reynolds number ranged from 1.6 to 2.5 x 10^6 per foot and 0.6 to 2.0 x 10^6 per foot, respectively. The tests were conducted in support of the development of the Shuttle Entry Air Data System (SEADS). In addition to modeling the 20 SEADS orifices, the wind-tunnel model was also instrumented with orifices to match Development Flight Instrumentation (DFI) port locations that existed on the Space Shuttle Columbia (OV-102) during the Orbiter Flight Test program. This DFI simulation has provided a means for comparisons between reentry flight pressure data and wind-tunnel and computational data.

  4. Muon g - 2 in the aligned two Higgs doublet model

    DOE PAGES

    Han, Tao; Kang, Sin Kyu; Sayre, Joshua

    2016-02-16

    In this paper, we study the Two-Higgs-Doublet Model with the aligned Yukawa sector (A2HDM) in light of the observed excess measured in the muon anomalous magnetic moment. We take into account the existing theoretical and experimental constraints with up-to-date values and demonstrate that a phenomenologically interesting region of parameter space exists. With a detailed parameter scan, we show a much larger region of viable parameter space in this model beyond the limiting case of the Type X 2HDM obtained before. It features the existence of light scalar states with masses 3 GeV ≲ m_H ≲ 50 GeV or 10 GeV ≲ m_A ≲ 130 GeV, with enhanced couplings to tau leptons. The charged Higgs boson is typically heavier, with 200 GeV ≲ m_H+ ≲ 630 GeV. The surviving parameter space is forced into the CP-conserving limit by EDM constraints. Some Standard Model observables may be significantly modified, including a possible new decay mode of the SM-like Higgs boson to four taus. Lastly, we comment on future measurements and direct searches for those effects at the LHC as tests of the model.

  5. Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities

    NASA Astrophysics Data System (ADS)

    Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred

    2012-07-01

    The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general the spacecraft structures are a-priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation, respectively, will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities might exist in spacecraft structures and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structure behaviour. No clear rules exist for dealing with major structural non-linearities. They are handled outside the process by individual analysis and margin policy, and by analyses after tests to justify the CLA coverage. Non-linearities can primarily affect the current spacecraft development and verification process in two respects. Prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist for how to properly linearize a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed. It is thus difficult to ensure that CLA results will cover actual flight levels. Management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed flight representative. If internal non-linearities are present in the tested satellite, it may be difficult to determine which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing excessively high levels. This paper presents the results of a test campaign performed in the frame of an ESA TRP study [1]. A bread-board including typical non-linearities has been designed, manufactured and tested through a typical spacecraft dynamic test campaign. The study demonstrated the capability to perform non-linear dynamic test predictions on a flight-representative spacecraft, the good correlation of test results with Finite Element Model (FEM) predictions, and the possibility of identifying modal behaviour and characterizing non-linearities from test results. As a synthesis of this study, overall guidelines have been derived on the mechanical verification process to improve the level of expertise for tests involving spacecraft with non-linearities.

  6. Modeling the Galaxy-Halo Connection: An open-source approach with Halotools

    NASA Astrophysics Data System (ADS)

    Hearin, Andrew

    2016-03-01

    Although the modern form of galaxy-halo modeling has been in place for over ten years, there exists no common code base for carrying out large-scale structure calculations. Considering, for example, the advances in CMB science made possible by Boltzmann-solvers such as CMBFast, CAMB and CLASS, there are clear precedents for how theorists working in a well-defined subfield can mutually benefit from such a code base. Motivated by these and other examples, I present Halotools: an open-source, object-oriented python package for building and testing models of the galaxy-halo connection. Halotools is community-driven, and already includes contributions from over a dozen scientists spread across numerous universities. Designed with high-speed performance in mind, the package generates mock observations of synthetic galaxy populations with sufficient speed to conduct expansive MCMC likelihood analyses over a diverse and highly customizable set of models. The package includes an automated test suite and extensive web-hosted documentation and tutorials (halotools.readthedocs.org). I conclude the talk by describing how Halotools can be used to analyze existing datasets to obtain robust and novel constraints on galaxy evolution models, and by outlining the Halotools program to prepare the field of cosmology for the arrival of Stage IV dark energy experiments.

  7. Low Activation Joining of SiC/SiC Composites for Fusion Applications: Modeling Miniature Torsion Tests with Elastic and Elastic-Plastic Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henager, Charles H.; Nguyen, Ba Nghiep; Kurtz, Richard J.

    2015-03-01

    The use of SiC and SiC-composites in fission or fusion environments requires joining methods for assembling systems. The international fusion community designed miniature torsion specimens for joint testing and irradiation in test reactors with limited irradiation volumes. These torsion specimens fail out-of-plane when joints are strong and when elastic moduli are within a certain range compared to SiC, which causes difficulties in determining shear strengths for joints or for comparing unirradiated and irradiated joints. A finite element damage model was developed that indicates fracture is likely to occur within the joined pieces, causing out-of-plane failures for miniature torsion specimens, when a certain modulus and strength ratio between the joint material and the joined material exists. The model was extended to treat elastic-plastic joints such as SiC/epoxy and steel/epoxy joints tested as validation of the specimen design.

  8. Mechanical design of a rotary balance system for NASA. Langley Research Center's vertical spin tunnel

    NASA Technical Reports Server (NTRS)

    Allred, J. W.; Fleck, V. J.

    1992-01-01

    A new lightweight Rotary Balance System is presently being fabricated and installed as part of a major upgrade to the existing 20 Foot Vertical Spin Tunnel. This upgrade to improve model testing productivity of the only free spinning vertical wind tunnel includes a modern fan/drive and tunnel control system, an updated video recording system, and the new rotary balance system. The rotary balance is a mechanical apparatus which enables the measurement of aerodynamic force and moment data under spinning conditions (100 rpm). This data is used in spin analysis and is vital to the implementation of large amplitude maneuvering simulations required for all new high performance aircraft. The new rotary balance system described in this report will permit greater test efficiency and improved data accuracy. Rotary Balance testing with the model enclosed in a tare bag can also be performed to obtain resulting model forces from the spinning operation. The rotary balance system will be stored against the tunnel sidewall during free flight model testing.

  9. Validation of catchment models for predicting land-use and climate change impacts. 2. Case study for a Mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.

    1996-02-01

    Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.

  10. Polishing, coating and integration of SiC mirrors for space telescopes

    NASA Astrophysics Data System (ADS)

    Rodolfo, Jacques

    2017-11-01

    In recent years, the technology of SiC mirrors has taken an increasingly significant part in the field of space telescopes. Sagem is involved in the JWST program to manufacture and test the optical components of the NIRSpec instrument. The instrument is made of 3 TMAs and 4 plane mirrors made of SiC. Sagem is in charge of the CVD cladding, the polishing, the coating of the mirrors and the integration and testing of the TMAs. The qualification of the process has been performed through the manufacturing and testing of the qualification model of the FOR TMA. This TMA has shown very good performance both at ambient and during the cryo test. The polishing process has been improved for the manufacturing of the flight model. This improvement has been driven by the BRDF performance of the mirror. This parameter has been deeply analysed and a model has been built to predict the performance of the mirrors. The existing Dittman model has been analysed and found to be optimistic.

  11. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  12. Development, testing, and numerical modeling of a foam sandwich biocomposite

    NASA Astrophysics Data System (ADS)

    Chachra, Ricky

    This study develops a novel sandwich composite material using plant-based materials for potential use in nonstructural building applications. The face sheets comprise woven hemp fabric and a sap-based epoxy, while the core comprises castor-oil-based foam with waste rice hulls as reinforcement. Mechanical properties of the individual materials are tested in uniaxial compression and tension for the foam and hemp, respectively. The sandwich composite is tested in 3-point bending. Flexural results are compared to a finite element model developed in the commercial software Abaqus, and the validated model is then used to investigate alternate sandwich geometries. Sandwich model responses are compared to existing standards for nonstructural building panels, showing that the novel material is roughly half the strength of equally thick drywall. When space limitations are not an issue, a double-thickness sandwich biocomposite is found to be a structurally acceptable replacement for standard gypsum drywall.

  13. The Static and Dynamic Rotary Stability Derivatives at Subsonic Speeds of an Airplane Model Having Wing and Tail Surfaces Swept Back 45 degrees

    NASA Technical Reports Server (NTRS)

    Lopez, Armando E.; Buell, Donald A.; Tinling, Bruce E.

    1959-01-01

    Wind-tunnel measurements were made of the static and dynamic rotary stability derivatives of an airplane model having sweptback wing and tail surfaces. The Mach number range of the tests was from 0.23 to 0.94. The components of the model were tested in various combinations so that the separate contribution to the stability derivatives of the component parts and the interference effects could be determined. Estimates of the dynamic rotary derivatives based on some of the simpler existing procedures which utilize static force data were found to be in reasonable agreement with the experimental results at low angles of attack. The results of the static and dynamic measurements were used to compute the short-period oscillatory characteristics of an airplane geometrically similar to the test model. The results of these calculations are compared with military flying qualities requirements.

  14. Composite Overwrapped Pressure Vessels (COPV): Flight Rationale for the Space Shuttle Program

    NASA Technical Reports Server (NTRS)

    Kezirian, Michael T.; Johnson, Kevin L.; Phoenix, Stuart L.

    2011-01-01

    Each Orbiter Vehicle (Space Shuttle Program) contains up to 24 Kevlar49/Epoxy Composite Overwrapped Pressure Vessels (COPV) for storage of pressurized gases. In the wake of the Columbia accident and the ensuing Return To Flight (RTF) activities, Orbiter engineers reexamined COPV flight certification. The original COPV design calculations were updated to include recently declassified Kevlar COPV test data from Lawrence Livermore National Laboratory (LLNL) and to incorporate changes in how the Space Shuttle was operated as opposed to originally envisioned. 2005 estimates for the probability of a catastrophic failure over the life of the program (from STS-1 through STS-107) were one-in-five. To address this unacceptable risk, the Orbiter Project Office (OPO) initiated a comprehensive investigation to understand and mitigate this risk. First, the team considered and eventually deemed unfeasible procuring and replacing all existing flight COPVs. OPO replaced the two vessels with the highest risk with existing flight spare units. Second, OPO instituted operational improvements in ground procedures to significantly reduce risk, without adversely affecting Shuttle capability. Third, OPO developed a comprehensive model to quantify the likelihood of occurrence. A fully-instrumented burst test (recording a lower burst pressure than expected) on a flight-certified vessel provided critical understanding of the behavior of Orbiter COPVs. A more accurate model was based on a newly-compiled comprehensive database of Kevlar data from LLNL and elsewhere. Considering hardware changes, operational improvements and reliability model refinements, the mean reliability was determined to be 0.998 for the remainder of the Shuttle Program (from 2007, for STS-118 through STS-135). Since limited hardware resources precluded full model validation through multiple tests, additional model confidence was sought through the first-ever Accelerated Stress Rupture Test (ASRT) of a flown flight article. A Bayesian statistical approach was developed to interpret possible test results. Since the lifetime observed in the ASRT exceeded initial estimates by one to two orders of magnitude, the Space Shuttle Program deemed there was significant conservatism in the model and accepted continued operation with existing flight hardware. Given the variability in tank-to-tank original proof-test response, a non-destructive evaluation (NDE) technique utilizing Raman spectroscopy was developed to directly measure COPV residual stress state. Preliminary results showed that patterns of low fiber elastic strains over the outside vessel surface, together with measured permanent volume growth during proof, could be directly correlated to increased fiber stress ratios on the inside fibers adjacent to the liner, and thus reduced reliability.

  15. Process Improvements in Training Device Acceptance Testing: A Study in Total Quality Management

    DTIC Science & Technology

    1990-12-12

    Quality Management, a small group of Government and industry specialists examined the existing training device acceptance test process for potential improvements. The agreed-to mission of the Air Force/Industry partnership was to continuously identify and promote implementable approaches to minimize the cost and time required for acceptance testing while ensuring that validated performance supports the user training requirements. Application of a Total Quality process improvement model focused on the customers and their requirements, analyzed how work was accomplished, and

  16. Carbon deposition model for oxygen-hydrocarbon combustion

    NASA Technical Reports Server (NTRS)

    Bossard, John A.

    1988-01-01

    The objectives are to use existing hardware to verify and extend the database generated on the original test programs. The data to be obtained are the carbon deposition characteristics when methane is used at injection densities comparable to full scale values. The database will be extended to include liquid natural gas (LNG) testing at low injection densities for gas generator/preburner conditions. The testing will be performed at mixture ratios between 0.25 and 0.60, and at chamber pressures between 750 and 1500 psi.

  17. SPSS and SAS programming for the testing of mediation models.

    PubMed

    Dudley, William N; Benuzillo, Jose G; Carrico, Mineh S

    2004-01-01

    Mediation modeling can explain the nature of the relation among three or more variables. In addition, it can be used to show how a variable mediates the relation between levels of intervention and outcome. The Sobel test, developed in 1990, provides a statistical method for determining the influence of a mediator on an intervention or outcome. Although interactive Web-based and stand-alone methods exist for computing the Sobel test, SPSS and SAS programs that automatically run the required regression analyses and computations increase the accessibility of mediation modeling to nursing researchers. The aim of this article is to illustrate the utility of the Sobel test and to make this programming available to the Nursing Research audience in both SAS and SPSS. The history, logic, and technical aspects of mediation testing are introduced. The syntax files sobel.sps and sobel.sas, created to automate the computation of the regression analysis and test statistic, are available from the corresponding author. The reported programming allows the user to complete mediation testing with the user's own data in a single-step fashion. A technical manual included with the programming provides instruction on program use and interpretation of the output. Mediation modeling is a useful tool for describing the relation between three or more variables. Programming and manuals for using this model are made available.
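
    For readers outside SPSS and SAS, the Sobel statistic itself is a one-line computation once the two regression coefficients and their standard errors are in hand. A minimal Python sketch (illustrative variable names, not the article's sobel.sps/sobel.sas syntax):

      import math

      def sobel_test(a, se_a, b, se_b):
          """Sobel z for an indirect effect: a is the predictor-to-mediator
          coefficient, b the mediator-to-outcome coefficient (controlling
          for the predictor), each with its standard error."""
          z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
          p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed normal p-value
          return z, p

      # Example with made-up regression estimates:
      z, p = sobel_test(a=0.42, se_a=0.10, b=0.30, se_b=0.12)
      print(f"Sobel z = {z:.2f}, p = {p:.4f}")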

  18. Development and Testing of Building Energy Model Using Non-Linear Auto Regression Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Arida, Maya Ahmad

    The concept of sustainable development dates to the early 1970s and has since become one of the most important approaches to conserving natural resources and energy. With rising energy costs and increasing awareness of the effects of global warming, the development of building energy saving methods and models has become ever more necessary for a sustainable future. According to the U.S. Energy Information Administration (EIA), buildings in the U.S. today consume 72 percent of the electricity produced and use 55 percent of U.S. natural gas. Buildings account for about 40 percent of the energy consumed in the United States, more than industry or transportation, and heating and cooling systems use about 55 percent of that energy. If energy-use trends continue, buildings will become the largest consumer of global energy by 2025. This thesis proposes procedures and analysis techniques for building energy system modeling and optimization using time series auto-regression artificial neural networks. The model predicts whole-building energy consumption as a function of four input variables: dry bulb and wet bulb outdoor air temperatures, hour of day, and type of day. The proposed model and the optimization process are tested using data collected from an existing building located in Greensboro, NC. The testing results show that the model captures the system performance very well. An optimization method was also developed to automate the process of finding the model structure that produces the most accurate prediction against the actual data. The results show that the developed model can provide results sufficiently accurate for use in various energy efficiency and saving estimation applications.
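
    As a concrete illustration of the model class described (not the thesis code), the sketch below fits a nonlinear autoregressive neural network that predicts the next hour's consumption from lagged consumption plus the four exogenous inputs; the synthetic data and network size are assumptions:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def make_features(kwh, tdb, twb, hour, daytype, lags=3):
          """Lagged consumption + current exogenous inputs -> next value."""
          X, y = [], []
          for t in range(lags, len(kwh)):
              X.append([*kwh[t - lags:t], tdb[t], twb[t], hour[t], daytype[t]])
              y.append(kwh[t])
          return np.array(X), np.array(y)

      rng = np.random.default_rng(0)
      n = 500  # synthetic hourly data, purely for illustration
      hour = np.arange(n) % 24
      daytype = ((np.arange(n) // 24) % 7 < 5).astype(float)  # 1 = weekday
      tdb = 20 + 8 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 1, n)
      twb = tdb - 4 + rng.normal(0, 0.5, n)
      kwh = 50 + 2 * tdb + 10 * daytype + rng.normal(0, 2, n)

      X, y = make_features(kwh, tdb, twb, hour, daytype)
      model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      model.fit(X[:400], y[:400])
      print("held-out R^2:", model.score(X[400:], y[400:]))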

  19. [Evaluation of accuracy of virtual occlusal definition in Angle class I molar relationship].

    PubMed

    Wu, L; Liu, X J; Li, Z L; Wang, X

    2018-02-18

    To evaluate the accuracy of virtual occlusal definition in non-Angle class I molar relationships, and to evaluate its clinical feasibility, twenty pairs of models of orthognathic patients were included in this study. The inclusion criteria were: (1) finished pre-surgical orthodontic treatment and (2) stable final occlusion. The exclusion criteria were: (1) existence of distorted teeth, (2) need for segmentation, (3) defect of dentition except for orthodontic extraction, and (4) existence of tooth spacing. The tooth-extracted test group included 10 models with two premolars extracted during preoperative orthodontic treatment; their molar relationships were not Angle class I. The non-tooth-extracted test group included another 10 models without teeth extracted, so their molar relationships were Angle class I. To define the final occlusion in the virtual environment, two steps were taken: (1) the morphology data of the upper and lower dentitions were digitalized by a surface scanner (Smart Optics/Activity 102; Model-Tray GmbH, Hamburg, Germany); (2) the virtual relationships were defined using 3Shape software. The control standard of final occlusion was manually defined using gypsum models and then digitalized by the surface scanner. The final occlusion of the test groups and the control standard were overlapped according to lower dentition morphology. Errors were evaluated by calculating the distance between the corresponding reference points of the test group and control standard locations. The overall errors for the upper dentition between test group and control standard location were (0.51±0.18) mm in the non-tooth-extracted test group and (0.60±0.36) mm in the tooth-extracted test group. The errors were significantly different between these two test groups (P<0.05). However, in both test groups, the errors of individual teeth within a single dentition did not differ from one another. There was no significant difference between the errors in the tooth-extracted test group and 1 mm (P>0.05), and the error of the non-tooth-extracted group was significantly smaller than 1 mm (P<0.05). The error of virtual occlusal definition for non-class I molar relationships is higher than that for class I relationships, with an accuracy of about 1 mm. However, this accuracy is still feasible for clinical application.
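
    The error metric here is simply the mean distance between corresponding reference points after the two occlusion definitions are aligned on the lower dentition. A minimal Python sketch of that comparison (the point arrays are assumed inputs, not study data):

      import numpy as np

      def mean_correspondence_error(test_points, reference_points):
          """Mean Euclidean distance between corresponding reference points
          of the virtually defined and the manually defined occlusion."""
          test_points = np.asarray(test_points, dtype=float)
          reference_points = np.asarray(reference_points, dtype=float)
          return np.linalg.norm(test_points - reference_points, axis=1).mean()

      # Illustrative 3-D reference points (mm):
      virtual = np.array([[0.0, 0.1, 0.2], [10.3, 0.0, 0.1], [20.1, -0.2, 0.0]])
      manual = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
      print(f"mean error = {mean_correspondence_error(virtual, manual):.2f} mm")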

  20. Integration of HIV/AIDS services into African primary health care: lessons learned for health system strengthening in Mozambique - a case study

    PubMed Central

    2010-01-01

    Introduction In 2004, Mozambique, supported by large increases in international disease-specific funding, initiated a national rapid scale-up of antiretroviral treatment (ART) and HIV care through a vertical "Day Hospital" approach. Though this model showed substantial increases in people receiving treatment, it diverted scarce resources away from the primary health care (PHC) system. In 2005, the Ministry of Health (MOH) began an effort to use HIV/AIDS treatment and care resources as a means to strengthen their PHC system. The MOH worked closely with a number of NGOs to integrate HIV programs more effectively into existing public-sector PHC services. Case Description In 2005, the Ministry of Health and Health Alliance International initiated an effort in two provinces to integrate ART into the existing primary health care system through health units distributed across 23 districts. Integration included: a) placing ART services in existing units; b) retraining existing workers; c) strengthening laboratories, testing, and referral linkages; e) expanding testing in TB wards; f) integrating HIV and antenatal services; and g) improving district-level management. Discussion: By 2008, treatment was available in nearly 67 health facilities in 23 districts. Nearly 30,000 adults were on ART. Over 80,000 enrolled in the HIV/AIDS program. Loss to follow-up from antenatal and TB testing to ART services has declined from 70% to less than 10% in many integrated sites. Average time from HIV testing to ART initiation is significantly faster and adherence to ART is better in smaller peripheral clinics than in vertical day hospitals. Integration has also improved other non-HIV aspects of primary health care. Conclusion The integration approach enables the public sector PHC system to test more patients for HIV, place more patients on ART more quickly and efficiently, reduce loss-to-follow-up, and achieve greater geographic HIV care coverage compared to the vertical model. Through the integration process, HIV resources have been used to rehabilitate PHC infrastructure (including laboratories and pharmacies), strengthen supervision, fill workforce gaps, and improve patient flow between services and facilities in ways that can benefit all programs. Using aid resources to integrate and better link HIV care with existing services can strengthen wider PHC systems. PMID:20180975

  1. Very Low Head Turbine Deployment in Canada

    NASA Astrophysics Data System (ADS)

    Kemp, P.; Williams, C.; Sasseville, Remi; Anderson, N.

    2014-03-01

    The Very Low Head (VLH) turbine is a recent turbine technology developed in Europe for low head sites in the 1.4 - 4.2 m range. The VLH turbine is primarily targeted for installation at existing hydraulic structures to provide a low impact, low cost, yet highly efficient solution. Over 35 VLH turbines have been successfully installed in Europe and the first VLH deployment for North America is underway at Wasdell Falls in Ontario, Canada. Deployment opportunities abound in Canada, with an estimated 80,000 existing structures within North America available for possible low-head hydro development. There are several new considerations and challenges for the deployment of the VLH turbine technology in Canada in adapting to the hydraulic, environmental, electrical and social requirements. Several studies were completed to determine suitable approaches and design modifications to mitigate risk and confirm turbine performance. Diverse types of existing weirs and spillways pose certain hydraulic design challenges. Physical and numerical modelling of the VLH deployment alternatives allowed for performance optimization. For this application, studies characterizing the influence of upstream obstacles using water tunnel model testing as well as full-scale prototype flow dynamics testing were completed. A Cold Climate Adaptation (CCA) package was developed to allow year-round turbine operation in ice-covered rivers. The CCA package facilitates turbine extraction and accommodates ice forces, frazil ice, ad-freezing and cold temperatures that are not present at the European sites. The Permanent Magnet Generator (PMG) presents some unique challenges in meeting Canadian utility interconnection requirements. Specific attention to the frequency driver control and protection requirements resulted in a driver design with greater over-voltage capability for the PMG as well as other key attributes. Environmental studies in Europe included fish friendliness testing comprised of multiple in-river live passage tests for a wide variety of fish species. The latest test results indicate fish passage survivability close to 100%. Further fish studies are planned in Canada later this year. Successful deployment must meet societal requirements to gain community acceptance and public approval. Aesthetic considerations include low noise, disguised control buildings and careful turbine integration into the low-profile existing structures. The resulting design was selected for deployment at existing historic National Park waterway structures. The integration of all of these design elements permits the successful deployment of the VLH turbine in Canada.

  2. Evidence-based selection of training compounds for use in the mechanism-based integrated prediction of drug-induced liver injury in man.

    PubMed

    Dragovic, Sanja; Vermeulen, Nico P E; Gerets, Helga H; Hewitt, Philip G; Ingelman-Sundberg, Magnus; Park, B Kevin; Juhila, Satu; Snoeys, Jan; Weaver, Richard J

    2016-12-01

    The current test systems employed by the pharmaceutical industry are poorly predictive for drug-induced liver injury (DILI). The 'MIP-DILI' project addresses this situation by the development of innovative preclinical test systems which are both mechanism-based and of physiological, pharmacological and pathological relevance to DILI in humans. An iterative, tiered approach with respect to test compounds, test systems, bioanalysis and systems analysis is adopted to evaluate existing models and develop new models that can provide validated test systems with respect to the prediction of specific forms of DILI and further elucidation of mechanisms. An essential component of this effort is the choice of a compound training set that will be used to inform refinement and/or development of new model systems that allow prediction based on knowledge of mechanisms, in a tiered fashion. In this review, we focus on the selection of MIP-DILI training compounds for mechanism-based evaluation of non-clinical prediction of DILI. The selected compounds address both hepatocellular and cholestatic DILI patterns in man, covering a broad range of pharmacologies and chemistries, and taking into account available data on potential DILI mechanisms (e.g. mitochondrial injury, reactive metabolites, biliary transport inhibition, and immune responses). Known mechanisms by which these compounds are believed to cause liver injury have been described; many if not all drugs in this review appear to exhibit multiple toxicological mechanisms. Thus, the training compound selection offers a valuable tool to profile DILI mechanisms and to interrogate existing and novel in vitro systems for the prediction of human DILI.

  3. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

    Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
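
    A minimal Python sketch of the bilinear head interpolation the report proposes, assuming a simple grid of cell-center heads (illustrative layout, not tied to any particular model code):

      import numpy as np

      def bilinear_head(head, xc, yc, x, y):
          """head[i, j]: heads at cell centers of the large-scale model;
          xc, yc: monotonically increasing center coordinates; (x, y): a
          target point on the small-scale model perimeter (interior only)."""
          j = np.searchsorted(xc, x) - 1   # column of the cell to the left
          i = np.searchsorted(yc, y) - 1   # row of the cell below
          tx = (x - xc[j]) / (xc[j + 1] - xc[j])
          ty = (y - yc[i]) / (yc[i + 1] - yc[i])
          return ((1 - tx) * (1 - ty) * head[i, j]
                  + tx * (1 - ty) * head[i, j + 1]
                  + (1 - tx) * ty * head[i + 1, j]
                  + tx * ty * head[i + 1, j + 1])

      # Example on a coarse 3x3 grid of heads:
      xc = np.array([0.0, 100.0, 200.0])
      yc = np.array([0.0, 100.0, 200.0])
      head = np.array([[10.0, 9.5, 9.0], [9.8, 9.3, 8.8], [9.6, 9.1, 8.6]])
      print(bilinear_head(head, xc, yc, x=150.0, y=50.0))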

  4. USGS Arctic Ocean Carbon Cruise 2012: Field Activity L-01-12-AR to collect carbon data in the Arctic Ocean, August-September 2012

    USGS Publications Warehouse

    Robbins, Lisa L.; Wynn, Jonathan; Knorr, Paul O.; Onac, Bogdan; Lisle, John T.; McMullen, Katherine Y.; Yates, Kimberly K.; Byrne, Robert H.; Liu, Xuewu

    2014-01-01

    During the cruise, underway continuous and discrete water samples were collected, and discrete water samples were collected at stations to document the carbonate chemistry of the Arctic waters and quantify the saturation state of seawater with respect to calcium carbonate. These data are critical for providing baseline information in areas where no data previously existed and will also be used to test existing models and predict future trends.

  5. Experimental study of main rotor tip geometry and tail rotor interactions in hover. Volume 1. Text and figures

    NASA Technical Reports Server (NTRS)

    Balch, D. T.; Lombardi, J.

    1985-01-01

    A model scale hover test was conducted in the Sikorsky Aircraft Model Rotor Hover Facility to identify and quantify the impact of the tail rotor on the demonstrated advantages of advanced geometry tip configurations. The test was conducted using the Basic Model Test Rig and two scaled main rotor systems, one representing a 1/5.727 scale UH-60A BLACK HAWK and the other a 1/4.71 scale S-76. Eight alternate rotor tip configurations were tested, 3 on the BLACK HAWK rotor and 6 on the S-76 rotor. Four of these tips were then selected for testing in close proximity to an operating tail rotor (operating in both tractor and pusher modes) to determine if the performance advantages that could be obtained from the use of advanced geometry tips in a main rotor only environment would still exist in the more complex flow field involving a tail rotor. The test showed that overall the tail rotor effects on the advanced tip configurations tested are not substantially different from the effects on conventional tips.

  6. Predicted Sensitivity for Tests of Short-range Gravity with a Novel Parallel-plate Torsion Pendulum

    NASA Astrophysics Data System (ADS)

    Richards, Matthew; Baxley, Brandon; Hoyle, C. D.; Leopardi, Holly; Shook, David

    2011-11-01

    The parallel-plate torsion pendulum apparatus at Humboldt State University is designed to test the Weak Equivalence Principle (WEP) and the gravitational inverse-square law (ISL) of General Relativity at unprecedented levels in the sub-millimeter regime. Some versions of String Theory predict additional dimensions that might affect the ISL at sub-millimeter levels. Some models also predict the existence of unobserved subatomic particles which, if they exist, could cause a violation of the WEP at short distances. Short-range tests of gravity and the WEP are also instrumental in investigating proposed mechanisms that attempt to explain the accelerated expansion of the universe, generally attributed to Dark Energy. The weakness of the gravitational force makes measurement very difficult at small scales. Testing such a weak force requires highly isolated experimental systems and precise measurement and control instrumentation. Moreover, a dedicated test of the WEP has not been performed below the millimeter scale. This talk will discuss the improved sensitivity that we expect to achieve in short-range gravity tests with respect to previous efforts that employ different experimental configurations.

  7. Tracking signal test to monitor an intelligent time series forecasting model

    NASA Astrophysics Data System (ADS)

    Deng, Yan; Jaraiedi, Majid; Iskander, Wafik H.

    2004-03-01

    Extensive research has been conducted on the subject of intelligent time series forecasting, including many variations on the use of neural networks. However, investigation of model adequacy over time, after the training process is completed, remains to be fully explored. In this paper we demonstrate how a smoothed-error tracking signal test can be incorporated into a neuro-fuzzy model to monitor the forecasting process and serve as a statistical measure for keeping the forecasting model up-to-date. The proposed monitoring procedure is effective in detecting nonrandom changes due to model inadequacy, lack of unbiasedness in the estimation of model parameters, or deviations from existing patterns. This powerful detection device will result in improved forecast accuracy in the long run. An example data set is used to demonstrate the application of the proposed method.
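
    A minimal Python sketch of a smoothed-error tracking signal of the kind described (a standard Trigg-style monitor; the smoothing constant, priors and alarm threshold are illustrative assumptions):

      def tracking_signal(errors, beta=0.1, threshold=0.5):
          """Smoothed-error tracking signal for a stream of forecast errors;
          values persistently near +/-1 indicate a biased, out-of-date model."""
          e_smooth, mad = 0.0, 1.0  # assumed priors: zero mean error, unit MAD
          results = []
          for e in errors:
              e_smooth = beta * e + (1 - beta) * e_smooth  # smoothed error
              mad = beta * abs(e) + (1 - beta) * mad       # smoothed |error|
              ts = e_smooth / mad
              results.append((ts, abs(ts) > threshold))    # flag nonrandom drift
          return results

      # Unbiased noise keeps the signal low; a sustained bias trips the alarm:
      errors = [1, -1, 2, -2, 1, -1] + [3, 4, 3, 4, 3, 4]
      for ts, alarm in tracking_signal(errors):
          print(f"TS = {ts:+.2f}  alarm = {alarm}")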

  8. Estimation of phosphorus loss from agricultural land in the Heartland region using the APEX model: a first step to evaluating phosphorus indices

    USDA-ARS?s Scientific Manuscript database

    Purpose. Phosphorus (P) indices are a key tool to minimize P loss from agricultural fields but there is insufficient water quality data to fully test them. Our goal is to use the Agricultural Policy/Environmental eXtender Model (APEX), calibrated with existing edge-of-field runoff data, to refine P...

  9. The Role of Diaspora in University-Industry Relationships in Globalised Knowledge Economy: The Case of Palestine

    ERIC Educational Resources Information Center

    Sharabati-Shahin, Mervat H. N.; Thiruchelvam, K.

    2013-01-01

    University-industry (U-I) linkage is not a new concept. Although there are models for such linkage that have been tested or used, they may remain unsuitable in certain countries and communities. With the unique situation of the Palestinians, the existing models may fall short of meeting the specific needs and targets of establishing such a…

  10. Operational methods of HIV testing in emergency departments: a systematic review.

    PubMed

    Haukoos, Jason S; White, Douglas A E; Lyons, Michael S; Hopkins, Emily; Calderon, Yvette; Kalish, Brian; Rothman, Richard E

    2011-07-01

    Casual review of existing literature reveals a multitude of individualized approaches to emergency department (ED) HIV testing. Cataloging the operational options of each approach could assist translation by disseminating existing knowledge, endorsing variability as a means to address testing barriers, and laying a foundation for future work in the area of operational models and outcomes investigation. The objective of this study is to provide a detailed account of the various models and operational constructs that have been described for performing HIV testing in EDs. Systematic review of PUBMED, EMBASE, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and the Web of Science through February 6, 2009 was performed. Three investigators independently reviewed all potential abstracts and identified all studies that met the following criteria for inclusion: original research, performance of HIV testing in an ED in the United States, description of operational methods, and reporting of specific testing outcomes. Each study was independently assessed and data from each were abstracted with standardized instruments. Summary and pooled descriptive statistics were reported by using recently published nomenclature and definitions for ED HIV testing. The primary search yielded 947 potential studies, of which 25 (3%) were included in the final analysis. Of the 25 included studies, 13 (52%) reported results using nontargeted screening as the only patient selection method. Most programs reported using voluntary, opt-in consent and separate, signed consent forms. A variety of assays and communication methods were used, but relatively limited outcomes data were reported. Currently, limited evidence exists to inform HIV testing practices in EDs. There appears to be recent progression toward the use of rapid assays and nontargeted patient selection methods, with the rate at which reports are published in the peer-reviewed literature increasing. Additional research will be required, including controlled clinical trials, more structured program evaluation, and a focus on an expanded profile of outcome measures, to further improve our understanding of which HIV testing methods are most effective in the ED.

  11. Validation of an arterial tortuosity measure with application to hypertension collection of clinical hypertensive patients

    PubMed Central

    2011-01-01

    Background: Hypertension may increase tortuosity or twistedness of arteries. We applied a centerline extraction algorithm and tortuosity metric to magnetic resonance angiography (MRA) brain images to quantitatively measure the tortuosity of arterial vessel centerlines. The most commonly used arterial tortuosity measure is the distance factor metric (DFM). This study tested a DFM based measurement’s ability to detect increases in arterial tortuosity of hypertensives using existing images. Existing images presented challenges such as different resolutions which may affect the tortuosity measurement, different depths of the area imaged, and different artifacts of imaging that require filtering. Methods: The stability and accuracy of alternative centerline algorithms was validated in numerically generated models and test brain MRA data. Existing images were gathered from previous studies and clinical medical systems by manually reading electronic medical records to identify hypertensives and negatives. Images of different resolutions were interpolated to similar resolutions. Arterial tortuosity in MRA images was measured from a DFM curve and tested on numerically generated models as well as MRA images from two hypertensive and three negative control populations. Comparisons were made between different resolutions, different filters, hypertensives versus negatives, and different negative controls. Results: In tests using numerical models of a simple helix, the measured tortuosity increased as expected with more tightly coiled helices. Interpolation reduced resolution-dependent differences in measured tortuosity. The Korean hypertensive population had significantly higher arterial tortuosity than its corresponding negative control population across multiple arteries. In addition one negative control population of different ethnicity had significantly less arterial tortuosity than the other two. Conclusions: Tortuosity can be compared between images of different resolutions by interpolating from lower to higher resolutions. Use of a universal negative control was not possible in this study. The method described here detected elevated arterial tortuosity in a hypertensive population compared to the negative control population and can be used to study this relation in other populations. PMID:22166145
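
    For reference, the distance factor metric itself is simple: the arc length of the vessel centerline divided by the straight-line distance between its endpoints (sometimes reported as that ratio minus one). A minimal sketch, assuming the centerline is available as an ordered array of 3-D points (names illustrative, not from the paper):

    import numpy as np

    def dfm_tortuosity(centerline: np.ndarray) -> float:
        """Distance factor metric: arc length / chord length of a centerline.

        centerline: (N, 3) array of ordered points along the vessel.
        Returns a value >= 1; larger means more tortuous.
        """
        segment_lengths = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
        arc_length = segment_lengths.sum()
        chord = np.linalg.norm(centerline[-1] - centerline[0])
        return arc_length / chord

    # Example: a helix grows more tortuous as it coils more tightly,
    # matching the numerical-model behavior reported above.
    t = np.linspace(0.0, 4.0 * np.pi, 500)
    helix = np.column_stack([np.cos(t), np.sin(t), 0.5 * t])
    print(dfm_tortuosity(helix))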

  12. Validation of an arterial tortuosity measure with application to hypertension collection of clinical hypertensive patients.

    PubMed

    Diedrich, Karl T; Roberts, John A; Schmidt, Richard H; Kang, Chang-Ki; Cho, Zang-Hee; Parker, Dennis L

    2011-10-18

    Hypertension may increase tortuosity or twistedness of arteries. We applied a centerline extraction algorithm and tortuosity metric to magnetic resonance angiography (MRA) brain images to quantitatively measure the tortuosity of arterial vessel centerlines. The most commonly used arterial tortuosity measure is the distance factor metric (DFM). This study tested a DFM based measurement's ability to detect increases in arterial tortuosity of hypertensives using existing images. Existing images presented challenges such as different resolutions which may affect the tortuosity measurement, different depths of the area imaged, and different artifacts of imaging that require filtering. The stability and accuracy of alternative centerline algorithms was validated in numerically generated models and test brain MRA data. Existing images were gathered from previous studies and clinical medical systems by manually reading electronic medical records to identify hypertensives and negatives. Images of different resolutions were interpolated to similar resolutions. Arterial tortuosity in MRA images was measured from a DFM curve and tested on numerically generated models as well as MRA images from two hypertensive and three negative control populations. Comparisons were made between different resolutions, different filters, hypertensives versus negatives, and different negative controls. In tests using numerical models of a simple helix, the measured tortuosity increased as expected with more tightly coiled helices. Interpolation reduced resolution-dependent differences in measured tortuosity. The Korean hypertensive population had significantly higher arterial tortuosity than its corresponding negative control population across multiple arteries. In addition one negative control population of different ethnicity had significantly less arterial tortuosity than the other two. Tortuosity can be compared between images of different resolutions by interpolating from lower to higher resolutions. Use of a universal negative control was not possible in this study. The method described here detected elevated arterial tortuosity in a hypertensive population compared to the negative control population and can be used to study this relation in other populations.

  13. Longitudinal driver model and collision warning and avoidance algorithms based on human driving databases

    NASA Astrophysics Data System (ADS)

    Lee, Kangwon

    Intelligent vehicle systems, such as Adaptive Cruise Control (ACC) or Collision Warning/Collision Avoidance (CW/CA), are currently under development, and several companies have already offered ACC on selected models. Control or decision-making algorithms of these systems are commonly evaluated under extensive computer simulations and well-defined scenarios on test tracks. However, they have rarely been validated with large quantities of naturalistic human driving data. This dissertation utilized two University of Michigan Transportation Research Institute databases (Intelligent Cruise Control Field Operational Test and System for Assessment of Vehicle Motion Environment) in the development and evaluation of longitudinal driver models and CW/CA algorithms. First, to examine how drivers normally follow other vehicles, the vehicle motion data from the databases were processed using a Kalman smoother. The processed data were then used to fit and evaluate existing longitudinal driver models (e.g., the linear follow-the-leader model, Newell's special model, the nonlinear follow-the-leader model, the linear optimal control model, the Gipps model and the optimal velocity model). A modified version of the Gipps model was proposed and found to be accurate in both microscopic (vehicle) and macroscopic (traffic) senses. Second, to examine emergency braking behavior and to evaluate CW/CA algorithms, the concepts of signal detection theory and a performance index suitable for unbalanced situations (few threatening data points vs. many safe data points) were introduced. Selected existing CW/CA algorithms were found to have a performance index (geometric mean of true-positive rate and precision) not exceeding 20%. To optimize the parameters of the CW/CA algorithms, a new numerical optimization scheme was developed to replace the original data points with their representative statistics. A new CW/CA algorithm was proposed, which was found to score higher than 55% in the performance index. This dissertation provides a model of how drivers follow lead vehicles that is much more accurate than other models in the literature. Furthermore, the data-based approach was used to confirm that a CW/CA algorithm utilizing lead-vehicle braking was substantially more effective than existing algorithms, leading to collision warning systems that are much more likely to contribute to driver safety.
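
    For reference, the performance index described above is directly computable from a confusion matrix; a minimal sketch, with illustrative names (the thresholding that turns algorithm output into warnings is assumed to have happened upstream):

    import numpy as np

    def performance_index(y_true, y_warn):
        """Geometric mean of true-positive rate (recall) and precision.

        Suited to unbalanced data: few threatening events among many safe ones.
        y_true: 1 where a threat actually existed, else 0.
        y_warn: 1 where the CW/CA algorithm issued a warning, else 0.
        """
        y_true, y_warn = np.asarray(y_true), np.asarray(y_warn)
        tp = np.sum((y_true == 1) & (y_warn == 1))
        fn = np.sum((y_true == 1) & (y_warn == 0))
        fp = np.sum((y_true == 0) & (y_warn == 1))
        tpr = tp / (tp + fn) if (tp + fn) else 0.0
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        return float(np.sqrt(tpr * precision))

    # Example: 3 of 4 threats caught, but 12 false alarms among 96 safe points.
    print(performance_index([1]*4 + [0]*96, [1]*3 + [0] + [1]*12 + [0]*84))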

  14. Design, fabrication, test and delivery of a K-band antenna breadboard model

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The results of a research effort to develop a Ku-band single channel monopulse antenna with significant improvements in efficiency and bandwidth are reported. A single aperture, multimode horn, utilized in a near field Cassegrainian configuration, was the technique selected for achieving the desired efficiency and bandwidth performance. In order to provide wide polarization flexibility, a wire-grid space-filter polarizer was developed. A solid state switching network with appropriate driving electronics provides the receive channel sum and difference signal interface with an existing Apollo-type tracking electronics subsystem. A full scale breadboard model of the antenna was fabricated and tested. Performance of the model was well within the requirements and goals of the contract.

  15. Software reliability perspectives

    NASA Technical Reports Server (NTRS)

    Wilson, Larry; Shen, Wenhui

    1987-01-01

    Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of these models' failures can be attributed to the random nature of the debugging data given to them as input, and it poses the problem of correcting this defect as an area of future research.
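
    For context, the debugging-data models in question are typically fitted to cumulative failure counts; a minimal sketch fitting the classic Goel-Okumoto NHPP growth curve (a representative model of this class, not one endorsed by the paper) to hypothetical debugging data:

    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        """Expected cumulative failures by time t: m(t) = a * (1 - exp(-b t))."""
        return a * (1.0 - np.exp(-b * t))

    # Hypothetical debugging data: test time (hours) vs. cumulative failures.
    t = np.array([10, 20, 40, 80, 160, 320], dtype=float)
    failures = np.array([12, 21, 33, 45, 53, 58], dtype=float)

    (a_hat, b_hat), _ = curve_fit(goel_okumoto, t, failures, p0=[60.0, 0.01])
    print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.4f}")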

  16. 2014 Assessment of the Ballistic Missile Defense System (BMDS)

    DTIC Science & Technology

    2015-03-23

    ...take several more years to collect the test data needed to adequately VV&A the BMDS M&S required to perform such assessments. As data are collected ...Accreditation is possible only if a sufficient quantity and quality of flight test data have been collected to support model verification and

  17. Using Pre-test/Post-test Data To Evaluate the Effectiveness of Computer Aided Instruction (A Study of CAI and Its Use with Developmental Reading Students).

    ERIC Educational Resources Information Center

    Lansford, Carl E.

    As computer aided instruction (CAI) and distance learning become more popular, a model for easily evaluating these teaching methods must be developed, one which will enable replication of the study each year. This paper discusses the results of a study using existing dependent and independent variables to evaluate CAI for developmental reading…

  18. Evaluation of Icing Scaling on Swept NACA 0012 Airfoil Models

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Lee, Sam

    2012-01-01

    Icing scaling tests in the NASA Glenn Icing Research Tunnel (IRT) were performed on swept wing models using existing recommended scaling methods that were originally developed for straight wings. Some needed modifications of the stagnation-point local collection efficiency (i.e., β0) calculation and the corresponding convective heat transfer coefficient for swept NACA 0012 airfoil models were studied and reported in 2009, and those correlations are used in the current study. The reference tests used a 91.4-cm chord, 152.4-cm span, adjustable-sweep airfoil model of NACA 0012 profile at velocities of 100 and 150 knots and MVD of 44 and 93 μm. Scale-to-reference model size ratio was 1:2.4. All tests were conducted at 0° angle of attack (AoA) and 45° sweep angle. Ice shape comparison results were presented for stagnation-point freezing fractions in the range of 0.4 to 1.0. Preliminary results showed that good scaling was achieved for the conditions tested by using the modified scaling methods developed for swept wings.

  19. Laminar flow studies of a low-temperature space radiator model using D-shaped tubes

    NASA Technical Reports Server (NTRS)

    Cintula, T. C.; Prok, G. M.; Johnston, D. B.

    1972-01-01

    Test results of a low-temperature space radiator model are presented. Radiator performance is evaluated with a low-thermal-conductivity fluid in laminar flow in D-shaped cross-section tubes. The test covered a Reynolds number range from 50 to 4500 and a fluid temperature range from 294 to 414 K (70 to 286 F). For low-temperature radiators, the fluid-to-surface temperature differential was predominantly influenced by fluid temperature in laminar flow. Heat transfer and pressure drop for the radiator tube could be predicted within engineering accuracy from existing correlations.
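
    For reference, predictions "from existing correlations" of this kind typically start from the Reynolds number built on the hydraulic diameter; a minimal sketch using circular-tube laminar constants (Darcy f = 64/Re, Nu = 3.66 for constant wall temperature) as stand-ins, since a D-shaped cross-section would substitute its own shape-specific constants:

    import math

    def laminar_tube_estimates(m_dot, rho, mu, k_fluid, d_h, length):
        """Rough laminar estimates from textbook circular-tube correlations.

        m_dot: mass flow (kg/s); rho: density (kg/m^3); mu: viscosity (Pa*s);
        k_fluid: fluid conductivity (W/m-K); d_h: hydraulic diameter (m);
        length: tube length (m). A D-shaped tube would swap in its own
        friction and Nusselt constants.
        """
        area = math.pi * d_h ** 2 / 4.0        # flow area, circular approximation
        velocity = m_dot / (rho * area)
        re = rho * velocity * d_h / mu
        f = 64.0 / re                          # Darcy friction factor, laminar
        dp = f * (length / d_h) * 0.5 * rho * velocity ** 2
        h = 3.66 * k_fluid / d_h               # Nu = 3.66, constant wall temperature
        return re, dp, h

    # Example with illustrative fluid properties, not data from the test:
    print(laminar_tube_estimates(m_dot=5e-4, rho=900.0, mu=5e-3,
                                 k_fluid=0.12, d_h=0.005, length=1.5))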

  20. An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base

    NASA Technical Reports Server (NTRS)

    Ragusa, J. M.

    1973-01-01

    The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.

  1. Modelling of the 10-micrometer natural laser emission from the mesospheres of Mars and Venus

    NASA Technical Reports Server (NTRS)

    Deming, D.; Mumma, M. J.

    1983-01-01

    The NLTE radiative transfer problem is solved to obtain the 00°1 vibrational state population. This model successfully reproduces the existing center-to-limb observations, although higher spatial resolution observations are needed for a definitive test. The model also predicts total fluxes which are close to the observed values. The strength of the emission is predicted to be closely related to the instantaneous near-IR solar heating rate.

  2. Modeling of the 10-micron natural laser emission from the mesospheres of Mars and Venus

    NASA Technical Reports Server (NTRS)

    Deming, D.; Mumma, M. J.

    1983-01-01

    The NLTE radiative transfer problem is solved to obtain the 00°1 vibrational state population. This model successfully reproduces the existing center-to-limb observations, although higher spatial resolution observations are needed for a definitive test. The model also predicts total fluxes which are close to the observed values. The strength of the emission is predicted to be closely related to the instantaneous near-IR solar heating rate.

  3. The Development of a Modelling Solution to Address Manpower and Personnel Issues Using the IPME

    DTIC Science & Technology

    2010-11-01

    training for a military system. It deals with the number of personnel spaces and available people. One of the main concerns in this domain is to...are often addressed by examining existing solutions for similar systems and/or a trial-and-error method based on human-in-the-loop tests. Such an...significant effort and resources on the development of a human performance modelling software, the Integrated Performance Modelling Environment (IPME

  4. Study of solid state photomultiplier

    NASA Technical Reports Server (NTRS)

    Hays, K. M.; Laviolette, R. A.

    1987-01-01

    Available solid state photomultiplier (SSPM) detectors were tested under low-background, low-temperature conditions to determine the conditions producing optimal sensitivity in a space-based astronomy system such as a liquid-helium-cooled telescope in orbit. Detector temperatures varied between 6 and 9 K, with background flux ranging from 10^13 to less than 10^6 photons/sq cm-s. Measured parameters included quantum efficiency, noise, dark current, and spectral response. Experimental data were reduced, analyzed, and combined with existing data to build the SSPM data base included herein. The results were compared to analytical models of SSPM performance where appropriate models existed. Analytical models presented here were developed to be as consistent with the data base as practicable. Significant differences between the theory and data are described. Some models were developed or updated as a result of this study.

  5. Pain sensitivity mediates the relationship between stress and headache intensity in chronic tension-type headache.

    PubMed

    Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John

    2012-01-01

    A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. Pain sensitivity significantly mediated the relationship between stress and headache intensity. The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed.
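
    A nonparametric bootstrap test of mediation like the one described resamples subjects with replacement, re-estimates the indirect (stress → pain sensitivity → headache) effect each time, and checks whether the resulting confidence interval excludes zero; a minimal sketch on simulated data (not the study's data; names illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 53  # matches the sample size above

    # Simulated stand-ins for the three measures (z-scored units).
    stress = rng.normal(size=n)
    pain_sensitivity = 0.5 * stress + rng.normal(size=n)   # mediator
    headache = 0.4 * pain_sensitivity + 0.1 * stress + rng.normal(size=n)

    def indirect_effect(x, m, y):
        """a*b: slope of mediator on x, times slope of y on mediator given x."""
        a = np.polyfit(x, m, 1)[0]
        design = np.column_stack([np.ones_like(x), x, m])
        b = np.linalg.lstsq(design, y, rcond=None)[0][2]
        return a * b

    boot = []
    for _ in range(5000):
        idx = rng.integers(0, n, size=n)  # resample subjects with replacement
        boot.append(indirect_effect(stress[idx], pain_sensitivity[idx], headache[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # mediation if CI excludes 0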

  6. Perspectives on Creating Clinically Relevant Blast Models for Mild Traumatic Brain Injury and Post Traumatic Stress Disorder Symptoms

    PubMed Central

    Brenner, Lisa A.; Bahraini, Nazanin; Hernández, Theresa D.

    2012-01-01

    Military personnel are returning from Iraq and Afghanistan and reporting non-specific physical (somatic), behavioral, psychological, and cognitive symptoms. Many of these symptoms are frequently associated with mild traumatic brain injury (mTBI) and/or post traumatic stress disorder (PTSD). Despite significant attention and advances in assessment and intervention for these two conditions, challenges persist. To address this, clinically relevant blast models are essential in the full characterization of this type of injury, as well as in the testing and identification of potential treatment strategies. In this publication, existing diagnostic challenges and current treatment practices for mTBI and/or PTSD will be summarized, along with suggestions regarding how what has been learned from existing models of PTSD and traditional mechanism (e.g., non-blast) traumatic brain injury can be used to facilitate the development of clinically relevant blast models. PMID:22408635

  7. Recent "Ground Testing" Experiences in the National Full-Scale Aerodynamics Complex

    NASA Technical Reports Server (NTRS)

    Zell, Peter; Stich, Phil; Sverdrup, Jacobs; George, M. W. (Technical Monitor)

    2002-01-01

    The large test sections of the National Full-scale Aerodynamics Complex (NFAC) wind tunnels provide ideal controlled wind environments to test ground-based objects and vehicles. Though this facility was designed and provisioned primarily for aeronautical testing requirements, several experiments have been designed to utilize existing model mount structures to support "non-flying" systems. This presentation will discuss some of the ground-based testing capabilities of the facility and provide examples of ground-based tests conducted in the facility to date. It will also address some future work envisioned and solicit input from the SATA membership on ways to improve the service that NASA makes available to customers.

  8. Single shaft automotive gas turbine engine characterization test

    NASA Technical Reports Server (NTRS)

    Johnson, R. A.

    1979-01-01

    An automotive gas turbine incorporating a single stage centrifugal compressor and a single stage radial inflow turbine is described. Among the engine's features is the use of wide range variable geometry at the inlet guide vanes, the compressor diffuser vanes, and the turbine inlet vanes to achieve improved part load fuel economy. The engine was tested to determine its performance in both the variable geometry and equivalent fixed geometry modes. Testing was conducted without the originally designed recuperator. Test results were compared with the predicted performance of the nonrecuperative engine based on existing component rig test maps. Agreement between test results and the computer model was achieved.

  9. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    NASA Astrophysics Data System (ADS)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
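
    The core modification described, treating the mean and standard deviation of Gaussian input errors as extra uncertain parameters, simply enlarges the vector sampled by the Markov chain; a toy random-walk Metropolis sketch (a stand-in rating model, not SRH-1D; all names and numbers illustrative):

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in: predicted output from a rating coefficient `a` and the
    # (error-corrected) measured upstream inflow.
    measured_inflow, measured_output = 100.0, 55.0

    def log_posterior(theta):
        a, err_mean, err_sd = theta  # input error treated as N(err_mean, err_sd)
        if err_sd <= 0.0 or a <= 0.0:
            return -np.inf
        pred = a * (measured_inflow + err_mean)
        log_lik = -0.5 * ((measured_output - pred) / 2.0) ** 2        # obs. noise sd = 2
        log_prior = -0.5 * (err_mean / err_sd) ** 2 - np.log(err_sd)  # err_mean ~ N(0, err_sd)
        return log_lik + log_prior

    # Random-walk Metropolis over (a, err_mean, err_sd).
    theta = np.array([0.5, 0.0, 5.0])
    chain = []
    for _ in range(20000):
        proposal = theta + rng.normal(scale=[0.01, 0.5, 0.2])
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal
        chain.append(theta.copy())
    chain = np.array(chain)[5000:]  # drop burn-in before summarizing uncertainty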

  10. A model for predicting thermal properties of asphalt mixtures from their constituents

    NASA Astrophysics Data System (ADS)

    Keller, Merlin; Roche, Alexis; Lavielle, Marc

    Numerous theoretical and experimental approaches have been developed to predict the effective thermal conductivity of composite materials such as polymers, foams, epoxies, soils and concrete. None of these models has been applied to asphalt concrete. This study attempts to develop a model to predict the thermal conductivity of asphalt concrete from its constituents, which would benefit the asphalt industry by reducing the cost and time of laboratory testing: testing would no longer be required if a mix with the desired thermal properties could be designed at the outset by selecting the right constituents. This thesis investigated six existing predictive models for applicability to asphalt mixtures, and four standard mathematical techniques were used to develop a regression model to predict the effective thermal conductivity. The effective thermal conductivities of 81 asphalt specimens were used as the response variables, and the thermal conductivities and volume fractions of their constituents were used as the predictors. The statistical analyses showed that the measured thermal conductivities of the mixtures are affected by the bitumen and aggregate content, but not by the air content. By contrast, the predictions of some of the investigated models are highly sensitive to air voids, but not to bitumen and/or aggregate content. Additionally, comparison of the experimental with analytical data showed that none of the existing models gave satisfactory results; on the other hand, two regression models (Exponential 1* and Linear 3*) are promising for asphalt concrete.
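
    A regression of the general kind investigated predicts effective conductivity from constituent volume fractions; a minimal ordinary-least-squares sketch on made-up specimens (the paper's "Exponential 1*" and "Linear 3*" refer to its own fitted forms, which are not reproduced here):

    import numpy as np

    # Hypothetical specimens: bitumen, aggregate, air volume fractions and the
    # measured effective thermal conductivity (W/m-K). Values are illustrative.
    fractions = np.array([[0.12, 0.83, 0.05],
                          [0.10, 0.85, 0.05],
                          [0.14, 0.79, 0.07],
                          [0.11, 0.82, 0.07]])
    k_measured = np.array([1.45, 1.52, 1.38, 1.43])

    # Fractions sum to one, so one column is redundant with an intercept;
    # regress on bitumen and aggregate content only.
    A = np.column_stack([np.ones(len(fractions)), fractions[:, 0], fractions[:, 1]])
    coef, *_ = np.linalg.lstsq(A, k_measured, rcond=None)
    k_predicted = A @ coef
    print(coef, k_predicted)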

  11. Addressing species diversity in biotransformation: variability in expressed transcripts of hepatic biotransformation enzymes among fishes

    EPA Science Inventory

    There is increasing evidence that diverse xenobiotic metabolizing enzymes exist among fishes, potentially resulting in different chemical sensitivities and accumulation, but this has never been systematically evaluated. One concern is that model test species such as rainbow trou...

  12. On the Genealogy of Asexual Diploids

    NASA Astrophysics Data System (ADS)

    Lam, Fumei; Langley, Charles H.; Song, Yun S.

    Given molecular genetic data from diploid individuals that, at present, reproduce mostly or exclusively asexually without recombination, an important problem in evolutionary biology is detecting evidence of past sexual reproduction (i.e., meiosis and mating) and recombination (both meiotic and mitotic). However, currently there is a lack of computational tools for carrying out such a study. In this paper, we formulate a new problem of reconstructing diploid genealogies under the assumption of no sexual reproduction or recombination, with the ultimate goal being to devise genealogy-based tools for testing deviation from these assumptions. We first consider the infinite-sites model of mutation and develop linear-time algorithms to test the existence of an asexual diploid genealogy compatible with the infinite-sites model of mutation, and to construct one if it exists. Then, we relax the infinite-sites assumption and develop an integer linear programming formulation to reconstruct asexual diploid genealogies with the minimum number of homoplasy (back or recurrent mutation) events. We apply our algorithms on simulated data sets with sizes of biological interest.

  13. A toolbox and a record for scientific model development

    NASA Technical Reports Server (NTRS)

    Ellman, Thomas

    1994-01-01

    Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two-pronged approach for handling these tasks: (1) designing a 'Model Development Toolbox' that includes a basic set of model constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two-pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, an existing knowledge-based scientific software design tool.

  14. The evolution of endothermy in Cenozoic mammals: a plesiomorphic-apomorphic continuum.

    PubMed

    Lovegrove, Barry Gordon

    2012-02-01

    The evolution of endothermy in birds and mammals was one of the most important events in the evolution of the vertebrates. Past tests of hypotheses on the evolution of endothermy in mammals have relied largely on analyses of the relationship between basal and maximum metabolic rate, and artificial selection experiments. I argue that components of existing hypotheses, as well as new hypotheses, can be tested using an alternative macrophysiological modeling approach by examining the development of endothermy during the Cenozoic. Recent mammals display a 10°C range in body temperature which is sufficiently large to identify the selective forces that have driven the development of endothermy from a plesiomorphic (ancestral) Cretaceous or Jurassic condition. A model is presented (the Plesiomorphic-Apomorphic Endothermy Model, PAE Model) which proposes that heterothermy, i.e. bouts of normothermy (constant body temperature) interspersed with adaptive heterothermy (e.g. daily torpor and/or hibernation), was the ancestral condition from which apomorphic (derived), rigid homeothermy evolved. All terrestrial mammal lineages are examined for existing data to test the model, as well as for missing data that could be used to test the model. With the exception of Scandentia and Dermoptera, about which little is known, all mammalian orders that include small-sized mammals (<500 g), have species which are heterothermic and display characteristics of endothermy which fall somewhere along a plesiomorphic-apomorphic continuum. Orders which do not have heterothermic representatives (Cetartiodactyla, Perissodactyla, Pholidota, and Lagomorpha) are comprised of medium- to large-sized mammals that have either lost the capacity for heterothermy, or in which heterothermy has yet to be measured. Mammalian heterothermy seems to be plesiomorphic and probably evolved once in the mammalian lineage. Several categories of endothermy are identified (protoendothermy, plesioendothermy, apoendothermy, basoendothermy, mesoendothermy, supraendothermy, and reversed mesoendothermy) to describe the evolution of endothermy during the Cenozoic. The PAE Model should facilitate the testing of hypotheses using a range of macrophysiological methods (e.g. the comparative method and the reconstruction of ancestral states). © 2011 The Author. Biological Reviews © 2011 Cambridge Philosophical Society.

  15. Modeling the dynamic crush of impact mitigating materials

    NASA Astrophysics Data System (ADS)

    Logan, R. W.; McMichael, L. D.

    1995-05-01

    Crushable materials are commonly utilized in the design of structural components to absorb energy and mitigate shock during the dynamic impact of a complex structure, such as an automobile chassis or drum-type shipping container. The development and application of several finite-element material models which have been developed at various times at LLNL for DYNA3D are discussed. Between them, these models account for several of the predominant mechanisms which typically influence the dynamic mechanical behavior of crushable materials. One issue we addressed was that no single existing model would account for the entire gamut of constitutive features which are important for crushable materials. Thus, we describe the implementation and use of an additional material model which attempts to provide a more comprehensive model of the mechanics of crushable material behavior. This model combines features of the pre-existing DYNA models and incorporates some new features as well in an invariant large-strain formulation. In addition to examining the behavior of a unit cell in uniaxial compression, two cases were chosen to evaluate the capabilities and accuracy of the various material models in DYNA. In the first case, a model for foam-filled box beams was developed and compared to test data from a four-point bend test. The model was subsequently used to study its effectiveness in energy absorption in an aluminum-extrusion spaceframe vehicle chassis. The second case examined the response of the AT-400A shipping container and the performance of the overpack material during accident environments selected from 10CFR71 and IAEA regulations.

  16. Modeling of Instrument Landing System (ILS) localizer signal on runway 25L at Los Angeles International Airport

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.; Knox, Charles E.

    1994-01-01

    A joint NASA/FAA flight test has been made to record instrument landing system (ILS) localizer receiver signals for use in mathematically modeling the ILS localizer for future simulation studies and airplane flight tracking tasks. The flight test was conducted on a portion of the ILS localizer installed on runway 25L at the Los Angeles International Airport. The tests covered the range from 10 to 32 n.mi. from the localizer antenna. Precision radar tracking information was compared with the recorded localizer deviation data. Data analysis showed that the ILS signal centerline was offset to the left of runway centerline by 0.071 degrees and that no significant bends existed on the localizer beam. Suggested simulation models for the ILS localizer are formed from a statistical analysis.

  17. Causality analysis in business performance measurement system using system dynamics methodology

    NASA Astrophysics Data System (ADS)

    Yusof, Zainuridah; Yusoff, Wan Fadzilah Wan; Maarof, Faridah

    2014-07-01

    One of the main components of the Balanced Scorecard (BSC) that differentiates it from any other performance measurement system (PMS) is the Strategy Map with its unidirectional causality feature. Despite its apparent popularity, the causality assumption has been rigorously criticized by earlier researchers. In seeking empirical evidence of causality, propositions based on the service profit chain theory were developed and tested using an econometric analysis, the Granger causality test, on the 45 data points. However, well-established causality models proved insufficient, as only 40% of the causal linkages were supported by the data. Expert knowledge was suggested for use in situations where historical data are insufficient. The Delphi method was selected and conducted to obtain consensus on the existence of causality among 15 selected experts through 3 rounds of questionnaires. The study revealed that only 20% of the propositions were not supported. Both methods found bidirectional causality, which demonstrates significant dynamic environmental complexity through interaction among the measures. With that, a computer model and simulation using the System Dynamics (SD) methodology was developed as an experimental platform to identify how policies impact business performance in such environments. Reproduction, sensitivity and extreme-condition tests were conducted on the developed SD model to ensure its ability to mimic reality, and its robustness and validity as a causality analysis platform. This study applied a theoretical service management model within the BSC domain to a practical situation, using SD methodology where very limited work has been done.
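
    The econometric step described is a standard pairwise Granger test: lagged values of one scorecard measure either do or do not improve prediction of another; a minimal sketch with statsmodels on synthetic series standing in for the 45 data points (variable names illustrative):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(2)
    n = 45  # matches the number of data points cited above

    # Synthetic stand-ins: customer satisfaction follows employee satisfaction
    # with a one-period lag, by construction.
    employee = rng.normal(size=n).cumsum()
    customer = np.concatenate([[0.0], employee[:-1]]) + rng.normal(scale=0.5, size=n)

    data = pd.DataFrame({"customer": customer, "employee": employee})
    # Tests H0: "employee" does NOT Granger-cause "customer", at lags 1 and 2.
    results = grangercausalitytests(data[["customer", "employee"]], maxlag=2)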

  18. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.

  19. A Syrian golden hamster model recapitulating ebola hemorrhagic fever.

    PubMed

    Ebihara, Hideki; Zivcec, Marko; Gardner, Donald; Falzarano, Darryl; LaCasse, Rachel; Rosenke, Rebecca; Long, Dan; Haddock, Elaine; Fischer, Elizabeth; Kawaoka, Yoshihiro; Feldmann, Heinz

    2013-01-15

    Ebola hemorrhagic fever (EHF) is a severe viral infection for which no effective treatment or vaccine is currently available. While the nonhuman primate (NHP) model is used for final evaluation of experimental vaccines and therapeutic efficacy, rodent models have been widely used in ebolavirus research because of their convenience. However, the validity of rodent models has been questioned given their low predictive value for efficacy testing of vaccines and therapeutics, a result of the inconsistent manifestation of coagulopathy seen in EHF. Here, we describe a lethal Syrian hamster model of EHF using mouse-adapted Ebola virus. Infected hamsters displayed most clinical hallmarks of EHF, including severe coagulopathy and uncontrolled host immune responses. Thus, the hamster seems to be superior to the existing rodent models, offering a better tool for understanding the critical processes in pathogenesis and providing a new model for evaluating prophylactic and postexposure interventions prior to testing in NHPs.

  20. A Syrian Golden Hamster Model Recapitulating Ebola Hemorrhagic Fever

    PubMed Central

    Ebihara, Hideki; Zivcec, Marko; Gardner, Donald; Falzarano, Darryl; LaCasse, Rachel; Rosenke, Rebecca; Long, Dan; Haddock, Elaine; Fischer, Elizabeth; Kawaoka, Yoshihiro; Feldmann, Heinz

    2013-01-01

    Ebola hemorrhagic fever (EHF) is a severe viral infection for which no effective treatment or vaccine is currently available. While the nonhuman primate (NHP) model is used for final evaluation of experimental vaccines and therapeutic efficacy, rodent models have been widely used in ebolavirus research because of their convenience. However, the validity of rodent models has been questioned given their low predictive value for efficacy testing of vaccines and therapeutics, a result of the inconsistent manifestation of coagulopathy seen in EHF. Here, we describe a lethal Syrian hamster model of EHF using mouse-adapted Ebola virus. Infected hamsters displayed most clinical hallmarks of EHF, including severe coagulopathy and uncontrolled host immune responses. Thus, the hamster seems to be superior to the existing rodent models, offering a better tool for understanding the critical processes in pathogenesis and providing a new model for evaluating prophylactic and postexposure interventions prior to testing in NHPs. PMID:23045629

  1. Plenum response to simulated disturbances of the model and fan inlet guide vanes in a transonic tunnel

    NASA Technical Reports Server (NTRS)

    Gloss, B. B.

    1980-01-01

    In order to aid in the design of the National Transonic Facility (NTF) control system, test section/plenum response studies were carried out in a 0.186 scale model of the NTF high speed duct. Two types of disturbances, those induced by the model and those induced by the compressor inlet guide vanes were simulated. Some observations with regard to the test section/plenum response tests are summarized as follows. A resonance frequency for the test section/plenum area of the tunnel of approximately 50 Hz was observed for Mach numbers from 0.40 to 0.90. However, since the plenum is 3.1 times (based on volume) too large for the scaled size of the test section, care must be taken in extrapolating these data to NTF conditions. The plenum pressure data indicate the existence of pressure gradients in the plenum. The test results indicate that the difference between test section static pressure and plenum pressure is dependent on test section flow conditions. Plenum response to inlet guide vane type disturbances appears to be slower than plenum response to test section disturbances.

  2. Effect of DC voltage pulses on memristor behavior.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Brian R.

    2013-10-01

    Current knowledge of memristor behavior is limited to a few physical models for which little comprehensive data collection has taken place. The purpose of this research is to collect data in search of exploitable memristor behavior by designing and implementing tests on an HP Labs Rev2 Memristor Test Board. The results are then graphed in their optimal format for conceptualizing behavioral patterns. This series of experiments established the existence of an additional memristor state affecting the behavior of memristors when pulsed with positively polarized DC voltages. This effect has been observed across multiple memristors and data sets. The following pages outline the process that led to the hypothesized existence and eventual proof of this additional state of memristor behavior.

  3. Magnetic suspension and balance systems (MSBSs)

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.; Kilgore, Robert A.

    1987-01-01

    The problems of wind tunnel testing are outlined, with attention given to the problems caused by mechanical support systems, such as support interference, dynamic-testing restrictions, and low productivity. The basic principles of magnetic suspension are highlighted, along with the history of magnetic suspension and balance systems. Roll control, size limitations, high angle of attack, reliability, position sensing, and calibration are discussed among the problems and limitations of the existing magnetic suspension and balance systems. Examples of the existing systems are presented, and design studies for future systems are outlined. Problems specific to large-scale magnetic suspension and balance systems, such as high model loads, requirements for high-power electromagnets, high-capacity power supplies, highly sophisticated control systems and position sensors, and high costs are assessed.

  4. Suicide risk factors for young adults: testing a model across ethnicities.

    PubMed

    Gutierrez, P M; Rodriguez, P J; Garcia, P

    2001-06-01

    A general path model based on existing suicide risk research was developed to test factors contributing to current suicidal ideation in young adults. A sample of 673 undergraduate students completed a packet of questionnaires containing the Beck Depression Inventory, Adult Suicidal Ideation Questionnaire, and Multi-Attitude Suicide Tendency Scale. They also provided information on history of suicidality and exposure to attempted and completed suicide in others. Structural equation modeling was used to test the fit of the data to the hypothesized model. Goodness-of-fit indices were adequate and supported the interactive effects of exposure, repulsion by life, depression, and history of self-harm on current ideation. Model fit for three subgroups based on race/ethnicity (i.e., White, Black, and Hispanic) determined that repulsion by life and depression function differently across groups. Implications of these findings for current methods of suicide risk assessment and future research are discussed in the context of the importance of culture.

  5. Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.

    PubMed

    Shao, Lijing

    2014-03-21

    The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. It constitutes the first detailed and systematic test of the pure-gravity sector of minimal standard model extension with the state-of-the-art pulsar observations. No deviation from GR was detected. The limits of LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They are all improved over existing limits by significant factors of tens to hundreds. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Paul A.; Cooper, Candice Frances; Burnett, Damon J.

    Light body armor development for the warfighter is based on trial-and-error testing of prototype designs against ballistic projectiles. Torso armor testing against blast is virtually nonexistent but necessary to ensure adequate protection against injury to the heart and lungs. In this report, we discuss the development of a high-fidelity human torso model, its merging with the existing Sandia Human Head-Neck Model, and development of the modeling & simulation (M&S) capabilities necessary to simulate wound injury scenarios. Using the new Sandia Human Torso Model, we demonstrate the advantage of virtual simulation in the investigation of wound injury as it relates to the warfighter experience. We present the results of virtual simulations of blast loading and ballistic projectile impact to the torso with and without notional protective armor. In this manner, we demonstrate the advantages of applying a modeling and simulation approach to the investigation of wound injury and relative merit assessments of protective body armor without the need for trial-and-error testing.

  7. Used fuel rail shock and vibration testing options analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Steven B.; Best, Ralph E.; Klymyshyn, Nicholas A.

    2014-09-25

    The objective of the rail shock and vibration tests is to complete the framework needed to quantify loads of fuel assembly components that are necessary to guide materials research and establish a technical basis for review organizations such as the U.S. Nuclear Regulatory Commission (NRC). A significant body of experimental and numerical modeling data exists to quantify loads and failure limits applicable to normal conditions of transport (NCT) rail transport, but the data are based on assumptions that can only be verified through experimental testing. The test options presented in this report represent possible paths for acquiring the data that are needed to confirm the assumptions of previous work, validate modeling methods that will be needed for evaluating transported fuel on a case-by-case basis, and inform material test campaigns on the anticipated range of fuel loading. The ultimate goal of this testing is to close all of the existing knowledge gaps related to the loading of used fuel under NCT conditions and inform the experiments and analysis program on specific endpoints for their research. The options include tests that would use an actual railcar, surrogate assemblies, and real or simulated rail transportation casks. The railcar carrying the cradle, cask, and surrogate fuel assembly payload would be moved in a train operating over rail track modified or selected to impart shock and vibration forces that occur during normal rail transportation. Computer modeling would be used to help design surrogates that may be needed for a rail cask, a cask’s internal basket, and a transport cradle. The objective of the design of surrogate components would be to provide a test platform that effectively simulates responses to rail shock and vibration loads that would be exhibited by state-of-the-art rail cask, basket, and/or cradle structures. The computer models would also be used to help determine the placement of instrumentation (accelerometers and strain gauges) on the surrogate fuel assemblies, cask and cradle structures, and the railcar so that forces and deflections that would result in the greatest potential for damage to high burnup and long-cooled UNF can be determined. For purposes of this report we consider testing on controlled track where we have control of the track and speed to facilitate modeling.

  8. MAFsnp: A Multi-Sample Accurate and Flexible SNP Caller Using Next-Generation Sequencing Data

    PubMed Central

    Hu, Jiyuan; Li, Tengfei; Xiu, Zidi; Zhang, Hong

    2015-01-01

    Most existing statistical methods developed for calling single nucleotide polymorphisms (SNPs) using next-generation sequencing (NGS) data are based on Bayesian frameworks, and there does not exist any SNP caller that produces p-values for calling SNPs in a frequentist framework. To fill this gap, we develop a new method, MAFsnp, a Multiple-sample based Accurate and Flexible algorithm for calling SNPs with NGS data. MAFsnp is based on an estimated likelihood ratio test (eLRT) statistic. In practical situations, the involved parameter is very close to the boundary of the parametric space, so standard large-sample theory is not suitable for evaluating the finite-sample distribution of the eLRT statistic. Observing that the distribution of the test statistic is a mixture of zero and a continuous part, we propose to model the test statistic with a novel two-parameter mixture distribution. Once the parameters in the mixture distribution are estimated, p-values can be easily calculated for detecting SNPs, and the multiple-testing corrected p-values can be used to control false discovery rate (FDR) at any pre-specified level. With simulated data, MAFsnp is shown to have much better control of FDR than the existing SNP callers. Through application to two real datasets, MAFsnp is also shown to outperform the existing SNP callers in terms of calling accuracy. An R package “MAFsnp” implementing the new SNP caller is freely available at http://homepage.fudan.edu.cn/zhangh/softwares/. PMID:26309201
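
    The p-value calculation described reduces to P(T >= t) = w*S(t) for t > 0, where w is the weight of the continuous mixture component and S its survival function; a minimal sketch that, purely for illustration, uses a scaled chi-square for the continuous part (the paper's exact two-parameter form is not reproduced here), followed by a Benjamini-Hochberg adjustment for FDR control:

    import numpy as np
    from scipy import stats

    def mixture_pvalue(t_obs, w_cont, scale, df=1.0):
        """P-value under T ~ (1 - w_cont)*delta_0 + w_cont * scale * chi2(df).

        w_cont and scale play the role of the two estimated mixture parameters;
        the chi-square form is an illustrative choice, not the paper's.
        """
        t_obs = np.asarray(t_obs, dtype=float)
        surv = stats.chi2.sf(t_obs / scale, df)
        return np.where(t_obs > 0, w_cont * surv, 1.0)  # zero statistics: p = 1

    def bh_adjust(p):
        """Benjamini-Hochberg adjusted p-values for FDR control."""
        p = np.asarray(p)
        order = np.argsort(p)
        ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
        adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
        out = np.empty_like(adjusted)
        out[order] = np.clip(adjusted, 0.0, 1.0)
        return out

    # Illustrative eLRT statistics at four candidate sites:
    pvals = mixture_pvalue([0.0, 0.5, 3.2, 8.7], w_cont=0.4, scale=1.3)
    print(pvals, bh_adjust(pvals))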

  9. Computer simulation of solder joint failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burchett, S.N.; Frear, D.R.; Rashid, M.M.

    The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue for electronic packages. The purpose of this Laboratory Directed Research and Development (LDRD) project was to develop computational tools for simulating the behavior of solder joints under strain and temperature cycling, taking into account the microstructural heterogeneities that exist in as-solidified near eutectic Sn-Pb joints, as well as subsequent microstructural evolution. The authors present two computational constitutive models, a two-phase model and a single-phase model, that were developed to predict the behavior of near eutectic Sn-Pb solder joints under fatigue conditions. Unique metallurgical tests provide the fundamental input for the constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near eutectic Sn-Pb solder. The finite element simulations with this model agree qualitatively with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single-phase model was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. Special thermomechanical fatigue tests were developed to give fundamental materials input to the models, and an in situ SEM thermomechanical fatigue test system was developed to characterize microstructural evolution and the mechanical behavior of solder joints during the test. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests. The simulation results from the two-phase model showed good fit to the experimental test results.

  10. Using existing case-mix methods to fund trauma cases.

    PubMed

    Monakova, Julia; Blais, Irene; Botz, Charles; Chechulin, Yuriy; Picciano, Gino; Basinski, Antoni

    2010-01-01

    Policymakers frequently face the need to increase funding in isolated and frequently heterogeneous (clinically and in terms of resource consumption) patient subpopulations. This article presents a methodologic solution for testing the appropriateness of using existing grouping and weighting methodologies for funding subsets of patients in the scenario where a case-mix approach is preferable to a flat-rate based payment system. Using as an example the subpopulation of trauma cases of Ontario lead trauma hospitals, the statistical techniques of linear and nonlinear regression models, regression trees, and spline models were applied to examine the fit of the existing case-mix groups and reference weights for the trauma cases. The analyses demonstrated that for funding Ontario trauma cases, the existing case-mix systems can form the basis for rational and equitable hospital funding, decreasing the need to develop a different grouper for this subset of patients. This study confirmed that Injury Severity Score is a poor predictor of costs for trauma patients. Although our analysis used the Canadian case-mix classification system and cost weights, the demonstrated concept of using existing case-mix systems to develop funding rates for specific subsets of patient populations may be applicable internationally.

  11. Full-scale testing and numerical modeling of a multistory masonry structure subjected to internal blast loading

    NASA Astrophysics Data System (ADS)

    Zapata, Brian Jarvis

    As military and diplomatic representatives of the United States are deployed throughout the world, they must frequently make use of local, existing facilities; it is inevitable that some of these will be load bearing unreinforced masonry (URM) structures. Although generally suitable for conventional design loads, load bearing URM presents a unique hazard, with respect to collapse, when exposed to blast loading. There is therefore a need to study the blast resistance of load bearing URM construction in order to better protect US citizens assigned to dangerous locales. To address this, the Department of Civil and Environmental Engineering at the University of North Carolina at Charlotte conducted three blast tests inside a decommissioned, coal-fired, power plant prior to its scheduled demolition. The power plant's walls were constructed of URM and provided an excellent opportunity to study the response of URM walls in-situ. Post-test analytical studies investigated the ability of existing blast load prediction methodologies to model the case of a cylindrical charge with a low height of burst. It was found that even for the relatively simple blast chamber geometries of these tests, simplified analysis methods predicted blast impulses with an average net error of 22%. The study suggested that existing simplified analysis methods would benefit from additional development to better predict blast loads from cylinders detonated near the ground's surface. A hydrocode, CTH, was also used to perform two and three-dimensional simulations of the blast events. In order to use the hydrocode, Jones Wilkins Lee (JWL) equation of state (EOS) coefficients were developed for the experiment's Unimax dynamite charges; a novel energy-scaling technique was developed which permits the derivation of new JWL coefficients from an existing coefficient set. The hydrocode simulations were able to simulate blast impulses with an average absolute error of 34.5%. Moreover, the hydrocode simulations provided highly resolved spatio-temporal blast loading data for subsequent structural simulations. Equivalent single-degree-of-freedom (ESDOF) structural response models were then used to predict the out-of-plane deflections of blast chamber walls. A new resistance function was developed which permits a URM wall to crack at any height; numerical methodologies were also developed to compute transformation factors required for use in the ESDOF method. When combined with the CTH derived blast loading predictions, the ESDOF models were able to predict out-of-plane deflections with reasonable accuracy. Further investigations were performed using finite element models constructed in LS-DYNA; the models used elastic elements combined with contacts possessing a tension/shear cutoff and the ability to simulate fracture energy release. Using the CTH predicted blast loads and carefully selected constitutive parameters, the LS-DYNA models were able to both qualitatively and quantitatively predict blast chamber wall deflections and damage patterns. Moreover, the finite element models suggested several modes of response which cannot be modeled by current ESDOF methods; the effect of these response modes on the accuracy of ESDOF predictions warrants further study.
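
    For context, an ESDOF wall calculation of the kind described integrates M_e*u'' + R(u) = F(t) with a piecewise resistance function; a minimal central-difference sketch with an elastic, capped resistance (parameters and the triangular pulse are illustrative, not values from the dissertation):

    import numpy as np

    def esdof_response(mass, k_elastic, r_max, load, dt):
        """Central-difference integration of an equivalent SDOF wall model.

        mass: equivalent mass (kg); k_elastic: initial stiffness (N/m);
        r_max: resistance plateau (N); load: blast force samples (N);
        dt: time step (s). Returns the displacement history (m).
        """
        u = np.zeros(len(load))
        for i in range(1, len(load) - 1):
            # Resistance capped at r_max (a simplification; a true
            # elastic-plastic rule would also track permanent set).
            resistance = np.clip(k_elastic * u[i], -r_max, r_max)
            accel = (load[i] - resistance) / mass
            u[i + 1] = 2.0 * u[i] - u[i - 1] + accel * dt ** 2
        return u

    # Hypothetical triangular blast pulse: 8 kN peak decaying over 10 ms.
    dt, t_end = 1e-5, 0.05
    t = np.arange(0.0, t_end, dt)
    force = np.where(t < 0.01, 8e3 * (1.0 - t / 0.01), 0.0)
    disp = esdof_response(mass=500.0, k_elastic=2.0e6, r_max=6e3, load=force, dt=dt)
    print(f"peak deflection: {disp.max() * 1000:.1f} mm")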

  12. The design and development of a triaxial wear-testing joint simulator.

    PubMed

    Green, A S; O'Connell, M K; Lyons, A S; James, S P

    1999-01-01

    Most of the existing wear testers created to wear test total hip replacements, specifically the acetabular component, are designed to exert only an axial force and provide rotation in a close approximation of the actual femoral movement. The Rocky Mountain Joint Simulator was designed to exert three orthogonal forces and provide rotations about the X-, Y- and Z-axes to more closely simulate the physiological forces and motions found in the human gait cycle. The RMJS was also designed with adaptability for other joints, such as knees or canine hips, through the use of hydraulics and a computer-programmable control system. Such adaptability and functionality allows the researcher to more closely model a gait cycle, thereby obtaining wear patterns that resemble those found in retrieved implants more closely than existing simulators. Research is ongoing into the tuning and evaluation of the machine and preliminary acetabular component wear test results will be presented at the conference.

  13. In search of antiepileptogenic treatments for post-traumatic epilepsy.

    PubMed

    Saletti, Patricia G; Ali, Idrish; Casillas-Espinosa, Pablo M; Semple, Bridgette D; Lisgaras, Christos; Moshé, Solomon L; Galanopoulou, Aristea S

    2018-06-21

    Post-traumatic epilepsy (PTE) occurs in 20% of individuals with acquired epilepsy, and can impact significantly the quality of life due to the seizures and other functional or cognitive and behavioral outcomes of the traumatic brain injury (TBI) and PTE. There is no available antiepileptogenic or disease modifying treatment for PTE. Animal models of TBI and PTE have been developed, offering useful insights on the value of inflammatory, neurodegenerative pathways, hemorrhages and iron accumulation, calcium channels and other target pathways that could be used for treatment development. Most of the existing preclinical studies test efficacy towards pathologies of functional recovery after TBI, while a few studies are emerging testing the effects towards induced or spontaneous seizures. Here we review the existing preclinical trials testing new candidate treatments for TBI sequelae and PTE, and discuss future directions for efforts aiming at developing antiepileptogenic and disease-modifying treatments. Copyright © 2018. Published by Elsevier Inc.

  14. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

    Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.

  15. Hot tearing studies in AA5182

    NASA Astrophysics Data System (ADS)

    van Haaften, W. M.; Kool, W. H.; Katgerman, L.

    2002-10-01

    One of the major problems during direct chill (DC) casting is hot tearing. These tears initiate during solidification of the alloy and may run through the entire ingot. To study the hot tearing mechanism, tensile tests were carried out in semisolid state and at low strain rates, and crack propagation was studied in situ by scanning electron microscopy (SEM). These experimentally induced cracks were compared with hot tears developed in an AA5182 ingot during a casting trial in an industrial research facility. Similarities in the microstructure of the tensile test specimens and the hot tears indicate that hot tearing can be simulated by performing tensile tests at semisolid temperatures. The experimental data were compared with existing hot tearing models and it was concluded that the latter are restricted to relatively high liquid fractions because they do not take into account the existence of solid bridges in the crack.

  16. Comparison of model propeller tests with airfoil theory

    NASA Technical Reports Server (NTRS)

    Durand, William F; Lesley, E P

    1925-01-01

    The purpose of the investigation covered by this report was the examination of the degree of approach which may be anticipated between laboratory tests on model airplane propellers and results computed by the airfoil theory, based on tests of airfoils representative of successive blade sections. It is known that the corrections of angles of attack and for aspect ratio, speed, and interference rest either on experimental data or on somewhat uncertain theoretical assumptions. The general situation as regards these four sets of corrections is far from satisfactory, and while it is recognized that occasion exists for the consideration of such corrections, their determination in any given case is a matter of considerable uncertainty. There exists at the present time no theory generally accepted and sufficiently comprehensive to indicate the amount of such corrections, and the application to individual cases of the experimental data available is, at best, uncertain. While the results of this first phase of the investigation are less positive than had been hoped might be the case, the establishment of the general degree of approach between the two sets of results which might be anticipated on the basis of this simpler mode of application seems to have been desirable.

  17. Wind-Tunnel Investigations of Blunt-Body Drag Reduction Using Forebody Surface Roughness

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Sprague, Stephanie; Naughton, Jonathan W.; Curry, Robert E. (Technical Monitor)

    2001-01-01

    This paper presents results of wind-tunnel tests that demonstrate a novel drag reduction technique for blunt-based vehicles. For these tests, the forebody roughness of a blunt-based model was modified using micromachined surface overlays. As forebody roughness increases, the boundary layer at the model aft thickens and reduces the shearing effect of the external flow on the separated flow behind the base region, resulting in reduced base drag. For vehicle configurations with large base drag, existing data predict that a small increment in forebody friction drag will result in a relatively large decrease in base drag. If the added increment in forebody skin drag is optimized with respect to base drag, reducing the total drag of the configuration is possible. The wind-tunnel test results conclusively demonstrate the existence of a forebody drag-base drag optimal point. The data demonstrate that the base drag coefficient corresponding to the drag minimum lies between 0.225 and 0.275, referenced to the base area. Most importantly, the data show a drag reduction of approximately 15% when the drag optimum is reached. When this drag reduction is scaled to the X-33 base area, drag savings approaching 45,000 N (10,000 lbf) can be realized.

  18. Implementation of Chaotic Gaussian Particle Swarm Optimization for Optimize Learning-to-Rank Software Defect Prediction Model Construction

    NASA Astrophysics Data System (ADS)

    Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.

    2018-03-01

    Finding the existence of software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction activity is required not only to state the existence of defects, but also to give a prioritized list of which modules require more intensive testing. Therefore, the allocation of test resources can be managed efficiently. Learning to rank is one approach that can provide defect module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We have used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using chaotic Gaussian particle swarm optimization achieve better accuracy on 5 data sets, tie on 5 data sets, and do worse on 1 data set. Thus, we conclude that the application of chaotic Gaussian particle swarm optimization in the learning-to-rank approach can improve the accuracy of the defect module ranking in data sets that have high-dimensional features.
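
    The abstract does not specify the exact chaotic Gaussian PSO operators used; the sketch below is a generic illustration in which a logistic map drives the inertia weight and Gaussian draws replace the usual uniform acceleration coefficients, with a stand-in objective instead of the paper's learning-to-rank loss.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def objective(x):                    # stand-in for a learning-to-rank loss
        return np.sum(x**2, axis=1)

    n_particles, dim, iters = 20, 5, 100
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), objective(x)
    gbest = pbest[np.argmin(pbest_f)]
    z = 0.7                              # chaotic state of the logistic map

    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)          # logistic map: chaotic inertia weight
        w = 0.4 + 0.5 * z
        c1 = np.abs(rng.normal(0.0, 1.0, (n_particles, dim)))  # Gaussian draws
        c2 = np.abs(rng.normal(0.0, 1.0, (n_particles, dim)))
        v = w * v + c1 * (pbest - x) + c2 * (gbest - x)
        x = x + v
        f = objective(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]

    print(gbest, pbest_f.min())          # best position and fitness found
    ```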

  19. Development of an Experiment High Performance Nozzle Research Program

    NASA Technical Reports Server (NTRS)

    2004-01-01

    As proposed in the above OAI/NASA Glenn Research Center (GRC) Co-Operative Agreement, the objective of the work was to provide consultation and assistance to the NASA GRC GTX Rocket Based Combined Cycle (RBCC) Program Team in planning and developing requirements, scale model concepts, and plans for an experimental nozzle research program. The GTX was one of the launch vehicle concepts being studied as a possible future replacement for the aging NASA Space Shuttle, and was one RBCC element in the ongoing NASA Access to Space R&D Program (Reference 1). The ultimate program objective was the development of an appropriate experimental research program to evaluate and validate proposed nozzle concepts, and thereby result in the optimization of a high performance nozzle for the GTX launch vehicle. This task included the identification of appropriate existing test facilities, the development of requirements for new test rigs and fixtures not yet in existence, the development of scale nozzle model concepts, and the proposal of corresponding test plans. Also included were the evaluation of originally proposed and alternate nozzle designs (in-house and contractor), the evaluation of Computational Fluid Dynamics (CFD) study results, and recommendations for geometric changes to improve nozzle thrust coefficient performance (Cfg).

  20. Improved animal models for testing gene therapy for atherosclerosis.

    PubMed

    Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A

    2014-04-01

    Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also supports the promise of HDAd to deliver long-term therapy from vascular endothelium without accelerating atherosclerotic disease.

  1. Development of a thermal and structural model for a NASTRAN finite-element analysis of a hypersonic wing test structure

    NASA Technical Reports Server (NTRS)

    Lameris, J.

    1984-01-01

    The development of a thermal and structural model for a hypersonic wing test structure using the NASTRAN finite-element method as its primary analytical tool is described. A detailed analysis was defined to obtain the temperature and thermal stress distribution in the whole wing as well as the five upper and lower root panels. During the development of the models, it was found that the thermal application of NASTRAN and the VIEW program, used for the generation of the radiation exchange coefficients, were deficient. Although solutions could be found for most of these deficiencies, the existence of one particular deficiency in the current thermal model prevented the final computation of the temperature distributions. A SPAR analysis of a single bay of the wing, using data converted from the original NASTRAN model, indicates that local temperature-time distributions can be obtained in good agreement with the test data. The conversion of the NASTRAN thermal model into a SPAR model is recommended to meet the immediate goal of obtaining an accurate thermal stress distribution.

  2. Genetic parameters for milk mineral content and acidity predicted by mid-infrared spectroscopy in Holstein-Friesian cows.

    PubMed

    Toffanin, V; Penasa, M; McParland, S; Berry, D P; Cassandro, M; De Marchi, M

    2015-05-01

    The aim of the present study was to estimate genetic parameters for calcium (Ca), phosphorus (P) and titratable acidity (TA) in bovine milk predicted by mid-IR spectroscopy (MIRS). Data consisted of 2458 Italian Holstein-Friesian cows sampled once in 220 farms. Information per sample on protein and fat percentage, pH and somatic cell count, as well as test-day milk yield, was also available. (Co)variance components were estimated using univariate and bivariate animal linear mixed models. Fixed effects considered in the analyses were herd of sampling, parity, lactation stage and a two-way interaction between parity and lactation stage; an additive genetic and a residual term were included in the models as random effects. Estimates of heritability for Ca, P and TA were 0.10, 0.12 and 0.26, respectively. Positive moderate to strong phenotypic correlations (0.33 to 0.82) existed between Ca, P and TA, whereas weak to moderate phenotypic correlations (0.00 to 0.45) existed between these traits and both milk quality and yield. Moderate to strong genetic correlations (0.28 to 0.92) existed between Ca, P and TA, and between these predicted traits and both fat and protein percentage (0.35 to 0.91). The existence of heritable genetic variation for Ca, P and TA, coupled with the potential to predict these components during routine cow milk testing, implies that genetic gain in these traits is indeed possible.
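
    For orientation, the reported heritabilities are the usual variance ratio from the animal model (shown here without the fixed effects), so the estimate of 0.10 for Ca says that additive genetic variance accounts for one-tenth of the phenotypic variance remaining after adjustment:

    ```latex
    h^2 = \frac{\sigma_a^2}{\sigma_a^2 + \sigma_e^2}
    ```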

  3. 40 CFR 60.5230 - What records must I keep?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Existing Sewage Sludge Incineration Units Model Rule-Recordkeeping and Reporting § 60.5230 What records... standards under this subpart. (ii) Procedures for receiving, handling, and feeding sewage sludge. (iii... performance test, if in addition to sewage sludge. (x) For each qualified operator and other plant personnel...

  4. 40 CFR 60.5230 - What records must I keep?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Existing Sewage Sludge Incineration Units Model Rule-Recordkeeping and Reporting § 60.5230 What records... standards under this subpart. (ii) Procedures for receiving, handling, and feeding sewage sludge. (iii... performance test, if in addition to sewage sludge. (x) For each qualified operator and other plant personnel...

  5. 40 CFR 60.5230 - What records must I keep?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Existing Sewage Sludge Incineration Units Model Rule-Recordkeeping and Reporting § 60.5230 What records... standards under this subpart. (ii) Procedures for receiving, handling, and feeding sewage sludge. (iii... performance test, if in addition to sewage sludge. (x) For each qualified operator and other plant personnel...

  6. 40 CFR 60.5230 - What records must I keep?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Existing Sewage Sludge Incineration Units Model Rule-Recordkeeping and Reporting § 60.5230 What records... standards under this subpart. (ii) Procedures for receiving, handling, and feeding sewage sludge. (iii... performance test, if in addition to sewage sludge. (x) For each qualified operator and other plant personnel...

  7. Active Tensor Magnetic Gradiometer System

    DTIC Science & Technology

    2007-11-01

    …active magnetic gradient measurement system are based upon the existing tensor magnetic gradiometer system (TMGS) developed under project MM-1328, "Tensor Magnetic Gradiometer System (TMGS) for UXO Detection, Imaging, and Discrimination." The TMGS developed under MM-1328 was successfully tested at the…

  8. Financial Management in the Strategic Systems Project Office.

    DTIC Science & Technology

    SSPO, the largest program office in the Navy and in existence for over 20 years, has perfected time-tested financial management procedures which may…serve as a model for the student of program management. This report presents an overview of the SSPO financial management concepts and general

  9. A Multifaceted Model of Training in Psychological Assessment.

    ERIC Educational Resources Information Center

    Levy, Leon H.

    Much of the controversy over training in diagnostic testing between internship training centers and universities results from the implicit producer-consumer relationship which exists between them. A collaborative relationship is proposed as an alternative, in which the training activities of universities and internship centers are seen as…

  10. 76 FR 80920 - Decision and Order Granting a Waiver to Miele, Inc. From the U.S. Department of Energy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-27

    ... certain basic models that run on a 208 volt electrical supply. Under today's decision and order, Miele... dishwasher that runs on an electrical supply voltage of 208 volts. The existing test procedure under Title 10...

  11. Developing an ACT-R Model of Mental Manipulation

    DTIC Science & Technology

    2000-05-01

    entire test. However, from the verbal protocol and existing literature (Biederman, 1987), it was clear that subjects had a tendency to break images into… Anderson, J.R. (1993). Rules of the mind. Hillsdale, NJ: Lawrence Erlbaum Associates. Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2), 115-147.

  12. Predicting nitrogen loading with land-cover composition: how can watershed size affect model performance?

    PubMed

    Zhang, Tao; Yang, Xiaojun

    2013-01-01

    Watershed-wide land-cover proportions can be used to predict the in-stream non-point source pollutant loadings through regression modeling. However, the model performance can vary greatly across different study sites and among various watersheds. Existing literature has shown that this type of regression modeling tends to perform better for large watersheds than for small ones, and that such a performance variation has been largely linked with different interwatershed landscape heterogeneity levels. The purpose of this study is to further examine the previously mentioned empirical observation based on a set of watersheds in the northern part of Georgia (USA) to explore the underlying causes of the variation in model performance. Through the combined use of the neutral landscape modeling approach and a spatially explicit nutrient loading model, we tested whether the regression model performance variation over the watershed groups ranging in size is due to the different watershed landscape heterogeneity levels. We adopted three neutral landscape modeling criteria that were tied with different similarity levels in watershed landscape properties and used the nutrient loading model to estimate the nitrogen loads for these neutral watersheds. Then we compared the regression model performance for the real and neutral landscape scenarios, respectively. We found that watershed size can affect the regression model performance both directly and indirectly. Along with the indirect effect through interwatershed heterogeneity, watershed size can directly affect the model performance over the watersheds varying in size. We also found that the regression model performance can be more significantly affected by other physiographic properties shaping nitrogen delivery effectiveness than the watershed land-cover heterogeneity. This study contrasts with many existing studies because it goes beyond hypothesis formulation based on empirical observations and into hypothesis testing to explore the fundamental mechanism.

  13. Unified Sequence-Based Association Tests Allowing for Multiple Functional Annotations and Meta-analysis of Noncoding Variation in Metabochip Data.

    PubMed

    He, Zihuai; Xu, Bin; Lee, Seunggeun; Ionita-Laza, Iuliana

    2017-09-07

    Substantial progress has been made in the functional annotation of genetic variation in the human genome. Integrative analysis that incorporates such functional annotations into sequencing studies can aid the discovery of disease-associated genetic variants, especially those with unknown function and located outside protein-coding regions. Direct incorporation of one functional annotation as weight in existing dispersion and burden tests can suffer substantial loss of power when the functional annotation is not predictive of the risk status of a variant. Here, we have developed unified tests that can utilize multiple functional annotations simultaneously for integrative association analysis with efficient computational techniques. We show that the proposed tests significantly improve power when variant risk status can be predicted by functional annotations. Importantly, when functional annotations are not predictive of risk status, the proposed tests incur only minimal loss of power in relation to existing dispersion and burden tests, and under certain circumstances they can even have improved power by learning a weight that better approximates the underlying disease model in a data-adaptive manner. The tests can be constructed with summary statistics of existing dispersion and burden tests for sequencing data, therefore allowing meta-analysis of multiple studies without sharing individual-level data. We applied the proposed tests to a meta-analysis of noncoding rare variants in Metabochip data on 12,281 individuals from eight studies for lipid traits. By incorporating the Eigen functional score, we detected significant associations between noncoding rare variants in SLC22A3 and low-density lipoprotein and total cholesterol, associations that are missed by standard dispersion and burden tests. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  14. Posterior probability of linkage and maximal lod score.

    PubMed

    Génin, E; Martinez, M; Clerget-Darpoux, F

    1995-01-01

    To detect linkage between a trait and a marker, Morton (1955) proposed to calculate the lod score z(θ1) at a given value θ1 of the recombination fraction. If z(θ1) reaches +3, then linkage is concluded. However, in practice, lod scores are calculated for different values of the recombination fraction between 0 and 0.5, and the test is based on the maximum value of the lod score, Zmax. The impact of this deviation from the original test on the probability that linkage does not in fact exist when linkage was concluded is documented here. This posterior probability of no linkage can be derived by using Bayes' theorem. It is less than 5% when the lod score at a predetermined θ1 is used for the test. But, for a Zmax of +3, we showed that it can reach 16.4%. Thus, considering a composite alternative hypothesis instead of a single one decreases the reliability of the test. The reliability decreases rapidly when Zmax is less than +3. Given a Zmax of +2.5, there is a 33% chance that linkage does not exist. Moreover, the posterior probability depends not only on the value of Zmax but also jointly on the family structures and on the genetic model. For a given Zmax, the chance that linkage exists may then vary.
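
    The Bayes step can be made concrete for a lod score at a fixed θ1. Writing π for the prior probability of linkage (the value 0.05 below is an illustrative assumption, not a number from the paper), a lod score of z corresponds to a likelihood ratio of 10^z, so

    ```latex
    P(\text{no linkage} \mid z) = \frac{1 - \pi}{(1 - \pi) + \pi \cdot 10^{z}},
    \qquad \pi = 0.05,\ z = 3:\quad \frac{0.95}{0.95 + 50} \approx 0.019 < 5\%.
    ```

    Maximizing over θ changes the null distribution of the statistic, which is why the paper finds the posterior probability of no linkage rising to 16.4% at a Zmax of +3.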

  15. Systems development of a stall/spin research facility using remotely controlled/augmented aircraft models. Volume 1: Systems overview

    NASA Technical Reports Server (NTRS)

    Montoya, R. J.; Jai, A. R.; Parker, C. D.

    1979-01-01

    A ground based, general purpose, real time, digital control system simulator (CSS) is specified, developed, and integrated with the existing instrumentation van of the testing facility. This CSS is built around a PDP-11/55, and its operational software was developed to meet the dual goals of providing the immediate capability to represent the F-18 drop model control laws and the flexibility for expansion to represent more complex control laws typical of control configured vehicles. Overviews are given of the two CSSs developed, as well as of the overall system after their integration with the existing facility. The latest version of the F-18 drop model control laws (REV D) is also described, and the changes needed for its incorporation in the digital and analog CSSs are discussed.

  16. f(R) gravity and chameleon theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brax, Philippe; Bruck, Carsten van de; Davis, Anne-Christine

    2008-11-15

    We analyze f(R) modifications of Einstein's gravity as dark energy models in the light of their connection with chameleon theories. Formulated as scalar-tensor theories, the f(R) theories imply the existence of a strong coupling of the scalar field to matter. This would violate all experimental gravitational tests on deviations from Newton's law. Fortunately, the existence of a matter dependent mass and a thin-shell effect allows one to alleviate these constraints. The thin-shell condition also implies strong restrictions on the cosmological dynamics of the f(R) theories. As a consequence, we find that the equation of state of dark energy is constrained to be extremely close to -1 in the recent past. We also examine the potential effects of f(R) theories in the context of the Eöt-Wash experiments. We show that the requirement of a thin shell for the test bodies is not enough to guarantee a null result on deviations from Newton's law. As long as dark energy accounts for a sizeable fraction of the total energy density of the Universe, the constraints that we deduce also forbid any measurable deviation of the dark energy equation of state from -1. All in all, we find that both cosmological and laboratory tests imply that f(R) models are almost coincident with a ΛCDM model at the background level.

  17. The Separation of Between-person and Within-person Components of Individual Change Over Time: A Latent Curve Model with Structured Residuals

    PubMed Central

    Curran, Patrick J.; Howard, Andrea L.; Bainter, Sierra; Lane, Stephanie T.; McGinley, James S.

    2014-01-01

    Objective Although recent statistical and computational developments allow for the empirical testing of psychological theories in ways not previously possible, one particularly vexing challenge remains: how to optimally model the prospective, reciprocal relations between two constructs as they developmentally unfold over time. Several analytic methods currently exist that attempt to model these types of relations, and each approach is successful to varying degrees. However, none provide the unambiguous separation of between-person and within-person components of stability and change over time, components that are often hypothesized to exist in the psychological sciences. The goal of our paper is to propose and demonstrate a novel extension of the multivariate latent curve model to allow for the disaggregation of these effects. Method We begin with a review of the standard latent curve models and describe how these primarily capture between-person differences in change. We then extend this model to allow for regression structures among the time-specific residuals to capture within-person differences in change. Results We demonstrate this model using an artificial data set generated to mimic the developmental relation between alcohol use and depressive symptomatology spanning five repeated measures. Conclusions We obtain a specificity of results from the proposed analytic strategy that are not available from other existing methodologies. We conclude with potential limitations of our approach and directions for future research. PMID:24364798

  18. DoD Comprehensive Military Unmanned Aerial Vehicle Smart Device Ground Control Station Threat Model

    DTIC Science & Technology

    2015-04-01

    …design, implementation, and test evaluation were interviewed to evaluate the existing gaps in the DoD processes for cybersecurity. This group exposed…such as antenna design and signal reception have made satellite communication networks a viable solution for smart devices on the battlefield…

  19. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  20. An evaluation of proposed acoustic treatments for the NASA LaRC 4 x 7 meter wind tunnel

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. L.

    1985-01-01

    The NASA LaRC 4 x 7 Meter Wind Tunnel is an existing facility specially designed for powered low speed (V/STOL) testing of large scale fixed wing and rotorcraft models. The enhancement of the facility for scale model acoustic testing is examined. The results are critically reviewed and comparisons are drawn with a similar wind tunnel (the DNW Facility in the Netherlands). Discrepancies observed in the comparison stimulated a theoretical investigation, using the acoustic finite element ADAM System, of the ways in which noise propagating around the tunnel circuit radiates into the open test section. The investigation clarifies the reasons for the discrepancies noted above and assists in the selection of acoustic treatment options for the facility.

  1. Equivalence principle and bound kinetic energy.

    PubMed

    Hohensee, Michael A; Müller, Holger; Wiringa, R B

    2013-10-11

    We consider the role of the internal kinetic energy of bound systems of matter in tests of the Einstein equivalence principle. Using the gravitational sector of the standard-model extension, we show that stringent limits on equivalence principle violations in antimatter can be indirectly obtained from tests using bound systems of normal matter. We estimate the bound kinetic energy of nucleons in a range of light atomic species using Green's function Monte Carlo calculations, and for heavier species using a Woods-Saxon model. We survey the sensitivities of existing and planned experimental tests of the equivalence principle, and report new constraints at the level of between a few parts in 10^6 and parts in 10^8 on violations of the equivalence principle for matter and antimatter.

  2. Thermal/structural design verification strategies for large space structures

    NASA Technical Reports Server (NTRS)

    Benton, David

    1988-01-01

    Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This requires a combination of analytical and testing methods, following two approaches. The first is to limit thermal testing to sub-elements of the total system only in a compact configuration (i.e., not fully deployed). The second approach is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.

  3. A Cost Prediction Model for Electronic Systems Flight Test Costs.

    DTIC Science & Technology

    1983-09-01

    …development. It was found that a significant cost estimating relationship (CER) exists between costs and the characteristics of the flight test design.

  4. Statistical Modeling for Radiation Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, Raymond L.

    2014-01-01

    We cover the models and statistics associated with single event effects (and total ionizing dose), why we need them, and how to use them: what models are used, what errors exist in real test data, and what the models allow us to say about the DUT. In addition, we cover how to use other sources of data, such as historical, heritage, and similar-part data, and how to apply experience, physics, and expert opinion to the analysis. Also included are concepts of Bayesian statistics, data fitting, and bounding rates.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robb, Kevin R.; Jain, Prashant K.; Hazelwood, Thomas J.

    Fluoride salt cooled high-temperature reactor (FHR) concepts include pumps for forced circulation of the primary and secondary coolants. As part of a cooperative research and development agreement between the Shanghai Institute of Applied Physics and the Oak Ridge National Laboratory (ORNL), a research project was initiated to aid in the development of pumps for high-temperature salts. The objectives of the task included characterization of the behavior of an existing ORNL LSTL pump; design and test of a modified impeller and volute for improved pump characteristics; and finally, lessons learned, recommendations, and guidelines for salt pump development and design. The pump included on the liquid salt test loop (LSTL) at ORNL served as a case study. This report summarizes the progress to date. The report is organized as follows. First, there is a review, focused on pumps, of the significant amount of work on salts at ORNL during the 1950s-1970s. The existing pump on the LSTL is then described. Plans for hot and cold testing of the pump are then discussed, including the design for a cold shakedown test stand and the required LSTL modifications for hot testing. Initial hydraulic and vibration modeling of the LSTL pump is documented. Later, test data from the LSTL will be used to validate the modeling approaches, which could then be used for future pump design efforts. Some initial insights and test data from the pump are then provided. Finally, some preliminary design goals and requirements for a future LSTL pump are provided as examples of salt pump design considerations.

  6. Advances in mosquito dynamics modeling

    NASA Astrophysics Data System (ADS)

    Wijaya, Karunia Putra; Götz, Thomas; Soewono, Edy

    2016-11-01

    It is well known that Aedes mosquitoes live in close proximity to humans and their dwellings, and give rise to a broad spectrum of diseases: dengue, yellow fever, chikungunya. In this paper, we explore a multi-age-class model for mosquito populations, secondarily classified into indoor-outdoor dynamics. We accentuate a novel design for the model in which the periodicity of the time-varying environmental conditions is taken into account. Application of optimal control with a collocated measure, as opposed to the widely used prototypic smooth time-continuous measure, is also considered. Using two approaches, least-squares and maximum likelihood, we estimate several undetermined parameters. We analyze the biological soundness of the model, such as existence, uniqueness, positivity and boundedness of solution trajectories, as well as the existence and stability of (non)trivial periodic solution(s) by means of the basic mosquito offspring number. Some numerical tests are presented at the end of the paper as a compact, realistic visualization of the model.

  7. Near-wall k-epsilon turbulence modeling

    NASA Technical Reports Server (NTRS)

    Mansour, N. N.; Kim, J.; Moin, P.

    1987-01-01

    The flow fields from a turbulent channel simulation are used to compute the budgets for the turbulent kinetic energy (k) and its dissipation rate (epsilon). Data from boundary layer simulations are used to analyze the dependence of the eddy-viscosity damping function on the Reynolds number and the distance from the wall. The computed budgets are used to test existing near-wall turbulence models of the k-epsilon type. It was found that the turbulent transport models should be modified in the vicinity of the wall. It was also found that existing models for the different terms in the epsilon-budget are adequate in the region away from the wall, but need modification near the wall. The channel flow is computed using a k-epsilon model with an eddy-viscosity damping function derived from the data and no damping functions in the epsilon-equation. These computations show that the k-profile can be adequately predicted, but that damping functions in the epsilon-equation are needed to correctly predict the epsilon-profile.
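
    For reference, the damping function analyzed here enters the eddy-viscosity relation of k-epsilon models in the generic form below; the specific f_μ extracted from the simulation data is given in the paper, not here:

    ```latex
    \nu_t = C_\mu \, f_\mu(Re_t, y^+) \, \frac{k^2}{\epsilon},
    \qquad C_\mu \approx 0.09, \qquad f_\mu \to 1 \ \text{far from the wall}.
    ```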

  8. Mathematical modeling of the aerodynamics of high-angle-of-attack maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.; Tobak, M.; Malcolm, G. N.

    1980-01-01

    This paper is a review of the current state of aerodynamic mathematical modeling for aircraft motions at high angles of attack. The mathematical model serves to define a set of characteristic motions from whose known aerodynamic responses the aerodynamic response to an arbitrary high angle-of-attack flight maneuver can be predicted. Means are explored of obtaining stability parameter information in terms of the characteristic motions, whether by wind-tunnel experiments, computational methods, or by parameter-identification methods applied to flight-test data. A rationale is presented for selecting and verifying the aerodynamic mathematical model at the lowest necessary level of complexity. Experimental results describing the wing-rock phenomenon are shown to be accommodated within the most recent mathematical model by admitting the existence of aerodynamic hysteresis in the steady-state variation of the rolling moment with roll angle. Interpretation of the experimental results in terms of bifurcation theory reveals the general conditions under which aerodynamic hysteresis must exist.

  9. Analytical solutions for benchmarking cold regions subsurface water flow and energy transport models: one-dimensional soil thaw with conduction and advection

    USGS Publications Warehouse

    Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.

    2014-01-01

    Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
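
    For reference, the Stefan number that governs the solution's accuracy is the ratio of sensible to latent heat; in one common form (the paper's exact nondimensionalization may differ),

    ```latex
    \mathrm{Ste} = \frac{c \, \Delta T}{L_f},
    ```

    where c is the volumetric heat capacity of the thawed soil, ΔT the temperature difference driving thaw, and L_f the volumetric latent heat of fusion; the approximation degrades as Ste grows.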

  10. A constitutive model accounting for strain ageing effects on work-hardening. Application to a C-Mn steel

    NASA Astrophysics Data System (ADS)

    Ren, Sicong; Mazière, Matthieu; Forest, Samuel; Morgeneyer, Thilo F.; Rousselier, Gilles

    2017-12-01

    One of the most successful models for describing the Portevin-Le Chatelier effect in engineering applications is the Kubin-Estrin-McCormick model (KEMC). In the present work, the influence of dynamic strain ageing on dynamic recovery due to dislocation annihilation is introduced in order to improve the KEMC model. This modification accounts for additional strain hardening rate due to limited dislocation annihilation by the diffusion of solute atoms and dislocation pinning at low strain rate and/or high temperature. The parameters associated with this novel formulation are identified based on tensile tests for a C-Mn steel at seven temperatures ranging from 20 °C to 350 °C. The validity of the model and the improvement compared to existing models are tested using 2D and 3D finite element simulations of the Portevin-Le Chatelier effect in tension.

  11. A new adaptive L1-norm for optimal descriptor selection of high-dimensional QSAR classification model for anti-hepatitis C virus activity of thiourea derivatives.

    PubMed

    Algamal, Z Y; Lee, M H

    2017-01-01

    A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new design of descriptor selection for QSAR classification model estimation is proposed by adding a new weight inside the L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, the results of the stability test and applicability domain analysis indicate a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
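
    The paper's specific weight construction is not given in the abstract; the sketch below shows a generic adaptive (weighted) L1 logistic regression for descriptor selection, using weights from a preliminary ridge fit and the standard column-rescaling trick, on synthetic data with hypothetical dimensions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n, p = 120, 200                        # few compounds, many descriptors
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:5] = 2.0     # only 5 descriptors truly matter
    y = (X @ beta + rng.normal(size=n) > 0).astype(int)

    ridge = LogisticRegression(penalty="l2", max_iter=2000).fit(X, y)
    w = 1.0 / (np.abs(ridge.coef_.ravel()) + 1e-6)   # adaptive penalty weights

    # Dividing each column by its weight turns a plain L1 penalty into a
    # weighted L1 penalty on the original scale.
    Xs = X / w
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xs, y)
    coef = lasso.coef_.ravel() / w                   # map back to original scale
    print("selected descriptors:", np.flatnonzero(coef != 0))
    ```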

  12. Preliminary Results from Electric Arc Furnace Off-Gas Enthalpy Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nimbalkar, Sachin U; Thekdi, Arvind; Keiser, James R

    2015-01-01

    This article describes electric arc furnace (EAF) off-gas enthalpy models developed at Oak Ridge National Laboratory (ORNL) to calculate overall heat availability (sensible and chemical enthalpy) and recoverable heat values (steam or power generation potential) for existing EAF operations and to test ORNL's new EAF waste heat recovery (WHR) concepts. ORNL's new EAF WHR concepts are the Regenerative Drop-out Box System and the Fluidized Bed System. The two EAF off-gas enthalpy models described in this paper are: 1. the Overall Waste Heat Recovery Model, which calculates total heat availability in off-gases of existing EAF operations; 2. the Regenerative Drop-out Box System Model, in which hot EAF off-gases alternately pass through one of two refractory heat sinks that store heat and then transfer it to another gaseous medium. These models calculate the sensible and chemical enthalpy of EAF off-gases based on the off-gas chemical composition, temperature, and mass flow rate during tap-to-tap time, and variations in those parameters in terms of actual values over time. The models provide heat transfer analysis for the aforementioned concepts to confirm the overall system and major component sizing (preliminary) to assess the practicality of the systems. Real-time EAF off-gas composition (e.g., CO, CO2, H2, and H2O), volume flow, and temperature data from one EAF operation were used to test the validity and accuracy of the modeling work. The EAF off-gas data were used to calculate the sensible and chemical enthalpy of the EAF off-gases to generate steam and power. The article provides detailed results from the modeling work that are important to the success of ORNL's EAF WHR project. The EAF WHR project aims to develop and test new concepts and materials that allow cost-effective recovery of sensible and chemical heat from high-temperature gases discharged from EAFs.
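
    As a back-of-envelope illustration of the two enthalpy terms such models compute, with all numbers hypothetical rather than taken from the ORNL datasets:

    ```python
    # Sensible enthalpy: mass flow times mean cp times temperature rise above
    # reference; chemical enthalpy: combustible species flow times lower
    # heating value (the LHV figures for CO and H2 are standard literature values).
    m_dot = 10.0                        # off-gas mass flow, kg/s (hypothetical)
    cp = 1.2e3                          # mean specific heat, J/(kg*K) (hypothetical)
    T, T_ref = 1400.0, 298.0            # off-gas and reference temperatures, K
    x_CO, x_H2 = 0.15, 0.02             # CO and H2 mass fractions (hypothetical)
    LHV_CO, LHV_H2 = 10.1e6, 120.0e6    # lower heating values, J/kg

    sensible = m_dot * cp * (T - T_ref)
    chemical = m_dot * (x_CO * LHV_CO + x_H2 * LHV_H2)
    print(f"sensible = {sensible / 1e6:.1f} MW, chemical = {chemical / 1e6:.1f} MW")
    ```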

  13. The Association between Parent Early Adult Drug Use Disorder and Later Observed Parenting Practices and Child Behavior Problems: Testing Alternate Models

    PubMed Central

    Bailey, Jennifer A.; Hill, Karl G.; Guttmannova, Katarina; Oesterle, Sabrina; Hawkins, J. David; Catalano, Richard F.; McMahon, Robert J.

    2012-01-01

    This study tested the association between parent illicit drug use disorder (DUD) in early adulthood and observed parenting practices at ages 27 – 28 and examined the following three, theoretically-derived models explaining this link: a) a disrupted parent adult functioning model, b) a pre-existing parent personality factor model, c) a disrupted adolescent family process model. Associations between study variables and child externalizing problems also were examined. Longitudinal data linking two generations were drawn from the Seattle Social Development Project (SSDP) and The SSDP Intergenerational Project (TIP), and included 167 parents and their 2- to 8-year-old child. Path modeling revealed that parent DUD in early adulthood predicted later observed low-skilled parenting, which was related to child externalizing problems. The pre-existing parent personality factor model was supported. Parent negative emotionality accounted for the association between parent early adult DUD and later parenting practices. Parent negative emotionality also was related directly to child externalizing behavior. Limited support for the disrupted transition to adulthood model was found. The disrupted adolescent family process model was not supported. Results suggest that problem drug use that occurs early in adulthood may affect later parenting skills, independent of subsequent parent drug use. Findings highlight the importance of parent negative emotionality in influencing their own problem behavior, their interactions with their child, and their child’s problem behavior. Prevention and treatment programs targeting young adult substance use, poor parenting practices, and child behavior problems should address parent personality factors that may contribute to these behaviors. PMID:22799581

  14. QQ-SNV: single nucleotide variant detection at low frequency by comparing the quality quantiles.

    PubMed

    Van der Borght, Koen; Thys, Kim; Wetzels, Yves; Clement, Lieven; Verbist, Bie; Reumers, Joke; van Vlijmen, Herman; Aerssens, Jeroen

    2015-11-10

    Next generation sequencing enables studying heterogeneous populations of viral infections. When the sequencing is done at high coverage depth ("deep sequencing"), low frequency variants can be detected. Here we present QQ-SNV (http://sourceforge.net/projects/qqsnv), a logistic regression classifier model developed for the Illumina sequencing platforms that uses the quantiles of the quality scores, to distinguish true single nucleotide variants from sequencing errors based on the estimated SNV probability. To train the model, we created a dataset of an in silico mixture of five HIV-1 plasmids. Testing of our method in comparison to the existing methods LoFreq, ShoRAH, and V-Phaser 2 was performed on two HIV and four HCV plasmid mixture datasets and one influenza H1N1 clinical dataset. For default application of QQ-SNV, variants were called using a SNV probability cutoff of 0.5 (QQ-SNV(D)). To improve the sensitivity we used a SNV probability cutoff of 0.0001 (QQ-SNV(HS)). To also increase specificity, SNVs called were overruled when their frequency was below the 80th percentile calculated on the distribution of error frequencies (QQ-SNV(HS-P80)). When comparing QQ-SNV versus the other methods on the plasmid mixture test sets, QQ-SNV(D) performed similarly to the existing approaches. QQ-SNV(HS) was more sensitive on all test sets but with more false positives. QQ-SNV(HS-P80) was found to be the most accurate method over all test sets by balancing sensitivity and specificity. When applied to a paired-end HCV sequencing study, with lowest spiked-in true frequency of 0.5%, QQ-SNV(HS-P80) revealed a sensitivity of 100% (vs. 40-60% for the existing methods) and a specificity of 100% (vs. 98.0-99.7% for the existing methods). In addition, QQ-SNV required the least overall computation time to process the test sets. Finally, when testing on a clinical sample, four putative true variants with frequency below 0.5% were consistently detected by QQ-SNV(HS-P80) from different generations of Illumina sequencers. We developed and successfully evaluated a novel method, called QQ-SNV, for highly efficient single nucleotide variant calling on Illumina deep sequencing virology data.
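
    The three decision rules described above can be summarized in a few lines; the probabilities and frequencies below are hypothetical stand-ins for the classifier's per-position outputs, not values from the study.

    ```python
    import numpy as np

    snv_prob = np.array([0.9, 0.3, 0.002, 0.0005])   # estimated SNV probabilities
    freq = np.array([0.05, 0.01, 0.004, 0.001])      # observed variant frequencies
    err_freq = np.array([0.001, 0.002, 0.0005, 0.003, 0.0015])  # error frequencies

    calls_D = snv_prob >= 0.5                 # QQ-SNV(D): default cutoff
    calls_HS = snv_prob >= 1e-4               # QQ-SNV(HS): high-sensitivity cutoff
    p80 = np.percentile(err_freq, 80)         # 80th percentile of error frequencies
    calls_HS_P80 = calls_HS & (freq >= p80)   # QQ-SNV(HS-P80): overrule low-freq calls
    print(calls_D, calls_HS, calls_HS_P80)
    ```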

  15. Statistical model specification and power: recommendations on the use of test-qualified pooling in analysis of experimental data

    PubMed Central

    Colegrave, Nick

    2017-01-01

    A common approach to the analysis of experimental data across much of the biological sciences is test-qualified pooling. Here non-significant terms are dropped from a statistical model, effectively pooling the variation associated with each removed term with the error term used to test hypotheses (or estimate effect sizes). This pooling is only carried out if statistical testing on the basis of applying that data to a previous more complicated model provides motivation for this model simplification; hence the pooling is test-qualified. In pooling, the researcher increases the degrees of freedom of the error term with the aim of increasing statistical power to test their hypotheses of interest. Despite this approach being widely adopted and explicitly recommended by some of the most widely cited statistical textbooks aimed at biologists, here we argue that (except in highly specialized circumstances that we can identify) the hoped-for improvement in statistical power will be small or non-existent, and there is likely to be much reduced reliability of the statistical procedures through deviation of type I error rates from nominal levels. We thus call for greatly reduced use of test-qualified pooling across experimental biology, more careful justification of any use that continues, and a different philosophy for initial selection of statistical models in the light of this change in procedure. PMID:28330912

  16. Model-Scale Aerodynamic Performance Testing of Proposed Modifications to the NASA Langley Low Speed Aeroacoustic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Booth, Earl R., Jr.; Coston, Calvin W., Jr.

    2005-01-01

    Tests were performed on a 1/20th-scale model of the Low Speed Aeroacoustic Wind Tunnel to determine the performance effects of inserting acoustic baffles in the tunnel inlet, replacing the existing collector with a new collector design in the open jet test section, and adding flow splitters to the acoustic baffle section downstream of the test section. As expected, the inlet baffles caused a reduction in facility performance. About half of the performance loss was recovered by the addition of the flow splitters to the downstream baffles. All collectors tested reduced facility performance. However, test chamber recirculation flow was reduced by the new collector designs, and shielding of some of the microphones was reduced owing to the smaller size of the new collector. Overall performance loss in the facility is expected to be a 5 percent reduction in top flow speed, but the facility will meet OSHA limits for external noise levels and recirculation in the test section will be reduced.

  17. Linear score tests for variance components in linear mixed models and applications to genetic association studies.

    PubMed

    Qu, Long; Guennel, Tobias; Marshall, Scott L

    2013-12-01

    Following the rapid development of genome-scale genotyping technologies, genetic association mapping has become a popular tool to detect genomic regions responsible for certain (disease) phenotypes, especially in early-phase pharmacogenomic studies with limited sample size. In response to such applications, a good association test needs to be (1) applicable to a wide range of possible genetic models, including, but not limited to, the presence of gene-by-environment or gene-by-gene interactions and non-linearity of a group of marker effects, (2) accurate in small samples, fast to compute on the genomic scale, and amenable to large scale multiple testing corrections, and (3) reasonably powerful to locate causal genomic regions. The kernel machine method represented in linear mixed models provides a viable solution by transforming the problem into testing the nullity of variance components. In this study, we consider score-based tests by choosing a statistic linear in the score function. When the model under the null hypothesis has only one error variance parameter, our test is exact in finite samples. When the null model has more than one variance parameter, we develop a new moment-based approximation that performs well in simulations. Through simulations and analysis of real data, we demonstrate that the new test possesses most of the aforementioned characteristics, especially when compared to existing quadratic score tests or restricted likelihood ratio tests. © 2013, The International Biometric Society.
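
    For orientation, the classical score test for a variance component in this setting is quadratic in the residuals: under the working model below, the kernel-machine statistic Q is referred to a mixture of chi-square distributions. The paper instead studies statistics linear in the score function, with a new moment-based approximation when the null model has several variance parameters; that construction is not reproduced here.

    ```latex
    y = X\beta + g + \varepsilon, \quad g \sim N(0, \tau K), \quad \varepsilon \sim N(0, \sigma^2 I);
    \qquad
    Q = \frac{(y - X\hat{\beta})^{\top} K \, (y - X\hat{\beta})}{2 \hat{\sigma}^4}
    \quad \text{tests } H_0\colon \tau = 0.
    ```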

  18. Test and Analysis of Foam Impacting a 6x6 Inch RCC Flat Panel

    NASA Technical Reports Server (NTRS)

    Lessard, Wendy B.

    2006-01-01

    This report presents the testing and analyses of a foam projectile impacting thirteen 6x6 inch flat panels at a 90-degree incidence angle. The panels tested in this investigation were fabricated of Reinforced Carbon-Carbon (RCC) material and were used to aid in the validation of an existing material model, MAT58. The computational analyses were performed using LS-DYNA, a physics-based, nonlinear, transient, finite element code used for analyzing material responses subjected to high impact forces and other dynamic conditions. The test results were used to validate LS-DYNA predictions and to determine the threshold of damage generated by the MAT58 cumulative damage material model. The threshold-of-damage parameter represents any external or internal visible RCC damage detectable by nondestructive evaluation techniques.

  19. Printability of alloys for additive manufacturing

    PubMed Central

    Mukherjee, T.; Zuback, J. S.; De, A.; DebRoy, T.

    2016-01-01

    Although additive manufacturing (AM), or three dimensional (3D) printing, provides significant advantages over existing manufacturing techniques, metallic parts produced by AM are susceptible to distortion, lack of fusion defects and compositional changes. Here we show that the printability, or the ability of an alloy to avoid these defects, can be examined by developing and testing appropriate theories. A theoretical scaling analysis is used to test vulnerability of various alloys to thermal distortion. A theoretical kinetic model is used to examine predisposition of different alloys to AM induced compositional changes. A well-tested numerical heat transfer and fluid flow model is used to compare susceptibilities of various alloys to lack of fusion defects. These results are tested and validated with independent experimental data. The findings presented in this paper are aimed at achieving distortion free, compositionally sound and well bonded metallic parts. PMID:26796864

  20. Dynamic flashing yellow arrow (FYA): a study on variable left-turn mode operational and safety impacts phase II - model expansion and testing : [summary].

    DOT National Transportation Integrated Search

    2016-05-01

    In phase two of this project, the UCF team further developed the DSS to automate selection of FYA left-turn modes based on traffic volumes at intersections acquired in real time from existing sensors.

  1. Effects of the Family Environment: Gene-Environment Interaction and Passive Gene-Environment Correlation

    ERIC Educational Resources Information Center

    Price, Thomas S.; Jaffee, Sara R.

    2008-01-01

    The classical twin study provides a useful resource for testing hypotheses about how the family environment influences children's development, including how genes can influence sensitivity to environmental effects. However, existing statistical models do not account for the possibility that children can inherit exposure to family environments…

  2. A Longitudinal Study of Occupational Aspirations and Attainments of Iowa Young Adults.

    ERIC Educational Resources Information Center

    Yoesting, Dean R.; And Others

    The causal linkage between socioeconomic status, occupational and educational aspiration, and attainment was examined in this attempt to test an existing theoretical model which used socioeconomic status as a major input variable, with significant other influence as a crucial intervening variable between socioeconomic status and aspiration. The…

  3. Administrative Compensation: A Preliminary Investigation of the Male/Female Salary Differential.

    ERIC Educational Resources Information Center

    Pounder, Diana

    This paper examines typical explanations offered for the existence of an "earnings gap" between male and female educational administrators, evaluates the relevance of these explanations, and proposes and tests a model process for detecting gender bias in the compensation of school administrators. Common explanations for male/female wage…

  4. Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model

    NASA Astrophysics Data System (ADS)

    Sahragard, Nasrolah; Ramli, Abdul Rahman B.; Hamiruce Marhaban, Mohammad; Mansor, Shattri B.

    2011-02-01

    Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light, including its failure to consider ambient light, the effects of light reflected off the ground, and context-specific information. A previously developed color model has been tested only for a few geographical locations in North America, and its reliability for other places in the world is open to question. Furthermore, existing surface reflectance models are not easily applied to outdoor images. A reflectance model combining diffuse and specular reflection in normalized HSV color space can be used to predict color. In this paper, a new daylight color model giving the color of daylight for a broad range of sky conditions is developed, suited to the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces are estimated using the developed color model and surface reflection function. The results are shown to be highly reliable.

  5. The zonally averaged transport characteristics of the atmosphere as determined by a general circulation model

    NASA Technical Reports Server (NTRS)

    Plumb, R. A.

    1985-01-01

    Two dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three dimensional general circulation models, while avoiding some of the gross simplifications of one dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.
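
    For reference, the flux-gradient closure under test expresses the zonally averaged eddy flux of a tracer mixing ratio through a transport tensor K; in its standard textbook form (the notation here is ours):

        \[
          \begin{pmatrix} \overline{v'\chi'} \\ \overline{w'\chi'} \end{pmatrix}
          = - \begin{pmatrix} K_{yy} & K_{yz} \\ K_{zy} & K_{zz} \end{pmatrix}
            \begin{pmatrix} \partial\bar{\chi}/\partial y \\ \partial\bar{\chi}/\partial z \end{pmatrix},
        \]

    where the components of K are the GCM-derived transport coefficients referred to in the abstract.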

  6. SPEEDUP™ ion exchange column model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hang, T.

    2000-03-06

    A transient model to describe the process of loading a solute onto the granular fixed bed in an ion exchange (IX) column has been developed using the SpeedUp™ software package. SpeedUp offers the advantage of smooth integration into other existing SpeedUp flowsheet models. The mathematical algorithm of a porous particle diffusion model was adopted to account for convection, axial dispersion, film mass transfer, and pore diffusion. The method of orthogonal collocation on finite elements was employed to solve the governing transport equations. The model allows the use of a non-linear Langmuir isotherm based on an effective binary ionic exchange process. The SpeedUp column model was tested by comparing to the analytical solutions of three transport problems from the ion exchange literature. In addition, a sample calculation of a train of three crystalline silicotitanate (CST) IX columns in series was made using both the SpeedUp model and Purdue University's VERSE-LC code. All test cases showed excellent agreement between the SpeedUp model results and the test data. The model can be readily used for SuperLig™ ion exchange resins, once the experimental data are complete.
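
    A generic form of the bulk-phase balance solved by such porous-particle column models (written in our own notation as an illustration; the abstract does not give the exact SpeedUp equations) couples convection, axial dispersion, and film transfer to the particle surface, with a Langmuir isotherm at the pore walls:

        \[
          \frac{\partial c}{\partial t} + v\,\frac{\partial c}{\partial z}
          = D_{ax}\,\frac{\partial^{2} c}{\partial z^{2}}
          - \frac{1-\epsilon}{\epsilon}\,\frac{3 k_f}{R_p}\left(c - c_p\big|_{r=R_p}\right),
          \qquad
          q^{*} = \frac{q_{\max}\, b\, c_p}{1 + b\, c_p},
        \]

    where c is the bulk concentration, c_p the pore-fluid concentration, epsilon the bed voidage, k_f the film mass-transfer coefficient, and R_p the particle radius.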

  7. A Model of BGA Thermal Fatigue Life Prediction Considering Load Sequence Effects

    PubMed Central

    Hu, Weiwei; Li, Yaqiu; Sun, Yufeng; Mosleh, Ali

    2016-01-01

    Accurate testing history data are necessary for all fatigue life prediction approaches, but such data are often deficient, especially for microelectronic devices. Additionally, the sequence of the individual load cycles plays an important role in physical fatigue damage. However, most of the existing models, based on the linear damage accumulation rule, ignore sequence effects. This paper proposes a thermal fatigue life prediction model for ball grid array (BGA) packages that takes the load sequence effects into consideration. For the purpose of improving the availability and accessibility of testing data, a new failure criterion is discussed and verified by simulation and experimentation. The consequences of sequence load conditions for fatigue life are shown. PMID:28773980
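
    To see why the linear rule is blind to ordering, compare it with a damage-curve accumulation in which damage is carried across stress levels through a life-ratio exponent (a Manson-Halford-style illustration with made-up cycle counts, not the paper's model):

        import numpy as np

        # Two hypothetical thermal-cycling blocks: (cycles applied, cycles to failure)
        mild, severe = (4.0e3, 1.0e4), (8.0e2, 2.0e3)

        def miner(blocks):
            # Linear damage accumulation: order-independent by construction.
            return sum(n / Nf for n, Nf in blocks)

        def damage_curve(blocks, alpha=0.4):
            # Carry damage across levels with a life-ratio exponent, so the
            # same blocks applied in a different order give different damage.
            r, Nf_prev = 0.0, None
            for n, Nf in blocks:
                if Nf_prev is not None:
                    r = r ** ((Nf_prev / Nf) ** alpha)
                r += n / Nf
                Nf_prev = Nf
            return r

        print(miner([mild, severe]), miner([severe, mild]))                # 0.8, 0.8
        print(damage_curve([mild, severe]), damage_curve([severe, mild]))  # differ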

  8. Analytical and Experimental Assessment of Seismic Vulnerability of Beam-Column Joints without Transverse Reinforcement in Concrete Buildings

    NASA Astrophysics Data System (ADS)

    Hassan, Wael Mohammed

    Beam-column joints in concrete buildings are key components to ensure structural integrity of building performance under seismic loading. Earthquake reconnaissance has reported the substantial damage that can result from inadequate beam-column joints. In some cases, failure of older-type corner joints appears to have led to building collapse. Since the 1960s, many advances have been made to improve seismic performance of building components, including beam-column joints. New design and detailing approaches are expected to produce new construction that will perform satisfactorily during strong earthquake shaking. Much less attention has been focused on beam-column joints of older construction that may be seismically vulnerable. Concrete buildings constructed prior to the development of details for ductility in the 1970s normally lack joint transverse reinforcement. The available literature concerning the performance of such joints is relatively limited, but concerns about performance exist. The current study aimed to improve understanding and assessment of seismic performance of unconfined exterior and corner beam-column joints in existing buildings. An extensive literature survey was performed, leading to development of a database of about a hundred tests. Study of the data enabled identification of the most important parameters and the effect of each parameter on the seismic performance. The available analytical models and guidelines for strength and deformability assessment of unconfined joints were surveyed and evaluated. In particular, the ASCE 41 existing-building document proved to be substantially conservative in joint shear strength estimation. Upon identifying deficiencies in these models, two new joint shear strength models, a bond capacity model, and two axial capacity models designed and tailored specifically for unconfined beam-column joints were developed. The proposed models correlated strongly with previous test results. In the laboratory testing phase of the current study, four full-scale corner beam-column joint subassemblies, with slab included, were designed, built, instrumented, tested, and analyzed. The specimens were tested under unidirectional and bidirectional displacement-controlled quasi-static loading that incorporated varying axial loads to simulate overturning seismic moment effects. The axial loads varied between tension and high compression loads reaching about 50% of the column axial capacity. The test parameters were axial load level, loading history, joint aspect ratio, and beam reinforcement ratio. The test results proved that high axial load increases joint shear strength and decreases the deformability of joints failing in a pure shear failure mode without beam yielding. On the contrary, high axial load did not affect the strength of joints failing in shear after significant beam yielding; however, it substantially increased their displacement ductility. Joint aspect ratio proved to be instrumental in deciding joint shear strength; that is, the deeper the joint, the lower the shear strength. Bidirectional loading reduced the apparent strength of the joint in the uniaxial principal axes. However, a circular shear strength interaction is an appropriate approximation to predict the biaxial strength. The developed shear strength models successfully predicted the strength of the test specimens.
Based on the literature database investigation, the shear and axial capacity models developed and the test results of the current study, an analytical finite element component model based on a proposed joint shear stress-rotation backbone constitutive curve was developed to represent the behavior of unconfined beam-column joints in computer numerical simulations of concrete frame buildings. The proposed finite element model included the effect of axial load, mode of joint failure, joint aspect ratio and axial capacity of joint. The proposed backbone curve along with the developed joint element exhibited high accuracy in simulating the test response of the current test specimens as well as previous test joints. Finally, a parametric study was conducted to assess the axial failure vulnerability of unconfined beam-column joints based on the developed shear and axial capacity models. This parametric study compared the axial failure potential of unconfined beam-column joint with that of shear critical columns to provide a preliminary insight into the axial collapse vulnerability of older-type buildings during intense ground shaking.

  9. Development of a Modeling Framework to Support Control Investigations of Sailcraft Missions A First Cut: ABLE Sailcraft Dynamics Model

    NASA Technical Reports Server (NTRS)

    Sarathy, Sriprakash

    2005-01-01

    Solar sailcraft, the stuff of dreams of the H.G. Wells generation, is now a rapidly maturing reality. The promise of unlimited propulsive power by harnessing stellar radiation is close to realization. Currently, efforts are underway to build, prototype, and test two configurations. These sails are designed to meet a 20m sail requirement, under guidance of the In-Space Propulsion (ISP) technology program office at MSFC. While these sails will not fly, they are the first steps in improving our understanding of the processes and phenomena at work. As part of the New Millennium Program (NMP), the ST9 technology validation mission hopes to launch and fly a solar sail by 2010 or sooner. Though the solar sail community has been studying and validating various concepts for over two decades, it was not until recent breakthroughs in structural and material technology that it became possible to build sails that could be launched. With real sails that can be tested (albeit under earth conditions), the real task of engineering a viable spacecraft has finally commenced. Since it is not possible to accurately or practically recreate the actual operating conditions of the sailcraft (zero-G, vacuum, and extremely low temperatures), much of the work has focused on developing accurate models that can be used to predict behavior in space, and for sails that are 6-10 times the size of currently existing sails. Since these models can be validated only with real test data under "earth" conditions, the process of modeling and the identification of uncertainty due to model assumptions and scope need to be closely considered. Sailcraft models that exist currently are primarily focused on detailed physical representations at the component level, intended to support prototyping efforts. System-level models that cut across different sail configurations and control concepts while maintaining a consistent approach are non-existent. Much effort has been focused on the areas of thrust performance, solar radiation prediction, and sail membrane behavior vis-a-vis their reflective geometry, such as wrinkling/folding/furling as it pertains to thrust prediction. A parallel effort has been conducted on developing usable models for attitude control systems (ACS) for different sail configurations in different regimes. There has been very little by way of a system-wide exploration of the impact of the various control schemes and thrust prediction models across the different sail configurations being considered.

  10. A scan statistic for binary outcome based on hypergeometric probability model, with an application to detecting spatial clusters of Japanese encephalitis.

    PubMed

    Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong

    2013-01-01

    As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for a binary outcome was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, this likelihood function is an alternative and indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test for the test of significance. Both methods are applied to detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Simulation on independent benchmark data indicates that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise, Kulldorff's statistics are superior.
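
    A minimal sketch of the idea (our own toy implementation with made-up counts, scanning single-region windows only; the actual method scans many candidate windows): score each window by its hypergeometric log-likelihood and calibrate the most extreme score by Monte Carlo, as in Kulldorff-style inference.

        import numpy as np
        from scipy.stats import hypergeom

        def scan_hypergeom(cases_in_window, pop_in_window, total_cases, total_pop):
            """Log-probability of the window's case count under a hypergeometric
            null (cases allocated at random over the whole population)."""
            return hypergeom.logpmf(cases_in_window, total_pop, total_cases, pop_in_window)

        rng = np.random.default_rng(0)
        pop = np.array([500, 800, 300, 1200, 400])   # population per region
        cases = np.array([3, 5, 9, 7, 2])            # observed cases per region
        T, P = cases.sum(), pop.sum()

        obs_stat = min(scan_hypergeom(c, n, T, P) for c, n in zip(cases, pop))

        null_stats = []
        for _ in range(999):
            sim = rng.multivariate_hypergeometric(pop, T)   # random case labels
            null_stats.append(min(scan_hypergeom(c, n, T, P) for c, n in zip(sim, pop)))

        p_value = (1 + sum(s <= obs_stat for s in null_stats)) / 1000
        print(p_value)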

  11. Online Calibration of Polytomous Items Under the Generalized Partial Credit Model

    PubMed Central

    Zheng, Yi

    2016-01-01

    Online calibration is a technology-enhanced architecture for item calibration in computerized adaptive tests (CATs). Many CATs are administered continuously over a long term and rely on large item banks. To ensure test validity, these item banks need to be frequently replenished with new items, and these new items need to be pretested before being used operationally. Online calibration dynamically embeds pretest items in operational tests and calibrates their parameters as response data are gradually obtained through the continuous test administration. This study extends existing formulas, procedures, and algorithms for dichotomous item response theory models to the generalized partial credit model, a popular model for items scored in more than two categories. A simulation study was conducted to investigate the developed algorithms and procedures under a variety of conditions, including two estimation algorithms, three pretest item selection methods, three seeding locations, two numbers of score categories, and three calibration sample sizes. Results demonstrated acceptable estimation accuracy of the two estimation algorithms in some of the simulated conditions. A variety of findings were also revealed for the interaction effects of the included factors, and recommendations were made accordingly. PMID:29881063
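
    For concreteness, the generalized partial credit model assigns category probabilities of the softmax form P(X = k | theta) proportional to exp(sum over j <= k of a(theta - b_j)). A minimal sketch (our own illustration, with hypothetical item parameters):

        import numpy as np

        def gpcm_probs(theta, a, b):
            """Category probabilities under the generalized partial credit model.
            theta: ability; a: discrimination; b: step difficulties b_1..b_m
            (categories are k = 0..m)."""
            steps = np.concatenate(([0.0], a * (theta - np.asarray(b))))
            z = np.cumsum(steps)          # z_k = sum_{j<=k} a*(theta - b_j)
            ez = np.exp(z - z.max())      # numerically stabilized softmax
            return ez / ez.sum()

        # Example: a 3-category item (two steps) at theta = 0.5
        print(gpcm_probs(0.5, a=1.2, b=[-0.3, 0.8]))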

  12. Analytical and Experimental Evaluation of Digital Control Systems for the Semi-Span Super-Sonic Transport (S4T) Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Wieseman, Carol D.; Christhilf, David; Perry, Boyd, III

    2012-01-01

    An important objective of the Semi-Span Super-Sonic Transport (S4T) wind tunnel model program was the demonstration of Flutter Suppression (FS), Gust Load Alleviation (GLA), and Ride Quality Enhancement (RQE). It was critical to evaluate the stability and robustness of these control laws analytically before testing them, and experimentally while testing them, to ensure the safety of the model and the wind tunnel. MATLAB-based software was applied to evaluate the performance of closed-loop systems in terms of stability and robustness. Existing software tools were extended to use analytical representations of the S4T and the control laws to analyze and evaluate the control laws prior to testing. Lessons were learned about the complex wind-tunnel model and experimental testing. The open-loop flutter boundary was determined from the closed-loop systems. A MATLAB/Simulink simulation developed under the program is available for future work to improve the CPE process. This paper is one of a series that comprise a special session summarizing the S4T wind-tunnel program.

  13. Oral aniracetam treatment in C57BL/6J mice without pre-existing cognitive dysfunction reveals no changes in learning, memory, anxiety or stereotypy

    PubMed Central

    Reynolds, Conner D.; Jefferson, Taylor S.; Volquardsen, Meagan; Pandian, Ashvini; Smith, Gregory D.; Holley, Andrew J.; Lugo, Joaquin N.

    2017-01-01

    Background: The piracetam analog, aniracetam, has recently received attention for its cognition enhancing potential, with minimal reported side effects.  Previous studies report the drug to be effective in both human and non-human models with pre-existing cognitive dysfunction, but few studies have evaluated its efficacy in healthy subjects. A previous study performed in our laboratory found no cognitive enhancing effects of oral aniracetam administration 1-hour prior to behavioral testing in naïve C57BL/6J mice. Methods: The current study aims to further evaluate this drug by administration of aniracetam 30 minutes prior to testing in order to optimize any cognitive enhancing effects. In this study, all naïve C57BL/6J mice were tested in tasks of delayed fear conditioning, novel object recognition, rotarod, open field, elevated plus maze, and marble burying. Results: Across all tasks, animals in the treatment group failed to show enhanced learning when compared to controls. Conclusions: These results provide further evidence suggesting that aniracetam conveys no therapeutic benefit to subjects without pre-existing cognitive dysfunction. PMID:29946420

  14. Evolution of phase singularities of vortex beams propagating in atmospheric turbulence.

    PubMed

    Ge, Xiao-Lu; Wang, Ben-Yi; Guo, Cheng-Shan

    2015-05-01

    Optical vortex beams propagating through atmospheric turbulence are studied by numerical modeling, and the phase singularities of the vortices existing in the turbulence-distorted beams are calculated. It is found that the algebraic sum of the topological charges (TCs) of all the phase singularities existing in the test aperture is approximately equal to the TC of the input vortex beam. This property provides a possible approach for determining the TC of a vortex beam propagating through atmospheric turbulence, which could have potential application in optical communication using optical vortices.
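
    The conserved quantity can be computed directly from a phase map: the TC enclosed by a closed loop is the winding number of the wrapped phase differences along that loop. A small self-contained sketch (our own illustration, on a synthetic unit-charge vortex):

        import numpy as np

        def topological_charge(phase, loop):
            """Net topological charge enclosed by a closed pixel loop: the
            winding number of the wrapped phase steps along the path.
            phase: 2-D phase map; loop: list of (row, col) tracing the path."""
            vals = np.array([phase[r, c] for r, c in loop] + [phase[loop[0]]])
            d = np.angle(np.exp(1j * np.diff(vals)))   # wrap steps to (-pi, pi]
            return int(round(d.sum() / (2 * np.pi)))

        # Example: a unit-charge vortex centered in a 17x17 grid
        y, x = np.mgrid[-8:9, -8:9]
        phase = np.arctan2(y, x)
        border = ([(0, c) for c in range(17)] + [(r, 16) for r in range(1, 17)]
                  + [(16, c) for c in range(15, -1, -1)]
                  + [(r, 0) for r in range(15, 0, -1)])
        print(topological_charge(phase, border))   # -> 1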

  15. Boundary cooled rocket engines for space storable propellants

    NASA Technical Reports Server (NTRS)

    Kesselring, R. C.; Mcfarland, B. L.; Knight, R. M.; Gurnitz, R. N.

    1972-01-01

    An evaluation of an existing analytical heat transfer model was made to extend the technology of boundary film/conduction cooled rocket thrust chambers to the space-storable propellant combination oxygen difluoride/diborane. Critical design parameters were identified and their importance determined. Test reduction methods were developed to enable data obtained from short duration hot firings with a thin-walled (calorimeter) chamber to be used to quantitatively evaluate the heat absorbing capability of the vapor film. The modification of the existing like-doublet injector was based on the results obtained from the calorimeter firings.

  16. Using Bayesian regression to test hypotheses about relationships between parameters and covariates in cognitive models.

    PubMed

    Boehm, Udo; Steingroever, Helen; Wagenmakers, Eric-Jan

    2018-06-01

    Quantitative models that represent different cognitive variables in terms of model parameters are an important tool in the advancement of cognitive science. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework to the conventional classification-based approach.
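
    As a toy version of the idea (our own sketch with simulated data and a known residual variance, not the authors' framework), one can regress estimated model parameters on a covariate and read off a Bayes factor for the slope with the Savage-Dickey density ratio:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        x = rng.standard_normal(50)                    # covariate (z-scored)
        beta_true, sigma = 0.4, 1.0
        y = beta_true * x + rng.normal(0, sigma, 50)   # estimated model parameters

        s0 = 1.0                                       # prior: beta ~ N(0, s0^2)
        v = 1.0 / (x @ x / sigma**2 + 1.0 / s0**2)     # conjugate posterior variance
        m = v * (x @ y) / sigma**2                     # conjugate posterior mean

        # Savage-Dickey: BF10 = prior density at 0 / posterior density at 0
        bf10 = norm.pdf(0, 0, s0) / norm.pdf(0, m, np.sqrt(v))
        print(m, bf10)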

  17. From individual to population level effects of toxicants in the tubicifid Branchiura sowerbyi using threshold effect models in a Bayesian framework.

    PubMed

    Ducrot, Virginie; Billoir, Elise; Péry, Alexandre R R; Garric, Jeanne; Charles, Sandrine

    2010-05-01

    Effects of zinc were studied in the freshwater worm Branchiura sowerbyi using partial and full life-cycle tests. Only newborn and juveniles were sensitive to zinc, displaying effects on survival, growth, and age at first brood at environmentally relevant concentrations. Threshold effect models were proposed to assess toxic effects on individuals. They were fitted to life-cycle test data using Bayesian inference and adequately described life-history trait data in exposed organisms. The daily asymptotic growth rate of theoretical populations was then simulated with a matrix population model, based upon individual-level outputs. Population-level outputs were in accordance with existing literature for controls. Working in a Bayesian framework allowed incorporating parameter uncertainty in the simulation of the population-level response to zinc exposure, thus increasing the relevance of test results in the context of ecological risk assessment.
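
    The individual-to-population step can be illustrated with a Leslie-type projection: life-history traits fitted at the individual level populate a matrix whose dominant eigenvalue is the asymptotic growth rate. A minimal sketch (hypothetical vital rates, not the paper's estimates):

        import numpy as np

        fecundity = [0.0, 0.0, 0.2, 0.5]    # offspring per individual per day, by age class
        survival = [0.95, 0.92, 0.90]       # daily survival between successive classes

        A = np.zeros((4, 4))
        A[0, :] = fecundity                              # reproduction into class 0
        A[np.arange(1, 4), np.arange(3)] = survival      # aging on the subdiagonal

        lam = max(abs(np.linalg.eigvals(A)))   # asymptotic daily growth rate (lambda)
        print(lam)   # lambda > 1: growing population; toxicant effects lower it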

  18. Replicates in high dimensions, with applications to latent variable graphical models.

    PubMed

    Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han

    2016-12-01

    In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.

  19. Human Birth Weight and Reproductive Immunology: Testing for Interactions between Maternal and Offspring KIR and HLA-C Genes.

    PubMed

    Clark, Michelle M; Chazara, Olympe; Sobel, Eric M; Gjessing, Håkon K; Magnus, Per; Moffett, Ashley; Sinsheimer, Janet S

    2016-01-01

    Maternal and offspring cell contact at the site of placentation presents a plausible setting for maternal-fetal genotype (MFG) interactions affecting fetal growth. We test hypotheses regarding killer cell immunoglobulin-like receptor (KIR) and HLA-C MFG effects on human birth weight by extending the quantitative MFG (QMFG) test. Until recently, association testing for MFG interactions had limited applications. To improve the ability to test for these interactions, we developed the extended QMFG test, a linear mixed-effect model that can use multi-locus genotype data from families. We demonstrate the extended QMFG test's statistical properties. We also show that if an offspring-only model is fit when MFG effects exist, associations can be missed or misattributed. Furthermore, imprecisely modeling the effects of both KIR and HLA-C could result in a failure to replicate if these loci's allele frequencies differ among populations. To further illustrate the extended QMFG test's advantages, we apply the extended QMFG test to a UK cohort study and the Norwegian Mother and Child Cohort (MoBa) study. We find a significant KIR-HLA-C interaction effect on birth weight. More generally, the QMFG test can detect genetic associations that may be missed by standard genome-wide association studies for quantitative traits. © 2017 S. Karger AG, Basel.

  20. Improved tsunami impact assessments: validation, comparison and the integration of hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Tarbotton, C.; Walters, R. A.; Goff, J. R.; Dominey-Howes, D.; Turner, I. L.

    2012-12-01

    As communities become increasingly aware of the risks posed by tsunamis, it is important to develop methods for predicting the damage they can cause to the built environment. This will provide the information needed to make informed decisions regarding land use, building codes, and evacuation. At present, a number of tsunami-building vulnerability assessment models are available; however, the relative infrequency and destructive nature of tsunamis have long made it difficult to obtain the data necessary to adequately validate and compare them. Further complicating matters, the inundation of a tsunami in the built environment is very difficult to model, as is the response of a building to the hydraulic forces that a tsunami generates. Variations in building design and condition will significantly affect a building's susceptibility to damage. Likewise, factors affecting the flow conditions at a building (i.e. surrounding structures and topography) will greatly affect its exposure. This presents significant challenges for practitioners, as they are often left in the dark on how to use hazard modeling and vulnerability assessment techniques together to conduct the community-scale impact studies required for tsunami planning. This paper presents the results of an in-depth case study of Yuriage, Miyagi Prefecture, a coastal city in Japan that was badly damaged by the 2011 Tohoku tsunami. The aim of the study was twofold: 1) to test and compare existing tsunami vulnerability assessment models and 2) to more effectively utilize hydrodynamic models in the context of tsunami impact studies. Following the 2011 Tohoku event, an unprecedented quantity of field data, imagery, and video emerged. Yuriage in particular features a comprehensive set of street-level Google Street View imagery, available both before and after the event. This has enabled the collection of a large dataset describing the characteristics of the buildings existing before the event as well as the subsequent damage that they sustained during it. These data, together with the detailed results from hydrodynamic models, have been used to provide the building, damage, and hazard data necessary to rigorously test and compare existing vulnerability assessment techniques. The result is a much-improved understanding of the capabilities of existing vulnerability assessment techniques, as well as important improvements to their assessment framework. This provides much-needed guidance to practitioners on how to conduct tsunami impact assessments in the future. Furthermore, the study introduces some new methods of integrating hydrodynamic models into vulnerability assessment models, offering guidance on how to more effectively model tsunami inundation in the built environment.

  1. Composite Overwrapped Pressure Vessels (COPV) Stress Rupture Test: Part 2. Part 2

    NASA Technical Reports Server (NTRS)

    Russell, Richard; Flynn, Howard; Forth, Scott; Greene, Nathanael; Kezirian, Michael; Varanauski, Don; Leifeste, Mark; Yoder, Tommy; Woodworth, Warren

    2010-01-01

    One of the major concerns for the aging Space Shuttle fleet is the stress rupture life of composite overwrapped pressure vessels (COPVs). Stress rupture life of a COPV has been defined as the minimum time during which the composite maintains structural integrity considering the combined effects of stress levels and time. To assist in the evaluation of the aging COPVs in the Orbiter fleet, an analytical reliability model was developed. The actual data used to construct this model came from testing of COPVs constructed of similar, but not exactly the same, materials and pressure cycles as used on Orbiter vessels. Since no actual Orbiter COPV stress rupture data exist, the Space Shuttle Program decided to run a stress rupture test to compare against model predictions. Due to the availability of spares, the testing was unfortunately limited to one 40" vessel. The stress rupture test was performed at maximum operating pressure and at an elevated temperature to accelerate aging. The test was performed in four phases. The first phase, at 130 F, was a moderately accelerated test designed to achieve the midpoint of the model-predicted point reliability. A more aggressive second phase, performed at 160 F, was designed to determine whether the test article would exceed the 95% confidence interval of the model. In phase 3, the vessel pressure was increased above maximum operating pressure while maintaining the phase 2 temperature. After enough effective hours had accumulated to reach the 99.99% confidence level of the model, phase 4 testing began and the temperature was increased to greater than 170 F. The vessel was maintained at phase 4 conditions until it failed after over 3 million effective hours. This paper will discuss the results of this test, its implications, and possible follow-on testing.

  2. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-07-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalized waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalized model for extreme-mass-ratio inspirals constructed on deformed black hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.
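
    The bookkeeping of the combinatorial null-hypothesis test can be made concrete: with log-evidences log Z_s for the 2^N submodels (s a bitmask over the deformation parameters, s = 0 being pure relativity), and assuming for illustration that equal prior mass is spread over the alternative submodels, the posterior odds for modified gravity reduce to a ratio of averaged evidences:

        import numpy as np

        def modified_gravity_odds(logZ):
            """logZ[s]: log-evidence of submodel s; s = 0 is pure relativity.
            Returns posterior odds of 'any deformation on' vs relativity,
            with equal prior mass spread over the alternative submodels."""
            logZ = np.asarray(logZ, dtype=float)
            alt = logZ[1:]                                   # submodels with >= 1 deformation
            log_mean_alt = np.logaddexp.reduce(alt) - np.log(alt.size)
            return np.exp(log_mean_alt - logZ[0])

        # N = 2 deformation parameters -> 4 submodels (00, 01, 10, 11)
        print(modified_gravity_odds([0.0, -1.2, -0.8, 0.5]))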

  3. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-04-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black-hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalised waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalised model for extreme-mass-ratio inspirals constructed on deformed black-hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.

  4. Gaia challenging performances verification: combination of spacecraft models and test results

    NASA Astrophysics Data System (ADS)

    Ecale, Eric; Faye, Frédéric; Chassat, François

    2016-08-01

    To achieve the ambitious scientific objectives of the Gaia mission, extremely stringent performance requirements have been given to the spacecraft contractor (Airbus Defence and Space). For a set of those key performance requirements (e.g. end-of-mission parallax, maximum detectable magnitude, maximum sky density or attitude control system stability), this paper describes how they are engineered during the whole spacecraft development process, with a focus on the end-to-end performance verification. As far as possible, performances are usually verified by end-to-end tests on the ground (i.e. before launch). However, the challenging Gaia requirements are not verifiable by such a strategy, principally because no test facility exists to reproduce the expected flight conditions. The Gaia performance verification strategy is therefore based on a mix between analyses (based on spacecraft models) and tests (used to directly feed the models or to correlate them). Emphasis is placed on how to maximize the test contribution to performance verification while keeping the tests feasible within an affordable effort. In particular, the paper highlights the contribution of the Gaia Payload Module Thermal Vacuum test to the performance verification before launch. Finally, an overview of the in-flight payload calibration and in-flight performance verification is provided.

  5. Testing the effect of specific socioeconomic factors on the ischemic mortality rate. The case of Greece

    PubMed Central

    Mouza, AM

    2008-01-01

    In this paper we present a model to evaluate the effect of certain major socioeconomic factors (such as alcohol and fat consumption, cigarette smoking, the unemployment rate as a proxy for uncertainty which results in frustration, the number of passenger cars as a proxy for physical exercise, and per capita GDP as a proxy for nutrition quality) on the ischemic mortality rate. Since existing research in this field suffers from a lack of proper model testing, we analytically present all the tests necessary to justify the reliability of the results obtained. For this purpose, after specifying and estimating the model, we applied the specification error test; the linearity, multicollinearity, and heteroscedasticity tests; the autocorrelation and stability tests; and the ARCH effect test. Finally, we present the aggregate effect of the above socioeconomic factors. In brief, we found that an increase in cigarettes smoked, in fat and alcohol consumption, and in the number of passenger cars will result in a corresponding increase in mortality. Mortality is also affected by changes in the unemployment rate. On the other hand, an increase in personal disposable income may reduce mortality by almost the same proportion. PMID:18923751

  6. Testing the effect of specific socioeconomic factors on the ischemic mortality rate. The case of Greece.

    PubMed

    Mouza, A M

    2008-01-01

    In this paper we present a model to evaluate the effect of certain major socioeconomic factors (such as alcohol and fat consumption, cigarette smoking, the unemployment rate as a proxy for uncertainty which results in frustration, the number of passenger cars as a proxy for physical exercise, and per capita GDP as a proxy for nutrition quality) on the ischemic mortality rate. Since existing research in this field suffers from a lack of proper model testing, we analytically present all the tests necessary to justify the reliability of the results obtained. For this purpose, after specifying and estimating the model, we applied the specification error test; the linearity, multicollinearity, and heteroscedasticity tests; the autocorrelation and stability tests; and the ARCH effect test. Finally, we present the aggregate effect of the above socioeconomic factors. In brief, we found that an increase in cigarettes smoked, in fat and alcohol consumption, and in the number of passenger cars will result in a corresponding increase in mortality. Mortality is also affected by changes in the unemployment rate. On the other hand, an increase in personal disposable income may reduce mortality by almost the same proportion.

  7. Semi-automated Modular Program Constructor for physiological modeling: Building cell and organ models.

    PubMed

    Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B

    2015-01-01

    The Modular Program Constructor (MPC) is an open-source, Java-based modeling utility, built upon JSim's Mathematical Modeling Language (MML) (http://www.physiome.org/jsim/), that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development, where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.

  8. Scoring the correlation of genes by their shared properties using OScal, an improved overlap quantification model.

    PubMed

    Liu, Hui; Liu, Wei; Lin, Ying; Liu, Teng; Ma, Zhaowu; Li, Mo; Zhang, Hong-Mei; Kenneth Wang, Qing; Guo, An-Yuan

    2015-05-27

    Scoring the correlation between two genes by their shared properties is a common and basic task in biological studies. A prospective way to score this correlation is to quantify the overlap between the two sets of homogeneous properties of the two genes. However, the proper model has not been settled; here we focused on the quantification of overlap and proposed a more effective model after theoretically comparing 7 existing models. We defined three characteristic parameters (d, R, r) of an overlap, which highlight essential differences among the 7 models, and grouped the models into two classes. Then the pros and cons of the two groups of models were fully examined through their solution spaces in the (d, R, r) coordinate system. Finally we proposed a new model called OScal (Overlap Score calculator), modified from the Poisson distribution (one of the 7 models) to avoid its disadvantages. Tested in assessing gene relations using different data, OScal performs better than existing models. In addition, OScal is a basic mathematical model with very low computation cost and few restrictive conditions, so it can be used in a wide range of research areas to measure the overlap or similarity of two entities.

  9. The classification of anxiety and hysterical states. Part I. Historical review and empirical delineation.

    PubMed

    Sheehan, D V; Sheehan, K H

    1982-08-01

    The history of the classification of anxiety, hysterical, and hypochondriacal disorders is reviewed. Problems in the ability of current classification schemes to predict, control, and describe the relationship between the symptoms and other phenomena are outlined. Existing classification schemes failed the first test of a good classification model--that of providing categories that are mutually exclusive. The independence of these diagnostic categories from each other does not appear to hold up on empirical testing. In the absence of inherently mutually exclusive categories, further empirical investigation of these classes is obstructed since statistically valid analysis of the nominal data and any useful multivariate analysis would be difficult if not impossible. It is concluded that the existing classifications are unsatisfactory and require some fundamental reconceptualization.

  10. Using Biowin, Bayes, and batteries to predict ready biodegradability.

    PubMed

    Boethling, Robert S; Lynch, David G; Jaworska, Joanna S; Tunkel, Jay L; Thom, Gary C; Webb, Simon

    2004-04-01

    Whether or not a given chemical substance is readily biodegradable is an important piece of information in risk screening for both new and existing chemicals. Despite the relatively low cost of Organization for Economic Cooperation and Development tests, data are often unavailable and biodegradability must be estimated. In this paper, we focus on the predictive value of selected Biowin models and model batteries using Bayesian analysis. Posterior probabilities, calculated based on performance with the model training sets using Bayes' theorem, were closely matched by actual performance with an expanded set of 374 premanufacture notice (PMN) substances. Further analysis suggested that a simple battery consisting of Biowin3 (survey ultimate biodegradation model) and Biowin5 (Ministry of International Trade and Industry [MITI] linear model) would have enhanced predictive power in comparison to individual models. Application of the battery to PMN substances showed that performance matched expectation. This approach significantly reduced both false positives for ready biodegradability and the overall misclassification rate. Similar results were obtained for a set of 63 pharmaceuticals using a battery consisting of Biowin3 and Biowin6 (MITI nonlinear model). Biodegradation data for PMNs tested in multiple ready tests or both inherent and ready biodegradation tests yielded additional insights that may be useful in risk screening.
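
    The battery logic is ordinary Bayesian updating. A worked toy example (hypothetical sensitivities, specificities, and prior, not the paper's values) that combines two model outputs under an assumed conditional independence given the true class:

        import numpy as np

        prior = 0.30                    # P(readily biodegradable) before any model
        sens = np.array([0.85, 0.80])   # P(model i says "ready" | ready)
        spec = np.array([0.75, 0.90])   # P(model i says "not ready" | not ready)

        def posterior(preds):
            """preds[i] is True if model i predicts 'readily biodegradable'."""
            like_pos = np.prod(np.where(preds, sens, 1 - sens))
            like_neg = np.prod(np.where(preds, 1 - spec, spec))
            return prior * like_pos / (prior * like_pos + (1 - prior) * like_neg)

        print(posterior(np.array([True, True])))    # both models say "ready"
        print(posterior(np.array([True, False])))   # the battery disagrees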

  11. A two-dimensional hydrodynamic model of a tidal estuary

    USGS Publications Warehouse

    Walters, Roy A.; Cheng, Ralph T.

    1979-01-01

    A finite element model is described which is used in the computation of tidal currents in an estuary. This numerical model is patterned after an existing algorithm and has been carefully tested in rectangular and curve-sided channels with constant and variable depth. One of the common uncertainties in this class of two-dimensional hydrodynamic models is the treatment of the lateral boundary conditions. Special attention is paid specifically to addressing this problem. To maintain continuity within the domain of interest, ‘smooth’ curve-sided elements must be used at all shoreline boundaries. The present model uses triangular, isoparametric elements with quadratic basis functions for the two velocity components and a linear basis function for water surface elevation. An implicit time integration is used and the model is unconditionally stable. The resultant governing equations are nonlinear owing to the advective and the bottom friction terms and are solved iteratively at each time step by the Newton-Raphson method. Model test runs have been made in the southern portion of San Francisco Bay, California (South Bay) as well as in the Bay west of Carquinez Strait. Owing to the complex bathymetry, the hydrodynamic characteristics of the Bay system are dictated by the generally shallow basins which contain deep, relict river channels. Great care must be exercised to ensure that the conservation equations remain locally as well as globally accurate. Simulations have been made over several representative tidal cycles using this finite element model, and the results compare favourably with existing data. In particular, the standing wave in South Bay and the progressive wave in the northern reach are well represented.

  12. BinQuasi: a peak detection method for ChIP-sequencing data with biological replicates.

    PubMed

    Goren, Emily; Liu, Peng; Wang, Chao; Wang, Chong

    2018-04-19

    ChIP-seq experiments that are aimed at detecting DNA-protein interactions require biological replication to draw inferential conclusions; however, there is no current consensus on how to analyze ChIP-seq data with biological replicates. Very few methodologies exist for the joint analysis of replicated ChIP-seq data, with approaches ranging from combining the results of analyzing replicates individually to joint modeling of all replicates. Combining the results of individual replicates analyzed separately can lead to reduced peak classification performance compared to joint modeling, and currently available methods for joint analysis may fail to control the false discovery rate at the nominal level. We propose BinQuasi, a peak caller for replicated ChIP-seq data, that jointly models biological replicates using a generalized linear model framework and employs a one-sided quasi-likelihood ratio test to detect peaks. When applied to simulated data and real datasets, BinQuasi performs favorably compared to existing methods, including better control of false discovery rate than existing joint modeling approaches. BinQuasi offers a flexible approach to joint modeling of replicated ChIP-seq data which is preferable to combining the results of replicates analyzed individually. Source code is freely available for download at https://cran.r-project.org/package=BinQuasi, implemented in R. pliu@iastate.edu or egoren@iastate.edu. Supplementary material is available at Bioinformatics online.

  13. Pressurization of cryogens - A review of current technology and its applicability to low-gravity conditions

    NASA Technical Reports Server (NTRS)

    Van Dresar, N. T.

    1992-01-01

    A review of technology, history, and current status for pressurized expulsion of cryogenic tankage is presented. Use of tank pressurization to expel cryogenic fluid will continue to be studied for future spacecraft applications over a range of operating conditions in the low-gravity environment. The review examines experimental test results and analytical model development for quiescent and agitated conditions in normal-gravity followed by a discussion of pressurization and expulsion in low-gravity. Validated, 1-D, finite difference codes exist for the prediction of pressurant mass requirements within the range of quiescent normal-gravity test data. To date, the effects of liquid sloshing have been characterized by tests in normal-gravity, but analytical models capable of predicting pressurant gas requirements remain unavailable. Efforts to develop multidimensional modeling capabilities in both normal and low-gravity have recently occurred. Low-gravity cryogenic fluid transfer experiments are needed to obtain low-gravity pressurized expulsion data. This data is required to guide analytical model development and to verify code performance.

  14. Pressurization of cryogens: A review of current technology and its applicability to low-gravity conditions

    NASA Technical Reports Server (NTRS)

    Vandresar, N. T.

    1992-01-01

    A review of technology, history, and current status for pressurized expulsion of cryogenic tankage is presented. Use of tank pressurization to expel cryogenic fluids will continue to be studied for future spacecraft applications over a range of operating conditions in the low-gravity environment. The review examines experimental test results and analytical model development for quiescent and agitated conditions in normal-gravity, followed by a discussion of pressurization and expulsion in low-gravity. Validated, 1-D, finite difference codes exist for the prediction of pressurant mass requirements within the range of quiescent normal-gravity test data. To date, the effects of liquid sloshing have been characterized by tests in normal-gravity, but analytical models capable of predicting pressurant gas requirements remain unavailable. Efforts to develop multidimensional modeling capabilities in both normal and low-gravity have recently occurred. Low-gravity cryogenic fluid transfer experiments are needed to obtain low-gravity pressurized expulsion data. This data is required to guide analytical model development and to verify code performance.

  15. The influence of pre-existing rib fractures on Global Human Body Models Consortium thorax response in frontal and oblique impact.

    PubMed

    Zaseck, Lauren Wood; Chen, Cong; Hu, Jingwen; Reed, Matthew P; Rupp, Jonathan

    2018-03-01

    Many post-mortem human subjects (PMHS) considered for use in biomechanical impact tests have pre-existing rib fractures (PERFs), usually resulting from cardiopulmonary resuscitation. These specimens are typically excluded from impact studies with the assumption that the fractures will alter the thoracic response to loading. We previously used the Global Human Body Models Consortium 50th percentile whole-body finite element model (GHBMC M50-O) to demonstrate that up to three lateral or bilateral PERFs do not meaningfully influence the response of the GHBMC thorax to lateral loading. This current study used the GHBMC M50-O to explore the influence of PERFs on thorax response in frontal and oblique loading. Up to six PERFs were simulated on the anterior or lateral rib regions, and the model was subjected to frontal or oblique cylindrical impactor, frontal seatbelt, or frontal seatbelt + airbag loading. Changes in thorax force-compression responses due to PERFs were generally minor, with the greatest alterations seen in models with six PERFs on one side of the ribcage. The observed changes, however, were small relative to mid-size male corridors for the loading conditions simulated. PERFs altered rib strain patterns, but the changes did not translate to changes in global thoracic response. Within the limits of model fidelity, the results suggest that PMHS with up to six PERFs may be appropriate for use in frontal or oblique impact testing. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. RDHWT/MARIAH II Hypersonic Wind Tunnel Research Program

    DTIC Science & Technology

    2008-09-01

    [Staff listing excerpt: ... Diagnostics; Dr. Gary Brown - Gas Dynamics; Dr. Ihab Girgis - Modeling; Dr. Dennis Mansfield - Experimental; Ring Technical Services: Dr. Leon Ring - Systems; ...] ... wind tunnel (MSHWT) with Mach 8 to 15, true-temperature flight test capabilities. This research program was initiated in fiscal year (FY) 1998 and is ... Force test capabilities that exist today. Performance goals of the MSHWT are true temperature, Mach 8 to 15, and dynamic pressure of 500 to 2000 psf (24 to

  17. Testing the Predictive Power of Coulomb Stress on Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.

    2009-12-01

    Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
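
    For reference, the seismicity-rate response of rate-and-state friction to a Coulomb stress step, the ingredient of the physics-based models tested here, follows Dieterich's standard form (notation ours):

        \[
          R(t) = \frac{r}{\left[\exp\!\left(-\frac{\Delta\mathrm{CFS}}{A\sigma}\right) - 1\right] e^{-t/t_a} + 1},
          \qquad t_a = \frac{A\sigma}{\dot{\tau}},
        \]

    where r is the background seismicity rate, A\sigma the rate-and-state constitutive parameter, and t_a the aftershock relaxation time set by the background stressing rate.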

  18. DebriSat Fragment Characterization System and Processing Status

    NASA Technical Reports Server (NTRS)

    Rivero, M.; Shiotani, B.; M. Carrasquilla; Fitz-Coy, N.; Liou, J. C.; Sorge, M.; Huynh, T.; Opiela, J.; Krisko, P.; Cowardin, H.

    2016-01-01

    The DebriSat project is a continuing effort sponsored by NASA and DoD to update existing break-up models using data obtained from hypervelocity impact tests performed to simulate on-orbit collisions. After the impact tests, a team at the University of Florida has been working to characterize the fragments in terms of their mass, size, shape, color and material content. The focus of the post-impact effort has been the collection of 2 mm and larger fragments resulting from the hypervelocity impact test. To date, in excess of 125K fragments have been recovered, which is approximately 40K more than the 85K fragments predicted by the existing models. While the fragment collection activities continue, there has been a transition to the characterization of the recovered fragments. Since the start of the characterization effort, the focus has been on the use of automation to (i) expedite the fragment characterization process and (ii) minimize the effects of human subjectivity on the results; e.g., automated data entry processes were developed and implemented to minimize errors during transcription of the measurement data. At all steps of the process, however, there is human oversight to ensure the integrity of the data. Additionally, repeatability and reproducibility tests have been developed and implemented to ensure that the instruments used in the characterization process are accurate and properly calibrated.

  19. Well test mathematical model for fractures network in tight oil reservoirs

    NASA Astrophysics Data System (ADS)

    Diwu, Pengxiang; Liu, Tongjing; Jiang, Baoyi; Wang, Rui; Yang, Peidie; Yang, Jiping; Wang, Zhaoming

    2018-02-01

    Well tests, especially build-up tests, have been applied widely in the development of tight oil reservoirs, since they are the only low-cost way to directly quantify flow ability and formation heterogeneity parameters. However, because of the fracture network near the wellbore, generated by artificial fracturing linking up natural fractures, traditional infinite- and finite-conductivity fracture models usually show significant deviation in field application. In this work, considering the random distribution of natural fractures, a physical model of the fracture network is proposed; at large scale it behaves as a composite model. Consequently, a nonhomogeneous composite mathematical model is established with a threshold pressure gradient. To solve this model semi-analytically, we propose a solution approach combining the Laplace transform with imaginary-argument (modified) Bessel functions, and the method is verified by comparison with an existing analytical solution. Type curves generated from the semi-analytical solution indicate that the proposed physical and mathematical model can reproduce the type-curve characteristics of typical tight oil reservoirs, which show upward warping at late times rather than parallel lines with slope 1/2 or 1/4. This means the composite model can be used in pressure interpretation for artificially fractured wells in tight oil reservoirs.
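
    As a rough illustration of the solution machinery (not the authors' code), the sketch below inverts the classical infinite-acting line-source solution, p_D(s) = K0(sqrt(s))/s in Laplace space, with the Stehfest algorithm; the modified Bessel function K0 is the kind of imaginary-argument Bessel term such models rely on. The late-time logarithmic approximation provides a built-in check.

```python
import math
from scipy.special import k0  # modified Bessel function of the second kind

def stehfest_weights(N=12):
    """Stehfest coefficients V_i for numerical Laplace inversion (N even)."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)
                  / (math.factorial(N // 2 - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        V.append((-1) ** (i + N // 2) * s)
    return V

def p_wD(tD, N=12):
    """Dimensionless wellbore pressure for a line source in an infinite
    reservoir: invert p_D(s) = K0(sqrt(s))/s at dimensionless time tD."""
    V = stehfest_weights(N)
    a = math.log(2.0) / tD
    return a * sum(V[i - 1] * k0(math.sqrt(i * a)) / (i * a)
                   for i in range(1, N + 1))

for tD in (1e2, 1e4, 1e6):
    approx = 0.5 * (math.log(tD) + 0.80907)   # late-time check
    print(f"tD = {tD:.0e}: p_wD = {p_wD(tD):.4f} (log approx {approx:.4f})")
```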

  20. Modeling of the UAE Wind Turbine for Refinement of FAST{_}AD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonkman, J. M.

    The Unsteady Aerodynamics Experiment (UAE) research wind turbine was modeled both aerodynamically and structurally in the FAST{_}AD wind turbine design code, and its response to wind inflows was simulated for a sample of test cases. A study was conducted to determine why wind turbine load magnitude discrepancies (inconsistencies in aerodynamic force coefficients, rotor shaft torque, and out-of-plane bending moments at the blade root across a range of operating conditions) exist between load predictions made by FAST{_}AD and other modeling tools and measured loads taken from the actual UAE wind turbine during the NASA-Ames wind tunnel tests. The acquired experimental test data represent the finest, most accurate set of wind turbine aerodynamic and induced flow field data available today. A sample of the FAST{_}AD model input parameters most critical to the aerodynamics computations was also systematically perturbed to determine their effect on load and performance predictions. Attention was focused on the simpler upwind rotor configuration, zero yaw error test cases. Inconsistencies in input file parameters, such as aerodynamic performance characteristics, explain a noteworthy fraction of the load prediction discrepancies of the various modeling tools.

  1. Testing fault growth models with low-temperature thermochronology in the northwest Basin and Range, USA

    USGS Publications Warehouse

    Curry, Magdalena A. E.; Barnes, Jason B.; Colgan, Joseph P.

    2016-01-01

    Common fault growth models diverge in predicting how faults accumulate displacement and lengthen through time. A paucity of field-based data documenting the lateral component of fault growth hinders our ability to test these models and fully understand how natural fault systems evolve. Here we outline a framework for using apatite (U-Th)/He thermochronology (AHe) to quantify the along-strike growth of faults. To test our framework, we first use a transect in the normal fault-bounded Jackson Mountains in the Nevada Basin and Range Province, then apply the new framework to the adjacent Pine Forest Range. We combine new and existing cross sections with 18 new and 16 existing AHe cooling ages to determine the spatiotemporal variability in footwall exhumation and evaluate models for fault growth. Three age-elevation transects in the Pine Forest Range show that rapid exhumation began along the range-front fault between approximately 15 and 11 Ma at rates of 0.2–0.4 km/Myr, ultimately exhuming approximately 1.5–5 km. The onset ages of rapid exhumation identified at the three transects are indistinguishable within data uncertainty, indicating concomitant onset of faulting along strike. We show that even in the case of growth by fault-segment linkage, the fault would achieve its modern length within 3–4 Myr of onset. Comparison with the Jackson Mountains highlights the inadequacies of spatially limited sampling. A constant fault-length growth model best explains our thermochronology results. We advocate that low-temperature thermochronology can be further utilized to better understand and quantify fault growth, with broader implications for seismic hazard assessments and the coevolution of faulting and topography.
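
    The core measurement is simple to picture: on an age-elevation transect, the slope of elevation against AHe age during rapid cooling approximates the apparent exhumation rate. The values below are invented to mimic the ranges reported above; this is an illustration, not the study's data.

```python
import numpy as np

# Hypothetical AHe age-elevation transect: higher samples cool (and
# start retaining He) earlier, so ages get older with elevation.
ages_Ma = np.array([11.1, 11.8, 12.7, 13.9, 14.8])
elev_km = np.array([1.2, 1.5, 1.9, 2.2, 2.5])

# Slope of elevation vs. age ~ apparent exhumation rate (here ~0.34 km/Myr,
# inside the 0.2-0.4 km/Myr range quoted in the abstract).
rate, _ = np.polyfit(ages_Ma, elev_km, 1)
print(f"apparent exhumation rate ~ {rate:.2f} km/Myr")
```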

  2. Towards a suite of test cases and a pycomodo library to assess and improve numerical methods in ocean models

    NASA Astrophysics Data System (ADS)

    Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves

    2016-04-01

    The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods, and concepts (such as effective resolution), to assess the behavior of numerical models in complex hydrodynamical regimes, and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing evaluation of the coherence of the kernel. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design, and the time-stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their capacity to focus on one or a few schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnostics, and lastly for simulating a key oceanic process in a controlled environment. Idealized test cases allow verification of properties of numerical schemes (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) and bring to light numerical issues that remain undetected in realistic configurations (trajectory of a barotropic vortex, current-topography interaction). When complexity in the simulated dynamics grows (internal waves, unstable baroclinic jet), the sharing of the same experimental designs by different existing models is useful to get a measure of model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015). Such a benchmark suite is an interesting test bed for continuing research in numerical approaches as well as an efficient tool for maintaining any oceanic code and assuring users of a validated model over a certain range of hydrodynamical regimes. Thanks to a common netCDF format, this suite is completed with a python library that encompasses all the tools and metrics used to assess the efficiency of the numerical methods. References - Couvelard X., F. Dumas, V. Garnier, A.L. Ponte, C. Talandier, A.M. Treguier (2015). Mixed layer formation and restratification in presence of mesoscale and submesoscale turbulence. Ocean Modelling, Vol 96-2, p 243-253. doi:10.1016/j.ocemod.2015.10.004. - Soufflet Y., P. Marchesiello, F. Lemarié, J. Jouanno, X. Capet, L. Debreu, R. Benshila (2016). On effective resolution in ocean models. Ocean Modelling, in press. doi:10.1016/j.ocemod.2015.12.004

  3. Testing antismoking messages for Air Force trainees.

    PubMed

    Popova, Lucy; Linde, Brittany D; Bursac, Zoran; Talcott, G Wayne; Modayil, Mary V; Little, Melissa A; Ling, Pamela M; Glantz, Stanton A; Klesges, Robert C

    2016-11-01

    Young adults in the military are aggressively targeted by tobacco companies and are at high risk of tobacco use. Existing antismoking advertisements developed for the general population might be effective in educating young adults in the military. This study evaluated the effects of different themes of existing antismoking advertisements on perceived harm and intentions to use cigarettes and other tobacco products among Air Force trainees. In a pretest-post-test experiment, 782 Airmen were randomised to view antismoking advertisements in 1 of 6 conditions: anti-industry, health effects+anti-industry, sexual health, secondhand smoke, environment+anti-industry or control. We assessed the effect of different conditions on changes in perceived harm and intentions to use cigarettes, electronic cigarettes, smokeless tobacco, hookah and cigarillos from pretest to post-test with multivariable linear regression models (perceived harm) and zero-inflated Poisson regression model (intentions). Antismoking advertisements increased perceived harm of various tobacco products and reduced intentions to use. Advertisements featuring negative effects of tobacco on health and sexual performance coupled with revealing tobacco industry manipulations had the most consistent pattern of effects on perceived harm and intentions. Antismoking advertisements produced for the general public might also be effective with a young adult military population and could have spillover effects on perceptions of harm and intentions to use other tobacco products besides cigarettes. Existing antismoking advertising may be a cost-effective tool to educate young adults in the military. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  4. Wavelet transform approach for fitting financial time series data

    NASA Astrophysics Data System (ADS)

    Ahmed, Amel Abdoullah; Ismail, Mohd Tahir

    2015-10-01

    This study investigates a newly developed technique, a combined wavelet filtering and VEC model, to study the dynamic relationship among financial time series. A wavelet filter is used to remove noise from the daily data of the NASDAQ stock market of the US and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the series generated by the wavelet filter and of the original series are then analyzed with a cointegration test and a VEC model. The results show that the cointegration test affirms the existence of cointegration between the studied series, and that there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed model and the traditional model demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model alone) in fitting the financial stock market series, and reveals real information about the relationships among the stock markets.
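
    A minimal sketch of the filtering step, assuming the PyWavelets package and a soft universal threshold on the detail coefficients (the abstract does not specify the exact filter used); the denoised returns would then feed the cointegration test and VEC model fit.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of a series with the universal
    threshold (noise scale from a robust MAD estimate), then reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

# Illustrative run on synthetic "returns": a weak cycle buried in noise
rng = np.random.default_rng(0)
raw = 0.01 * np.sin(np.linspace(0, 8 * np.pi, 1024)) + rng.normal(0, 0.02, 1024)
print("std before/after:", raw.std().round(4), wavelet_denoise(raw).std().round(4))
```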

  5. Constraints on the anisotropic contributions to velocity discontinuities at ∼60 km depth beneath the Pacific

    PubMed Central

    Harmon, Nicholas

    2017-01-01

    Strong, sharp, negative seismic discontinuities, velocity decreases with depth, are observed beneath the Pacific seafloor at ∼60 km depth. It has been suggested that these are caused by an increase in radial anisotropy with depth, which occurs in global surface wave models. Here we test this hypothesis in two ways. We evaluate whether an increase in surface wave radial anisotropy with depth is robust with synthetic resolution tests. We do this by fitting an example surface wave data set near the East Pacific Rise. We also estimate the apparent isotropic seismic velocity discontinuities that could be caused by changes in radial anisotropy in S-to-P and P-to-S receiver functions and SS precursors using synthetic seismograms. We test one model where radial anisotropy is caused by olivine alignment and one model where it is caused by compositional layering. The result of our surface wave inversion suggests strong shallow azimuthal anisotropy beneath 0-10 Ma seafloor, which would also have a radial anisotropy signature. An increase in radial anisotropy with depth at 60 km depth is not well-resolved in surface wave models, and could be artificially observed. Shallow isotropy underlain by strong radial anisotropy could explain moderate apparent velocity drops (<6%) in SS precursor imaging, but not receiver functions. The effect is diminished if strong anisotropy also exists at 0-60 km depth as suggested by surface waves. Overall, an increase in radial anisotropy with depth may not exist at 60 km beneath the oceans and does not explain the scattered wave observations. PMID:29097907

  6. Constraints on the anisotropic contributions to velocity discontinuities at ∼60 km depth beneath the Pacific.

    PubMed

    Rychert, Catherine A; Harmon, Nicholas

    2017-08-01

    Strong, sharp, negative seismic discontinuities, velocity decreases with depth, are observed beneath the Pacific seafloor at ∼60 km depth. It has been suggested that these are caused by an increase in radial anisotropy with depth, which occurs in global surface wave models. Here we test this hypothesis in two ways. We evaluate whether an increase in surface wave radial anisotropy with depth is robust with synthetic resolution tests. We do this by fitting an example surface wave data set near the East Pacific Rise. We also estimate the apparent isotropic seismic velocity discontinuities that could be caused by changes in radial anisotropy in S-to-P and P-to-S receiver functions and SS precursors using synthetic seismograms. We test one model where radial anisotropy is caused by olivine alignment and one model where it is caused by compositional layering. The result of our surface wave inversion suggests strong shallow azimuthal anisotropy beneath 0-10 Ma seafloor, which would also have a radial anisotropy signature. An increase in radial anisotropy with depth at 60 km depth is not well-resolved in surface wave models, and could be artificially observed. Shallow isotropy underlain by strong radial anisotropy could explain moderate apparent velocity drops (<6%) in SS precursor imaging, but not receiver functions. The effect is diminished if strong anisotropy also exists at 0-60 km depth as suggested by surface waves. Overall, an increase in radial anisotropy with depth may not exist at 60 km beneath the oceans and does not explain the scattered wave observations.

  7. Finite element modelling of crash response of composite aerospace sub-floor structures

    NASA Astrophysics Data System (ADS)

    McCarthy, M. A.; Harte, C. G.; Wiggenraad, J. F. M.; Michielsen, A. L. P. J.; Kohlgrüber, D.; Kamoulakos, A.

    Composite energy-absorbing structures for use in aircraft are being studied within a European Commission research programme (CRASURV - Design for Crash Survivability). One of the aims of the project is to evaluate the current capabilities of crashworthiness simulation codes for composites modelling. This paper focuses on the computational analysis, using explicit finite element analysis, of a number of quasi-static and dynamic tests carried out within the programme. It describes the design of the structures, the analysis techniques used, and the results of the analyses in comparison to the experimental test results. It has been found that current multi-ply shell models are capable of modelling the main energy-absorbing processes at work in such structures. However, some deficiencies exist, particularly in modelling fabric composites. Developments within the finite element code are taking place as a result of this work which will enable better representation of composite fabrics.

  8. Analytical prediction of the interior noise for cylindrical models of aircraft fuselages for prescribed exterior noise fields. Phase 2: Models for sidewall trim, stiffened structures and cabin acoustics with floor partition

    NASA Technical Reports Server (NTRS)

    Pope, L. D.; Wilby, E. G.

    1982-01-01

    An airplane interior noise prediction model is developed to determine the important parameters associated with sound transmission into the interiors of airplanes, and to identify appropriate noise control methods. Models for stiffened structures and cabin acoustics with a floor partition are developed. Validation studies are undertaken using three test articles: a ring-stringer stiffened cylinder, an unstiffened cylinder with floor partition, and a ring-stringer stiffened cylinder with floor partition and sidewall trim. The noise reductions of the three test articles are computed using the theoretical models and compared to measured values. A statistical analysis of the comparison data indicates that there is no bias in the predictions, although a substantial random error exists, so that a discrepancy of more than five or six dB can be expected for about one out of three predictions.

  9. Effect of shroud geometry on the effectiveness of a short mixing stack gas eductor model

    NASA Astrophysics Data System (ADS)

    Kavalis, A. E.

    1983-06-01

    An existing apparatus for testing models of gas eductor systems using high-temperature primary flow was modified to provide improved control and performance over a wide range of gas temperatures and flow rates. Secondary flow pumping, temperature, and pressure data were recorded for two gas eductor system models. The first, previously tested under hot flow conditions, consists of a primary plate with four tilted-angled nozzles and a slotted, shrouded mixing stack with two diffuser rings (overall L/D = 1.5). A portable pyrometer with a surface probe was used for the second model in order to identify any hot spots on the external surface of the mixing stack, shroud, and diffuser rings. The second model is shown to have almost the same mixing and pumping performance as the first, but to exhibit much lower shroud and diffuser surface temperatures.

  10. High-order sliding-mode control for blood glucose regulation in the presence of uncertain dynamics.

    PubMed

    Hernández, Ana Gabriela Gallardo; Fridman, Leonid; Leder, Ron; Andrade, Sergio Islas; Monsalve, Cristina Revilla; Shtessel, Yuri; Levant, Arie

    2011-01-01

    The success of automatic blood glucose regulation depends on the robustness of the control algorithm used. It is a difficult task to perform due to the complexity of the glucose-insulin regulation system. The variety of existing models reflects the great number of phenomena involved in the process, and the inter-patient variability of the parameters represents another challenge. In this research a high-order sliding-mode control is proposed. It is applied to two well-known models, the Bergman minimal model and the Sorensen model, to test its robustness with respect to uncertain dynamics and patients' parameter variability. The controller designed on the basis of the simulations is tested with the specific Bergman minimal model of a diabetic patient whose parameters were identified from an in vivo assay. To minimize the insulin infusion rate and avoid the risk of hypoglycemia, the glucose target is a dynamic profile.
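
    To make the setup concrete, here is a toy simulation of the Bergman minimal model closed with a super-twisting (second-order sliding-mode) infusion law. Every parameter, gain, and initial condition below is an assumption chosen for illustration; the published controller, its tuning, and the identified patient parameters differ, and practical concerns such as anti-windup are omitted.

```python
import numpy as np

# Bergman minimal model + super-twisting insulin infusion (toy values only)
p1, p2, p3 = 0.028, 0.025, 1.3e-5   # glucose/remote-insulin kinetics (1/min)
n_I, V_I = 0.09, 12.0               # insulin decay (1/min), volume (l)
Gb, Ib = 160.0, 7.0                 # assumed diabetic basal state
G_ref = 90.0                        # regulation target (mg/dl)
k1, k2 = 2.0, 0.2                   # super-twisting gains (tuned by trial)

dt, T = 0.5, 1000.0                 # time step and horizon (min)
G, X, I, w = 250.0, 0.0, Ib, 0.0    # hyperglycemic initial condition
for _ in range(int(T / dt)):
    s = G - G_ref                                        # sliding variable
    u = max(0.0, k1 * np.sqrt(abs(s)) * np.sign(s) + w)  # infusion >= 0
    w += dt * k2 * np.sign(s)                            # integral term
    dG = -p1 * (G - Gb) - X * G                          # minimal model
    dX = -p2 * X + p3 * (I - Ib)
    dI = -n_I * (I - Ib) + u / V_I
    G, X, I = G + dt * dG, X + dt * dX, I + dt * dI      # forward Euler
print(f"glucose after {T:.0f} min: {G:.1f} mg/dl (target {G_ref:.0f})")
```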

  11. Load-Flow in Multiphase Distribution Networks: Existence, Uniqueness, Non-Singularity, and Linear Models

    DOE PAGES

    Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano; ...

    2018-01-01

    This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.
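
    The fixed-point map at the heart of the Z-bus method is easy to state for the single-phase, wye-connected, constant-power special case; the 3-bus admittances and demands below are invented for illustration. Under conditions of the kind the paper establishes, the map is a contraction, so the iteration converges to the unique solution.

```python
import numpy as np

# Single-phase sketch of the Z-bus fixed-point iteration for constant-power loads
Y = np.array([[ 4 - 8j, -2 + 4j, -2 + 4j],
              [-2 + 4j,  2 - 4j,  0 + 0j],
              [-2 + 4j,  0 + 0j,  2 - 4j]])        # bus admittance matrix (pu)
V0 = 1.0 + 0.0j                                    # slack-bus voltage
Y_LL, Y_L0 = Y[1:, 1:], Y[1:, 0]                   # load-bus partitions
Z_LL = np.linalg.inv(Y_LL)
w = -Z_LL @ (Y_L0 * V0)                            # no-load voltage profile
S = np.array([0.10 + 0.05j, 0.08 + 0.03j])         # constant-power demands (pu)

V = w.copy()
for it in range(100):
    V_new = w - Z_LL @ (np.conj(S) / np.conj(V))   # fixed-point map
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
print("load-bus voltages:", np.round(V, 6), f"({it + 1} iterations)")
```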

  12. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques, like expert systems and qualitative modelling, are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
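
    A minimal sketch of the idea, with a hypothetical failure mode and made-up propagation intervals: a candidate failure source is one whose propagation time windows contain every observed alarm time. This is an illustration of the digraph concept, not the paper's algorithms.

```python
# Edges carry (min, max) propagation delays between failure modes.
edges = {  # hypothetical fault propagation digraph
    "pump_cavitation": [("low_flow", (2, 5)), ("high_vibration", (1, 3))],
    "low_flow":        [("low_pressure", (1, 2))],
    "high_vibration":  [],
    "low_pressure":    [],
}

def windows_from(source):
    """Earliest/latest arrival window of the fault at every reachable node."""
    win, stack = {source: (0, 0)}, [source]
    while stack:
        node = stack.pop()
        lo, hi = win[node]
        for succ, (dmin, dmax) in edges.get(node, []):
            cand = (lo + dmin, hi + dmax)
            if succ not in win or cand[0] < win[succ][0] or cand[1] > win[succ][1]:
                old = win.get(succ, cand)
                win[succ] = (min(old[0], cand[0]), max(old[1], cand[1]))
                stack.append(succ)
    return win

alarms = {"low_flow": 3.0, "low_pressure": 4.5}   # observed times after t=0
for src in edges:
    win = windows_from(src)
    if all(n in win and win[n][0] <= t <= win[n][1] for n, t in alarms.items()):
        print("consistent failure source:", src)
```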

  13. Moral Enhancement Should Target Self-Interest and Cognitive Capacity.

    PubMed

    Ahlskog, Rafael

    2017-01-01

    Current suggestions for capacities that should be targeted for moral enhancement have centered on traits like empathy, fairness or aggression. The literature, however, lacks a proper model for understanding the interplay and complexity of moral capacities, which limits the practicability of proposed interventions. In this paper, I integrate existing knowledge on the nature of human moral behavior and present a formal model of prosocial motivation. The model provides two important results regarding the most friction-free route to moral enhancement. First, we should consider decreasing self-interested motivation rather than increasing prosociality directly. Second, this should be complemented with cognitive enhancement. These suggestions are tested against existing and emerging evidence on cognitive capacity, mindfulness meditation and the effects of psychedelic drugs, and are found to have sufficient grounding for further theoretical and empirical exploration. Furthermore, moral effects of the latter two are hypothesized to result from a diminished sense of self with subsequent reductions in self-interest.

  14. Load-Flow in Multiphase Distribution Networks: Existence, Uniqueness, Non-Singularity, and Linear Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano

    This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.

  15. Stress and Damage in Polymer Matrix Composite Materials Due to Material Degradation at High Temperatures

    NASA Technical Reports Server (NTRS)

    McManus, Hugh L.; Chamis, Christos C.

    1996-01-01

    This report describes analytical methods for calculating stresses and damage caused by degradation of the matrix constituent in polymer matrix composite materials. Laminate geometry, material properties, and matrix degradation states are specified as functions of position and time. Matrix shrinkage and property changes are modeled as functions of the degradation states. The model is incorporated into an existing composite mechanics computer code. Stresses, strains, and deformations at the laminate, ply, and micro levels are calculated, and from these calculations it is determined if there is failure of any kind. The rationale for the model (based on published experimental work) is presented, its integration into the laminate analysis code is outlined, and example results are given, with comparisons to existing material and structural data. The mechanisms behind the changes in properties and in surface cracking during long-term aging of polyimide matrix composites are clarified. High-temperature-material test methods are also evaluated.

  16. Ignition and Growth Modeling of Detonating LX-04 (85% HMX / 15% VITON) Using New and Previously Obtained Experimental Data

    NASA Astrophysics Data System (ADS)

    Tarver, Craig

    2017-06-01

    An Ignition and Growth reactive flow model for detonating LX-04 (85% HMX / 15% Viton) was developed using new and previously obtained experimental data on: cylinder test expansion; wave curvature; failure diameter; and laser interferometric copper and tantalum foil free surface velocities and LiF interface particle velocity histories. A reaction product JWL EOS generated by the CHEETAH code compared favorably with the existing, well normalized LX-04 product JWL when both were used with the Ignition and Growth model. Good agreement with all existing experimental data was obtained. Keywords: LX-04, HMX, detonation, Ignition and Growth. PACS: 82.33.Vx, 82.40.Fp. This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  17. Analytic Considerations and Design Basis for the IEEE Distribution Test Feeders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, K. P.; Mather, B. A.; Pal, B. C.

    For nearly 20 years the Test Feeder Working Group of the Distribution System Analysis Subcommittee has been developing openly available distribution test feeders for use by researchers. The purpose of these test feeders is to provide models of distribution systems that reflect the wide diversity in design and their various analytic challenges. Because of their utility and accessibility, the test feeders have been used for a wide range of research, some of which has been outside the original scope of intended uses. This paper provides an overview of the existing distribution feeder models and clarifies the specific analytic challenges that they were originally designed to examine. Additionally, the paper will provide guidance on which feeders are best suited for various types of analysis. The purpose of this paper is to provide the original intent of the Working Group and to provide the information necessary so that researchers may make an informed decision on which of the test feeders are most appropriate for their work.

  18. Analytic Considerations and Design Basis for the IEEE Distribution Test Feeders

    DOE PAGES

    Schneider, K. P.; Mather, B. A.; Pal, B. C.; ...

    2017-10-10

    For nearly 20 years the Test Feeder Working Group of the Distribution System Analysis Subcommittee has been developing openly available distribution test feeders for use by researchers. The purpose of these test feeders is to provide models of distribution systems that reflect the wide diversity in design and their various analytic challenges. Because of their utility and accessibility, the test feeders have been used for a wide range of research, some of which has been outside the original scope of intended uses. This paper provides an overview of the existing distribution feeder models and clarifies the specific analytic challenges that they were originally designed to examine. Additionally, the paper will provide guidance on which feeders are best suited for various types of analysis. The purpose of this paper is to provide the original intent of the Working Group and to provide the information necessary so that researchers may make an informed decision on which of the test feeders are most appropriate for their work.

  19. Model systems for defining initiation, promotion, and progression of skin neoplasms.

    PubMed

    Boutwell, R K

    1989-01-01

    A number of items that must be considered in designing and choosing a suitable model for initiation and promotion testing have been described. Although these items may seem complex, tests for initiation and promotion are, in reality, quite simple and provide a rational approach to carcinogen testing. Several tests have been described here and Eastin, elsewhere in this book, describes the validation of a simple and highly recommended test. The processes involved in initiation and promotion are qualitatively different. The criteria for concern about possible human hazards as well as for regulation of initiators and promoters should be based on these qualitative differences. Realistic appraisal of the risk must be based on the level and nature of the potential hazard. In particular, it must be recognized that promoting action is reversible, that a threshold exists, and that promotion is readily inhibited. Therefore, animal tests that differentiate between potential initiators and promoters are essential to enable a logical assessment of human risk and the implementation of appropriate protective measures based on scientific facts.

  20. A reliability as an independent variable (RAIV) methodology for optimizing test planning for liquid rocket engines

    NASA Astrophysics Data System (ADS)

    Strunz, Richard; Herrmann, Jeffrey W.

    2011-12-01

    The hot fire test strategy for liquid rocket engines has always been a concern of space industry and agencies alike because no recognized standard exists. Previous hot fire test plans focused on the verification of performance requirements but did not explicitly include reliability as a dimensioning variable. The stakeholders are, however, concerned about a hot fire test strategy that balances reliability, schedule, and affordability. A multiple-criteria test planning model is presented that provides a framework to optimize the hot fire test strategy with respect to stakeholder concerns. The Staged Combustion Rocket Engine Demonstrator, a program of the European Space Agency, is used as an example to provide a quantitative answer to the claim that a reduced-thrust-scale demonstrator is cost beneficial for a subsequent flight engine development. Scalability aspects of major subsystems are considered in the prior information definition inside the Bayesian framework. The model is also applied to assess the impact of an increase of the demonstrated reliability level on schedule and affordability.
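
    Setting the paper's Bayesian treatment aside, the classical zero-failure binomial relation n >= ln(1 - C) / ln(R) already shows why the demonstrated reliability level dominates test count and hence schedule and cost. A quick computation, purely for intuition:

```python
import math

def tests_required(R, C):
    """Consecutive successful hot fire tests needed to demonstrate
    reliability R at confidence C with zero failures: n >= ln(1-C)/ln(R)."""
    return math.ceil(math.log(1.0 - C) / math.log(R))

for R in (0.95, 0.99, 0.999):
    print(f"R = {R}: {tests_required(R, 0.90)} tests at 90% confidence")
```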

  1. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  2. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  3. Dynamic prediction in functional concurrent regression with an application to child growth.

    PubMed

    Leroux, Andrew; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William

    2018-04-15

    In many studies, it is of interest to predict the future trajectory of subjects based on their historical data, referred to as dynamic prediction. Mixed effects models have traditionally been used for dynamic prediction. However, the commonly used random intercept and slope model is often not sufficiently flexible for modeling subject-specific trajectories. In addition, there may be useful exposures/predictors of interest that are measured concurrently with the outcome, complicating dynamic prediction. To address these problems, we propose a dynamic functional concurrent regression model to handle the case where both the functional response and the functional predictors are irregularly measured. Currently, such a model cannot be fit by existing software. We apply the model to dynamically predict children's length conditional on prior length, weight, and baseline covariates. Inference on model parameters and subject-specific trajectories is conducted using the mixed effects representation of the proposed model. An extensive simulation study shows that the dynamic functional regression model provides more accurate estimation and inference than existing methods. Methods are supported by fast, flexible, open source software that uses heavily tested smoothing techniques. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  4. Full-Scale Crash Test and Finite Element Simulation of a Composite Prototype Helicopter

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Boitnott, Richard L.; Lyle, Karen H.

    2003-01-01

    A full-scale crash test of a prototype composite helicopter was performed at the Impact Dynamics Research Facility at NASA Langley Research Center in 1999 to obtain data for validation of a finite element crash simulation. The helicopter was the flight test article built by Sikorsky Aircraft during the Advanced Composite Airframe Program (ACAP). The composite helicopter was designed to meet the stringent Military Standard (MIL-STD-1290A) crashworthiness criteria and was outfitted with two crew and two troop seats and four anthropomorphic dummies. The test was performed at 38-ft/s vertical and 32.5-ft/s horizontal velocity onto a rigid surface. An existing modal-vibration model of the Sikorsky ACAP helicopter was converted into a model suitable for crash simulation. A two-stage modeling approach was implemented and an external user-defined subroutine was developed to represent the complex landing gear response. The crash simulation was executed with a nonlinear, explicit transient dynamic finite element code. Predictions of structural deformation and failure, the sequence of events, and the dynamic response of the airframe structure were generated and the numerical results were correlated with the experimental data to validate the simulation. The test results, the model development, and the test-analysis correlation are described.

  5. The estimation of uniaxial compressive strength conversion factor of trona and interbeds from point load tests and numerical modeling

    NASA Astrophysics Data System (ADS)

    Ozturk, H.; Altinpinar, M.

    2017-07-01

    The point load (PL) test is generally used for estimation of the uniaxial compressive strength (UCS) of rocks because of its economic advantages and simplicity in testing. If the PL index of a specimen is known, the UCS can be estimated using conversion factors. Several conversion factors have been proposed by various researchers, and they are dependent upon the rock type. In the literature, conversion factors for different sedimentary, igneous and metamorphic rocks can be found, but no study exists on trona. In this study, laboratory UCS and field PL tests were carried out on trona and interbeds of volcano-sedimentary rocks. Based on these tests, PL to UCS conversion factors for trona and interbeds are proposed. The tests were modeled numerically using distinct element method (DEM) software, the particle flow code (PFC), in an attempt to guide researchers with various types of modeling problems (excavation, cavern design, hydraulic fracturing, etc.) in the abovementioned rock types. Average PFC parallel bond contact model micro-properties for the trona and interbeds were determined within this study so that future researchers can use them and avoid the rigorous PFC calibration procedure. It was observed that PFC overestimates the tensile strength of the rocks by a factor that ranges from 22 to 106.
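
    The conversion factor itself is a one-parameter fit, UCS ~ k * Is50, typically estimated by least squares through the origin. The paired values below are invented for illustration only; the trona-specific factors come from the paper's own test data.

```python
import numpy as np

# Hypothetical paired measurements (illustrative values, not the study's data)
is50 = np.array([1.1, 1.4, 0.9, 1.8, 2.0])   # point load index Is50 (MPa)
ucs  = np.array([24., 31., 20., 41., 44.])   # laboratory UCS (MPa)

# Zero-intercept least squares: k = sum(x*y) / sum(x^2)
k = np.sum(is50 * ucs) / np.sum(is50 ** 2)
print(f"conversion factor k ~ {k:.1f}  (UCS ~ {k:.1f} * Is50)")
```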

  6. Near-Field Magnetic Dipole Moment Analysis

    NASA Technical Reports Server (NTRS)

    Harris, Patrick K.

    2003-01-01

    This paper describes the data analysis technique used for magnetic testing at the NASA Goddard Space Flight Center (GSFC). Excellent results have been obtained using this technique to convert a spacecraft's measured magnetic field data into its respective magnetic dipole moment model. The model is most accurate with the earth's geomagnetic field cancelled in a spherical region bounded by the measurement magnetometers with a minimum radius large enough to enclose the magnetic source. Considerably enhanced spacecraft magnetic testing is offered by using this technique in conjunction with a computer-controlled magnetic field measurement system. Such a system, with real-time magnetic field display capabilities, has been incorporated into other existing magnetic measurement facilities and is also used at remote locations where transport to a magnetics test facility is impractical.

  7. Digital resolver for helicopter model blade motion analysis

    NASA Technical Reports Server (NTRS)

    Daniels, T. S.; Berry, J. D.; Park, S.

    1992-01-01

    The paper reports the development and initial testing of a digital resolver to replace existing analog signal processing instrumentation. Radiometers, mounted directly on one of the fully articulated blades, are electrically connected through a slip ring to analog signal processing circuitry. The measured signals are periodic with azimuth angle and are resolved into harmonic components, with 0 deg over the tail. The periodic nature of the helicopter blade motion restricts the frequency content of each flapping and yaw signal to the fundamental and harmonics of the rotor rotational frequency. A minicomputer is employed to collect these data and then plot them graphically in real time. With this and other information generated by the instrumentation, a helicopter test pilot can then adjust the helicopter model's controls to achieve the desired aerodynamic test conditions.

  8. Operating a terrestrial Internet router onboard and alongside a small satellite

    NASA Astrophysics Data System (ADS)

    Wood, L.; da Silva Curiel, A.; Ivancic, W.; Hodgson, D.; Shell, D.; Jackson, C.; Stewart, D.

    2006-07-01

    After twenty months of flying, testing and demonstrating a Cisco mobile access router, originally designed for terrestrial use, onboard the low-Earth-orbiting UK-DMC satellite as part of a larger merged ground/space IP-based internetwork, we use our experience to examine the benefits and drawbacks of integration and standards reuse for small satellite missions. Benefits include ease of operation and the ability to leverage existing systems and infrastructure designed for general use with a large set of latent capabilities to draw on when needed, as well as the familiarity that comes from reuse of existing, known, and well-understood security and operational models. Drawbacks include cases where integration work was needed to bridge the gaps in assumptions between different systems, and where performance considerations outweighed the benefits of reuse of pre-existing file transfer protocols. We find similarities with the terrestrial IP networks whose technologies have been taken to small satellites—and also some significant differences between the two in operational models and assumptions that must be borne in mind.

  9. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples

    PubMed Central

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-01-01

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260
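
    The comparison metric is standard: precision = TP/(TP+FP), recall = TP/(TP+FN), and F-measure is their harmonic mean. The counts below are contrived so the two callers reproduce the precisions quoted above (84.5% and 70.4%); they are not the study's actual tallies.

```python
def scores(tp, fp, fn):
    """Precision, recall, and F-measure for SV calls against confirmed SVs."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# Counts contrived to reproduce the quoted precisions (illustration only)
for name, tp, fp, fn in [("COSMOS-like", 60, 11, 12), ("next best", 50, 21, 22)]:
    p, r, f = scores(tp, fp, fn)
    print(f"{name}: precision {p:.1%}, recall {r:.1%}, F-measure {f:.3f}")
```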

  10. Publishing and sharing of hydrologic models through WaterHUB

    NASA Astrophysics Data System (ADS)

    Merwade, V.; Ruddell, B. L.; Song, C.; Zhao, L.; Kim, J.; Assi, A.

    2011-12-01

    Most hydrologists use hydrologic models to simulate hydrologic processes and to understand hydrologic pathways and fluxes for research, decision making, and engineering design. Once these tasks are complete, including publication of results, the models generally are not published or made available to the public for further use and improvement. Although publication or sharing of models is not required for journal publications, sharing of models may open doors for new collaborations and avoids duplication of effort if other researchers are interested in simulating a particular watershed for which a model already exists. For researchers who are interested in sharing models, there are limited avenues for publishing their models to the wider community. Towards filling this gap, a prototype cyberinfrastructure (CI), called WaterHUB, is developed for sharing hydrologic data and modeling tools in an interactive environment. To test the utility of WaterHUB for sharing hydrologic models, a system to publish and share SWAT (Soil Water Assessment Tool) models is developed. Users can utilize WaterHUB to search and download existing SWAT models, and also upload new SWAT models. Metadata such as the name of the watershed, the name of the person or agency who developed the model, the simulation period, the time step, and the list of calibrated parameters are also published with individual models.

  11. Hydrostratigraphic interpretation of test-hole and surface geophysical data, Elkhorn and Loup River Basins, Nebraska, 2008 to 2011

    USGS Publications Warehouse

    Hobza, Christopher M.; Bedrosian, Paul A.; Bloss, Benjamin R.

    2012-01-01

    The Elkhorn-Loup Model (ELM) was begun in 2006 to understand the effect of various groundwater-management scenarios on surface-water resources. During phase one of the ELM study, a lack of subsurface geological information was identified as a data gap. Test holes drilled to the base of the aquifer in the ELM study area are spaced as much as 25 miles apart, especially in areas of the western Sand Hills. Given the variable character of the hydrostratigraphic units that compose the High Plains aquifer system, substantial variation in aquifer thickness and characteristics can exist between test holes. To improve the hydrogeologic understanding of the ELM study area, the U.S. Geological Survey, in cooperation with the Nebraska Department of Natural Resources, multiple Natural Resources Districts participating in the ELM study, and the University of Nebraska-Lincoln Conservation and Survey Division, described the subsurface lithology at six test holes drilled in 2010 and concurrently collected borehole geophysical data to identify the base of the High Plains aquifer system. A total of 124 time-domain electromagnetic (TDEM) soundings of resistivity were collected at and between selected test-hole locations during 2008-11 as a quick, non-invasive means of identifying the base of the High Plains aquifer system. Test-hole drilling and geophysical logging indicated the base-of-aquifer elevation was less variable in the central ELM area than in previously reported results from the western part of the ELM study area, where deeper paleochannels were eroded into the Brule Formation. In total, more than 435 test holes were examined and compared with the modeled-TDEM soundings. Even where present, individual stratigraphic units could not always be identified in modeled-TDEM sounding results if sufficient resistivity contrast was not evident; however, in general, the base of aquifer [top of the aquifer confining unit (ACU)] is one of the best-resolved results from the TDEM-based models, and estimates of the base-of-aquifer elevation are in good accordance with those from existing test-hole data. Differences between ACU elevations based on modeled-TDEM and test-hole data ranged from 2 to 113 feet (0.6 to 34 meters). The modeled resistivity results reflect the eastward thinning of Miocene-age and older stratigraphic units, and generally allowed confident identification of the accompanying change in the stratigraphic unit forming the ACU. The differences in elevation of the top of the Ogallala, estimated on the basis of the modeled-TDEM resistivity, and the test-hole data ranged from 11 to 251 feet (3.4 to 77 meters), with two-thirds of model results being within 60 feet of the test-hole contact elevation. The modeled-TDEM soundings also provided information regarding the distribution of Plio-Pleistocene gravel deposits, which had an average thickness of 100 feet (30 meters) in the study area; however, in many cases the contact between the Plio-Pleistocene deposits and the overlying Quaternary deposits cannot be reliably distinguished using TDEM soundings alone because of insufficient thickness or resistivity contrast.

  12. Test results of a 40-kW Stirling engine and comparison with the NASA Lewis computer code predictions

    NASA Technical Reports Server (NTRS)

    Allen, David J.; Cairelli, James E.

    1988-01-01

    A Stirling engine was tested without auxiliaries at NASA Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were: (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix showed severely reduced performance.

  13. Flow stress model in metal cutting

    NASA Technical Reports Server (NTRS)

    Black, J. T.

    1978-01-01

    A model for the plastic deformation that occurs in metal cutting, based on dislocation mechanics, is presented. The model explains the fundamental deformation structure that develops during machining and is based on the well-known Cottrell-Stokes law, wherein the flow stress is partitioned into two parts: an athermal part which occurs in the shear fronts (or shear bands), and a thermal part which occurs in the lamella regions. The deformation invokes the presence of a cellular dislocation distribution which always exists in the material ahead of the shear process. This 'alien' dislocation distribution either exists in the metal prior to cutting or is produced by the compressive stress field which operates in front of the shear process. The magnitude of the flow stress and the direction of the shear are shown to be correlated with the stacking fault energy of the metal being cut. The model is tested with respect to energy consumption rates and found to be consistent with observed values.

  14. Pain sensitivity mediates the relationship between stress and headache intensity in chronic tension-type headache

    PubMed Central

    Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John

    2012-01-01

    BACKGROUND: A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. OBJECTIVE: To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. METHOD: Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. RESULTS: Pain sensitivity significantly mediated the relationship between stress and headache intensity. CONCLUSIONS: The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed. PMID:23248808

  15. Transforming GIS data into functional road models for large-scale traffic simulation.

    PubMed

    Wilkie, David; Sewall, Jason; Lin, Ming C

    2012-06-01

    There exists a vast amount of geographic information system (GIS) data that model road networks around the world as polylines with attributes. In this form, the data are insufficient for applications such as simulation and 3D visualization, tools which will grow in power and demand as sensor data become more pervasive and as governments try to optimize their existing physical infrastructure. In this paper, we propose an efficient method for enhancing a road map from a GIS database to create a geometrically and topologically consistent 3D model to be used in real-time traffic simulation, interactive visualization of virtual worlds, and autonomous vehicle navigation. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, legal merge zones, and intersections with arbitrary states, and it is independent of the simulation methodologies. We test the 3D models of road networks generated by our algorithm on real-time traffic simulation using both macroscopic and microscopic techniques.
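
    The first step of any such pipeline is turning attribute-carrying polylines into a consistent graph; a minimal sketch (endpoint snapping only, with hypothetical attributes) is below. The paper's method goes much further, fitting smooth geometry, lanes, and intersection states on top of this topology.

```python
from collections import defaultdict

def snap(pt, tol=1e-5):
    """Quantize a coordinate so nearly-coincident endpoints share a node."""
    return (round(pt[0] / tol) * tol, round(pt[1] / tol) * tol)

def build_road_graph(polylines):
    """Turn GIS polylines (point lists plus attribute dicts) into an
    undirected graph: nodes at snapped endpoints, edges keeping geometry
    and attributes for later lane/intersection modeling."""
    graph = defaultdict(list)
    for pts, attrs in polylines:
        a, b = snap(pts[0]), snap(pts[-1])
        graph[a].append((b, pts, attrs))
        graph[b].append((a, list(reversed(pts)), attrs))
    return graph

roads = [([(0, 0), (1, 0)], {"lanes": 2}),
         ([(1, 0.000001), (1, 1)], {"lanes": 1})]   # endpoints snap together
g = build_road_graph(roads)
print(len(g), "nodes;", sum(len(v) for v in g.values()) // 2, "edges")
```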

  16. Modeling, Production, and Testing of an Echogenic Needle for Ultrasound-Guided Nerve Blocks.

    PubMed

    Bigeleisen, Paul E; Hess, Aaron; Zhu, Richard; Krediet, Annelot

    2016-06-01

    We have designed, produced, and tested an echogenic needle based on a sawtooth pattern where the height of each tooth was 1.25 times the wavelength of the ultrasound transducer. A numeric solution to the time-independent wave equation (Helmholtz equation) was used to create a model of backscattering from a needle. A 21-gauge stainless steel prototype was manufactured and tested in a water bath. Backscattering from the needle was compared to theoretical predictions from our model. Based on these results, an 18-gauge prototype needle was fabricated from stainless steel and tested in a pig cadaver. This needle was compared to a commercial 18-gauge echogenic needle (Pajunk Medical Systems, Tucker, GA) by measuring the brightness of the needle relative to the background in sonograms of a needle in a pig cadaver. The backscattering from the 21-gauge prototype needle reproduced the qualitative predictions of our model. At 30° and 45° of insonation, our prototype performed equivalently to the Pajunk needle. At 60°, our prototype was significantly brighter than the Pajunk needle (P = .017). In conclusion, we chose a model for the design of an echogenic needle and modeled it on the basis of a solution to the Helmholtz equation. A prototype needle was tested in a water bath and compared to the model prediction. After verification of our model, we designed an 18-gauge needle, which performed better than an existing echogenic needle (Pajunk) at 60° of insonation. Our needle will require further testing in human trials. © 2016 by the American Institute of Ultrasound in Medicine.
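
    The stated design rule is easy to evaluate: tooth height h = 1.25 * wavelength, with wavelength = c/f. The abstract does not give the transducer frequency, so the computation below assumes the usual ~1540 m/s soft-tissue sound speed and a few common imaging frequencies.

```python
c = 1540.0   # assumed speed of sound in soft tissue (m/s)
for f_MHz in (5.0, 10.0, 15.0):
    wavelength_um = c / (f_MHz * 1e6) * 1e6
    print(f"{f_MHz:4.1f} MHz: wavelength {wavelength_um:6.1f} um, "
          f"tooth height {1.25 * wavelength_um:6.1f} um")
```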

  17. Predicting FLDs Using a Multiscale Modeling Scheme

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.
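
    The M-K limit-strain idea reduces, in its simplest one-dimensional form, to force balance between an imperfect (groove) section and the nominal section under power-law hardening; the groove saturates when its force-carrying capacity peaks, and the nominal strain at that point is the limit strain. The sketch below solves only that toy case. The paper's actual implementation couples the BBC2008 yield locus, VPSC, and ABAQUS/Explicit, none of which appears here; the n and f0 values are illustrative.

```python
import math
from scipy.optimize import brentq

def mk_limit_strain(n, f0):
    """Toy 1-D Marciniak-Kuczynski calculation: power-law hardening
    sigma = K*eps^n and initial thickness imperfection f0 = t_groove/t_nominal.
    Force balance gives  eps_b^n exp(-eps_b) = (1/f0) eps_a^n exp(-eps_a);
    h(e) = e^n exp(-e) peaks at e = n, so the limit strain eps_a* solves
    h(eps_a*) = f0 * h(n)."""
    target = f0 * (n ** n) * math.exp(-n)
    return brentq(lambda e: (e ** n) * math.exp(-e) - target, 1e-9, n)

for f0 in (0.999, 0.99, 0.95):
    print(f"f0 = {f0}: limit strain ~ {mk_limit_strain(0.22, f0):.3f} (n = 0.22)")
```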

  18. Examining the Link between Stress Events and Prosocial Behavior in Adolescents: More Ordinary Magic?

    ERIC Educational Resources Information Center

    Larson, Andrea; Moses, Tally

    2017-01-01

    Scholarship regarding adolescent resilience has typically defined resilience as the absence of negative outcomes rather than the existence of positive outcomes. This study drew on the challenge model of resilience, which anticipates a curvilinear relationship between stress exposure and adaptive functioning, to test whether adolescents reporting…

  19. Expectancy-Value and Cognitive Process Outcomes in Mathematics Learning: A Structural Equation Analysis

    ERIC Educational Resources Information Center

    Phan, Huy P.

    2014-01-01

    Existing research has yielded evidence to indicate that the expectancy-value theoretical model predicts students' learning in various achievement contexts. Achievement values and self-efficacy expectations, for example, have been found to exert positive effects on cognitive process and academic achievement outcomes. We tested a conceptual model…

  20. Influence of Assessment for Learning Professional Development in Rural Georgia Public Schools

    ERIC Educational Resources Information Center

    Cole, Marianne

    2010-01-01

    This study investigated the effect of two models of professional development concerning Assessment for Learning on teacher perception of the effectiveness of Assessment for Learning strategies and student achievement as measured by standardized Georgia End of Course Tests. The study hypothesized that a positive relationship exists between teacher…
