Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual-based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...
Locke, Jill; Fuller, Erin Rotheram; Kasari, Connie
2014-01-01
This study examined the social impact of being a typical peer model as part of a social skills intervention for children with autism spectrum disorder (ASD). Participants were drawn from a randomized controlled treatment trial that examined the effects of targeted interventions on the social networks of 60 elementary-aged children with ASD. Results demonstrated that typical peer models had higher social network centrality, more received friendships, higher friendship quality, and less loneliness than non-peer models. Peer models were also more likely to be connected with children with ASD than non-peer models at baseline and exit. These results suggest that typical peers can be socially connected to children with ASD, as well as other classmates, and maintain a strong and positive role within the classroom. PMID:22215436
A Review of Recent Aeroelastic Analysis Methods for Propulsion at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral; Stefko, George L.
1993-01-01
This report reviews aeroelastic analyses for propulsion components (propfans, compressors and turbines) being developed and used at NASA LeRC. These aeroelastic analyses include both structural and aerodynamic models. The structural models include a typical section, a beam (with and without disk flexibility), and a finite-element blade model (with plate bending elements). The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation to the three-dimensional Euler equations for multibladed configurations. Typical calculated results are presented for each aeroelastic model. Suggestions for further research are made. Many of the currently available aeroelastic models and analysis methods are being incorporated in a unified computer program, APPLE (Aeroelasticity Program for Propulsion at LEwis).
APPLE - An aeroelastic analysis system for turbomachines and propfans
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral
1992-01-01
This paper reviews aeroelastic analysis methods for propulsion elements (advanced propellers, compressors and turbines) being developed and used at NASA Lewis Research Center. These aeroelastic models include both structural and aerodynamic components. The structural models include the typical section model, the beam model with and without disk flexibility, and the finite element blade model with plate bending elements. The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation for a cascade to the three-dimensional Euler equations for multi-blade configurations. Typical results are presented for each aeroelastic model. Suggestions for further research are indicated. All the available aeroelastic models and analysis methods are being incorporated into a unified computer program named APPLE (Aeroelasticity Program for Propulsion at LEwis).
Assessing College Students' Understanding of Acid Base Chemistry Concepts
ERIC Educational Resources Information Center
Wan, Yanjun Jean
2014-01-01
Typically most college curricula include three acid base models: Arrhenius', Bronsted-Lowry's, and Lewis'. Although Lewis' acid base model is generally thought to be the most sophisticated among these three models, and can be further applied in reaction mechanisms, most general chemistry curricula either do not include Lewis' acid base model, or…
Liu, Zhiya; Song, Xiaohong; Seger, Carol A.
2015-01-01
We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting. PMID:26274332
Liu, Zhiya; Song, Xiaohong; Seger, Carol A
2015-01-01
We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting.
NASA Technical Reports Server (NTRS)
Meyer, T. G.; Hill, J. T.; Weber, R. M.
1988-01-01
A viscoplastic material model for the high temperature turbine airfoil material B1900 + Hf was developed and demonstrated in a three-dimensional finite element analysis of a typical turbine airfoil. The demonstration problem is a simulated flight cycle and includes the appropriate transient thermal and mechanical loads typically experienced by these components. The Walker viscoplastic material model was shown to be efficient, stable, and easily used. The demonstration is summarized and the performance of the material model is evaluated.
Analysis of typical fault-tolerant architectures using HARP
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl
1987-01-01
Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. The HARP is written in FORTRAN, consists of nearly 30,000 lines of code and comments, and is based on behavioral decomposition. Using behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combining of these two models are examined. Examples in which the HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.
Organism and population-level ecological models for chemical risk assessment
Ecological risk assessment typically focuses on animal populations as endpoints for regulatory ecotoxicology. Scientists at USEPA are developing models for animal populations exposed to a wide range of chemicals from pesticides to emerging contaminants. Modeled taxa include aquat...
Experiment and simulation for CSI: What are the missing links?
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Park, K. C.
1989-01-01
Viewgraphs on experiment and simulation for control structure interaction (CSI) are presented. Topics covered include: control structure interaction; typical control/structure interaction system; CSI problem classification; actuator/sensor models; modeling uncertainty; noise models; real-time computations; and discrete versus continuous.
A Twin Factor Mixture Modeling Approach to Childhood Temperament: Differential Heritability
Scott, Brandon G.; Lemery-Chalfant, Kathryn; Clifford, Sierra; Tein, Jenn-Yun; Stoll, Ryan; Goldsmith, H. Hill
2016-01-01
Twin factor mixture modeling was used to identify temperament profiles, while simultaneously estimating a latent factor model for each profile, with a sample of 787 twin pairs (Mage = 7.4 years; SD = 0.84; 49% female; 88.3% Caucasian), using mother- and father-reported temperament. A 4-profile, 1-factor model fit the data well. Profiles included ‘Regulated, Typical Reactive’, ‘Well-regulated, Positive Reactive’, ‘Regulated, Surgent’, and ‘Dysregulated, Negative Reactive.’ All profiles were heritable, with heritability lower and shared environment also contributing to membership in the ‘Regulated, Typical Reactive’ and ‘Dysregulated, Negative Reactive’ profiles. PMID:27291568
One- and two-objective approaches to an area-constrained habitat reserve site selection problem
Stephanie Snyder; Charles ReVelle; Robert Haight
2004-01-01
We compare several ways to model a habitat reserve site selection problem in which an upper bound on the total area of the selected sites is included. The models are cast as optimization coverage models drawn from the location science literature. Classic covering problems typically include a constraint on the number of sites that can be selected. If potential reserve...
A Cognitive Diagnosis Model for Cognitively Based Multiple-Choice Options
ERIC Educational Resources Information Center
de la Torre, Jimmy
2009-01-01
Cognitive or skills diagnosis models are discrete latent variable models developed specifically for the purpose of identifying the presence or absence of multiple fine-grained skills. However, applications of these models typically involve dichotomous or dichotomized data, including data from multiple-choice (MC) assessments that are scored as…
NASA Technical Reports Server (NTRS)
Jackson, C. M., Jr.; Summerfield, D. G. (Inventor)
1974-01-01
The design and development of a wind tunnel model equipped with pressure measuring devices are discussed. The pressure measuring orifices are integrally constructed in the wind tunnel model and do not contribute to distortions of the aerodynamic surface. The construction of a typical model is described and a drawing of the device is included.
On Vieta's Formulas and the Determination of a Set of Positive Integers by Their Sum and Product
ERIC Educational Resources Information Center
Valahas, Theodoros; Boukas, Andreas
2011-01-01
In Years 9 and 10 of secondary schooling students are typically introduced to quadratic expressions and functions and related modelling, algebra, and graphing. This includes work on the expansion and factorisation of quadratic expressions (typically with integer values of coefficients), graphing quadratic functions, finding the roots of quadratic…
EXPERIMENTAL MODELS FOR THE STUDY OF ORAL CLEFTS
Toxicology and teratology studies routinely utilize animal models to determine the potential for chemical and physical agents to produce reproductive and developmental toxicity, including birth defects such as cleft palate. The standardized teratology screen typically tests co...
Appliance Servicing Program Guide.
ERIC Educational Resources Information Center
Georgia Univ., Athens. Dept. of Vocational Education.
This program guide presents the standard appliance servicing technician curriculum for technical institutes in Georgia. The general information section contains the following: purpose and objectives; program description, including admissions, typical job titles, and accreditation and certification; and curriculum model, including standard…
Human Language Technology: Opportunities and Challenges
2005-01-01
because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ... to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using ... maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with
Model-Based Reasoning in Upper-division Lab Courses
NASA Astrophysics Data System (ADS)
Lewandowski, Heather
2015-05-01
Modeling, which includes developing, testing, and refining models, is a central activity in physics. Well-known examples from AMO physics include everything from the Bohr model of the hydrogen atom to the Bose-Hubbard model of interacting bosons in a lattice. Modeling, while typically considered a theoretical activity, is most fully represented in the laboratory, where measurements of real phenomena intersect with theoretical models, leading to refinement of models and experimental apparatus. However, experimental physicists use models in complex ways and the process is often not made explicit in physics laboratory courses. We have developed a framework to describe the modeling process in physics laboratory activities. The framework attempts to abstract and simplify the complex modeling process undertaken by expert experimentalists. The framework can be applied to understand typical processes such as the modeling of measurement tools, the modeling of "black boxes," and signal processing. We demonstrate that the framework captures several important features of model-based reasoning in a way that can reveal common student difficulties in the lab and guide the development of curricula that emphasize modeling in the laboratory. We also use the framework to examine troubleshooting in the lab and guide students to effective methods and strategies.
ERIC Educational Resources Information Center
Al Otaiba, Stephanie; Connor, Carol M.; Folsom, Jessica S.; Wanzek, Jeanne; Greulich, Luana; Schatschneider, Christopher; Wagner, Richard K.
2014-01-01
This randomized controlled experiment compared the efficacy of two response-to-intervention (RTI) models--typical RTI and dynamic RTI--and included 34 first-grade classrooms (n = 522 students) across 10 socioeconomically and culturally diverse schools. Typical RTI was designed to follow the two-stage RTI decision rules that wait to assess response…
ERIC Educational Resources Information Center
Georgia Univ., Athens. Dept. of Vocational Education.
This program guide presents the biotechnology curriculum for technical institutes in Georgia. The general information section contains the following: purpose and objectives; program description, including admissions, typical job titles, and accreditation and certification; and curriculum model, including standard curriculum sequence and lists of…
Civil Engineering Technology Program Guide.
ERIC Educational Resources Information Center
Georgia Univ., Athens. Dept. of Vocational Education.
This program guide presents civil engineering technology curriculum for technical institutes in Georgia. The general information section contains the following: purpose and objectives; program description, including admissions, typical job titles, and accreditation and certification; and curriculum model, including standard curriculum sequence and…
A Teaching Aid for Physiologists--Simulation of Kidney Function
ERIC Educational Resources Information Center
Packer, J. S.; Packer, J. E.
1977-01-01
Presented is the development of a simulation model of the facultative water transfer mechanism of the mammalian kidney. Discussion topics include simulation philosophy, simulation facilities, the model, and programming the model as a teaching aid. Graphs illustrate typical program displays. A listing of references concludes the article. (MA)
Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies
ERIC Educational Resources Information Center
Smith, Carrie E.; Cribbie, Robert A.
2013-01-01
When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…
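The multiplicity problem sketched in this abstract can be illustrated with a generic adjustment such as the Benjamini-Hochberg step-up procedure, a standard false-discovery-rate control; note this is a stock method for illustration, not the dependency-aware approach the article itself proposes.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: given raw p-values from many
    significance tests (e.g., one per SEM parameter), return the indices of
    tests rejected while controlling the false discovery rate at alpha.
    This is a generic multiplicity adjustment, not the dependency-aware
    method proposed in the article."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value clears its step-up threshold
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k = rank
    return sorted(order[:k])
```

For example, with p-values [0.01, 0.02, 0.03, 0.5] at alpha = 0.05, the first three tests survive the step-up thresholds and the fourth does not.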
Compression of strings with approximate repeats.
Allison, L; Edgoose, T; Dix, T I
1998-01-01
We describe a model for strings of characters that is loosely based on the Lempel-Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n²) time and a few iterations are typically sufficient. O(n²) complexity is impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.
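The sum-over-explanations idea can be illustrated with a deliberately simplified single-character version of such a model: each character is either a fresh literal or a "repeat" of a matching earlier character. The mixture structure, alphabet size, and EM update below are illustrative assumptions, not the paper's actual substring-based model.

```python
import math

def total_log_prob(s, r, alphabet=4):
    """Log-probability of string s under a toy mixture model: each character
    after the first is either a fresh literal (prob 1-r, uniform over the
    alphabet) or a repeat of a matching earlier character (prob r, uniform
    over earlier positions).  Adding the two terms at every position sums
    over all explanations rather than committing to one optimal parse."""
    logp = math.log(1.0 / alphabet)  # the first character is always literal
    for i in range(1, len(s)):
        matches = s[:i].count(s[i])
        p = (1.0 - r) / alphabet + r * matches / i
        logp += math.log(p)
    return logp

def em_estimate_r(s, r=0.5, iters=20, alphabet=4):
    """Estimate the repeat probability r by EM: the E-step computes the
    posterior responsibility of the 'repeat' branch at each position; the
    M-step sets r to the mean responsibility."""
    for _ in range(iters):
        resp = []
        for i in range(1, len(s)):
            rep = r * s[:i].count(s[i]) / i
            lit = (1.0 - r) / alphabet
            resp.append(rep / (rep + lit))
        r = sum(resp) / len(resp)
    return r
```

On a highly repetitive string the estimated r is driven toward 1, while a mixed string yields an interior estimate, mirroring the parameter-recovery tests the abstract describes.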
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence/absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence/absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.
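A minimal sketch of the stepwise-heuristic family mentioned in these abstracts, applied to an area-constrained coverage problem with hypothetical site data; the robust, information-gap variant is not implemented here, and exact formulations would use integer programming.

```python
def greedy_reserve(sites, area_budget):
    """Stepwise heuristic for area-constrained species coverage: repeatedly
    add the site covering the most not-yet-represented species per unit
    area, until no affordable site adds coverage.  `sites` maps a site name
    to (area, set_of_species)."""
    selected, covered, area_used = [], set(), 0.0
    while True:
        best, best_score = None, 0.0
        for name, (area, species) in sites.items():
            if name in selected or area_used + area > area_budget:
                continue  # already chosen, or would exceed the area bound
            gain = len(species - covered)
            if gain and gain / area > best_score:
                best, best_score = name, gain / area
        if best is None:
            break
        selected.append(best)
        area_used += sites[best][0]
        covered |= sites[best][1]
    return selected, covered
```

With three hypothetical sites and a budget of 5 area units, the heuristic picks the two sites that together represent all five species while respecting the bound.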
Sources of uncertainty in flood inundation maps
Bales, J.D.; Wagner, C.R.
2009-01-01
Flood inundation maps typically have been used to depict inundated areas for floods having specific exceedance levels. The uncertainty associated with the inundation boundaries is seldom quantified, in part because not all of the sources of uncertainty are recognized and because the data needed to quantify uncertainty are seldom available. Sources of uncertainty discussed in this paper include hydrologic data used for hydraulic model development and validation, topographic data, and the hydraulic model. The assumption of steady flow, which typically is made to produce inundation maps, has less of an effect on predicted inundation at lower flows than at higher flows because more time typically is required to inundate areas at high flows than at low flows. Difficulties with establishing reasonable cross sections that do not intersect and that represent water-surface slopes in tributaries contribute additional uncertainties in the hydraulic modelling. As a result, uncertainty in the flood inundation polygons simulated with a one-dimensional model increases with distance from the main channel.
Antiparticle cloud temperatures for antihydrogen experiments
NASA Astrophysics Data System (ADS)
Bianconi, A.; Charlton, M.; Lodi Rizzini, E.; Mascagna, V.; Venturelli, L.
2017-07-01
A simple rate-equation description of the heating and cooling of antiparticle clouds under conditions typical of those found in antihydrogen formation experiments is developed and analyzed. We include single-particle collisional, radiative, and cloud expansion effects and, from the modeling calculations, identify typical cooling phenomena and trends and relate these to the underlying physics. Some general rules of thumb of use to experimenters are derived.
Organism and population-level ecological models for ...
Ecological risk assessment typically focuses on animal populations as endpoints for regulatory ecotoxicology. Scientists at USEPA are developing models for animal populations exposed to a wide range of chemicals from pesticides to emerging contaminants. Modeled taxa include aquatic and terrestrial invertebrates, fish, amphibians, and birds, and the models employ a wide range of methods, from matrix-based projection models to mechanistic bioenergetics models and spatially explicit population models.
Projecting state-level air pollutant emissions using an integrated assessment model: GCAM-USA.
Integrated Assessment Models (IAMs) characterize the interactions among human and earth systems. IAMs typically have been applied to investigate future energy, land use, and emission pathways at global to continental scales. Recent directions in IAM development include enhanced t...
Order reduction for a model of marine bacteriophage evolution
NASA Astrophysics Data System (ADS)
Pagliarini, Silvia; Korobeinikov, Andrei
2017-02-01
A typical mechanistic model of viral evolution necessarily includes several time scales, which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult, so reducing the order of such a model is highly desirable. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time scales separation technique. Constructing the so-called quasi-steady-state approximation is the usual first step in applying the technique. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative but not a quantitative fit.
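For reference, the quasi-steady-state construction for a generic slow-fast system can be written as follows; this is the standard template, not the Beretta-Kuang equations themselves.

```latex
% Generic slow-fast (singularly perturbed) system:
\frac{dx}{dt} = f(x, y), \qquad
\varepsilon \frac{dy}{dt} = g(x, y), \qquad 0 < \varepsilon \ll 1 .
% Quasi-steady-state approximation: set the fast equation to equilibrium,
%   g(x, y) = 0 \;\Rightarrow\; y = h(x),
% and substitute into the slow equation:
\frac{dx}{dt} = f\bigl(x, h(x)\bigr).
```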
A call to improve methods for estimating tree biomass for regional and national assessments
Aaron R. Weiskittel; David W. MacFarlane; Philip J. Radtke; David L.R. Affleck; Hailemariam Temesgen; Christopher W. Woodall; James A. Westfall; John W. Coulston
2015-01-01
Tree biomass is typically estimated using statistical models. This review highlights five limitations of most tree biomass models, which include the following: (1) biomass data are costly to collect and alternative sampling methods are used; (2) belowground data and models are generally lacking; (3) models are often developed from small and geographically limited data...
Modelling Per Capita Water Demand Change to Support System Planning
NASA Astrophysics Data System (ADS)
Garcia, M. E.; Islam, S.
2016-12-01
Water utilities have a number of levers to influence customer water usage. These include levers to proactively slow demand growth over time such as building and landscape codes as well as levers to decrease demands quickly in response to water stress including price increases, education campaigns, water restrictions, and incentive programs. Even actions aimed at short term reductions can result in long term water usage declines when substantial changes are made in water efficiency, as in incentives for fixture replacement or turf removal, or usage patterns such as permanent lawn watering restrictions. Demand change is therefore linked to hydrological conditions and to the effects of past management decisions - both typically included in water supply planning models. Yet, demand is typically incorporated exogenously using scenarios or endogenously using only price, though utilities also use rules and incentives issued in response to water stress and codes specifying standards for new construction to influence water usage. Explicitly including these policy levers in planning models enables concurrent testing of infrastructure and policy strategies and illuminates interactions between the two. The City of Las Vegas is used as a case study to develop and demonstrate this modeling approach. First, a statistical analysis of system data was employed to rule out alternate hypotheses of per capita demand decrease such as changes in population density and economic structure. Next, four demand sub-models were developed including one baseline model in which demand is a function of only price. The sub-models were then calibrated and tested using monthly data from 1997 to 2012. Finally, the best performing sub-model was integrated with a full supply and demand model. The results highlight the importance of both modeling water demand dynamics endogenously and taking a broader view of the variables influencing demand change.
NASA Technical Reports Server (NTRS)
Ott, L.; Putman, B.; Collatz, J.; Gregg, W.
2012-01-01
Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale of models and observations. OCO-2 footprints represent an area of several square kilometers while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions is typically hundreds of kilometers wide and often covers areas that include combinations of land, ocean and coastal areas and areas of significant topographic, land cover, and population density variations. To improve understanding of scales of atmospheric CO2 variability and representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order of magnitude increase in resolution over typical global simulations of atmospheric composition, allowing new insight into small scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half degree resolution that have been down-scaled to 10-km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse resolution models to represent these small scale features.
Additionally, model output is sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement concepts to create realistic pseudo-datasets. Pseudo-data are averaged over coarse model grid cell areas to better understand the ability of measurements to characterize CO2 distributions and spatial gradients on both short (daily to weekly) and long (monthly to seasonal) time scales.
An agent architecture for an integrated forest ecosystem management decision support system
Donald Nute; Walter D. Potter; Mayukh Dass; Astrid Glende; Frederick Maier; Hajime Uchiyama; Jin Wang; Mark Twery; Peter Knopp; Scott Thomasma; H. Michael Rauscher
2003-01-01
A wide variety of software tools are available to support decisions in the management of forest ecosystems. These tools include databases, growth and yield models, wildlife models, silvicultural expert systems, financial models, geographic information systems, and visualization tools. Typically, each of these tools has its own complex interface and data format. To...
Illustration of a Multilevel Model for Meta-Analysis
ERIC Educational Resources Information Center
de la Torre, Jimmy; Camilli, Gregory; Vargas, Sadako; Vernon, R. Fox
2007-01-01
In this article, the authors present a multilevel (or hierarchical linear) model that illustrates issues in the application of the model to data from meta-analytic studies. In doing so, several issues are discussed that typically arise in the course of a meta-analysis. These include the presence of non-zero between-study variability, how multiple…
Johansen, M P; Barnett, C L; Beresford, N A; Brown, J E; Černe, M; Howard, B J; Kamboj, S; Keum, D-K; Smodiš, B; Twining, J R; Vandenhove, H; Vives i Batlle, J; Wood, M D; Yu, C
2012-06-15
Radiological doses to terrestrial wildlife were examined in this model inter-comparison study that emphasised factors causing variability in dose estimation. The study participants used varying modelling approaches and information sources to estimate dose rates and tissue concentrations for a range of biota types exposed to soil contamination at a shallow radionuclide waste burial site in Australia. Results indicated that the dominant factor causing variation in dose rate estimates (up to three orders of magnitude on mean total dose rates) was the soil-to-organism transfer of radionuclides that included variation in transfer parameter values as well as transfer calculation methods. Additional variation was associated with other modelling factors including: how participants conceptualised and modelled the exposure configurations (two orders of magnitude); which progeny to include with the parent radionuclide (typically less than one order of magnitude); and dose calculation parameters, including radiation weighting factors and dose conversion coefficients (typically less than one order of magnitude). Probabilistic approaches to model parameterisation were used to encompass and describe variable model parameters and outcomes. The study confirms the need for continued evaluation of the underlying mechanisms governing soil-to-organism transfer of radionuclides to improve estimation of dose rates to terrestrial wildlife. The exposure pathways and configurations available in most current codes are limited when considering instances where organisms access subsurface contamination through rooting, burrowing, or using different localised waste areas as part of their habitual routines. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
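A toy Monte Carlo sketch of the probabilistic parameterisation pattern this abstract describes, with the lognormal spread placed on the soil-to-organism transfer factor (the dominant source of variation in the study); all numerical values here are hypothetical.

```python
import random

def dose_rate_samples(n=10000, seed=1):
    """Toy Monte Carlo of dose-rate variability: dose rate = soil activity
    concentration x soil-to-organism transfer factor x dose conversion
    coefficient.  Only the transfer factor is treated as uncertain, with a
    wide lognormal spread; every parameter value is hypothetical."""
    rng = random.Random(seed)
    soil_bq_per_kg = 500.0   # hypothetical soil activity concentration
    dcc = 2.0e-4             # hypothetical dose conversion coefficient
    samples = []
    for _ in range(n):
        transfer = rng.lognormvariate(mu=-2.0, sigma=1.5)  # wide spread
        samples.append(soil_bq_per_kg * transfer * dcc)
    return samples
```

Summarising such samples (e.g., by percentiles) reproduces the study's qualitative finding that transfer-factor variability alone can spread dose-rate estimates over orders of magnitude.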
Computer model to simulate testing at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.
1995-01-01
A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
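The bookkeeping such a facility simulation performs can be sketched as simple accounting over a test plan: per-point times accumulate into fan-on time, which drives LN2 and electrical consumption. The per-point times and rates below are placeholders, not NTF values; as the abstract notes, a model built on optimum values is optimistic, so real tests can only take longer.

```python
# Minimal sketch of the kind of accounting such a simulation performs:
# accumulate fan-on time, LN2 usage, and electrical energy over a test plan.
# All times and rates are illustrative placeholders, not NTF values.

def simulate_test_plan(points, move_s=5.0, settle_s=3.0, acq_s=2.0,
                       ln2_kg_per_s=50.0, power_mw=60.0):
    """Return (fan_on_hours, ln2_tonnes, energy_mwh) for a plan of
    `points` data points, assuming optimum per-point times."""
    fan_on_s = points * (move_s + settle_s + acq_s)
    ln2_kg = fan_on_s * ln2_kg_per_s
    energy_mwh = power_mw * fan_on_s / 3600.0
    return fan_on_s / 3600.0, ln2_kg / 1000.0, energy_mwh
```

Substituting measured (rather than optimum) acquisition times for `acq_s`, as the abstract describes, directly lengthens fan-on time and raises both consumption estimates.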
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Brian E.; Oppel III, Fred J.
2017-01-25
This package contains modules that model a visual sensor in Umbra. It is typically used to represent eyesight of characters in Umbra. This library also includes the sensor property, seeable, and an Active Denial sensor.
Woo, Se Joon; Ahn, Jeeyun; Morrison, Margaux A; Ahn, So Yeon; Lee, Jaebong; Kim, Ki Woong; DeAngelis, Margaret M; Park, Kyu Hyung
2015-01-01
To investigate the association of genetic and environmental factors, and their interactions in Korean patients with exudative age-related macular degeneration (AMD). A total of 314 robustly characterized exudative AMD patients, including 111 PCV (polypoidal choroidal vasculopathy) and 154 typical choroidal neovascularization (CNV), and 395 control subjects without any evidence of AMD were enrolled. Full ophthalmologic examinations including fluorescein angiography (FA), indocyanine green angiography (ICG) and optical coherence tomography (OCT) were performed, and patients were classified into either the PCV or the typical CNV group accordingly. Standardized questionnaires were used to collect information regarding underlying systemic diseases, dietary habits, smoking history and body mass index (BMI). A total of 86 SNPs from 31 candidate genes were analyzed. Genotype association and logistic regression analyses were performed, and stepwise regression models were constructed to best predict disease for each AMD subtype. Age, spherical equivalent, myopia, and ever smoking were associated with exudative AMD. Age, hypertension, hyperlipidemia, spherical equivalent, and myopia were risk factors for typical CNV, while increased education and ever smoking were significantly associated with PCV (p<.05 for all). Four SNPs, ARMS2/HTRA1 rs10490924, rs11200638, and rs2736911, and CFH rs800292, showed association with exudative AMD. Two of these SNPs, ARMS2/HTRA1 rs10490924 and rs11200638, showed significant association with typical CNV and PCV specifically. There were no significant interactions between environmental and genetic factors. The most predictive disease model for exudative AMD included age, spherical equivalent, smoking, CFH rs800292, and ARMS2 rs10490924, while that for typical CNV included age, hyperlipidemia, spherical equivalent, and ARMS2 rs10490924. Smoking, spherical equivalent, and ARMS2 rs10490924 were the most predictive variables for PCV.
When comparing PCV cases to CNV cases, age, BMI, and education were the most predictive risk factors of PCV. Only one locus, ARMS2/HTRA1, was a significant genetic risk factor for Korean exudative AMD, including its subtypes, PCV and typical CNV. Stepwise regression revealed that CFH was important to risk of exudative AMD in general but not to any specific subtype. While increased education was a unique risk factor to PCV when compared to CNV, this association was independent of refractive error in this homogeneous population from South Korea. No significant interactions between environmental and genetic risk factors were observed.
NASA Technical Reports Server (NTRS)
White, R. J.
1973-01-01
A detailed description of Guyton's model and its modifications is provided. Also included are descriptions of several typical experiments which the model can simulate to illustrate the model's general utility. A discussion of the problems associated with interfacing the model with other models, such as respiratory and thermal regulation models, is also included; this interfacing is of prime importance since these stimuli are not present in the current model. A user's guide for the operation of the model on the Xerox Sigma 3 computer is provided and two programs are described. A verification plan and procedure for performing experiments is also presented.
Accuracy of an IFSAR-derived digital terrain model under a conifer forest canopy.
Hans-Erik Andersen; Stephen E. Reutebuch; Robert J. McGaughey
2005-01-01
Accurate digital terrain models (DTMs) are necessary for a variety of forest resource management applications, including watershed management, timber harvest planning, and fire management. Traditional methods for acquiring topographic data typically rely on aerial photogrammetry, where measurement of the terrain surface below forest canopy is difficult and error prone...
Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion
2016-07-20
The advantages of PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical...combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the...strategy of modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on
Phase space effects on fast ion distribution function modeling in tokamaks
NASA Astrophysics Data System (ADS)
Podestà, M.; Gorelenkova, M.; Fredrickson, E. D.; Gorelenkov, N. N.; White, R. B.
2016-05-01
Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
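The ad-hoc diffusive alternative mentioned above can be illustrated in a few lines: spatially diffuse a fast-ion density profile under a user-chosen anomalous diffusivity, discarding the phase-space correlations that the kick model retains. This is a generic sketch, not TRANSP's implementation; the grid, coefficient, and initial profile are invented.

```python
# Sketch of the ad-hoc diffusive model: evolve a fast-ion density profile with
# an arbitrary anomalous diffusivity D, ignoring energy-space correlations.

def diffuse_profile(n, d_coeff, dx, dt, steps):
    """Explicit finite-difference diffusion with zero-flux boundaries
    (mirror ghost cells). Stable for d_coeff*dt/dx**2 <= 0.5."""
    n = list(n)
    alpha = d_coeff * dt / dx**2
    for _ in range(steps):
        padded = [n[0]] + n + [n[-1]]  # mirror ghosts enforce zero flux
        n = [padded[i] + alpha * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
             for i in range(1, len(padded) - 1)]
    return n

profile = [0.0, 0.0, 1.0, 0.0, 0.0]  # peaked initial fast-ion density
evolved = diffuse_profile(profile, d_coeff=1.0, dx=1.0, dt=0.25, steps=20)
```

The zero-flux scheme conserves the total number of fast ions exactly while flattening the profile; what it cannot do, by construction, is reproduce the resonant, phase-space-localized transport the kick model is built for.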
Preliminary mixed-layer model results for FIRE marine stratocumulus IFO conditions
NASA Technical Reports Server (NTRS)
Barlow, R.; Nicholls, S.
1990-01-01
Some preliminary results from the Turton and Nicholls mixed layer model using typical FIRE boundary conditions are presented. The model includes entrainment and drizzle parametrizations as well as interactive long and shortwave radiation schemes. A constraint on the integrated turbulent kinetic energy balance ensures that the model remains energetically consistent at all times. The preliminary runs were used to identify the potentially important terms in the heat and moisture budgets of the cloud layer, and to assess the anticipated diurnal variability. These are compared with typical observations from the C130. Sensitivity studies also revealed the remarkable stability of these cloud sheets: a number of negative feedback mechanisms appear to operate to maintain the cloud over an extended time period. These are also discussed. The degree to which such a modelling approach can be used to explain observed features, the specification of boundary conditions, and problems of interpretation in non-horizontally uniform conditions are also addressed.
Methods for modeling cytoskeletal and DNA filaments
NASA Astrophysics Data System (ADS)
Andrews, Steven S.
2014-02-01
This review summarizes the models that researchers use to represent the conformations and dynamics of cytoskeletal and DNA filaments. It focuses on models that address individual filaments in continuous space. Conformation models include the freely jointed, Gaussian, angle-biased chain (ABC), and wormlike chain (WLC) models, of which the first three bend at discrete joints and the last bends continuously. Predictions from the WLC model generally agree well with experiment. Dynamics models include the Rouse, Zimm, stiff rod, dynamic WLC, and reptation models, of which the first four apply to isolated filaments and the last to entangled filaments. Experiments show that the dynamic WLC and reptation models are most accurate. They also show that biological filaments typically experience strong hydrodynamic coupling and/or constrained motion. Computer simulation methods that address filament dynamics typically compute filament segment velocities from local forces using the Langevin equation and then integrate these velocities with explicit or implicit methods; the former are more versatile and the latter are more efficient. Much remains to be discovered in biological filament modeling. In particular, filament dynamics in living cells are not well understood, and current computational methods are too slow and not sufficiently versatile. Although primarily a review, this paper also presents new statistical calculations for the ABC and WLC models. Additionally, it corrects several discrepancies in the literature about bending and torsional persistence length definitions, and their relations to flexural and torsional rigidities.
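The contrast between discrete-joint and continuous-bend conformation models can be made concrete. The sketch below samples a 3-D freely jointed chain by Monte Carlo, whose analytic mean squared end-to-end distance is N*b**2, and evaluates the standard wormlike-chain result <R^2> = 2*Lp*L - 2*Lp**2*(1 - exp(-L/Lp)) for contour length L and persistence length Lp; all parameter values are illustrative.

```python
# Freely jointed chain (discrete joints) vs. wormlike chain (continuous bend):
# Monte Carlo estimate of <R^2> for the former, closed form for the latter.

import math
import random

def fjc_r2(n_segments, b, samples, seed=0):
    """Monte Carlo <R^2> for a 3-D freely jointed chain of n_segments
    segments of length b; the analytic result is n_segments * b**2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = y = z = 0.0
        for _ in range(n_segments):
            u = 2.0 * rng.random() - 1.0          # uniform direction on sphere
            phi = 2.0 * math.pi * rng.random()
            s = math.sqrt(1.0 - u * u)
            x += b * s * math.cos(phi)
            y += b * s * math.sin(phi)
            z += b * u
        total += x * x + y * y + z * z
    return total / samples

def wlc_r2(contour_len, lp):
    """Wormlike-chain mean squared end-to-end distance."""
    return 2.0 * lp * contour_len - 2.0 * lp**2 * (1.0 - math.exp(-contour_len / lp))

est = fjc_r2(50, 1.0, 2000)  # analytic value for this chain: 50.0
```

In the long-chain limit `wlc_r2` reduces to 2*Lp*L, i.e. a freely jointed chain with Kuhn length 2*Lp, which is why the two pictures agree at large scales and differ below the persistence length.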
Computing singularities of perturbation series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kvaal, Simen; Jarlebring, Elias; Michiels, Wim
2011-03-15
Many properties of current ab initio approaches to the quantum many-body problem, both perturbational and otherwise, are related to the singularity structure of the Rayleigh-Schrödinger perturbation series. A numerical procedure is presented that in principle computes the complete set of singularities, including the dominant singularity which limits the radius of convergence. The method approximates the singularities as eigenvalues of a certain generalized eigenvalue equation which is solved using iterative techniques. It relies on computation of the action of the Hamiltonian matrix on a vector and does not rely on the terms in the perturbation series. The method can be useful for studying perturbation series of typical systems of moderate size, for fundamental development of resummation schemes, and for understanding the structure of singularities for typical systems. Some illustrative model problems are studied, including a helium-like model with δ-function interactions for which Møller-Plesset perturbation theory is considered and the radius of convergence found.
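The idea of locating series singularities as points where eigenvalues coalesce can be shown on a toy 2x2 model: the branch points of H(z) = H0 + z*V are the complex z at which the discriminant of the characteristic polynomial vanishes. This is a pedagogical stand-in for the paper's iterative generalized-eigenvalue procedure; the matrices below are invented.

```python
# Toy 2x2 illustration: singularities of the perturbation series for
# H(z) = H0 + z*V are the complex z where two eigenvalues coalesce.
# For [[a, b], [c, d]] degeneracy requires (a - d)**2 + 4*b*c = 0,
# a quadratic in z that we solve directly.

import cmath

def branch_points_2x2(h0, v):
    """Return the two complex z at which H0 + z*V is degenerate."""
    da, db = h0[0][0] - h0[1][1], v[0][0] - v[1][1]
    # Expand (da + z*db)**2 + 4*(h0[0][1] + z*v[0][1])*(h0[1][0] + z*v[1][0]):
    a2 = db * db + 4 * v[0][1] * v[1][0]
    a1 = 2 * da * db + 4 * (h0[0][1] * v[1][0] + h0[1][0] * v[0][1])
    a0 = da * da + 4 * h0[0][1] * h0[1][0]
    disc = cmath.sqrt(a1 * a1 - 4 * a2 * a0)
    return (-a1 + disc) / (2 * a2), (-a1 - disc) / (2 * a2)

# Example: H0 = diag(0, 1) with a symmetric off-diagonal perturbation.
h0 = [[0.0, 0.0], [0.0, 1.0]]
v = [[0.0, 0.5], [0.5, 0.0]]
z1, z2 = branch_points_2x2(h0, v)
radius = min(abs(z1), abs(z2))  # radius of convergence of the energy series
```

Here degeneracy requires 1 + z**2 = 0, so the branch points sit at z = ±i: the series converges for |z| < 1 even though H(z) is perfectly well behaved on the real axis, which is exactly the phenomenon the paper's method probes for realistic Hamiltonians.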
The impact of ancillary services in optimal DER investment decisions
Cardoso, Goncalo; Stadler, Michael; Mashayekh, Salman; ...
2017-04-25
Microgrid resource sizing problems typically include the analysis of a combination of value streams such as peak shaving, load shifting, or load scheduling, which support the economic feasibility of the microgrid deployment. However, microgrid benefits can go beyond these, and the ability to provide ancillary grid services such as frequency regulation or spinning and non-spinning reserves is well known, despite typically not being considered in resource sizing problems. This paper proposes the expansion of the Distributed Energy Resources Customer Adoption Model (DER-CAM), a state-of-the-art microgrid resource sizing model, to include revenue streams resulting from the participation in ancillary service markets. Results suggest that participation in such markets may not only influence the optimum resource sizing, but also the operational dispatch, with results being strongly influenced by the exact market requirements and clearing prices.
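The effect described in the last sentence, ancillary-service revenue shifting the optimal size, can be reproduced in a toy sizing problem. This brute-force sketch is a stand-in for DER-CAM's optimization formulation, not the model itself; all prices, the single-period load model, and the fixed reserve-bid fraction are hypothetical.

```python
# Toy sizing sketch (not DER-CAM): pick a battery size minimizing annualized
# capital cost plus demand charges, minus hypothetical reserve-market revenue.

def annual_cost(size_kw, peak_kw, capex_per_kw=50.0, demand_charge=120.0,
                reserve_price=40.0, reserve_fraction=0.3):
    """Cost = capital + demand charge on the unshaved peak - reserve revenue.
    A fixed fraction of the battery is assumed bid into the reserve market,
    so only the remainder is available for peak shaving."""
    shaved_peak = max(peak_kw - (1.0 - reserve_fraction) * size_kw, 0.0)
    revenue = reserve_price * reserve_fraction * size_kw
    return capex_per_kw * size_kw + demand_charge * shaved_peak - revenue

def best_size(peak_kw, step=1.0, max_kw=1000.0, **kw):
    """Brute-force search over candidate sizes (optimizer stand-in)."""
    sizes = [i * step for i in range(int(max_kw / step) + 1)]
    return min(sizes, key=lambda s: annual_cost(s, peak_kw, **kw))

optimal = best_size(100.0)
baseline = best_size(100.0, reserve_price=0.0, reserve_fraction=0.0)
```

With these invented numbers the optimum grows from 100 kW (no reserve revenue) to 143 kW once part of the capacity earns reserve payments, mirroring the paper's finding that market participation changes the sizing result itself.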
A dynamic vulnerability evaluation model to smart grid for the emergency response
NASA Astrophysics Data System (ADS)
Yu, Zhen; Wu, Xiaowei; Fang, Diange
2018-01-01
Smart grids show significant vulnerability to natural disasters and external destruction. Based on the influence characteristics of important facilities subjected to typical natural disasters and external destruction, this paper builds a vulnerability evaluation index system for important facilities in the smart grid covering eight typical natural disasters, with three levels of static and dynamic indicators and forty indicators in total. A smart grid vulnerability evaluation method is then proposed based on the index system, including determining the value range of each index, classifying the evaluation grade standards, and specifying the evaluation process and integrated index calculation rules. Using the proposed evaluation model, the most vulnerable parts of the smart grid can be identified, which helps in adopting targeted emergency response measures, developing emergency plans and increasing capacity for disaster prevention and mitigation, thereby guaranteeing safe and stable operation.
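The integrated-index calculation described above (a value range per index, weighted aggregation, then grade classification) can be sketched as follows. Indicator names, ranges, weights and grade cut-offs are invented for illustration, not taken from the paper.

```python
# Hedged sketch of an integrated vulnerability index: normalize each indicator
# to its defined value range, combine with weights, map onto grades.

def normalize(value, lo, hi):
    """Map an indicator onto [0, 1] given its defined value range."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def vulnerability_index(indicators, weights, ranges):
    """Weighted aggregate of normalized static and dynamic indicators."""
    total_w = sum(weights[k] for k in indicators)
    return sum(weights[k] * normalize(indicators[k], *ranges[k])
               for k in indicators) / total_w

def grade(index, cutoffs=(0.25, 0.5, 0.75)):
    """Classify the index into grades 1 (least vulnerable) to 4."""
    return 1 + sum(index >= c for c in cutoffs)

# Invented indicators for one facility:
indicators = {"wind_speed": 35.0, "equipment_age": 12.0, "backup_power": 0.2}
weights    = {"wind_speed": 0.5,  "equipment_age": 0.3,  "backup_power": 0.2}
ranges     = {"wind_speed": (0.0, 50.0), "equipment_age": (0.0, 30.0),
              "backup_power": (0.0, 1.0)}
idx = vulnerability_index(indicators, weights, ranges)
```

Ranking facilities by `idx` is what lets such a model point at the most vulnerable parts of the grid for targeted emergency response.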
Al Otaiba, Stephanie; Connor, Carol M; Folsom, Jessica S; Wanzek, Jeanne; Greulich, Luana; Schatschneider, Christopher; Wagner, Richard K
2014-10-01
This randomized controlled experiment compared the efficacy of two Response to Intervention (RTI) models - Typical RTI and Dynamic RTI - and included 34 first-grade classrooms (n = 522 students) across 10 socio-economically and culturally diverse schools. Typical RTI was designed to follow the two-stage RTI decision rules that wait to assess response to Tier 1 in many districts, whereas Dynamic RTI provided Tier 2 or Tier 3 interventions immediately according to students' initial screening results. Interventions were identical across conditions except for when intervention began. Reading assessments included letter-sound, word, and passage reading, and teacher-reported severity of reading difficulties. An intent-to-treat analysis using multi-level modeling indicated an overall effect favoring the Dynamic RTI condition (d = .36); growth curve analyses demonstrated that students in Dynamic RTI showed an immediate score advantage, and effects accumulated across the year. Analyses of standard score outcomes confirmed that students in the Dynamic condition who received Tier 2 and Tier 3 ended the study with significantly higher reading performance than students in the Typical condition. Implications for RTI implementation practice and for future research are discussed.
Community-Based Decision-Making: Application of Web ...
Living, working, and going to school near roadways has been associated with a number of adverse health effects, including asthma exacerbation, cardiovascular impairment, and respiratory symptoms. In the United States, 30%-45% of urban populations live or work in the near-road environment, with a greater percentage of minority and low-income residents living in areas with highly-trafficked roadways. Near-road studies typically use surrogates of exposure to evaluate potential causality of health effects, including proximity, traffic counts, or total length of roads within a given radius. In contrast, simplified models provide an opportunity to examine how changes in input parameters, such as vehicle counts or speeds, can affect air quality. Simplified or reduced-form models typically retain the same or similar algorithms most responsible for characterizing uncertainty in more sophisticated models. The Community Line Source modeling system (C-LINE) allows users to explore what-if scenarios such as increases in diesel trucks or total traffic; examine hot spot conditions and areas for further study; determine ideal monitor placement locations; or evaluate air quality changes due to traffic re-routing. This presentation describes the input parameters, analytical procedures, visualization routines, and software considerations for C-LINE, and an example application for Newport News, Virginia. Results include scenarios related to port development and resulting traffic
ERIC Educational Resources Information Center
Huang, Xiaoxia; Cribbs, Jennifer
2017-01-01
This study examined mathematics and science teachers' perceptions and use of four types of examples, including typical textbook examples (standard worked examples) and erroneous worked examples in the written form as well as mastery modelling examples and peer modelling examples involving the verbalization of the problem-solving process. Data…
Strengthening the weak link: Built Environment modelling for loss analysis
NASA Astrophysics Data System (ADS)
Millinship, I.
2012-04-01
Methods to analyse insured losses from a range of natural perils, including pricing by primary insurers and catastrophe modelling by reinsurers, typically lack sufficient exposure information. Understanding the hazard intensity in terms of spatial severity and frequency is only the first step towards quantifying the risk of a catastrophic event. For any given event we need to know: Are any structures affected? What type of buildings are they? How much damage occurred? How much will the repairs cost? To achieve this, detailed exposure information is required to assess the likely damage and to effectively calculate the resultant loss. Modelling exposures in the Built Environment therefore plays as important a role in understanding re/insurance risk as characterising the physical hazard. Across both primary insurance books and aggregated reinsurance portfolios, the location of a property (a risk) and its monetary value is typically known. Exactly what that risk is in terms of detailed property descriptors including structure type and rebuild cost - and therefore its vulnerability to loss - is often omitted. This data deficiency is a primary source of variations between modelled losses and the actual claims value. Built Environment models are therefore required at a high resolution to describe building attributes that relate vulnerability to property damage. However, national-scale household-level datasets are often not computationally practical in catastrophe models and data must be aggregated. In order to provide more accurate risk analysis, we have developed and applied a methodology for Built Environment modelling for incorporation into a range of re/insurance applications, including operational models for different international regions and different perils and covering residential, commercial and industry exposures.
Illustrated examples are presented, including exposure modelling suitable for aggregated reinsurance analysis for the UK and bespoke high resolution modelling for industrial sites in Germany. A range of attributes are included following detailed claims analysis and engineering research with property type, age and condition identified as important differentiators of damage from flood, wind and freeze events.
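The role of property descriptors in turning hazard into loss can be sketched with a toy vulnerability-curve calculation: each risk's structure type selects a damage-ratio curve that scales its rebuild value into a loss. The curves and portfolio below are invented for illustration.

```python
# Minimal sketch of exposure attributes feeding a loss calculation: structure
# type picks a vulnerability curve (mean damage ratio vs. hazard intensity),
# which scales rebuild value into loss. Curves and portfolio are hypothetical.

def damage_ratio(intensity, curve):
    """Piecewise-linear mean damage ratio from (intensity, ratio) knots."""
    pts = sorted(curve)
    if intensity <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if intensity <= x1:
            return y0 + (y1 - y0) * (intensity - x0) / (x1 - x0)
    return pts[-1][1]

CURVES = {  # invented flood-depth (m) vs. mean damage ratio
    "masonry": [(0.0, 0.0), (1.0, 0.2), (3.0, 0.6)],
    "timber":  [(0.0, 0.0), (1.0, 0.4), (3.0, 0.9)],
}

def portfolio_loss(risks, depth_at):
    """Sum value * damage ratio over risks, looking up hazard by location."""
    return sum(r["value"] * damage_ratio(depth_at[r["loc"]], CURVES[r["type"]])
               for r in risks)

risks = [{"loc": "A", "type": "masonry", "value": 200_000.0},
         {"loc": "A", "type": "timber",  "value": 150_000.0}]
loss = portfolio_loss(risks, {"A": 1.0})
```

The two properties share a location and hazard intensity but differ twofold in modelled loss purely through structure type, which is the data-deficiency point the abstract makes: omit the descriptor and that spread collapses into error.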
Effects of electrojet turbulence on a magnetosphere-ionosphere simulation of a geomagnetic storm
NASA Astrophysics Data System (ADS)
Wiltberger, M.; Merkin, V.; Zhang, B.; Toffoletto, F.; Oppenheim, M.; Wang, W.; Lyon, J. G.; Liu, J.; Dimant, Y.; Sitnov, M. I.; Stephens, G. K.
2017-05-01
Ionospheric conductance plays an important role in regulating the response of the magnetosphere-ionosphere system to solar wind driving. Typically, models of magnetosphere-ionosphere coupling include changes to ionospheric conductance driven by extreme ultraviolet ionization and electron precipitation. This paper shows that effects driven by the Farley-Buneman instability can also create significant enhancements in the ionospheric conductance, with substantial impacts on geospace. We have implemented a method of including electrojet turbulence (ET) effects into the ionospheric conductance model utilized within geospace simulations. Our particular implementation is tested with simulations of the Lyon-Fedder-Mobarry global magnetosphere model coupled with the Rice Convection Model of the inner magnetosphere. We examine the impact of including ET-modified conductances in a case study of the geomagnetic storm of 17 March 2013. Simulations with ET show a 13% reduction in the cross polar cap potential at the beginning of the storm and up to 20% increases in the Pedersen and Hall conductance. These simulation results show better agreement with Defense Meteorological Satellite Program observations, including capturing features of subauroral polarization streams. The field-aligned current (FAC) patterns show little differences during the peak of storm and agree well with Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) reconstructions. Typically, the simulated FAC densities are stronger and at slightly higher latitudes than shown by AMPERE. The inner magnetospheric pressures derived from Tsyganenko-Sitnov empirical magnetic field model show that the inclusion of the ET effects increases the peak pressure and brings the results into better agreement with the empirical model.
Analysis of propellant feedline dynamics
NASA Technical Reports Server (NTRS)
Holster, J. L.; Astleford, W. J.; Gerlach, C. R.
1973-01-01
An analytical model and corresponding computer program for studying disturbances of liquid propellants in typical engine feedline systems were developed. The model includes the effects of steady turbulent mean flow, the influence of distributed compliances, the effects of local compliances, and various factors causing structural-hydraulic coupling. The computer program was set up such that the amplitude and phase of the terminal pressure/input excitation is calculated over any desired frequency range for an arbitrary assembly of various feedline components. A user's manual is included.
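The kind of output described, amplitude and phase of terminal pressure over input excitation across a frequency range, can be sketched for the simplest special case: a single lossless uniform line with a blocked end, for which the standard acoustics result is P_out/P_in = 1/cos(omega*L/c). This toy omits the model's turbulent mean flow, distributed and local compliances, and structural-hydraulic coupling; the line length and wave speed are invented.

```python
# Frequency sweep of the terminal-pressure transfer function for a lossless
# uniform line with a blocked (closed) end: H = 1 / cos(omega * L / c).
# A stand-in for the far more complete feedline model described above.

import cmath
import math

def terminal_response(freq_hz, length_m, wave_speed):
    """Complex pressure transfer function of a closed-end lossless line."""
    omega = 2.0 * math.pi * freq_hz
    return 1.0 / cmath.cos(omega * length_m / wave_speed)

def sweep(freqs, length_m, wave_speed):
    """Return (amplitude, phase_degrees) pairs over the requested band."""
    out = []
    for f in freqs:
        h = terminal_response(f, length_m, wave_speed)
        out.append((abs(h), math.degrees(cmath.phase(h))))
    return out

# Quarter-wave resonance of a 10 m line at c = 1000 m/s sits at 25 Hz:
amps = sweep([5.0, 15.0, 24.0], length_m=10.0, wave_speed=1000.0)
```

Amplitude grows without bound approaching the quarter-wave frequency c/(4L); adding the compliances and coupling terms of the full model shifts and damps these resonances, which is precisely what such a program is built to quantify.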
A new approach to modelling schistosomiasis transmission based on stratified worm burden.
Gurarie, D; King, C H; Wang, X
2010-11-01
Multiple factors affect schistosomiasis transmission in distributed meta-population systems including age, behaviour, and environment. The traditional approach to modelling macroparasite transmission often exploits the 'mean worm burden' (MWB) formulation for human hosts. However, typical worm distribution in humans is overdispersed, and classic models either ignore this characteristic or make ad hoc assumptions about its pattern (e.g., by assuming a negative binomial distribution). Such oversimplifications can give wrong predictions for the impact of control interventions. We propose a new modelling approach to macro-parasite transmission by stratifying human populations according to worm burden, and replacing MWB dynamics with that of 'population strata'. We developed proper calibration procedures for such multi-component systems, based on typical epidemiological and demographic field data, and implemented them using Wolfram Mathematica. Model programming and calibration proved to be straightforward. Our calibrated system provided good agreement with the individual level field data from the Msambweni region of eastern Kenya. The Stratified Worm Burden (SWB) approach offers many advantages, in that it accounts naturally for overdispersion and accommodates other important factors and measures of human infection and demographics. Future work will apply this model and methodology to evaluate innovative control intervention strategies, including expanded drug treatment programmes proposed by the World Health Organization and its partners.
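The stratified-worm-burden idea can be sketched as a birth-death ladder over population strata: a force of infection moves hosts up one stratum, and per-capita worm loss (proportional to burden) moves them down, so the full burden distribution, not just its mean, is tracked. Rates, stratum count, and time step below are illustrative, not calibrated values, and the scheme is a plain Euler step rather than the authors' Mathematica implementation.

```python
# Sketch of the SWB idea: population fractions h[j] in worm-burden strata,
# coupled by an upward force-of-infection rate and a downward, burden-
# proportional worm-loss rate. All rates are illustrative.

def swb_step(h, infect_rate, loss_rate, dt):
    """One Euler step of the strata ladder: stratum j sends mass up at
    infect_rate (except the top) and down at j * loss_rate."""
    n = len(h)
    new = list(h)
    for j in range(n):
        up = infect_rate * h[j] if j < n - 1 else 0.0
        down = loss_rate * j * h[j]
        new[j] -= dt * (up + down)
        if j < n - 1:
            new[j + 1] += dt * up
        if j > 0:
            new[j - 1] += dt * down
    return new

def mean_burden(h):
    """Mean worm burden recovered from the strata distribution."""
    return sum(j * hj for j, hj in enumerate(h))

h = [1.0, 0.0, 0.0, 0.0, 0.0]  # everyone initially uninfected
for _ in range(2000):
    h = swb_step(h, infect_rate=0.5, loss_rate=0.25, dt=0.01)
```

Because the state is the whole distribution, overdispersion emerges from the dynamics instead of being imposed, which is the advantage claimed over ad hoc negative-binomial assumptions in MWB models.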
Zoccolotti, Pierluigi; De Luca, Maria; Marinelli, Chiara V.; Spinelli, Donatella
2014-01-01
This study was aimed at predicting individual differences in text reading fluency. The basic proposal included two factors, i.e., the ability to decode letter strings (measured by discrete pseudo-word reading) and the integration of the various sub-components involved in reading (measured by Rapid Automatized Naming, RAN). Subsequently, a third factor was added to the model, i.e., naming of discrete digits. In order to use homogeneous measures, all contributing variables considered the entire processing of the item, including pronunciation time. The model, which was based on commonality analysis, was applied to data from a group of 43 typically developing readers (11- to 13-year-olds) and a group of 25 chronologically matched dyslexic children. In typically developing readers, both orthographic decoding and integration of reading sub-components contributed significantly to the overall prediction of text reading fluency. The model prediction was higher (from ca. 37 to 52% of the explained variance) when we included the naming of discrete digits variable, which had a suppressive effect on pseudo-word reading. In the dyslexic readers, the variance explained by the two-factor model was high (69%) and did not change when the third factor was added. The lack of a suppression effect was likely due to the prominent individual differences in poor orthographic decoding of the dyslexic children. Analyses of data from both groups of children were replicated using patches of colors as stimuli (in both the RAN task and the discrete naming task), obtaining similar results. We conclude that it is possible to predict much of the variance in text-reading fluency from basic processes, such as orthographic decoding and integration of reading sub-components, even without taking into consideration higher-order linguistic factors such as lexical, semantic and contextual abilities. The validity of using proximal vs. distal causes to predict reading fluency is discussed.
PMID:25477856
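Commonality analysis, the technique the study above is built on, partitions the variance a regression explains into components unique to each predictor and a component they share. A minimal two-predictor sketch on synthetic data (the variable roles in the comments are only analogies to the study's measures):

```python
import numpy as np

# Synthetic data: two correlated predictors and an outcome they both drive.
rng = np.random.default_rng(0)
n = 200
shared = rng.normal(size=n)
x1 = shared + rng.normal(size=n)   # analogue of pseudo-word decoding
x2 = shared + rng.normal(size=n)   # analogue of RAN
y = x1 + x2 + rng.normal(size=n)   # analogue of text reading fluency

def r_squared(X, y):
    """R^2 of an OLS fit of y on the given predictor columns (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r_full = r_squared([x1, x2], y)
r1 = r_squared([x1], y)
r2 = r_squared([x2], y)

unique1 = r_full - r2          # variance only x1 explains
unique2 = r_full - r1          # variance only x2 explains
common = r1 + r2 - r_full      # variance shared by x1 and x2
print(round(unique1, 3), round(unique2, 3), round(common, 3))
```

The three components sum to the full-model R^2 by construction; a suppression effect of the kind the abstract reports shows up as a negative commonality coefficient.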
Laser interferometer space antenna dynamics and controls model
NASA Astrophysics Data System (ADS)
Maghami, Peiman G.; Tupper Hyde, T.
2003-05-01
A 19 degree-of-freedom (DOF) dynamics and controls model of a laser interferometer space antenna (LISA) spacecraft has been developed. This model is used to evaluate the feasibility of the dynamic pointing and positioning requirements of a typical LISA spacecraft. These requirements must be met for LISA to be able to successfully detect gravitational waves in the frequency band of interest (0.1-100 mHz). The 19-DOF model includes all rigid-body degrees of freedom. A number of disturbance sources, both internal and external, are included. Preliminary designs for the four control systems that comprise the LISA disturbance reduction system (DRS) have been completed and are included in the model. Simulation studies are performed to demonstrate that the LISA pointing and positioning requirements are feasible and can be met.
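The control problem the 19-DOF model addresses can be illustrated in one axis: a spacecraft modeled as a double integrator, with a feedback controller holding position against a disturbance force. This is a minimal sketch only; the mass, gains, and disturbance level are assumed for illustration and are not the actual DRS design values.

```python
# Single-axis disturbance-rejection sketch: double-integrator spacecraft
# under a proportional-derivative (PD) controller. All values assumed.
m = 500.0             # spacecraft mass in kg (assumed)
kp, kd = 50.0, 300.0  # PD gains (assumed)
f_dist = 1e-6         # constant disturbance force in N (assumed)
dt = 0.01
x, v = 1e-6, 0.0      # start 1 micrometre off target

for _ in range(100_000):           # 1000 s of simulated time
    f_ctrl = -kp * x - kd * v      # PD control force
    v += (f_ctrl + f_dist) / m * dt
    x += v * dt

# the loop drives x to the steady-state offset f_dist / kp
print(x)
```

The residual offset f_dist / kp shows why both disturbance levels and control gains enter the pointing/positioning requirement budget.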
Synthetic, multi-layer, self-oscillating vocal fold model fabrication.
Murray, Preston R; Thomson, Scott L
2011-12-02
Sound for the human voice is produced via flow-induced vocal fold vibration. The vocal folds consist of several layers of tissue, each with differing material properties. Normal voice production relies on healthy tissue and vocal folds, and occurs as a result of complex coupling between aerodynamic, structural dynamic, and acoustic physical phenomena. Voice disorders affect up to 7.5 million people annually in the United States alone and often result in significant financial, social, and other quality-of-life difficulties. Understanding the physics of voice production has the potential to significantly benefit voice care, including clinical prevention, diagnosis, and treatment of voice disorders. Existing methods for studying voice production include in vivo experimentation using human and animal subjects, in vitro experimentation using excised larynges and synthetic models, and computational modeling. Owing to hazardous and difficult instrument access, in vivo experiments are severely limited in scope. Excised larynx experiments have the benefit of anatomical and some physiological realism, but parametric studies involving geometric and material property variables are limited. Further, they can typically be vibrated only for relatively short periods of time (on the order of minutes). Overcoming some of the limitations of excised larynx experiments, synthetic vocal fold models are emerging as a complementary tool for studying voice production. Synthetic models can be fabricated with systematic changes to geometry and material properties, allowing for the study of healthy and unhealthy human phonatory aerodynamics, structural dynamics, and acoustics. For example, they have been used to study left-right vocal fold asymmetry, clinical instrument development, laryngeal aerodynamics, vocal fold contact pressure, and subglottal acoustics (a more comprehensive list can be found in Kniesburges et al.).
Existing synthetic vocal fold models, however, have either been homogeneous (one-layer models) or have been fabricated using two materials of differing stiffness (two-layer models). This approach does not allow for representation of the actual multi-layer structure of the human vocal folds, which plays a central role in governing vocal fold flow-induced vibratory response. Consequently, one- and two-layer synthetic vocal fold models have exhibited disadvantages such as higher onset pressures than are typical for human phonation (onset pressure is the minimum lung pressure required to initiate vibration), unnaturally large inferior-superior motion, and lack of a "mucosal wave" (a vertically-traveling wave that is characteristic of healthy human vocal fold vibration). In this paper, fabrication of a model with multiple layers of differing material properties is described. The model layers simulate the multi-layer structure of the human vocal folds, including epithelium, superficial lamina propria (SLP), intermediate and deep lamina propria (i.e., ligament; a fiber is included for anterior-posterior stiffness), and muscle (i.e., body) layers. Results are included that show that the model exhibits improved vibratory characteristics over prior one- and two-layer synthetic models, including onset pressure closer to human onset pressure, reduced inferior-superior motion, and evidence of a mucosal wave.
Hidden Markov models for estimating animal mortality from anthropogenic hazards
Carcass searches are a common method for studying the risk of anthropogenic hazards to wildlife, including non-target poisoning and collisions with anthropogenic structures. Typically, the numbers of carcasses found must be corrected for scavenging rates and imperfect detection. ...
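The hidden-Markov framing above can be sketched with two hidden states (carcass present or scavenged) and one observation per search (found or not). The forward algorithm then gives the likelihood of a search history; the persistence and detection probabilities below are illustrative, not estimated values.

```python
persist = 0.8   # assumed daily probability the carcass is not scavenged
detect = 0.6    # assumed probability a search finds a present carcass

# hidden states: 0 = present, 1 = removed (absorbing)
trans = [[persist, 1 - persist],
         [0.0, 1.0]]
# emission probability of observing "not found" in each state
miss = [1 - detect, 1.0]

def likelihood_not_found(n_searches):
    """Forward algorithm: P(carcass missed on n consecutive daily searches)."""
    alpha = [1.0, 0.0]   # carcass known present at deposition
    for _ in range(n_searches):
        alpha = [sum(alpha[i] * trans[i][j] for i in range(2)) * miss[j]
                 for j in range(2)]
    return sum(alpha)

print(round(likelihood_not_found(3), 4))
```

Because "removed" is absorbing, the likelihood folds scavenging and imperfect detection into a single correction, which is exactly what naive carcass counts lack.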
Application of Consider Covariance to the Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Lundberg, John B.
1996-01-01
The extended Kalman filter (EKF) is the basis for many applications of filtering theory to real-time problems where estimates of the state of a dynamical system are to be computed based upon some set of observations. The form of the EKF may vary somewhat from one application to another, but the fundamental principles are typically unchanged among these various applications. As is the case in many filtering applications, models of the dynamical system (differential equations describing the state variables) and models of the relationship between the observations and the state variables are created. These models typically employ a set of constants whose values are established by means of theory or experimental procedure. Since the estimates of the state are formed assuming that the models are perfect, any modeling errors will affect the accuracy of the computed estimates. Note that the modeling errors may be errors of commission (errors in terms included in the model) or omission (errors in terms excluded from the model). Consequently, it becomes imperative when evaluating the performance of real-time filters to evaluate the effect of modeling errors on the estimates of the state.
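The effect of an error of omission can be shown with the simplest possible filter. The sketch below (a linear Kalman filter, which is what the EKF reduces to for linear models; all noise levels and the bias are invented) estimates a constant state from measurements that carry a bias the filter's model omits:

```python
import numpy as np

rng = np.random.default_rng(1)
true_x = 5.0
bias = 0.5          # modeling error: present in the data, omitted from the filter
q, r = 1e-6, 0.04   # assumed process and measurement noise variances

x_hat, p = 0.0, 10.0
for _ in range(500):
    z = true_x + bias + rng.normal(scale=np.sqrt(r))
    p = p + q                        # predict (static state model)
    k = p / (p + r)                  # Kalman gain
    x_hat = x_hat + k * (z - x_hat)  # update
    p = (1 - k) * p

print(round(x_hat, 2))   # settles near true_x + bias, not true_x
```

The filter is internally consistent yet converges to the biased value, which is the abstract's point: estimate accuracy must be judged against modeling errors, not just the filter's own covariance.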
Stall flutter analysis of propfans
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.
1988-01-01
Three semi-empirical aerodynamic stall models are compared with respect to their lift and moment hysteresis loop prediction, limit cycle behavior, ease of implementation, and feasibility of developing the parameters required for stall flutter prediction of advanced turbines. For the comparison of aeroelastic response prediction including stall, a typical section model and a plate structural model are considered. The response analysis includes both plunging and pitching motions of the blades. In model A, a correction to the angle of attack is applied when the angle of attack exceeds the static stall angle. In model B, a synthesis procedure is used for angles of attack above the static stall angle, and time history effects are accounted for through the Wagner function.
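The angle-of-attack correction described for model A can be illustrated schematically: below static stall the lift coefficient is linear in angle of attack, and above it an effective (corrected) angle makes the lift roll off. The functional form and all constants below are assumed for illustration and are not the actual model A correction.

```python
import math

def lift_coefficient(alpha_deg, alpha_stall_deg=12.0, cl_alpha=0.11):
    """Per-degree linear lift with a simple post-stall angle correction
    (illustrative shape, not the published semi-empirical fit)."""
    if abs(alpha_deg) <= alpha_stall_deg:
        return cl_alpha * alpha_deg
    # beyond static stall, attenuate the excess angle of attack
    sign = math.copysign(1.0, alpha_deg)
    excess = abs(alpha_deg) - alpha_stall_deg
    alpha_eff = alpha_stall_deg + excess * math.exp(-0.3 * excess)
    return cl_alpha * sign * alpha_eff

print(round(lift_coefficient(10.0), 2), round(lift_coefficient(20.0), 3))
```

The nonlinearity this introduces into the aerodynamic force is what produces the hysteresis loops and limit-cycle (rather than purely divergent) flutter response the abstract compares.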
In-cell overlay metrology by using optical metrology tool
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, DongYoung; Oh, Eungryong; Choi, Ahlin; Park, Hyowon; Liang, Waley; Choi, DongSub; Kim, Nakyoon; Lee, Jeongpyo; Pandev, Stilian; Jeon, Sanghuck; Robinson, John C.
2018-03-01
Overlay is one of the most critical process control steps of semiconductor manufacturing technology. A typical advanced scheme includes an overlay feedback loop based on after-litho optical imaging overlay metrology on scribeline targets. The after-litho control loop typically involves high-frequency sampling: every lot or nearly every lot. An after-etch overlay metrology step is often included, at a lower sampling frequency, in order to characterize and compensate for bias. The after-etch metrology step often involves CD-SEM metrology, in this case in-cell and on-device. This work explores an alternative approach using spectroscopic ellipsometry (SE) metrology and a machine learning analysis technique. Advanced 1x nm DRAM wafers were prepared, including both nominal (POR) wafers with mean overlay offsets, as well as DOE wafers with intentional across-wafer overlay modulation. After-litho metrology was measured using optical imaging metrology, and after-etch metrology using both SE and CD-SEM for comparison. We investigate two types of machine learning techniques with SE data: model-less and model-based, showing excellent performance for after-etch in-cell on-device overlay metrology.
Nature of multiple-nucleus cluster galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merritt, D.
1984-05-01
In models for the evolution of galaxy clusters which include dynamical friction with the dark binding matter, the distribution of galaxies becomes more concentrated toward the cluster center with time. In a cluster like Coma, this evolution could increase by a factor of approximately 3 the probability of finding a galaxy very close to the cluster center, without decreasing the typical velocity of such a galaxy significantly below the cluster mean. Such an enhancement is roughly what is needed to explain the large number of first-ranked cluster galaxies which are observed to have extra "nuclei"; it is also consistent with the high velocities typically measured for these "nuclei." Unlike the cannibalism model, this model predicts that the majority of multiple-nucleus systems are transient phenomena, and not galaxies in the process of merging.
Effects of temperature and mass conservation on the typical chemical sequences of hydrogen oxidation
NASA Astrophysics Data System (ADS)
Nicholson, Schuyler B.; Alaghemandi, Mohammad; Green, Jason R.
2018-01-01
Macroscopic properties of reacting mixtures are necessary to design synthetic strategies, determine yield, and improve the energy and atom efficiency of many chemical processes. The set of time-ordered sequences of chemical species are one representation of the evolution from reactants to products. However, only a fraction of the possible sequences is typical, having the majority of the joint probability and characterizing the succession of chemical nonequilibrium states. Here, we extend a variational measure of typicality and apply it to atomistic simulations of a model for hydrogen oxidation over a range of temperatures. We demonstrate an information-theoretic methodology to identify typical sequences under the constraints of mass conservation. Including these constraints leads to an improved ability to learn the chemical sequence mechanism from experimentally accessible data. From these typical sequences, we show that two quantities defining the variational typical set of sequences—the joint entropy rate and the topological entropy rate—increase linearly with temperature. These results suggest that, away from explosion limits, data over a narrow range of thermodynamic parameters could be sufficient to extrapolate these typical features of combustion chemistry to other conditions.
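The two rates that define the variational typical set above can be made concrete on a toy Markov model of species-to-species transitions: the joint (Shannon) entropy rate weights transitions by probability, while the topological entropy rate counts only which transitions are allowed (log of the largest eigenvalue of the adjacency matrix). The 3-state chain below is an invented stand-in, not the hydrogen-oxidation mechanism.

```python
import numpy as np

T = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])    # row-stochastic transition matrix (toy)

# stationary distribution: left eigenvector of T for eigenvalue 1
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# joint (Shannon) entropy rate: average log-uncertainty per transition
h_joint = -sum(pi[i] * T[i, j] * np.log(T[i, j])
               for i in range(3) for j in range(3) if T[i, j] > 0)

# topological entropy rate: growth rate of the number of allowed sequences
A = (T > 0).astype(float)
h_top = np.log(np.max(np.abs(np.linalg.eigvals(A))))

print(round(h_joint, 3), round(h_top, 3))
```

The joint rate is always bounded by the topological rate; the abstract's observation is that, for the combustion model, both grow linearly with temperature.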
Screening Models of Aquifer Heterogeneity Using the Flow Dimension
NASA Astrophysics Data System (ADS)
Walker, D. D.; Cello, P. A.; Roberts, R. M.; Valocchi, A. J.
2007-12-01
Despite advances in test interpretation and modeling, typical groundwater modeling studies only indirectly use the parameters and information inferred from hydraulic tests. In particular, the Generalized Radial Flow approach to test interpretation infers the flow dimension, a parameter describing the geometry of the flow field during a hydraulic test. Noninteger values of the flow dimension often are inferred for tests in highly heterogeneous aquifers, yet subsequent modeling studies typically ignore the flow dimension. Monte Carlo analyses of detailed numerical models of aquifer tests examine the flow dimension for several stochastic models of heterogeneous transmissivity, T(x). These include multivariate lognormal, fractional Brownian motion, a site percolation network, and discrete linear features with lengths distributed as power-law. The behavior of the simulated flow dimensions are compared to the flow dimensions observed for multiple aquifer tests in a fractured dolomite aquifer in the Great Lakes region of North America. The combination of multiple hydraulic tests, observed fracture patterns, and the Monte Carlo results are used to screen models of heterogeneity and their parameters for subsequent groundwater flow modeling.
Computer Center: It's Time to Take Inventory.
ERIC Educational Resources Information Center
Spain, James D.
1984-01-01
Describes typical instructional applications of computers. Areas considered include: (1) instructional simulations and animations; (2) data analysis; (3) drill and practice; (4) student evaluation; (5) development of computer models and simulations; (6) biometrics or biostatistics; and (7) direct data acquisition and analysis. (JN)
Dietary Patterns and Body Mass Index in Children with Autism and Typically Developing Children
Evans, E. Whitney; Must, Aviva; Anderson, Sarah E.; Curtin, Carol; Scampini, Renee; Maslin, Melissa; Bandini, Linda
2012-01-01
To determine whether dietary patterns (juice and sweetened non-dairy beverages, fruits, vegetables, fruits & vegetables, snack foods, and kid’s meals) and associations between dietary patterns and body mass index (BMI) differed between 53 children with autism spectrum disorders (ASD) and 58 typically developing children, ages 3 to 11, multivariate regression models including interaction terms were used. Children with ASD were found to consume significantly more daily servings of sweetened beverages (2.6 versus 1.7, p=0.03) and snack foods (4.0 versus 3.0, p=0.01) and significantly fewer daily servings of fruits and vegetables (3.1 versus 4.4, p=0.006) than typically developing children. There was no evidence of statistical interaction between any of the dietary patterns and BMI z-score with autism status. Among all children, fruits and vegetables (p=0.004) and fruits alone (p=0.005) were positively associated with BMI z-score in our multivariate models. Children with ASD consume more energy-dense foods than typically developing children; however, in our sample, only fruits and vegetables were positively associated with BMI z-score. PMID:22936951
NASA Astrophysics Data System (ADS)
Scales, Wayne; Bernhardt, Paul; McCarrick, Michael; Briczinski, Stanley; Mahmoudian, Alireza; Fu, Haiyang; Ranade Bordikar, Maitrayee; Samimi, Alireza
There has been significant interest in so-called narrowband Stimulated Electromagnetic Emission (SEE) over the past several years due to recent discoveries at the High Frequency Active Auroral Research Program (HAARP) facility near Gakona, Alaska. Narrowband SEE (NSEE) has been defined as spectral features in the SEE spectrum typically within 1 kHz of the transmitter (or pump) frequency. SEE is due to nonlinear processes leading to re-radiation at frequencies other than the pump wave frequency during heating of the ionospheric plasma with high-power HF radio waves. Although NSEE exhibits a richly complex structure, it has now been shown, after a substantial number of observations at HAARP, that NSEE can be grouped into two basic classes. The first class comprises spectral features associated with Stimulated Brillouin Scatter (SBS), which typically occur when the pump frequency is not close to electron gyro-harmonic frequencies. These spectral features typically lie within roughly 50 Hz of the pump wave frequency; it is to be noted that the O+ ion gyro-frequency is roughly 50 Hz. The second class of spectral features corresponds to the case when the pump wave frequency is typically within roughly 10 kHz of electron gyro-harmonic frequencies. In this case, spectral features ordered by harmonics of ion gyro-frequencies are typically observed, termed Stimulated Ion Bernstein Scatter (SIBS). Both classes of NSEE also show important parametric dependence on the pump wave parameters, including the field strength, antenna beam angle, and electron gyro-harmonic number. This presentation will first provide an overview of the recent NSEE experimental observations at HAARP. Both SBS and SIBS observations will be discussed, as well as their relationship to each other. A possible theoretical formulation in terms of parametric decay instabilities will be provided.
Computer simulation model results will be presented to provide insight into associated higher order nonlinear effects including particle acceleration and wave-wave processes. Both theory and model results will be put into the context of the experimental observations. Finally, possible applications of NSEE will be pointed out including triggering diagnostics for artificial ionization layer formation, proton precipitation event diagnostics, and electron temperature measurements in the heated volume.
NASA Astrophysics Data System (ADS)
Virtanen, I. O. I.; Virtanen, I. I.; Pevtsov, A. A.; Yeates, A.; Mursula, K.
2017-07-01
Aims: We aim to use the surface flux transport model to simulate the long-term evolution of the photospheric magnetic field from historical observations. In this work we study the accuracy of the model and its sensitivity to uncertainties in its main parameters and the input data. Methods: We tested the model by running simulations with different values of meridional circulation and supergranular diffusion parameters, and studied how the flux distribution inside active regions and the initial magnetic field affected the simulation. We compared the results to assess how sensitive the simulation is to uncertainties in meridional circulation speed, supergranular diffusion, and input data. We also compared the simulated magnetic field with observations. Results: We find that there is generally good agreement between simulations and observations. Although the model is not capable of replicating fine details of the magnetic field, the long-term evolution of the polar field is very similar in simulations and observations. Simulations typically yield a smoother evolution of polar fields than observations, which often include artificial variations due to observational limitations. We also find that the simulated field is fairly insensitive to uncertainties in model parameters or the input data. Due to the decay term included in the model the effects of the uncertainties are somewhat minor or temporary, lasting typically one solar cycle.
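The finding above, that the decay term makes the simulation insensitive to the initial field, can be illustrated with a toy flux-transport step: evolve two different initial flux profiles with diffusion plus exponential decay and watch them converge. The 1D periodic grid and all parameter values are simplifications invented for illustration, not the surface flux transport model itself.

```python
import numpy as np

n, dt, steps = 50, 0.1, 2000
d_coef = 0.05   # stand-in for supergranular diffusion (arbitrary units)
tau = 50.0      # stand-in for the decay time (arbitrary units)

def evolve(b):
    """Explicit Euler steps of diffusion plus exponential decay."""
    for _ in range(steps):
        lap = np.roll(b, 1) - 2 * b + np.roll(b, -1)
        b = b + dt * (d_coef * lap - b / tau)
    return b

rng = np.random.default_rng(2)
b1_init = rng.normal(size=n)
b2_init = rng.normal(size=n)
gap0 = np.max(np.abs(b1_init - b2_init))
gap = np.max(np.abs(evolve(b1_init) - evolve(b2_init)))
print(round(gap0, 3), round(gap, 5))
```

Because the difference of the two solutions obeys the same linear equation, the decay term alone bounds the memory of the initial condition by exp(-t/tau), mirroring the "lasting typically one solar cycle" behaviour reported above.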
Phase space effects on fast ion distribution function modeling in tokamaks
Podesta, M.; Gorelenkova, M.; Fredrickson, E. D.; ...
2016-04-14
Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
NASA Astrophysics Data System (ADS)
Albaid, Abdelhamid; Dine, Michael; Draper, Patrick
2015-12-01
Solutions to the strong CP problem typically introduce new scales associated with the spontaneous breaking of symmetries. Absent any anthropic argument for small θ̄, these scales require stabilization against ultraviolet corrections. Supersymmetry offers a tempting stabilization mechanism, since it can solve the "big" electroweak hierarchy problem at the same time. One family of solutions to strong CP, including generalized parity models, heavy axion models, and heavy η' models, introduces Z_2 copies of (part of) the Standard Model and an associated scale of Z_2 breaking. We review why, without additional structure such as supersymmetry, the Z_2-breaking scale is unacceptably tuned. We then study "SUZ2" models, supersymmetric theories with Z_2 copies of the MSSM. We find that the addition of SUSY typically destroys the Z_2 protection of θ̄ = 0, even at tree level, once SUSY and Z_2 are broken. In theories like supersymmetric completions of the twin Higgs, where Z_2 addresses the little hierarchy problem but not strong CP, two axions can be used to relax θ̄.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Katherine H.; Cutler, Dylan S.; Olis, Daniel R.
REopt is a techno-economic decision support model used to optimize energy systems for buildings, campuses, communities, and microgrids. The primary application of the model is for optimizing the integration and operation of behind-the-meter energy assets. This report provides an overview of the model, including its capabilities and typical applications; inputs and outputs; economic calculations; technology descriptions; and model parameters, variables, and equations. The model is highly flexible, and is continually evolving to meet the needs of each analysis. Therefore, this report is not an exhaustive description of all capabilities, but rather a summary of the core components of the model.
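The core optimization REopt performs can be miniaturized to one decision: size a behind-the-meter PV system to minimize capital cost plus grid purchases against a load and solar profile. The sketch below is an invented toy, not REopt (which is a much richer mixed-integer program); every number is illustrative.

```python
load = [30, 28, 35, 50, 45, 40, 38, 33]           # kW over 8 sample hours
solar = [0.0, 0.1, 0.4, 0.9, 1.0, 0.7, 0.3, 0.0]  # kW output per kW of PV

grid_price = 0.15       # $/kWh (assumed)
pv_annual_cost = 0.25   # $ per kW of capacity, scaled to this window (assumed)

def annual_cost(pv_kw):
    """Capital cost of PV plus cost of grid energy for unmet load."""
    grid_kwh = sum(max(0.0, l - pv_kw * s) for l, s in zip(load, solar))
    return pv_annual_cost * pv_kw + grid_price * grid_kwh

# brute-force search over integer PV sizes (the cost curve is convex)
best_pv = min(range(0, 201), key=annual_cost)
print(best_pv, round(annual_cost(best_pv), 2))
```

Even this toy shows the characteristic behaviour: PV is added until the marginal usable production no longer pays for the marginal capacity, i.e., until curtailment sets in.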
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or large-eddy simulation model) consists of a fluid flow solver combined with required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigate impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation will describe a novel modeling methodology, piggybacking, that allows studying the impact of a physical parameterization on cloud dynamics with confidence. The focus will be on the impact of cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
Woody-Herbaceous Species Coexistence in Mulga Hillslopes: Modelling Structure and Function
NASA Astrophysics Data System (ADS)
Soltanjalili, M. J.; Saco, P. M.; Willgoose, G. R.
2016-12-01
The fundamental processes underlying the coexistence of woody and herbaceous species in arid and semi-arid areas have been a topic of intense research during the last few decades. Experimental and modelling studies have both supported and disputed alternative hypotheses explaining this phenomenon. Vegetation models including the key processes that drive coexistence can be used to understand vegetation pattern dynamics and structure under current climate conditions, and to predict changes under future conditions. Here we present work done towards linking the observations to modelling. The model captures woody-herbaceous coexistence along a rainfall gradient characteristic of typical conditions in Mulga ecosystems in Australia. The dynamic vegetation model simulates the spatial dynamics of overland flow, soil moisture, and vegetation growth of two species. It incorporates key mechanisms for coexistence and pattern formation, including facilitation through evaporation reduction by shading, infiltration feedbacks, local and non-local seed dispersal, and competition for water uptake. Model outcomes obtained with different mechanisms included are qualitatively compared to typical vegetation cover patterns in the Australian Mulga bioregion, where bush fire is very infrequent and the fate of vegetation cover is mostly determined by intra- and interspecies interactions. Through these comparisons, and by drawing on the large number of recent studies that have delivered new insights into the dynamics of such ecosystems, we identify the main mechanisms that need an improved representation in dynamic vegetation models. We show that a realistic parameterization of the model leads to results which are aligned with the observations reported in the literature. At the lower end of the rainfall gradient woody species coexist with herbaceous species within a sparse banded pattern, while at higher rainfall woody species tend to dominate the landscape.
NASA Technical Reports Server (NTRS)
Cashion, Kenneth D.; Whitehurst, Charles A.
1987-01-01
The activities of the Earth Resources Laboratory (ERL) for the past seventeen years are reviewed with particular reference to four typical applications demonstrating the use of remotely sensed data in a geobased information system context. The applications discussed are: a fire control model for the Olympic National Park; wildlife habitat modeling; a resource inventory system including a potential soil erosion model; and a corridor analysis model for locating routes between geographical locations. Some future applications are also discussed.
Acoustic Modeling of Lightweight Structures: A Literature Review
NASA Astrophysics Data System (ADS)
Yang, Shasha; Shen, Cheng
2017-10-01
This paper gives an overview of acoustic modeling for three kinds of typical lightweight structures: double-leaf plate systems, stiffened single (or double) plates, and porous materials. Classical models are cited to provide a framework of theoretical modeling for the acoustic properties of lightweight structures; important research advances by our research group and other authors are introduced to describe the current state of the art of acoustic research. Finally, remaining problems and future research directions are briefly summarized.
A Representation for Gaining Insight into Clinical Decision Models
Jimison, Holly B.
1988-01-01
For many medical domains uncertainty and patient preferences are important components of decision making. Decision theory is useful as a representation for such medical models in computer decision aids, but the methodology has typically had poor performance in the areas of explanation and user interface. The additional representation of probabilities and utilities as random variables serves to provide a framework for graphical and text insight into complicated decision models. The approach allows for efficient customization of a generic model that describes the general patient population of interest to a patient-specific model. Monte Carlo simulation is used to calculate the expected value of information and sensitivity for each model variable, thus providing a metric for deciding what to emphasize in the graphics and text summary. The computer-generated explanation includes variables that are sensitive with respect to the decision or that deviate significantly from what is typically observed. These techniques serve to keep the assessment and explanation of the patient's decision model concise, allowing the user to focus on the most important aspects for that patient.
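Treating a model probability as a random variable, as proposed above, can be sketched in a few lines: Monte Carlo draws over an uncertain success probability give both the expected utilities of two options and the expected value of perfect information (EVPI), one common metric for how much an uncertainty matters to the decision. The utilities and distribution below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# the success probability of one option is itself a random variable
p_success = rng.beta(8, 4, size=n)          # assumed P(option A succeeds)
u_option_a = p_success * 1.0 + (1 - p_success) * 0.2
u_option_b = np.full(n, 0.75)               # well-characterized alternative

eu_a = u_option_a.mean()
eu_b = u_option_b.mean()
best_on_average = max(eu_a, eu_b)

# EVPI: expected gain if the uncertainty were resolved before deciding
evpi = np.maximum(u_option_a, u_option_b).mean() - best_on_average
print(round(eu_a, 3), round(evpi, 4))
```

A variable with large EVPI (or to which the decision is otherwise sensitive) is exactly the kind the abstract says the generated explanation should emphasize.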
DOT National Transportation Integrated Search
2014-10-01
Public bikesharing (the shared use of a bicycle fleet) is an innovative transportation strategy that has recently emerged in major cities around the world, including North America. Information technology (IT)-based bikesharing systems typically p...
ERIC Educational Resources Information Center
Blanck, Harvey F.
2012-01-01
Naturally occurring gravity currents include events such as air flowing through an open front door, a volcanic eruption's pyroclastic flow down a mountainside, and the spread of the Bhopal disaster's methyl isocyanate gas. Gravity currents typically have a small height-to-distance ratio. Plastic models were designed and constructed with a…
Initial development of an ablative leading edge for the space shuttle orbiter
NASA Technical Reports Server (NTRS)
Daforno, G.; Rose, L.; Graham, J.; Roy, P.
1974-01-01
A state-of-the-art preliminary design for typical wing areas is developed. Seven medium-density ablators (with/without honeycomb, flown on Apollo, Prime, X15A2) are evaluated. The screening tests include: (1) leading-edge models sequentially subjected to ascent heating, cold soak, entry heating, post-entry pressure fluctuations, and touchdown shock, and (2) virgin/charred models subjected to bondline strains. Two honeycomb reinforced 30 pcf elastomeric ablators were selected. Roughness/recession degradation of low speed aerodynamics appears acceptable. The design, including attachments, substructure and joints, is presented.
He, Xiaoning; Wu, Jing; Jiang, Yawen; Liu, Li; Ye, Wenyu; Xue, Haibo; Montgomery, William
2015-04-09
It is uncertain whether the extra acquisition costs of atypical antipsychotics over typical antipsychotics are offset by their other reduced resource use, especially in hospital services, in China. This study compared the psychiatric-related health care resource utilization and direct medical costs for patients with schizophrenia initiating atypical or typical antipsychotics in Tianjin, China. Data were obtained from the Tianjin Urban Employee Basic Medical Insurance database (2008-2010). Adult patients with schizophrenia with ≥1 prescription for antipsychotics after a ≥90-day washout and 12-month continuous enrollment after first prescription were included. Psychiatric-related resource utilization and direct medical costs of the atypical and typical cohorts were estimated during the 12-month follow-up period. Logistic regressions, ordinary least squares (OLS), and generalized linear models (GLM) were employed to estimate differences in resource utilization and costs between the two cohorts. One-to-one propensity score matching was conducted as a sensitivity analysis. 1131 patients initiating either atypical (N = 648) or typical antipsychotics (N = 483) were identified. Compared with the typical cohort, the atypical cohort had a lower likelihood of hospitalization (45.8% vs. 56.7%, P < 0.001; adjusted OR: 0.58, P < 0.001) over the follow-up period. Medication costs for the atypical cohort were higher than for the typical cohort ($438 vs. $187, P < 0.001); however, their non-medication medical costs were significantly lower ($1223 vs. $1704, P < 0.001). The total direct medical costs were similar between the atypical and typical cohorts before ($1661 vs. $1892, P = 0.100) and after matching ($1711 vs. $1868, P = 0.341), consistent with the results from OLS and GLM models for matched cohorts. The atypical cohort had similar total direct medical costs compared to the typical cohort.
Higher medication costs associated with atypical antipsychotics were offset by a reduction in non-medication medical costs, driven by fewer hospitalizations.
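The analysis pipeline, logistic-regression propensity scores followed by one-to-one matching, can be sketched on synthetic data; the covariates, the cost-generating model, and all coefficients below are assumptions for illustration, not the study's actual data or specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cohort: covariates x, treatment z (1 = atypical), cost y.
n = 400
x = rng.normal(size=(n, 2))
z = (rng.random(n) < 1 / (1 + np.exp(-(x[:, 0] - 0.3)))).astype(int)
y = 1500 + 300 * x[:, 0] + 200 * z + rng.normal(0, 100, n)

# Propensity score via logistic regression, fitted by Newton-Raphson.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (z - p))
score = 1 / (1 + np.exp(-X @ beta))

# Greedy one-to-one nearest-neighbour matching on the propensity score.
treated, control = np.where(z == 1)[0], np.where(z == 0)[0]
used, pairs = set(), []
for i in treated:
    candidates = [c for c in control if c not in used]
    if not candidates:
        break
    j = min(candidates, key=lambda c: abs(score[c] - score[i]))
    used.add(j)
    pairs.append((i, j))

diff = float(np.mean([y[i] - y[j] for i, j in pairs]))
print(f"matched mean cost difference: ${diff:.0f}")
```

A production analysis would check covariate balance after matching and fit OLS/GLM cost models on the matched sample, as the study did.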
Comparison between a typical and a simplified model for blast load-induced structural response
NASA Astrophysics Data System (ADS)
Abd-Elhamed, A.; Mahmoud, S.
2017-02-01
Explosive blasts continue to cause severe damage and casualties in both civil and military environments, so understanding the response of structural elements to such extremely short-duration dynamic loads is of great concern. Due to the complexity of the typical blast pressure profile model, and in order to reduce modelling and computational effort, the simplified triangular model for the blast load profile is used to analyze structural response. This simplified model considers only the positive phase and ignores the suction phase that characterizes the typical model. The closed-form solution of the equation of motion, with the blast load as a forcing term modelled by either the typical or the simplified model, has been derived. The two approaches considered herein have been compared using results from a simulation response analysis of a building structure under an applied blast load. The error in the simulated response of the simplified model relative to the typical one has been computed. In general, both the simplified and the typical model can capture the dynamic blast-induced response of building structures. However, the simplified model shows remarkably different response behavior compared with the typical one, owing to its simplicity and its use of only the positive phase to represent the explosive load. The prediction of the dynamic system response using the simplified model is not satisfactory because of the larger errors obtained compared with the responses from the typical model.
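A minimal numerical sketch of the simplified (positive-phase-only) triangular blast load acting on a single-degree-of-freedom structural model; all physical constants are assumed for illustration:

```python
import numpy as np

# Single-degree-of-freedom structure (assumed mass, stiffness, damping).
m, k, zeta = 1000.0, 4.0e6, 0.05          # kg, N/m, damping ratio
c = 2 * zeta * np.sqrt(k * m)
p0, td = 50e3, 0.01                        # peak force [N], positive phase [s]

def force_triangular(t):
    """Simplified blast model: linear decay over td, suction phase ignored."""
    return p0 * (1 - t / td) if t < td else 0.0

# Explicit central-difference time stepping of m*u'' + c*u' + k*u = F(t).
dt, n = 1e-5, 5000
u = np.zeros(n)
u_prev = u_cur = 0.0
for i in range(1, n):
    t = i * dt
    a = (force_triangular(t) - c * (u_cur - u_prev) / dt - k * u_cur) / m
    u_next = 2 * u_cur - u_prev + a * dt**2
    u_prev, u_cur = u_cur, u_next
    u[i] = u_cur

print(f"peak displacement: {u.max() * 1000:.2f} mm")
```

Swapping `force_triangular` for a profile with a negative (suction) phase reproduces the comparison the paper makes between the two load models.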
Electromagnetic Launch Vehicle Fairing and Acoustic Blanket Model of Received Power Using FEKO
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.
2011-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This paper employs the Multilevel Fast Multipole Method (MLFMM) feature of a commercial electromagnetic tool to model the fairing electromagnetic environment in the presence of an internal transmitter. This work is an extension of the perfect electric conductor model that was used to represent the bare aluminum internal fairing cavity. This fairing model includes typical acoustic blanketing commonly used in vehicle fairings. Representative material models within FEKO were successfully used to simulate the test case.
Nanopyroxene Grafting with β-Cyclodextrin Monomer for Wastewater Applications.
Nafie, Ghada; Vitale, Gerardo; Carbognani Ortega, Lante; Nassar, Nashaat N
2017-12-06
Emerging nanoparticle technology provides opportunities for environmentally friendly wastewater treatment applications, including those in the large liquid tailings containments in the Alberta oil sands. In this study, we synthesize β-cyclodextrin grafted nanopyroxenes to offer an ecofriendly platform for the selective removal of organic compounds typically present in these types of applications. We carry out computational modeling at the micro level through molecular mechanics and molecular dynamics simulations and laboratory experiments at the macro level to understand the interactions between the synthesized nanomaterials and two model naphthenic acid molecules (cyclopentanecarboxylic and trans-4-pentylcyclohexanecarboxylic acids) typically found in tailing ponds. The proof-of-concept computational modeling and experiments demonstrate that monomer-grafted nanopyroxenes (nano-AE) of the sodium iron-silicate aegirine are promising candidates for the removal of polar organic compounds from wastewater, among other applications. These nano-AE offer new possibilities for treating tailing ponds generated by the oil sands industry.
Trajectory Simulation of Meteors Assuming Mass Loss and Fragmentation
NASA Technical Reports Server (NTRS)
Allen, Gary A., Jr.; Prabhu, Dinesh K.; Saunders, David A
2015-01-01
Program used to simulate atmospheric flight trajectories of entry capsules [1]. Includes models of the atmospheres of different planetary destinations: Earth, Mars, Venus, Jupiter, Saturn, Uranus, Titan, and others. Solves 3-degree-of-freedom (3DoF) equations for a single body treated as a point mass; also supports 6-DoF trajectory simulation and Monte Carlo analyses. Uses Runge-Kutta-Fehlberg (4th/5th-order) time integration with automatic step-size control. Includes a rotating spheroidal planet with a gravitational field having a J2 harmonic, and a variety of engineering aerodynamic and heat-flux models. Capable of specifying events (heatshield jettison, parachute deployment, etc.) at predefined altitudes or Mach numbers. Has integrated material thermal response models of typical aerospace materials.
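A stripped-down planar analogue of such a trajectory program, using a fixed-step classical Runge-Kutta integrator instead of the adaptive Fehlberg scheme, a non-rotating planet, and an exponential atmosphere; every constant is an illustrative assumption:

```python
import numpy as np

# Planar point-mass entry over a flat, non-rotating planet (curvature and
# J2 terms omitted); gravity, density, and vehicle numbers are assumed.
g, rho0, Hs = 9.81, 1.225, 7200.0      # gravity, sea-level density, scale height
beta_c = 400.0                          # ballistic coefficient m/(Cd*A) [kg/m^2]

def deriv(state):
    h, v, gamma = state                 # altitude [m], speed [m/s], flight-path angle [rad]
    rho = rho0 * np.exp(-h / Hs)
    drag = rho * v**2 / (2 * beta_c)    # drag deceleration [m/s^2]
    return np.array([v * np.sin(gamma),
                     -drag - g * np.sin(gamma),
                     -(g / v) * np.cos(gamma)])

def rk4_step(state, dt):
    """Classical fixed-step RK4 (stand-in for adaptive RKF4(5))."""
    k1 = deriv(state)
    k2 = deriv(state + dt / 2 * k1)
    k3 = deriv(state + dt / 2 * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([120e3, 7500.0, np.radians(-5.0)])   # entry interface
dt, t = 0.1, 0.0
while state[0] > 10e3 and t < 2000.0:                 # stop at 10 km or timeout
    state = rk4_step(state, dt)
    t += dt
print(f"h = {state[0]/1e3:.1f} km, v = {state[1]:.0f} m/s, t = {t:.0f} s")
```

Event handling as described in the abstract would amount to checking altitude or Mach thresholds inside the loop and switching the ballistic coefficient.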
Dynamic Analyses Including Joints Of Truss Structures
NASA Technical Reports Server (NTRS)
Belvin, W. Keith
1991-01-01
Method for mathematically modeling joints to assess influences of joints on dynamic response of truss structures developed in study. Only structures with low-frequency oscillations considered; only Coulomb friction and viscous damping included in analysis. Focus of effort to obtain finite-element mathematical models of joints exhibiting load-vs.-deflection behavior similar to measured load-vs.-deflection behavior of real joints. Experiments performed to determine stiffness and damping nonlinearities typical of joint hardware. Algorithm for computing coefficients of analytical joint models based on test data developed to enable study of linear and nonlinear effects of joints on global structural response. Besides intended application to large space structures, applications in nonaerospace community include ground-based antennas and earthquake-resistant steel-framed buildings.
Down Syndrome Health Screening--The Fife Model
ERIC Educational Resources Information Center
Jones, Jill; Hathaway, Dorothy; Gilhooley, Mary; Leech, Amanda; MacLeod, Susan
2010-01-01
People with Down syndrome have a greater risk of developing a range of health problems, including cardiac problems, thyroid disorders, sensory impairments, reduced muscle tone (hypotonia) and Alzheimer's disease. Despite this increased risk, regular screening is not typically offered to individuals with Down syndrome. A multidisciplinary health…
ERIC Educational Resources Information Center
Priano, Christine
2013-01-01
This model-building activity provides a quick, visual, hands-on tool that allows students to examine more carefully the cloverleaf structure of a typical tRNA molecule. When used as a supplement to lessons that involve gene expression, this exercise reinforces several concepts in molecular genetics, including nucleotide base-pairing rules, the…
Choice Rules and Accumulator Networks
2015-01-01
This article presents a preference accumulation model that can be used to implement a number of different multi-attribute heuristic choice rules, including the lexicographic rule, the majority of confirming dimensions (tallying) rule and the equal weights rule. The proposed model differs from existing accumulators in terms of attribute representation: Leakage and competition, typically applied only to preference accumulation, are also assumed to be involved in processing attribute values. This allows the model to perform a range of sophisticated attribute-wise comparisons, including comparisons that compute relative rank. The ability of a preference accumulation model composed of leaky competitive networks to mimic symbolic models of heuristic choice suggests that these 2 approaches are not incompatible, and that a unitary cognitive model of preferential choice, based on insights from both these approaches, may be feasible. PMID:28670592
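The leaky, competitive accumulation dynamics can be illustrated with a small simulation; the payoff matrix and the leak, inhibition, and noise parameters are assumptions, not the article's fitted values, and attribute attention is sampled uniformly for simplicity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Options x attributes value matrix (assumed, purely illustrative).
V = np.array([[0.9, 0.2, 0.4],
              [0.5, 0.6, 0.5],
              [0.3, 0.8, 0.6]])
leak, inhibition, noise_sd, dt = 0.10, 0.05, 0.02, 1.0
pref = np.zeros(3)                      # preference accumulators, one per option

for _ in range(500):
    attr = rng.integers(V.shape[1])     # attend to one attribute at a time
    inp = V[:, attr]
    lateral = inhibition * (pref.sum() - pref)   # competition from other options
    pref += dt * (inp - leak * pref - lateral) + rng.normal(0.0, noise_sd, 3)
    pref = np.maximum(pref, 0.0)        # activations bounded below at zero

print("final preferences:", np.round(pref, 2), "-> choice:", int(pref.argmax()))
```

Applying the same leaky competitive dynamics to the attribute values themselves, as the article proposes, is what lets the network mimic rank-based heuristics such as the lexicographic rule.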
NASA Technical Reports Server (NTRS)
Burk, S. M., Jr.; Bowman, J. S., Jr.; White, W. L.
1977-01-01
A spin tunnel study is reported on a scale model of a research airplane typical of low-wing, single-engine, light general aviation airplanes to determine the tail parachute diameter and canopy distance (riser length plus suspension-line length) required for emergency spin recovery. Nine tail configurations were tested, resulting in a wide range of developed spin conditions, including steep spins and flat spins. The results indicate that the full-scale parachute diameter required for satisfactory recovery from the most critical conditions investigated is about 3.2 m and that the canopy distance, which was found to be critical for flat spins, should be between 4.6 and 6.1 m.
Sullivan, Annett B.; Rounds, Stewart A.
2006-01-01
To meet water quality targets and the municipal and industrial water needs of a growing population in the Tualatin River Basin in northwestern Oregon, an expansion of Henry Hagg Lake is under consideration. Hagg Lake is the basin's primary storage reservoir and provides water during western Oregon's typically dry summers. Potential modifications include raising the dam height by 6.1 meters (20 feet), 7.6 meters (25 feet), or 12.2 meters (40 feet); installing additional outlets (possibly including a selective withdrawal tower); and adding additional inflows to provide greater reliability of filling the enlarged reservoir. One method of providing additional inflows is to route water from the upper Tualatin River through a tunnel and into Sain Creek, a tributary to the lake. Another option is to pump water from the Tualatin River (downstream of the lake) uphill and into the reservoir during the winter--the 'pump-back' option. A calibrated CE-QUAL-W2 model of Henry Hagg Lake's hydrodynamics, temperature, and water quality was used to examine the effect of these proposed changes on water quality in the lake and downstream. Most model scenarios were run with the calibrated model for 2002, a typical water year; a few scenarios were run for 2001, a drought year.
NASA Technical Reports Server (NTRS)
Susko, M.; Hill, C. K.; Kaufman, J. W.
1974-01-01
The quantitative estimates are presented of pollutant concentrations associated with the emission of the major combustion products (HCl, CO, and Al2O3) to the lower atmosphere during normal launches of the space shuttle. The NASA/MSFC Multilayer Diffusion Model was used to obtain these calculations. Results are presented for nine sets of typical meteorological conditions at Kennedy Space Center, including fall, spring, and a sea-breeze condition, and six sets at Vandenberg AFB. In none of the selected typical meteorological regimes studied was a 10-min limit of 4 ppm exceeded.
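As a rough stand-in for such a dispersion calculation, a single-source Gaussian-plume estimate of ground-level centerline concentration can be sketched; the source rate, wind speed, effective plume height, and the dispersion-coefficient formulas are all assumptions, not values from the NASA/MSFC Multilayer Diffusion Model.

```python
import numpy as np

# Ground-level centerline concentration from an elevated continuous source.
Q, u_wind, H_eff = 100.0, 5.0, 150.0     # source [g/s], wind [m/s], plume height [m]

def concentration(x_m):
    """Gaussian-plume estimate at downwind distance x_m (ground, centerline)."""
    sigma_y = 0.08 * x_m / np.sqrt(1 + 0.0001 * x_m)   # assumed neutral-class fits
    sigma_z = 0.06 * x_m / np.sqrt(1 + 0.0015 * x_m)
    return (Q / (np.pi * u_wind * sigma_y * sigma_z)
            * np.exp(-H_eff**2 / (2 * sigma_z**2)))     # g/m^3

for x in (500, 2000, 10000):
    print(f"x = {x:>6} m : {concentration(x) * 1e6:.1f} ug/m^3")
```

The multilayer model differs by stacking such calculations over several atmospheric layers with layer-specific stability, which is what the nine meteorological regimes parameterize.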
Interactions between hyporheic flow produced by stream meanders, bars, and dunes
Stonedahl, Susan H.; Harvey, Judson W.; Packman, Aaron I.
2013-01-01
Stream channel morphology from grain-scale roughness to large meanders drives hyporheic exchange flow. In practice, it is difficult to model hyporheic flow over the wide spectrum of topographic features typically found in rivers. As a result, many studies only characterize isolated exchange processes at a single spatial scale. In this work, we simulated hyporheic flows induced by a range of geomorphic features including meanders, bars and dunes in sand bed streams. Twenty cases were examined with 5 degrees of river meandering. Each meandering river model was run initially without any small topographic features. Models were run again after superimposing only bars and then only dunes, and then run a final time after including all scales of topographic features. This allowed us to investigate the relative importance and interactions between flows induced by different scales of topography. We found that dunes typically contributed more to hyporheic exchange than bars and meanders. Furthermore, our simulations show that the volume of water exchanged and the distributions of hyporheic residence times resulting from various scales of topographic features are close to, but not linearly additive. These findings can potentially be used to develop scaling laws for hyporheic flow that can be widely applied in streams and rivers.
Teaching Raster GIS Operations with Spreadsheets.
ERIC Educational Resources Information Center
Raubal, Martin; Gaupmann, Bernhard; Kuhn, Werner
1997-01-01
Defines raster technology in its relationship to geographic information systems and notes that it is typically used with the application of remote sensing techniques and scanning devices. Discusses the role of spreadsheets in a raster model, and describes a general approach based on spreadsheets. Includes six computer-generated illustrations. (MJP)
Fecal coliform (FC) contamination in coastal waters is an ongoing public health problem worldwide. Coastal wetlands and lagoons are typically expected to protect coastal waters by attenuating watershed pollutants including FC bacteria. However, new evidence suggests that coast...
Community-Based Participatory Study Abroad: A Proposed Model for Social Work Education
ERIC Educational Resources Information Center
Fisher, Colleen M.; Grettenberger, Susan E.
2015-01-01
Study abroad experiences offer important benefits for social work students and faculty, including global awareness, practice skill development, and enhanced multicultural competence. Short-term study abroad programs are most feasible but typically lack depth of engagement with host communities and may perpetuate existing systems of power and…
Environmental science and management are fed by individual studies of pollution effects, often focused on single locations. Data are encountered data, typically from multiple sources and on different time and spatial scales. Statistical issues including publication bias and m...
2011-07-01
demand capabilities, a force-generation model that provides sufficient strategic depth, and a comprehensive study on the future balance between Active...career, and use of bonuses and credits to reward critical specialties and outstanding performance. They also include a continuum-of-service model that...development projects (for instance, the F–22) typically try to produce major leaps in technology and performance in a single step. A better model, it
Barlow, Paul M.
1997-01-01
Steady-state, two- and three-dimensional, ground-water-flow models coupled with particle tracking were evaluated to determine their effectiveness in delineating contributing areas of wells pumping from stratified-drift aquifers of Cape Cod, Massachusetts. Several contributing areas delineated by use of the three-dimensional models do not conform to simple ellipsoidal shapes that are typically delineated by use of two-dimensional analytical and numerical modeling techniques and included discontinuous areas of the water table.
Efficient occupancy model-fitting for extensive citizen-science data.
Dennis, Emily B; Morgan, Byron J T; Freeman, Stephen N; Ridout, Martin S; Brereton, Tom M; Fox, Richard; Powney, Gary D; Roy, David B
2017-01-01
Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species' range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. 
They also have the potential to motivate citizen scientists.
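The classical (random-effects-free) fit described above reduces to maximising the standard occupancy likelihood with a logistic regression inside it; below is a self-contained sketch on simulated data, with all parameter values assumed and a deliberately simple finite-difference gradient descent in place of a production optimiser:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated detection histories: occupancy probability psi depends on a
# site covariate; detection probability p is constant across K visits.
S, K = 800, 5
x = rng.normal(size=S)
psi_true = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))
occupied = rng.random(S) < psi_true
y = np.where(occupied, rng.binomial(K, 0.4, size=S), 0)   # detections per site

def nll(theta):
    """Negative log-likelihood of the standard site-occupancy model."""
    b0, b1, lp = theta
    psi = 1 / (1 + np.exp(-(b0 + b1 * x)))
    p = 1 / (1 + np.exp(-lp))
    seen = psi * p**y * (1 - p) ** (K - y)   # binomial coefficient omitted (constant)
    lik = np.where(y > 0, seen, seen + (1 - psi))
    return -np.sum(np.log(lik))

# Classical maximum-likelihood fit by finite-difference gradient descent.
theta, h, lr = np.zeros(3), 1e-5, 2e-4
for _ in range(3000):
    grad = np.array([(nll(theta + h * e) - nll(theta - h * e)) / (2 * h)
                     for e in np.eye(3)])
    theta -= lr * grad

print("beta0, beta1, logit(p):", np.round(theta, 2))
```

Because the fit is classical, standard likelihood tools (AIC comparison, covariate selection, goodness-of-fit) apply directly, which is the advantage the abstract emphasises.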
Uncorrelated Encounter Model of the National Airspace System, Version 2.0
2013-08-19
can exist to certify avoidance systems for operational use. Evaluations typically include flight tests, operational impact studies, and simulation of...appropriate for large-scale air traffic impact studies, for example, examination of sector loading or conflict rates. The focus here includes two types of...between two IFR aircraft in oceanic airspace. The reason for this is that one cannot observe encounters of sufficient fidelity in the available data
Well test mathematical model for fractures network in tight oil reservoirs
NASA Astrophysics Data System (ADS)
Diwu, Pengxiang; Liu, Tongjing; Jiang, Baoyi; Wang, Rui; Yang, Peidie; Yang, Jiping; Wang, Zhaoming
2018-02-01
Well tests, especially build-up tests, have been applied widely in the development of tight oil reservoirs, since they are the only available low-cost way to directly quantify flow ability and formation heterogeneity parameters. However, because of the fracture network near the wellbore, generated by artificial fracturing linking up natural fractures, traditional infinite- and finite-conductivity fracture models usually show significant deviations in field application. In this work, considering the random distribution of natural fractures, a physical model of the fracture network is proposed; at large scale it exhibits the character of a composite model. Consequently, a nonhomogeneous composite mathematical model is established with a threshold pressure gradient. To solve this model semi-analytically, we propose a solution approach combining the Laplace transform and virtual-argument Bessel functions, and this method is verified by comparison with an existing analytical solution. Matches to typical type curves generated from the semi-analytical solution indicate that the proposed physical and mathematical model can describe the type-curve characteristics of typical tight oil reservoirs, which show upward warping at late times rather than parallel lines with slope 1/2 or 1/4. This means the composite model can be used for pressure interpretation of artificially fractured wells in tight oil reservoirs.
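Semi-analytical solutions of this kind live in Laplace space and are typically brought back to the time domain numerically; a Gaver-Stehfest inversion sketch, checked here against a transform with a known inverse (the well-test model's own transform would be substituted for `F`):

```python
import math

def stehfest_weights(N=12):
    """Gaver-Stehfest coefficients V_i (N must be even)."""
    half = N // 2
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k**half * math.factorial(2 * k)) / (
                math.factorial(half - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        V.append((-1) ** (i + half) * s)
    return V

def invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[i - 1] * F(i * a) for i in range(1, N + 1))

# Sanity check against a known pair: L{t} = 1/s^2, so invert(...) ~ t.
print(invert(lambda s: 1.0 / s**2, t=3.0))
```

Stehfest inversion works well for the smooth, monotone pressure transients typical of well-test models, which is why it is the standard choice in this setting.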
Experimental Study of a Hot Structure for a Reentry Vehicle
NASA Technical Reports Server (NTRS)
Pride, Richard A.; Royster, Dick M.; Helms, Bobbie F.
1960-01-01
A large structural model of a reentry vehicle has been built incorporating design concepts applicable to a radiation-cooled vehicle. Thermal-stress alleviating features of the model are discussed. Environmental tests on the model include approximately 100 cycles of loading at room temperature and 33 cycles of combined loading and heating up to temperatures of 1,600° F. Measured temperatures are shown for typical parts of the model. Comparisons are made between experimental and calculated deflections and strains. The structure successfully survived the heating and loading environments.
A tensor approach to modeling of nonhomogeneous nonlinear systems
NASA Technical Reports Server (NTRS)
Yurkovich, S.; Sain, M.
1980-01-01
Model following control methodology plays a key role in numerous application areas. Cases in point include flight control systems and gas turbine engine control systems. Typical uses of such a design strategy involve the determination of nonlinear models which generate requested control and response trajectories for various commands. Linear multivariable techniques provide trim about these motions; and protection logic is added to secure the hardware from excursions beyond the specification range. This paper reports upon experience in developing a general class of such nonlinear models based upon the idea of the algebraic tensor product.
Track structure model for damage to mammalian cell cultures during solar proton events
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Wilson, J. W.; Townsend, L. W.; Shinn, J. L.; Katz, R.
1992-01-01
Solar proton events (SPEs) occur infrequently and unpredictably, thus representing a potential hazard to interplanetary space missions. Biological damage from SPEs will be produced principally through secondary electron production in tissue, including important contributions due to delta rays from nuclear reaction products. We review methods for estimating the biological effectiveness of SPEs using a high energy proton model and the parametric cellular track model. Results of the model are presented for several of the historically largest flares using typical levels and body shielding.
Investigation of flowfields found in typical combustor geometries
NASA Technical Reports Server (NTRS)
Lilley, D. G.
1985-01-01
Activities undertaken during the entire course of research are summarized. Studies were concerned with experimental and theoretical research on 2-D axisymmetric geometries under low speed nonreacting, turbulent, swirling flow conditions typical of gas turbine and ramjet combustion chambers. They included recirculation zone characterization, time-mean and turbulence simulation in swirling recirculating flow, sudden and gradual expansion flowfields, and further complexities and parameter influences. The study included the investigation of: a complete range of swirl strengths; swirler performance; downstream contraction nozzle sizes and locations; expansion ratios; and inlet side-wall angles. Their individual and combined effects on the test section flowfield were observed, measured and characterized. Experimental methods included flow visualization (with smoke and neutrally-buoyant helium-filled soap bubbles), five-hole pitot probe time-mean velocity field measurements, and single-, double-, and triple-wire hot-wire anemometry measurements of time-mean velocities, normal and shear Reynolds stresses. Computational methods included development of the STARPIC code from the primitive-variable TEACH computer code, and its use in flowfield prediction and turbulence model development.
Constraints on millisecond magnetars as the engines of prompt emission in gamma-ray bursts
NASA Astrophysics Data System (ADS)
Beniamini, Paz; Giannios, Dimitrios; Metzger, Brian D.
2017-12-01
We examine millisecond magnetars as central engines of the prompt emission of gamma-ray bursts (GRBs). Using the protomagnetar wind model of Metzger et al., we estimate the temporal evolution of the magnetization and power injection at the base of the GRB jet and apply these to different prompt emission models to make predictions for the GRB energetics, spectra and light curves. We investigate both shock and magnetic reconnection models for the particle acceleration, as well as the effects of energy dissipation across optically thick and thin regions of the jet. The magnetization at the base of the jet, σ0, is the main parameter driving the GRB evolution in the magnetar model and the emission is typically released for 100 ≲ σ0 ≲ 3000. Given the rapid increase in σ0 as the protomagnetar cools and its neutrino-driven mass loss subsides, the GRB duration is typically limited to ≲100 s. This low baryon loading at late times challenges magnetar models for ultralong GRBs, though black hole models likely run into similar difficulties without substantial entrainment from the jet walls. The maximum radiated gamma-ray energy is ≲5 × 10^51 erg, significantly less than the magnetar's total initial rotational energy and in strong tension with the high end of the observed GRB energy distribution. However, the gradual magnetic dissipation model applied to a magnetar central engine naturally explains several key observables of typical GRBs, including energetics, durations, stable peak energies, spectral slopes and a hard to soft evolution during the burst.
Prediction of space shuttle fluctuating pressure environments, including rocket plume effects
NASA Technical Reports Server (NTRS)
Plotkin, K. J.; Robertson, J. E.
1973-01-01
Preliminary estimates of space shuttle fluctuating pressure environments have been made based on prediction techniques developed by Wyle Laboratories. Particular emphasis has been given to the transonic speed regime during launch of a parallel-burn space shuttle configuration. A baseline configuration consisting of a lightweight orbiter and monolithic SRB, together with a typical flight trajectory, have been used as models for the predictions. Critical fluctuating pressure environments are predicted at transonic Mach numbers. Comparisons between predicted environments and wind tunnel test results, in general, showed good agreement. Predicted one-third octave band spectra for the above environments were generally one of three types: (1) attached turbulent boundary layer spectra (typically high frequencies); (2) homogeneous separated flow and shock-free interference flow spectra (typically intermediate frequencies); and (3) shock-oscillation and shock-induced interference flow spectra (typically low frequencies). Predictions of plume induced separated flow environments were made. Only the SRB plumes are important, with fluctuating levels comparable to compression-corner induced separated flow shock oscillation.
1983-09-01
which serve as aquifers. The aquifers include, in ascending order, the Patuxent, the Patapsco, the Magothy, and the Aquia Formations. These aquifer...consist typically of sand layers of varying thickness interbedded with clays. The general thickness of the Patuxent, Patapsco, Magothy and Aquia in the...Aquifers. This was accomplished using a digital simulation model originally developed by the USGS for the Magothy Aquifer. The model uses a finite
Taylor, Mark J; Taylor, Natasha
2014-12-01
England and Wales are moving toward a model of 'opt out' for use of personal confidential data in health research. Existing research does not make clear how acceptable this move is to the public. While people are typically supportive of health research, when asked to describe the ideal level of control there is a marked lack of consensus over the preferred model of consent (e.g. explicit consent, opt out etc.). This study sought to investigate a relatively unexplored difference between the consent model that people prefer and that which they are willing to accept. It also sought to explore any reasons for such acceptance. A mixed methods approach was used to gather data, incorporating a structured questionnaire and in-depth focus group discussions led by an external facilitator. The sampling strategy was designed to recruit people with different involvement in the NHS but typically with experience of NHS services. Three separate focus groups were carried out over three consecutive days. The central finding is that people are typically willing to accept models of consent other than that which they would prefer. Such acceptance is typically conditional upon a number of factors, including: security and confidentiality, no inappropriate commercialisation or detrimental use, transparency, independent overview, the ability to object to any processing considered to be inappropriate or particularly sensitive. This study suggests that most people would find research use without the possibility of objection to be unacceptable. However, the study also suggests that people who would prefer to be asked explicitly before data were used for purposes beyond direct care may be willing to accept an opt out model of consent if the reasons for not seeking explicit consent are accessible to them and they trust that data is only going to be used under conditions, and with safeguards, that they would consider to be acceptable even if not preferable.
Polarimetric ISAR: Simulation and image reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, David H.
In polarimetric ISAR the illumination platform, typically airborne, carries a pair of antennas that are directed toward a fixed point on the surface as the platform moves. During platform motion, the antennas maintain their gaze on the point, creating an effective aperture for imaging any targets near that point. The interaction between the transmitted fields and targets (e.g. ships) is complicated since the targets are typically many wavelengths in size. Calculation of the field scattered from the target typically requires solving Maxwell's equations on a large three-dimensional numerical grid. This is prohibitive in any real-world imaging algorithm, so the scattering process is typically simplified by assuming the target consists of a cloud of independent, non-interacting scattering points (centers). Imaging algorithms based on this scattering model perform well in many applications. Since polarimetric radar is not very common, the scattering model is often derived for a scalar field (single polarization) where the individual scatterers are assumed to be small spheres. However, when polarization is important, we must generalize the model to explicitly account for the vector nature of the electromagnetic fields and their interaction with objects. In this note, we present a scattering model that explicitly includes the vector nature of the fields but retains the assumption that the individual scatterers are small. The response of the scatterers is described by electric and magnetic dipole moments induced by the incident fields. We show that the received voltages in the antennas are linearly related to the transmitting currents through a scattering impedance matrix that depends on the overall geometry of the problem and the nature of the scatterers.
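The cloud-of-points approximation described above can be sketched for the scalar (single-polarization) case: each scatterer contributes its reflectivity with a phase set by the round-trip path, and interactions between scatterers are neglected. The geometry, frequency, and reflectivity values below are illustrative assumptions, not taken from the paper.

```python
import cmath
import math

def scattered_signal(freq_hz, tx_pos, rx_pos, scatterers):
    """Scalar point-scatterer model: sum of reflectivities, each delayed
    by the round-trip phase; scatterer-scatterer interactions ignored."""
    c = 3.0e8                        # speed of light, m/s
    k = 2.0 * math.pi * freq_hz / c  # wavenumber
    total = 0j
    for pos, reflectivity in scatterers:
        path = math.dist(tx_pos, pos) + math.dist(pos, rx_pos)
        total += reflectivity * cmath.exp(-1j * k * path)
    return total

# Illustrative monostatic geometry: antenna 3 km above two scatterers
# spaced 15 m apart, roughly ship-scale
antenna = (0.0, 0.0, 3000.0)
targets = [((0.0, 0.0, 0.0), 1.0), ((15.0, 0.0, 0.0), 0.5)]
v = scattered_signal(10e9, antenna, antenna, targets)
```

Imaging algorithms then invert this linear relation between reflectivities and received signals over many platform positions.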
Moloney, Eoin; O'Connor, Joanne; Craig, Dawn; Robalino, Shannon; Chrysos, Alexandros; Javanbakht, Mehdi; Sims, Andrew; Stansby, Gerard; Wilkes, Scott; Allen, John
2018-04-23
Peripheral arterial disease (PAD) is a common condition, in which atherosclerotic narrowing in the arteries restricts blood supply to the leg muscles. In order to support future model-based economic evaluations comparing methods of diagnosis in this area, a systematic review of economic modelling studies was conducted. A systematic literature review was performed in June 2017 to identify model-based economic evaluations of diagnostic tests to detect PAD, with six individual databases searched. The review was conducted in accordance with the methods outlined in the Centre for Reviews and Dissemination's guidance for undertaking reviews in healthcare, and appropriate inclusion criteria were applied. Relevant data were extracted, and studies were quality assessed. Seven studies were included in the final review, all of which were published between 1995 and 2014. There was wide variation in the types of diagnostic test compared. The majority of the studies (six of seven) referenced the sources used to develop their model, and all studies stated and justified the structural assumptions. Reporting of the data within the included studies could have been improved. Only one identified study focused on the cost-effectiveness of a test typically used in primary care. This review brings together all applied modelling methods for tests used in the diagnosis of PAD, which could be used to support future model-based economic evaluations in this field. The limited modelling work available on tests typically used for the detection of PAD in primary care, in particular, highlights the importance of future work in this area.
Study of multiband disordered systems using the typical medium dynamical cluster approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yi; Terletska, Hanna; Moore, C.
2015-11-06
We generalize the typical medium dynamical cluster approximation to multiband disordered systems. Using our extended formalism, we perform a systematic study of the nonlocal correlation effects induced by disorder on the density of states and the mobility edge of the three-dimensional two-band Anderson model. We include interband and intraband hopping and an intraband disorder potential. Our results are consistent with those obtained by the transfer matrix and the kernel polynomial methods. We also apply the method to KxFe2-ySe2 with Fe vacancies. Despite the strong vacancy disorder and anisotropy, we find the material is not an Anderson insulator. Moreover, our results demonstrate the application of the typical medium dynamical cluster approximation method to study Anderson localization in real materials.
Wilson, Antoinette R; Leaper, Campbell
2016-08-01
The purpose of this study was to integrate and validate a multidimensional model of ethnic-racial identity and gender identity borrowing constructs and measures based on social identity and gender identity theories. Participants included 662 emerging adults (M age = 19.86 years; 75 % female) who self-identified either as Asian American, Latino/a, or White European American. We assessed the following facets separately for ethnic-racial identity and gender identity: centrality, in-group affect, in-group ties, self-perceived typicality, and felt conformity pressure. Within each identity domain (gender or ethnicity/race), the five dimensions generally indicated small-to-moderate correlations with one another. Also, correlations between domains for each dimension (e.g., gender typicality and ethnic-racial typicality) were mostly moderate in magnitude. We also noted some group variations based on participants' ethnicity/race and gender in how strongly particular dimensions were associated with self-esteem. Finally, participants who scored positively on identity dimensions for both gender and ethnic-racial domains indicated higher self-esteem than those who scored high in only one domain or low in both domains. We recommend the application of multidimensional models to study social identities in multiple domains as they may relate to various outcomes during development.
NASA Technical Reports Server (NTRS)
Holdeman, James D.
1991-01-01
Experimental and computational results on the mixing of single, double, and opposed rows of jets with an isothermal or variable temperature mainstream in a confined subsonic crossflow are summarized. The studies were performed to investigate flow and geometric variations typical of the complex 3D flowfield in the dilution zone of combustion chambers in gas turbine engines. The principal observations from the experiments were that the momentum-flux ratio was the most significant flow variable, and that temperature distributions were similar (independent of orifice diameter) when the orifice spacing and the square-root of the momentum-flux ratio were inversely proportional. The experiments and empirical model for the mixing of a single row of jets from round holes were extended to include several variations typical of gas turbine combustors.
Semantic Elements in Deep Structures as Seen from a Modernist Definition of Clarity.
ERIC Educational Resources Information Center
Lemke, Alan
Typically, teachers approach ambiguity in student writing by suggesting that students focus on diction, syntax, and writing format; however, the works of modernists (including T.S. Eliot, Ludwig Wittgenstein, Karl Marx, and Pablo Picasso) suggest the importance of conceptions of semantic clarity. Transformational models for syntactic elements in…
Teachers' Cognitive Activities and Overt Behaviors.
ERIC Educational Resources Information Center
Brophy, Jere E.
Recent research on teacher planning, thinking, and decision making is reviewed. The work on planning reveals that teachers typically do not use the objectives-based, rational models stressed in textbooks, but instead concentrate on the activities included in a curriculum as they seem to relate to the needs and interests of the students. This…
Undergraduate Student Perspectives on Electronic Portfolio Assessment in College Composition Courses
ERIC Educational Resources Information Center
Fullerton, Bridget Katherine Jean
2017-01-01
Though Linda Adler-Kassner and Peggy O'Neill claim that ethical writing assessment models "must be designed and built collaboratively, with careful attention to the values and passions of all involved, through a process that provides access to all," college students have not typically been included in scholarly conversations about…
Cyclic Polyynes as Examples of the Quantum Mechanical Particle on a Ring
ERIC Educational Resources Information Center
Anderson, Bruce D.
2012-01-01
Many quantum mechanical models are discussed as part of the undergraduate physical chemistry course to help students understand the connection between eigenvalue expressions and spectroscopy. Typical examples covered include the particle in a box, the harmonic oscillator, the rigid rotor, and the hydrogen atom. This article demonstrates that…
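The particle-on-a-ring model named above has the eigenvalue expression E = m²ħ²/(2mₑR²), with m = 0, ±1, ±2, … and doubly degenerate levels for |m| > 0. A minimal sketch follows; the ring radius is an assumed, illustrative value standing in for a cyclic polyyne, not a figure from the article.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
H = 6.62607015e-34       # Planck constant, J*s
C = 2.998e8              # speed of light, m/s

def ring_energy(m_l, radius_m):
    """Particle-on-a-ring eigenvalues: E = m^2 hbar^2 / (2 m_e R^2)."""
    return (m_l ** 2) * HBAR ** 2 / (2.0 * M_E * radius_m ** 2)

R = 4.0e-10  # assumed ring radius, m (illustrative)
# An adjacent-|m| transition, analogous to a HOMO->LUMO excitation
delta_e = ring_energy(3, R) - ring_energy(2, R)
wavelength_nm = H * C / delta_e * 1e9
```

The quadratic dependence on m is what connects the eigenvalue expression to observed electronic spectra.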
Leading a Friends Helping Friends Peer Program.
ERIC Educational Resources Information Center
Painter, Carol
This manual is a guide for the adult learner who is developing and maintaining a peer counselor program. The first chapter presents an overview of peer counseling. The second chapter describes a model for a high school peer counseling program. Training, placements and programs, and a typical week's schedule are included. The third chapter presents…
Creative Behavior, Motivation, Environment and Culture: The Building of a Systems Model
ERIC Educational Resources Information Center
Hennessey, Beth A.
2015-01-01
With the exception of research examining the productivity of teams, the empirical study of creativity was until recently almost exclusively focused at the level of the individual creator. Investigators and theorists typically chose to decontextualize the creative process and failed to include a consideration of anyone or anything beyond the person…
NASA Astrophysics Data System (ADS)
Rognlien, Thomas; Rensink, Marvin
2016-10-01
Transport simulations for the edge plasma of tokamaks and other magnetic fusion devices require coupling the plasma to recycled or injected neutral gas. Various neutral models are used for this purpose, e.g., atomic fluid models, Monte Carlo particle models, transition/escape-probability methods, and semi-analytic models. While the Monte Carlo method is generally viewed as the most accurate, it is time consuming, and it becomes even more demanding for simulations of devices with the high densities and sizes typical of fusion power plants, because the neutral collisional mean free path becomes very small. Here we examine the behavior of an extended fluid neutral model for hydrogen that includes both atoms and molecules and easily incorporates nonlinear neutral-neutral collision effects. In addition to the strong charge exchange between hydrogen atoms and ions, elastic scattering is included among all species. Comparisons are made with the DEGAS 2 Monte Carlo code. Work performed for U.S. DoE by LLNL under Contract DE-AC52-07NA27344.
A Numerical Model for a Flue Gas Desulfurization System.
NASA Astrophysics Data System (ADS)
Kim, Sung Joon
The purpose of this work is to develop a reliable numerical model for spray dryer desulfurization systems. The shape of the spray dryer requires that a body-fitted orthogonal coordinate system be used for the numerical model. The governing equations are developed in general orthogonal coordinates and discretized to yield a system of algebraic equations. A turbulence model and a new second-order numerical scheme are also included in the numerical model. The trajectory approach is used to simulate the flow of the dispersed phase, and two-way coupling phenomena are modeled by this scheme. The absorption of sulfur dioxide into lime slurry droplets is simulated by a model based on gas-phase mass transfer. The program is applied to a typical spray dryer desulfurization system. The results show the capability of the program to predict the sensitivity of system performance to changes in operational parameters.
Zuthi, Mst Fazana Rahman; Guo, Wenshan; Ngo, Huu Hao; Nghiem, Duc Long; Hai, Faisal I; Xia, Siqing; Li, Jianxin; Li, Jixiang; Liu, Yi
2017-08-01
This study aimed to develop a practical semi-empirical mathematical model of membrane fouling that accounts for cake formation on the membrane and its pore blocking as the major processes of membrane fouling. In the developed model, the concentration of mixed liquor suspended solids is used as a lumped parameter to describe the formation of the cake layer including the biofilm. The new model considers the combined effect of aeration and backwash on the foulants' detachment from the membrane. New exponential coefficients are also included in the model to describe the exponential increase of transmembrane pressure that typically occurs after the initial stage of an MBR operation. The model was validated using experimental data obtained from a lab-scale aerobic sponge-submerged membrane bioreactor (MBR), and the model simulations agreed well with the experimental findings.
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
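Of the global search algorithms listed above, simulated annealing is the simplest to sketch: worse candidate solutions are accepted with probability exp(−ΔE/T) while the temperature T cools. The one-dimensional pumping-rate cost function below is hypothetical, purely for illustration; real conjunctive-use problems have many decision variables and constraints.

```python
import math
import random

def anneal(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=1):
    """Minimal simulated annealing: propose a random neighbor, accept
    improvements always and worse moves with probability exp(-dE/T),
    cooling T geometrically; track the best solution seen."""
    rng = random.Random(seed)
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        ec = cost(cand)
        if ec < e or rng.random() < math.exp(-(ec - e) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

# Hypothetical pumping-rate cost with a single optimum at q = 2.0
pump_cost = lambda q: (q - 2.0) ** 2 + 1.0
q_opt, c_opt = anneal(pump_cost, x0=10.0)
```

Unlike the gradient-based programming techniques also mentioned above, this search needs only cost evaluations, which is why such methods suit nonconvex groundwater management objectives.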
Lubricant rheology applied to elastohydrodynamic lubrication
NASA Technical Reports Server (NTRS)
Winer, W. O.; Sanborn, D. M.
1977-01-01
Viscosity measurements in a high pressure rheometer, elastohydrodynamic simulator studies (including the development of a temperature measuring technique), and analytical fluid modeling for elastohydrodynamic contacts are described. The more recent research concerns infrared temperature measurements in elastohydrodynamic contacts and the exploration of the glassy state of lubricants. A correlation of engineering significance was made between transient surface temperature measurements and surface roughness profiles. Measurements of glass transitions of lubricants and the study of the effect of rate processes on materials lead to the conclusion that typical lubricants go into the glassy state as they pass through the contact region of typical elastohydrodynamic contacts.
Burrough, Eric; Strait, Erin; Kinyon, Joann; Bower, Leslie; Madson, Darin; Schwartz, Kent; Frana, Timothy; Songer, J Glenn
2012-12-07
Multiple Brachyspira spp. can colonize the porcine colon, and the presence of the strongly beta-hemolytic Brachyspira hyodysenteriae is typically associated with clinical swine dysentery. Recently, several Brachyspira spp. have been isolated from the feces of pigs with clinical disease suggestive of swine dysentery, yet these isolates were not identified as B. hyodysenteriae by genotypic or phenotypic methods. This study used a mouse model of swine dysentery to compare the pathogenic potential of seventeen different Brachyspira isolates including eight atypical clinical isolates, six typical clinical isolates, the standard strain of B. hyodysenteriae (B204), and reference strains of Brachyspira intermedia and Brachyspira innocens. Results revealed that strongly beta-hemolytic isolates induced significantly greater cecal inflammation than weakly beta-hemolytic isolates regardless of the genetic identification of the isolate, and that strongly beta-hemolytic isolates identified as 'Brachyspira sp. SASK30446' and B. intermedia by PCR produced lesions indistinguishable from those caused by B. hyodysenteriae in this model.
A log-normal distribution model for the molecular weight of aquatic fulvic acids
Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.
2000-01-01
The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
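A worked example of the calculation described above: if the number-weighted molecular weight distribution is assumed log-normal, then Mw/Mn = exp(σ²) and Mn = exp(μ + σ²/2) in natural-log units, so the log10 mean and standard deviation follow directly from the two measured averages. The values Mn = 600 and Mw = 900 Da are assumed, typical-order figures for fulvic acids, not taken from the paper.

```python
import math

def lognormal_params_from_mn_mw(mn, mw):
    """Recover log10 mean and standard deviation of a log-normal
    molecular weight distribution from measured Mn and Mw, using
    sigma_ln^2 = ln(Mw/Mn) and mu_ln = ln(Mn) - sigma_ln^2 / 2."""
    sigma_ln = math.sqrt(math.log(mw / mn))
    mu_ln = math.log(mn) - sigma_ln ** 2 / 2.0
    ln10 = math.log(10.0)
    return mu_ln / ln10, sigma_ln / ln10

# Assumed illustrative averages for an aquatic fulvic acid, in Da
mean10, sd10 = lognormal_params_from_mn_mw(600.0, 900.0)
```

With these inputs the recovered mean (~2.7) and standard deviation (~0.28) fall inside the ranges quoted in the abstract.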
Multi-parametric centrality method for graph network models
NASA Astrophysics Data System (ADS)
Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna
2018-04-01
Graph network models are investigated to determine the centrality, weights, and significance of vertices. Typical centrality analysis applies a single method based on just one property of the graph vertices. In graph theory, centrality is commonly analyzed by degree, closeness, betweenness, radiality, eccentricity, page-rank, status, Katz centrality, and eigenvector centrality. We propose a new multi-parametric centrality method that combines a number of basic properties of a network member, and develop its mathematical model. The results of the presented method are compared with those of the single-property centrality methods; for the evaluation, a graph model with hundreds of vertices is analyzed. The comparative analysis shows the accuracy of the presented method, which simultaneously accounts for several basic properties of the vertices.
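The idea of combining several single-property centralities into one score can be sketched as follows. The equal weighting and the toy graph are assumptions for illustration, not the paper's actual model; only degree and closeness are combined here to keep the sketch short.

```python
from collections import deque

def degree_centrality(adj):
    """Fraction of other vertices each vertex touches."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """(n-1) / sum of BFS distances; assumes a connected graph."""
    n = len(adj)
    result = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        result[src] = (n - 1) / sum(dist[v] for v in dist if v != src)
    return result

def multi_parametric(adj, weights=(0.5, 0.5)):
    """Weighted sum of several centralities; weights are an assumed choice."""
    deg, clo = degree_centrality(adj), closeness_centrality(adj)
    return {v: weights[0] * deg[v] + weights[1] * clo[v] for v in adj}

# Toy graph: vertex 0 is a hub, vertex 4 hangs off vertex 3
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
scores = multi_parametric(adj)
```

The combined score ranks the hub highest even when the individual measures would weight the periphery differently.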
Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.
2016-02-01
The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the southern ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in spatial distribution of diffusivity and high-resolution model diagnosis in the distribution of eddy flux orientation.
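The anisotropic extension described above amounts to replacing a scalar eddy diffusivity with a symmetric tensor whose major axis follows the (shear-set) direction of anisotropy. A minimal 2-D sketch, with assumed magnitudes and orientation not taken from the paper:

```python
import math

def anisotropic_diffusivity(kappa_major, kappa_minor, theta):
    """2x2 horizontal diffusivity tensor K = R diag(k_major, k_minor) R^T,
    with the major axis rotated by angle theta (e.g. along the shear)."""
    c, s = math.cos(theta), math.sin(theta)
    kxx = kappa_major * c * c + kappa_minor * s * s
    kyy = kappa_major * s * s + kappa_minor * c * c
    kxy = (kappa_major - kappa_minor) * c * s
    return [[kxx, kxy], [kxy, kyy]]

# Illustrative 10:1 anisotropy, major axis 30 degrees from east (assumed)
K = anisotropic_diffusivity(2000.0, 200.0, math.radians(30.0))
```

Setting kappa_major = kappa_minor recovers the isotropic treatment that the abstract says is universal in current general circulation models.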
Modernizing Earth and Space Science Modeling Workflows in the Big Data Era
NASA Astrophysics Data System (ADS)
Kinter, J. L.; Feigelson, E.; Walker, R. J.; Tino, C.
2017-12-01
Modeling is a major aspect of Earth and space science research. The development of numerical models of the Earth system, planetary systems or astrophysical systems is essential to linking theory with observations. Optimal use of observations that are quite expensive to obtain and maintain typically requires data assimilation that involves numerical models. In the Earth sciences, models of the physical climate system are typically used for data assimilation, climate projection, and inter-disciplinary research, spanning applications from analysis of multi-sensor data sets to decision-making in climate-sensitive sectors with applications to ecosystems, hazards, and various biogeochemical processes. In space physics, most models are from first principles, require considerable expertise to run, and are frequently modified significantly for each case study. The volume and variety of model output data from modeling Earth and space systems are rapidly increasing and have reached a scale where human interaction with data is prohibitively inefficient. A major barrier to progress is that practitioners do not treat modeling workflows as a design problem. Existing workflows have been created by a slow accretion of software, typically based on undocumented, inflexible scripts haphazardly modified by a succession of scientists and students not trained in modern software engineering methods. As a result, existing modeling workflows suffer from an inability to onboard new datasets into models; an inability to keep pace with accelerating data production rates; and irreproducibility, among other problems. These factors are creating an untenable situation for those conducting and supporting Earth system and space science. Improving modeling workflows requires investments in hardware, software and human resources.
This paper describes the critical path issues that must be targeted to accelerate modeling workflows, including script modularization, parallelization, and automation in the near term, and longer term investments in virtualized environments for improved scalability, tolerance for lossy data compression, novel data-centric memory and storage technologies, and tools for peer reviewing, preserving and sharing workflows, as well as fundamental statistical and machine learning algorithms.
Dark Photon Searches at BESIII
NASA Astrophysics Data System (ADS)
Wang, Dayong
Many models beyond the Standard Model, motivated by recent astrophysical anomalies, predict a new type of weakly-interacting degree of freedom. Typical models include low-mass dark gauge bosons of a few GeV, which would be accessible at the BESIII experiment running in the tau-charm region. BESIII has recently searched for such dark bosons in several decay modes using the high-statistics data sets collected at charmonium resonances. This talk summarizes the recent BESIII results of these dark photon searches and related new physics studies.
Nanoparticle accumulation and transcytosis in brain endothelial cell layers
NASA Astrophysics Data System (ADS)
Ye, Dong; Raghnaill, Michelle Nic; Bramini, Mattia; Mahon, Eugene; Åberg, Christoffer; Salvati, Anna; Dawson, Kenneth A.
2013-10-01
The blood-brain barrier (BBB) is a selective barrier, which controls and limits access to the central nervous system (CNS). The selectivity of the BBB relies on specialized characteristics of the endothelial cells that line the microvasculature, including the expression of intercellular tight junctions, which limit paracellular permeability. Several reports suggest that nanoparticles have a unique capacity to cross the BBB. However, direct evidence of nanoparticle transcytosis is difficult to obtain, and we found that typical transport studies present several limitations when applied to nanoparticles. In order to investigate the capacity of nanoparticles to access and transport across the BBB, several different nanomaterials, including silica, titania and albumin- or transferrin-conjugated gold nanoparticles of different sizes, were exposed to a human in vitro BBB model of endothelial hCMEC/D3 cells. Extensive transmission electron microscopy imaging was applied in order to describe nanoparticle endocytosis and typical intracellular localisation, as well as to look for evidence of eventual transcytosis. Our results show that all of the nanoparticles were internalised, to different extents, by the BBB model and accumulated along the endo-lysosomal pathway. Rare events suggestive of nanoparticle transcytosis were also observed for several of the tested materials.
NASA Technical Reports Server (NTRS)
Schwan, Karsten
1994-01-01
Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.
Selecting among competing models of electro-optic, infrared camera system range performance
Nichols, Jonathan M.; Hines, James E.; Nichols, James D.
2013-01-01
Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on Akaike's Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for distances other than the specific set for which experimental trials were conducted.
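AIC-based selection as described above reduces to computing 2k − 2 ln L for each candidate model and preferring the smallest value; Akaike weights then express the relative support for each model. The log-likelihoods and parameter counts below are hypothetical, purely to show the mechanics.

```python
import math

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2 ln L."""
    return 2 * n_params - 2 * log_likelihood

def select_model(candidates):
    """Pick the candidate with the lowest AIC and report Akaike weights.
    `candidates` maps model name -> (max log-likelihood, #parameters)."""
    scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
    best_score = min(scores.values())
    rel = {name: math.exp(-(s - best_score) / 2) for name, s in scores.items()}
    total = sum(rel.values())
    weights = {name: r / total for name, r in rel.items()}
    return min(scores, key=scores.get), scores, weights

# Hypothetical fits of three range-performance models to observer data
candidates = {"A": (-120.3, 2), "B": (-118.9, 4), "C": (-119.8, 3)}
best, scores, weights = select_model(candidates)
```

Note how the 2k penalty term lets a slightly worse-fitting but simpler model win, which is the sense in which AIC trades fit against complexity.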
NASA Astrophysics Data System (ADS)
Preusker, F.; Oberst, J.; Stark, A.; Burmeister, S.
2018-04-01
We produce high-resolution (222 m/grid element) Digital Terrain Models (DTMs) for Mercury using stereo images from the MESSENGER orbital mission. We have developed a scheme to process large numbers of images, typically more than 6000, by photogrammetric techniques, which include multiple image matching, a pyramid strategy, and bundle block adjustments. In this paper, we present models for map quadrangles H11, H12, H13, and H14 of the southern hemisphere.
Curutchet, Carles; Cupellini, Lorenzo; Kongsted, Jacob; Corni, Stefano; Frediani, Luca; Steindal, Arnfinn Hykkerud; Guido, Ciro A; Scalmani, Giovanni; Mennucci, Benedetta
2018-03-13
Mixed multiscale quantum/molecular mechanics (QM/MM) models are widely used to explore the structure, reactivity, and electronic properties of complex chemical systems. Whereas such models typically include electrostatics and potentially polarization in so-called electrostatic and polarizable embedding approaches, respectively, nonelectrostatic dispersion and repulsion interactions are instead commonly described through classical potentials despite their quantum mechanical origin. Here we present an extension of the Tkatchenko-Scheffler semiempirical van der Waals (vdW-TS) scheme aimed at describing dispersion and repulsion interactions between quantum and classical regions within a QM/MM polarizable embedding framework. Starting from the vdW-TS expression, we define a dispersion and a repulsion term, both of them density-dependent and consistently based on a Lennard-Jones-like potential. We explore transferable atom type-based parametrization strategies for the MM parameters, based on either vdW-TS calculations performed on isolated fragments or on a direct estimation of the parameters from atomic polarizabilities taken from a polarizable force field. We investigate the performance of the implementation by computing self-consistent interaction energies for the S22 benchmark set, designed to represent typical noncovalent interactions in biological systems, in both equilibrium and out-of-equilibrium geometries. Overall, our results suggest that the present implementation is a promising strategy to include dispersion and repulsion in multiscale QM/MM models incorporating their explicit dependence on the electronic density.
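The Lennard-Jones-like split into a repulsion and a dispersion term can be sketched in its generic form. This shows only the functional shape (r⁻¹² repulsion against −C₆ r⁻⁶ dispersion), not the density-dependent vdW-TS parametrization of the paper, and the parameter values are assumed for illustration.

```python
def lj_disp_rep(r, c6, r0):
    """Lennard-Jones-like potential split into repulsion and dispersion,
    E(r) = c12 / r^12 - c6 / r^6, with c12 = c6 * r0^6 / 2 chosen so
    the minimum sits at r = r0. Generic sketch only."""
    c12 = c6 * r0 ** 6 / 2.0
    repulsion = c12 / r ** 12
    dispersion = -c6 / r ** 6
    return repulsion + dispersion

# Assumed illustrative parameters: c6 in kcal/mol*A^6, distances in Angstrom
energies = [lj_disp_rep(r / 10.0, c6=1000.0, r0=3.5) for r in range(30, 60)]
```

At the minimum the well depth is −c6/(2 r0⁶), so c6 and r0 together fix both the position and depth of the noncovalent contact.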
NASA Technical Reports Server (NTRS)
Guillermo, P.
1975-01-01
A mathematical model of the aerothermochemical environment along the stagnation line of a planetary return spacecraft using an ablative thermal protection system was developed and solved for conditions typical of atmospheric entry from planetary missions. The model, implemented as a FORTRAN IV computer program, was designed to predict the coupled viscous, reactive, and radiative shock layer structure and the resulting body heating rates. The analysis includes flow-field coupling with the ablator surface, binary diffusion, coupled line and continuum radiation, and equilibrium or finite-rate chemistry effects. The gas model includes thermodynamic, transport, kinetic, and radiative properties of air and ablation product species, comprising 19 chemical species and 16 chemical reactions. Specifically, the impact of nonequilibrium chemistry effects upon stagnation line shock layer structure and body heating rates was investigated.
Flight Guidance System Requirements Specification
NASA Technical Reports Server (NTRS)
Miller, Steven P.; Tribble, Alan C.; Carlson, Timothy M.; Danielson, Eric J.
2003-01-01
This report describes a requirements specification written in the RSML-e language for the mode logic of a Flight Guidance System of a typical regional jet aircraft. This model was created as one of the first steps in a five-year project sponsored by the NASA Langley Research Center, Rockwell Collins Inc., and the Critical Systems Research Group of the University of Minnesota to develop new methods and tools to improve the safety of avionics designs. This model will be used to demonstrate the application of a variety of methods and techniques, including safety analysis of system and subsystem requirements, verification of key properties using theorem provers and model checkers, identification of potential sources of mode confusion in system designs, partitioning of applications based on the criticality of system hazards, and autogeneration of avionics-quality code. While this model is representative of the mode logic of a typical regional jet aircraft, it does not describe an actual or planned product. Several aspects of a full Flight Guidance System, such as recovery from failed sensors, have been omitted, and no claims are made regarding the accuracy or completeness of this specification.
Indirect detection constraints on s- and t-channel simplified models of dark matter
NASA Astrophysics Data System (ADS)
Carpenter, Linda M.; Colburn, Russell; Goodman, Jessica; Linden, Tim
2016-09-01
Recent Fermi-LAT observations of dwarf spheroidal galaxies in the Milky Way have placed strong limits on the gamma-ray flux from dark matter annihilation. In order to produce the strongest limit on the dark matter annihilation cross section, the observations of each dwarf galaxy have typically been "stacked" in a joint-likelihood analysis, utilizing optical observations to constrain the dark matter density profile in each dwarf. These limits have typically been computed only for single annihilation final states, such as bb̄ or τ⁺τ⁻. In this paper, we generalize this approach by producing an independent joint-likelihood analysis to set constraints on models where the dark matter particle annihilates to multiple final-state fermions. We interpret these results in the context of the most popular simplified models, including those with s- and t-channel dark matter annihilation through scalar and vector mediators. We present our results as constraints on the minimum dark matter mass and the mediator sector parameters. Additionally, we compare our simplified model results to those of effective field theory contact interactions in the high-mass limit.
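The stacking idea behind the joint-likelihood analysis can be sketched with a toy Poisson model: each "dwarf" contributes its own likelihood term, the terms are summed, and an upper limit is read off the combined curve. All counts, backgrounds, and weights below are invented for illustration, not Fermi-LAT data.

```python
import numpy as np

# Toy stacked (joint-likelihood) upper limit in the spirit of combining
# dwarf-galaxy observations. All numbers are illustrative stand-ins.
dwarfs = [
    # (observed counts, expected background, relative J-factor weight)
    (3, 2.5, 1.0),
    (1, 1.2, 0.6),
    (5, 4.0, 1.4),
]

def joint_loglike(s):
    """Poisson log-likelihood summed over dwarfs for signal normalization s."""
    ll = 0.0
    for n, b, j in dwarfs:
        mu = b + s * j
        ll += n * np.log(mu) - mu  # the constant log(n!) term is dropped
    return ll

# Scan a grid and take the 95% one-sided upper limit from the usual
# 2 * Delta(-lnL) = 2.71 criterion.
grid = np.linspace(0.0, 20.0, 2001)
ll = np.array([joint_loglike(s) for s in grid])
allowed = grid[2.0 * (ll.max() - ll) <= 2.71]
upper_limit = allowed.max()
print(upper_limit)
```

Because the log-likelihoods add, a dwarf with little constraining power still tightens the combined limit slightly, which is the point of the stacked analysis.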
Development of Semi-Empirical Damping Equation for Baffled Tank with Oblate Spheroidal Dome
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff; Brodnick, Jacob; Eberhart, Chad
2016-01-01
Propellant slosh is a potential source of disturbance that can significantly impact the stability of space vehicles. The slosh dynamics are typically represented by a mechanical model of a spring-mass-damper. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control analysis. The typical parameters required by the mechanical model include natural frequency of the slosh, slosh mass, slosh mass center location, and the critical damping ratio. A fundamental study has been undertaken at NASA MSFC to understand the fluid damping physics from a ring baffle in the barrel section of a propellant tank. An asymptotic damping equation and a CFD-blended equation have been derived by the NASA MSFC team to complement the popularly used Miles equation in different flow regimes. The new development has found success in providing a nonlinear damping model for the Space Launch System. The purpose of this study is to further extend the semi-empirical damping equations into the oblate spheroidal dome section of the propellant tanks. First, previous experimental data from the spherical baffled tank are collected and analyzed. Several methods of accounting for the dome curvature effect, including a generalized Miles equation, an area projection method, and an equalized fill height method, are assessed. CFD simulation is used to shed light on the interaction of vorticity around the baffle with the locally curved wall and liquid-gas interface. The final damping equation will be validated by a recent subscale test with an oblate spheroidal dome conducted at NASA MSFC.
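The spring-mass-damper surrogate mentioned above is characterized by exactly the parameters listed (natural frequency, damping ratio, etc.), and the damping ratio is classically recovered from the decay of successive response peaks via the logarithmic decrement. A minimal sketch, with illustrative parameter values rather than any SLS tank data:

```python
import math

# Recover the critical damping ratio of a spring-mass-damper slosh surrogate
# from the decay of its free response (logarithmic decrement method).
omega_n = 2.0   # natural frequency, rad/s (illustrative)
zeta = 0.05     # damping ratio to be "measured" (illustrative)

omega_d = omega_n * math.sqrt(1.0 - zeta**2)   # damped natural frequency
T_d = 2.0 * math.pi / omega_d                  # damped period

def x(t):
    """Free response of the underdamped oscillator (unit initial amplitude)."""
    return math.exp(-zeta * omega_n * t) * math.cos(omega_d * t)

# Samples one damped period apart differ only by exp(-zeta*omega_n*T_d),
# so the log of their ratio is the logarithmic decrement delta.
t0 = 1.0
delta = math.log(x(t0) / x(t0 + T_d))
zeta_est = delta / math.sqrt(4.0 * math.pi**2 + delta**2)
print(zeta_est)
```

Inverting delta = 2*pi*zeta/sqrt(1 - zeta**2) gives the closed-form estimator in the last line, which is how experimental slosh damping ratios are commonly reduced from decay traces.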
Nevers, M.B.; Whitman, R.L.
2008-01-01
To understand the fate and movement of Escherichia coli in beach water, numerous modeling studies have been undertaken including mechanistic predictions of currents and plumes and empirical modeling based on hydrometeorological variables. Most approaches are limited in scope by nearshore currents or physical obstacles and data limitations; few examine the issue from a larger spatial scale. Given the similarities between variables typically included in these models, we attempted to take a broader view of E. coli fluctuations by simultaneously examining twelve beaches along 35 km of Indiana's Lake Michigan coastline that includes five point-source outfalls. The beaches had similar E. coli fluctuations, and a best-fit empirical model included two variables: wave height and an interaction term composed of wind direction and creek turbidity. Individual-beach R2 ranged from 0.32 to 0.50. Data training-set results were comparable to validation results (R2 = 0.48). The amount of variation explained by the model was similar to previous reports for individual beaches. By extending the modeling approach to include more coastline distance, broader-scale spatial and temporal changes in bacteria concentrations and the influencing factors can be characterized. © 2008 American Chemical Society.
Stability estimation of autoregulated genes under Michaelis-Menten-type kinetics
NASA Astrophysics Data System (ADS)
Arani, Babak M. S.; Mahmoudi, Mahdi; Lahti, Leo; González, Javier; Wit, Ernst C.
2018-06-01
Feedback loops are typical motifs appearing in gene regulatory networks. In some well-studied model organisms, including Escherichia coli, autoregulated genes, i.e., genes that activate or repress themselves through their protein products, are the only feedback interactions. For these types of interactions, the Michaelis-Menten (MM) formulation is a suitable and widely used approach, which always leads to stable steady-state solutions representative of homeostatic regulation. However, in many other biological phenomena, such as cell differentiation, cancer progression, and catastrophes in ecosystems, one might expect to observe bistable switchlike dynamics in the case of strong positive autoregulation. To capture this complex behavior we use the generalized family of MM kinetic models. We give a full analysis regarding the stability of autoregulated genes. We show that the autoregulation mechanism has the capability to exhibit diverse cellular dynamics including hysteresis, a typical characteristic of bistable systems, as well as irreversible transitions between bistable states. We also introduce a statistical framework to estimate the kinetics parameters and probability of different stability regimes given observational data. Empirical data for the autoregulated gene SCO3217 in the SOS system in Streptomyces coelicolor are analyzed. The coupling of a statistical framework and the mathematical model can give further insight into understanding the evolutionary mechanisms toward different cell fates in various systems.
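The bistability described above can be made concrete with a generalized (Hill-type) Michaelis-Menten rate law for a positively autoregulated gene: dx/dt = b0 + beta*x^n/(K^n + x^n) - gamma*x. The parameter values below are illustrative choices for a bistable regime, not the SCO3217 estimates from the paper:

```python
import numpy as np
from scipy.optimize import brentq

# Steady states of a positively autoregulated gene under a generalized
# Michaelis-Menten (Hill) kinetic law. Illustrative parameters only.
b0, beta, K, n, gamma = 0.05, 1.0, 1.0, 4, 0.5

def dxdt(x):
    return b0 + beta * x**n / (K**n + x**n) - gamma * x

# Bracket sign changes of dx/dt on a grid, then refine each root.
grid = np.linspace(0.0, 3.0, 601)
vals = dxdt(grid)
roots = [brentq(dxdt, a, b)
         for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if fa * fb < 0]
print(roots)  # three steady states: outer two stable, middle one unstable
```

Three coexisting steady states are the signature of bistability; sweeping a parameter such as beta up and down would move the system between the stable branches at different thresholds, i.e., the hysteresis the abstract refers to.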
Tracing the evolution of the Galactic bulge with chemodynamical modelling of alpha-elements
NASA Astrophysics Data System (ADS)
Friaça, A. C. S.; Barbuy, B.
2017-02-01
Context. Galactic bulge abundances can be best understood as indicators of bulge formation and nucleosynthesis processes by comparing them with chemo-dynamical evolution models. Aims: The aim of this work is to study the abundances of alpha-elements in the Galactic bulge, including a revision of the oxygen abundance in a sample of 56 bulge red giants. Methods: Literature abundances for O, Mg, Si, Ca and Ti in Galactic bulge stars are compared with chemical evolution models. For oxygen in particular, we reanalysed high-resolution spectra obtained using FLAMES+UVES on the Very Large Telescope, now taking each star's carbon abundances, derived from CI and C2 lines, into account simultaneously. Results: We present a chemical evolution model of alpha-element enrichment in a massive spheroid that represents a typical classical bulge evolution. The code includes multi-zone chemical evolution coupled with hydrodynamics of the gas. Comparisons between the model predictions and the abundance data suggest a typical bulge formation timescale of 1-2 Gyr. The main constraint on the bulge evolution is provided by the O data from analyses that have taken the C abundance and dissociative equilibrium into account. Mg, Si, Ca and Ti trends are well reproduced, whereas the level of overabundance critically depends on the adopted nucleosynthesis prescriptions. Based on observations collected at the European Southern Observatory, Paranal, Chile (ESO programmes 71.B-0617A, 73.B0074A, and GTO 71.B-0196).
Shannon information entropy in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Ma, Chun-Wang; Ma, Yu-Gang
2018-03-01
The general idea of information entropy provided by C.E. Shannon "hangs over everything we do" and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. Shannon information entropy quantifies the information carried by a quantity through its specific distribution, and information-entropy-based methods have been developed extensively in many scientific areas, including physics. The dynamical nature of the heavy-ion collision (HIC) process makes nuclear matter and its evolution difficult and complex to study, and here Shannon information entropy theory can provide new methods and observables for understanding the physical phenomena both theoretically and experimentally. To better situate the processes of HICs, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamical models, and statistical models, are briefly introduced. Typical applications of Shannon information theory in HICs are then collected, covering the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate-mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they already shed light on key open questions. It is suggested to further develop information entropy methods in nuclear reaction models, as well as new analysis methods to study the properties of nuclear matter in HICs, especially the evolution of the dynamical system.
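The quantity underlying all of these applications is the same: H = -sum_i p_i * log(p_i) for a discrete distribution. A minimal sketch, using a purely illustrative "fragment-yield" distribution rather than any HIC model output:

```python
import math

# Shannon information entropy of a discrete distribution, in bits.
def shannon_entropy(p, base=2.0):
    assert abs(sum(p) - 1.0) < 1e-12, "probabilities must sum to 1"
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0.0)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximally uncertain over 4 outcomes
peaked = [0.7, 0.2, 0.05, 0.05]      # illustrative peaked "yield" distribution

print(shannon_entropy(uniform))  # 2.0 bits, the maximum for 4 outcomes
print(shannon_entropy(peaked))   # lower: a peaked distribution carries less surprise
```

Once the relevant distribution is identified (branching multiplicities, fragment yields, isobaric differences), this single functional turns it into a scalar observable that can be compared across models and experiments.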
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahlgren, Björn; Larsson, Josefin; Nymark, Tanja
The origin of the prompt emission in gamma-ray bursts (GRBs) is still an unsolved problem and several different mechanisms have been suggested. We fit Fermi GRB data with a photospheric emission model which includes dissipation of the jet kinetic energy below the photosphere. The resulting spectra are dominated by Comptonization and contain no significant contribution from synchrotron radiation. In order to fit to the data, we span a physically motivated part of the model's parameter space and create DREAM (Dissipation with Radiative Emission as A table Model), a table model for XSPEC. Here, we show that this model can describe different kinds of GRB spectra, including GRB 090618, representing a typical Band function spectrum, and GRB 100724B, illustrating a double peaked spectrum, previously fitted with a Band+blackbody model, suggesting they originate from a similar scenario. We also suggest that the main difference between these two types of bursts is the optical depth at the dissipation site.
System Model for MEMS based Laser Ultrasonic Receiver
NASA Technical Reports Server (NTRS)
Wilson, William C.
2002-01-01
A need has been identified for more advanced nondestructive evaluation technologies for assuring the integrity of airframe structures, wiring, etc. Laser ultrasonic inspection instruments have been shown to detect flaws in structures. However, these instruments are generally too bulky to be used in the confined spaces that are typical of aerospace vehicles. Microsystems technology is one key to reducing the size of current instruments and enabling increased inspection coverage in areas that were previously inaccessible due to instrument size and weight. This paper investigates the system modeling of a Micro OptoElectroMechanical System (MOEMS) based laser ultrasonic receiver. The system model is constructed in software using MATLAB's dynamical simulator, Simulink. The optical components are modeled using geometrical matrix methods and include some image processing. The system model includes a test bench which simulates input stimuli and models the behavior of the material under test.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.W.; Phillips, A.M.
1990-02-01
Low-permeability reservoirs are currently being propped with sand, resin-coated sand, intermediate-density proppants, and bauxite. This wide range of proppant cost and performance has resulted in the proliferation of proppant selection models. Initially, a rather vague relationship between well depth and proppant strength dictated the choice of proppant. More recently, computerized models of varying complexity that use net-present-value (NPV) calculations have become available. The input is based on the operator's performance goals for each well and specific reservoir properties. Simpler, noncomputerized approaches include cost/performance comparisons and nomographs. Each type of model, including several of the computerized models, is examined here. By use of these models and NPV calculations, optimum fracturing treatment designs have been developed for such low-permeability reservoirs as the Prue in Oklahoma. Typical well conditions are used in each of the selection models, and the results are compared.
Modeling a maintenance simulation of the geosynchronous platform
NASA Technical Reports Server (NTRS)
Kleiner, A. F., Jr.
1980-01-01
A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events - failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass and after the last pass a report is printed. Items of interest typically include the time to first maintenance, the total number of maintenance trips per pass, the average capability of the system, etc.
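The pass-based discrete-event scheme described above can be sketched in a few lines: each pass is one mission, failures arrive at random, each failure triggers a maintenance trip, and per-pass statistics are accumulated. The failure rate and mission lifetime below are invented values, not numbers from the study.

```python
import random

# One simulation pass: exponential inter-failure times over a fixed mission
# lifetime; every failure is followed by a maintenance trip that restores
# the system (illustrative policy and parameters).
def one_pass(rate=0.5, lifetime=10.0):
    """Return (time to first maintenance or None, total maintenance trips)."""
    t, trips, first = 0.0, 0, None
    while True:
        t += random.expovariate(rate)   # next failure event
        if t > lifetime:
            return first, trips
        if first is None:
            first = t
        trips += 1                      # maintenance restores the system

random.seed(1)
results = [one_pass() for _ in range(2000)]  # 2000 passes = 2000 missions
mean_trips = sum(r[1] for r in results) / len(results)
print(round(mean_trips, 2))  # close to rate * lifetime = 5 trips per pass
```

Re-initializing state at the top of `one_pass` is the code-level analogue of re-initializing the model before each pass, and the post-loop aggregation stands in for the end-of-run report.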
Teacher Evaluation and School Improvement: An Analysis of the Evidence
ERIC Educational Resources Information Center
Hallinger, Philip; Heck, Ronald H.; Murphy, Joseph
2014-01-01
In recent years, substantial investments have been made in reengineering systems of teacher evaluation. The new generation models of teacher evaluation typically adopt a standards-based view of teaching quality and include a value-added measure of growth in student learning. With more than a decade of experience and research, it is timely to…
ERIC Educational Resources Information Center
Meyer, Becky Weller; Jain, Sachin; Canfield-Davis, Kathy
2011-01-01
Adolescents defined as at-risk typically lack healthy models of parenting and receive no parenthood education prior to assuming the parenting role. Unless a proactive approach is implemented, the cyclic pattern of dysfunctional parenting-- including higher rates of teen pregnancy, increased childhood abuse, low educational attainment,…
Network Modeling and Simulation (NEMSE)
2013-07-01
Approved for public release by the 88th ABW, Wright-Patterson AFB Public Affairs Office; available to the general public, including foreign nationals. Final technical report, July 2013.
The Role of Media/Video Production in Non-Media Disciplines: The Case of Health Promotion
ERIC Educational Resources Information Center
Shuldman, Mitch; Tajik, Mansoureh
2010-01-01
Media creation has been almost exclusively a domain of media and communication fields. Traditionally, non-media fields, such as public health and health promotion, do not typically include media creation courses. As media technologies continue to advance, however, opportunities arise for the development of new pedagogical models based on new…
How Adolescents Comprehend Unfamiliar Proverbs: The Role of Top-Down and Bottom-Up Processes.
ERIC Educational Resources Information Center
Nippold, Marilyn A.; Allen, Melissa M.; Kirsch, Dixon I.
2000-01-01
The relationship between word knowledge and proverb comprehension was examined in 150 typically achieving adolescents (ages 12, 15, and 18). Word knowledge was associated with proverb comprehension in all groups, particularly in the case of abstract proverbs. Results support a model of proverb comprehension in adolescents that includes bottom-up in…
ERIC Educational Resources Information Center
Bonifacci, Paola; Tobia, Valentina
2017-01-01
The present study evaluated which components within the simple view of reading model better predicted reading comprehension in a sample of bilingual language-minority children exposed to Italian, a highly transparent language, as a second language. The sample included 260 typically developing bilingual children who were attending either the first…
Synchronized Trajectories in a Climate "Supermodel"
NASA Astrophysics Data System (ADS)
Duane, Gregory; Schevenhoven, Francine; Selten, Frank
2017-04-01
Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest, like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
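The key claim, that imperfect models synchronize under limited inter-model nudging, can be illustrated with two Lorenz-63 "models" that disagree in one parameter. This is a toy analogue of the SPEEDO supermodel, not the actual setup; the coupling strength and parameter mismatch are invented values.

```python
import numpy as np

# Two imperfect Lorenz-63 models (different r) coupled by mutual nudging.
def lorenz(state, r, sigma=10.0, b=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

def step(s1, s2, k, dt=0.001):
    """One Euler step of the two models, each nudged toward the other."""
    f1 = lorenz(s1, r=28.0) + k * (s2 - s1)
    f2 = lorenz(s2, r=28.5) + k * (s1 - s2)
    return s1 + dt * f1, s2 + dt * f2

s1 = np.array([1.0, 1.0, 1.0])
s2 = np.array([5.0, -3.0, 20.0])     # very different initial condition
err_start = np.linalg.norm(s1 - s2)
for _ in range(20000):               # 20 time units of coupled integration
    s1, s2 = step(s1, s2, k=20.0)
err_end = np.linalg.norm(s1 - s2)
print(err_start, err_end)  # the coupled trajectories converge closely
```

Because the parameter mismatch never vanishes, the synchronization error saturates at a small but nonzero level, which is why the abstract can meaningfully speak of a single shared trajectory for the supermodel.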
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest.
How does the information obtained from the local methods typical of (2) and the global averaged methods typical of (3) compare for typical systems? The discussion will use examples of response of the Greenland glacier to global warming and surface and groundwater modeling.
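The demand bookkeeping stated above (run time × number of runs ÷ parallelization) can be applied directly to the two strategies the talk compares. All run times and counts below are illustrative stand-ins, not figures from the talk.

```python
# Computational demand = run time * number of runs / parallelization.
def demand_hours(runtime_hr, n_runs, n_parallel):
    return runtime_hr * n_runs / n_parallel

# Frugal analysis of the demanding original model: ~100 original runs.
frugal = demand_hours(runtime_hr=6.0, n_runs=100, n_parallel=10)

# Surrogate approach: the same ~100 original runs to train the surrogate,
# plus a large number of very cheap surrogate evaluations.
surrogate = demand_hours(6.0, 100, 10) + demand_hours(0.001, 100_000, 10)

print(frugal, surrogate)  # 60.0 vs 70.0 hours of wall-clock demand
```

The arithmetic makes the tradeoff concrete: for roughly 17% more demand, the surrogate route buys a thousand times more model evaluations, provided the surrogate is faithful where it is queried.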
Analysis of opioid-seeking reinstatement in the rat.
Fattore, Liana; Fadda, Paola; Zanda, Mary Tresa; Fratta, Walter
2015-01-01
The inability to maintain drug abstinence is often referred to as relapse and consists of a process by which an abstaining individual slips back into old behavioral patterns and substance use. Animal models of relapse have been developed and validated over the last decades, and significantly contributed to shed light on the neurobiological mechanisms underlying vulnerability to relapse. The most common procedure to study drug-seeking and relapse-like behavior in animals is the "reinstatement model." Although originally elaborated by Pavlov and Skinner, the concepts of reinforced operant responding and conditioned behavior were not applied to addiction research until 1971 (Stretch et al., Can J Physiol Pharmacol 49:581-589, 1971), and the first report of a reinstatement animal model as it is now used worldwide was published only 10 years later (De Wit and Stewart, Psychopharmacology 75:134-143, 1981). According to the proposed model, opioids are typically self-administered intravenously, as in humans, and although rodents are most often employed in these studies, this model has been used with a variety of species including nonhuman primates, dogs, cats, and pigeons. A variety of operant responses are available, depending on the species studied. For example, a lever press or a nose poke response typically is used for rodents, whereas a panel press response typically is used for nonhuman primates. Here, we describe a simple and easily reproducible protocol of heroin-seeking reinstatement in rats, which proved useful to study the neurobiological mechanisms underlying relapse to heroin and vulnerability factors enhancing the resumption of heroin-seeking behavior.
Body Fat Percentage Prediction Using Intelligent Hybrid Approaches
Shao, Yuehjen E.
2014-01-01
Excess of body fat often leads to obesity. Obesity is typically associated with serious medical diseases, such as cancer, heart disease, and diabetes. Accordingly, knowing the body fat is an extremely important issue since it affects everyone's health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods are often associated with hassle and/or high costs. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches to obtain fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling includes the use of MR and MARS to obtain fewer but more important sets of explanatory variables. In the second stage, the remaining important variables serve as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models. PMID:24723804
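The two-stage structure (screen for important explanatory variables, then fit the final predictor on the reduced set) can be sketched with plain NumPy. Here simple correlation screening and ordinary least squares stand in for the paper's MR/MARS and ANN/SVR stages, and the data are synthetic, not the real body-fat measurements:

```python
import numpy as np

# Synthetic data: 10 candidate explanatory variables, only 3 of which
# actually drive the response (illustrative stand-in for the BFP dataset).
rng = np.random.default_rng(0)
n, p = 300, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 1.0 * X[:, 2] + 0.1 * rng.standard_normal(n)

# Stage 1: keep variables with non-trivial absolute correlation with y.
corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(p)])
selected = [j for j in range(p) if corrs[j] > 0.2]

# Stage 2: fit the final forecasting model on the selected variables only.
Xs = np.column_stack([X[:, selected], np.ones(n)])
coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
resid = y - Xs @ coef
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(selected, round(r2, 4))
```

The design choice mirrors the paper's argument: discarding uninformative variables in stage 1 leaves a smaller, cheaper-to-measure input set for the stage-2 model without sacrificing predictive fit.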
Comprehensive atmospheric modeling of reactive cyclic siloxanes and their oxidation products
NASA Astrophysics Data System (ADS)
Janechek, Nathan J.; Hansen, Kaj M.; Stanier, Charles O.
2017-07-01
Cyclic volatile methyl siloxanes (cVMSs) are important components in personal care products that transport and react in the atmosphere. Octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), dodecamethylcyclohexasiloxane (D6), and their gas-phase oxidation products have been incorporated into the Community Multiscale Air Quality (CMAQ) model. Gas-phase oxidation products, as the precursor to secondary organic aerosol from this compound class, were included to quantify the maximum potential for aerosol formation from gas-phase reactions with OH. Four 1-month periods were modeled to quantify typical concentrations, seasonal variability, spatial patterns, and vertical profiles. Typical model concentrations showed parent compounds were highly dependent on population density as cities had monthly averaged peak D5 concentrations up to 432 ng m-3. Peak oxidized D5 concentrations were significantly less, up to 9 ng m-3, and were located downwind of major urban areas. Model results were compared to available measurements and previous simulation results. Seasonal variation was analyzed and differences in seasonal influences were observed between urban and rural locations. Parent compound concentrations in urban and peri-urban locations were sensitive to transport factors, while parent compounds in rural areas and oxidized product concentrations were influenced by large-scale seasonal variability in OH.
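Because OH reaction is the dominant gas-phase loss pathway modeled here, a back-of-the-envelope first-order lifetime, tau = 1/(k_OH·[OH]), explains why oxidized-product peaks appear downwind of urban sources rather than over them. The rate constant and OH concentration below are round illustrative values, not the ones used in the CMAQ runs:

```python
# First-order atmospheric lifetime of a cVMS against OH oxidation.
k_oh = 2.0e-12   # cm^3 molecule^-1 s^-1, assumed illustrative D5 + OH rate
oh = 1.0e6       # molecule cm^-3, typical global-mean OH level (assumed)
tau_s = 1.0 / (k_oh * oh)
tau_days = tau_s / 86400.0
print(round(tau_days, 2))  # a lifetime of several days
```

A multi-day lifetime gives the parent compound time to advect away from the urban emission area before appreciable oxidation, consistent with the spatial separation of parent and oxidized-product maxima reported above.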
Lin, Dexin; Wu, Xianbin; Ji, Xiaoke; Zhang, Qiyu; Lin, YuanWei; Chen, WeiJian; Jin, Wangxun; Deng, Liming; Chen, Yunzhi; Chen, Bicheng; Li, Jianmin
2012-01-01
Current large animal models that could closely resemble the typical features of cirrhotic portal hypertension in humans have not been well established. Thus, we aimed to develop and describe a reliable and reproducible canine cirrhosis model of portal hypertension. A total of 30 mongrel dogs were randomly divided into four groups: 1 (control; n = 5), 2 (portal vein stenosis [PVS]; n = 5), 3 (thioacetamide [TAA]; n = 5), and 4 (PVS plus TAA; n = 15). After a 4-month modeling period, liver and spleen CT perfusion, abdominal CT scans, portal hemodynamics, gastroscopy, hepatic function, routine blood tests, and bone marrow, liver, and spleen histology were studied. The animals in group 2 (PVS) developed extrahepatic portosystemic collateral circulation, particularly esophageal varices, without hepatic cirrhosis and portal hypertension. Animals from group 3 (TAA) presented mild cirrhosis and portal hypertension without significant symptoms of esophageal varices and hypersplenism. In contrast, animals from group 4 (PVS + TAA) showed well-developed micronodular and macronodular cirrhosis, associated with significant portal hypertension and hypersplenism. The combination of PVS and TAA represents a novel, reliable, and reproducible canine cirrhosis model of portal hypertension, which is associated with the typical characteristics of portal hypertension, including hypersplenism.
Time-Dependent Behavior of Diabase and a Nonlinear Creep Model
NASA Astrophysics Data System (ADS)
Yang, Wendong; Zhang, Qiangyong; Li, Shucai; Wang, Shugang
2014-07-01
Triaxial creep tests were performed on diabase specimens from the dam foundation of the Dagangshan hydropower station, and the typical characteristics of creep curves were analyzed. Based on the test results under different stress levels, a new nonlinear visco-elasto-plastic creep model with creep threshold and long-term strength was proposed by connecting an instantaneous elastic Hooke body, a visco-elasto-plastic Schiffman body, and a nonlinear visco-plastic body in series mode. By introducing the nonlinear visco-plastic component, this creep model can describe the typical creep behavior, which includes the primary creep stage, the secondary creep stage, and the tertiary creep stage. Three-dimensional creep equations under constant stress conditions were deduced. The yield approach index (YAI) was used as the criterion for the piecewise creep function to resolve the difficulty in determining the creep threshold value and the long-term strength. The expression of the visco-plastic component was derived in detail and the three-dimensional central difference form was given. An example was used to verify the credibility of the model. The creep parameters were identified, and the calculated curves were in good agreement with the experimental curves, indicating that the model is capable of replicating the physical processes.
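The series construction described above (instantaneous elastic body + visco-elasto-plastic body + nonlinear visco-plastic body) has a generic strain-time form under constant uniaxial stress. The expression below is a sketch of that generic form; the symbols E_0, E_1, eta_1, eta_2, sigma_s, and m are placeholder moduli, viscosities, threshold stress, and exponent, not the paper's fitted parameters:

```latex
% Strain response of the series creep model under constant stress \sigma:
\varepsilon(t) =
  \underbrace{\frac{\sigma}{E_0}}_{\text{instantaneous elastic (Hooke)}}
  + \underbrace{\frac{\sigma}{E_1}\left(1 - e^{-E_1 t/\eta_1}\right)}_{\text{visco-elastic (primary creep)}}
  + \underbrace{\frac{\langle \sigma - \sigma_s \rangle}{\eta_2}\, t^{\,m}}_{\text{nonlinear visco-plastic, active only for } \sigma > \sigma_s}
```

Here the Macaulay brackets ⟨·⟩ vanish for stresses below the threshold sigma_s, which is how a creep-threshold model reproduces purely visco-elastic behavior at low stress while the power-law term with m > 1 can capture accelerating (tertiary) creep above it.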
The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.
Olivier, Brett G; Bergmann, Frank T
2015-09-04
Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e.g., by extending it to cover dynamic FBC models).
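The ingredients an FBC-annotated model carries (stoichiometric matrix, flux bounds, objective) map directly onto a linear program solved at steady state (S·v = 0). A minimal FBA instance with an invented 3-reaction network, for illustration only and not taken from the FBC specification:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize the biomass flux subject to
# steady state (S v = 0) and per-reaction flux bounds.
#              uptake  convert  biomass
S = np.array([[ 1.0,   -1.0,    0.0],   # metabolite A balance
              [ 0.0,    1.0,   -1.0]])  # metabolite B balance
bounds = [(0.0, 10.0),     # uptake limited to 10 flux units
          (0.0, 1000.0),   # conversion effectively unbounded
          (0.0, 1000.0)]   # biomass effectively unbounded
c = [0.0, 0.0, -1.0]       # linprog minimizes, so negate to maximize biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux vector; biomass flux hits the uptake bound, 10
```

In a real genome-scale workflow the same structure holds, only with thousands of reactions; the FBC package's job is to serialize exactly these S, bounds, and objective data unambiguously in SBML.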
NASA Astrophysics Data System (ADS)
Donnelly, William J., III
2012-06-01
PURPOSE: To present a commercially available optical modeling software tool to assist the development of optical instrumentation and systems that utilize and/or integrate with the human eye. METHODS: A commercially available flexible eye modeling system is presented, the Advanced Human Eye Model (AHEM). AHEM is a module that the engineer can use to perform rapid development and test scenarios on systems that integrate with the eye. Methods include merging modeled systems initially developed outside of AHEM and performing a series of wizard-type operations that relieve the user from requiring an optometric or ophthalmic background to produce a complete eye inclusive system. Scenarios consist of retinal imaging of targets and sources through integrated systems. Uses include, but are not limited to, optimization, telescopes, microscopes, spectacles, contact and intraocular lenses, ocular aberrations, cataract simulation and scattering, and twin eye model (binocular) systems. RESULTS: Metrics, graphical data, and exportable CAD geometry are generated from the various modeling scenarios.
NASA Astrophysics Data System (ADS)
McConnell, William J.
Due to the call of current science education reform for the integration of engineering practices within science classrooms, design-based instruction is receiving much attention in science education literature. Although some aspect of modeling is often included in well-known design-based instructional methods, it is not always a primary focus. The purpose of this study was to better understand how design-based instruction with an emphasis on scientific modeling might impact students' spatial abilities and their model-based argumentation abilities. In the following mixed-method multiple case study, seven seventh grade students attending a secular private school in the Mid-Atlantic region of the United States underwent an instructional intervention involving design-based instruction, modeling and argumentation. Through the course of a lesson involving students in exploring the interrelatedness of the environment and an animal's form and function, students created and used multiple forms of expressed models to assist them in model-based scientific argument. Pre/post data were collected through the use of The Purdue Spatial Visualization Test: Rotation, the Mental Rotation Test and interviews. Other data included a spatial activities survey, student artifacts in the form of models, notes, exit tickets, and video recordings of students throughout the intervention. Spatial abilities tests were analyzed using descriptive statistics while students' arguments were analyzed using the Instrument for the Analysis of Scientific Curricular Arguments and a behavior protocol. Models were analyzed using content analysis and interviews and all other data were coded and analyzed for emergent themes. Findings in the area of spatial abilities included increases in spatial reasoning for six out of seven participants, and an immense difference in the spatial challenges encountered by students when using CAD software instead of paper drawings to create models. 
Students perceived 3D printed models to better assist them in scientific argumentation over paper drawing models. In fact, when given a choice, students rarely used paper drawings to assist in argument. There was also a difference in model utility between the two different model types. Participants explicitly used 3D printed models to complete gestural modeling, while participants rarely looked at 2D models when involved in gestural modeling. This study's findings added to current theory dealing with the varied spatial challenges involved in different modes of expressed models. This study found that depth, symmetry and the manipulation of perspectives are typically spatial challenges students will attend to using CAD while they will typically ignore them when drawing using paper and pencil. This study also revealed a major difference in model-based argument in a design-based instruction context as opposed to model-based argument in a typical science classroom context. In the context of design-based instruction, data revealed that design process is an important part of model-based argument. Due to the importance of design process in model-based argumentation in this context, trusted methods of argument analysis, like the coding system of the IASCA, were found lacking in many respects. Limitations and recommendations for further research were also presented.
A Quantitative Quasispecies Theory-Based Model of Virus Escape Mutation Under Immune Selection
2012-01-01
immune pressure, and their capacity for rapid escape mutation underlies many of the difficulties in combating pathogens, including HIV-1. In a typical...interpreted as the total number of virions within a finite system. The HIV-1 viral load during the acute infection phase can reach up to 10^4-10^6...therefore models both the decrease of the mean fitness away from WT and the distribution of neutral, deleterious, and beneficial mutants for a
Modeling and Optimizing Green Microgrids at Remote U.S. Navy Islands
2017-12-01
storage, and controls. All of these components work together as a system solution to serve a nearby load, such as a wind turbine and a storage battery...includes five diesel generators of varying capacities and seven 100 kW wind turbines. The diesel genset specifics are shown in Table 1. They typically...run at only 30% of nominal capacity, while they are most efficient at 70% (Anderson et al. 2017). The wind turbines are all Northwind 100 kW models
Proceedings of the Workshop on NDE of Polymers Held at Vimeiro, Portugal on 4-5 September 1984.
1984-09-05
theory. The most realistic model in which dislocations have been studied is in the soundfield of a pulsed circular piston radiator, by Wright...Berry (1984). This study was again mostly numerical, and figure 3 shows two of the nearfield plots. The top line is the symmetry axis, R and Z are...general behaviour is typical of all the models studied, including those I will introduce below, and has been verified experimentally by Humphrey (1980
NASA Technical Reports Server (NTRS)
1983-01-01
Mission areas analyzed for input to the baseline mission model include: (1) commercial materials processing, including representative missions for producing metallurgical, chemical and biological products; (2) commercial Earth observation, represented by a typical carry-on mission amenable to commercialization; (3) solar terrestrial and resource observations including missions in geoscience and scientific land observation; (4) global environment, including representative missions in meteorology, climatology, ocean science, and atmospheric science; (5) materials science, including missions for measuring material properties, studying chemical reactions and utilizing the high vacuum-pumping capacity of space; and (6) life sciences with experiments in biomedicine and animal and plant biology.
Genetic control of postnatal human brain growth
van Dyck, Laura I.; Morrow, Eric M.
2017-01-01
Purpose of review Studies investigating postnatal brain growth disorders inform the biology underlying the development of human brain circuitry. This research is becoming increasingly important for the diagnosis and treatment of childhood neurodevelopmental disorders, including autism and related disorders. Here we review recent research on typical and abnormal postnatal brain growth and examine potential biological mechanisms. Recent findings Clinically, brain growth disorders are heralded by diverging head size for a given age and sex, but are more precisely characterized by brain imaging, postmortem analysis, and animal model studies. Recent neuroimaging and molecular biological studies on postnatal brain growth disorders have broadened our view of both typical and pathological postnatal neurodevelopment. Correlating gene and protein function with brain growth trajectories uncovers postnatal biological mechanisms, including neuronal arborization, synaptogenesis and pruning, and gliogenesis and myelination. Recent investigations of childhood neurodevelopmental and neurodegenerative disorders highlight the underlying genetic programming and experience-dependent remodeling of neural circuitry. Summary In order to understand typical and abnormal postnatal brain development, clinicians and researchers should characterize brain growth trajectories in the context of neurogenetic syndromes. Understanding mechanisms and trajectories of postnatal brain growth will aid in differentiating, diagnosing, and potentially treating neurodevelopmental disorders. PMID:27898583
Schroeder, Natalia; Park, Young-Hee; Kang, Min-Sook; Kim, Yangsuk; Ha, Grace K; Kim, Haeng-Ran; Yates, Allison A; Caballero, Benjamin
2015-07-01
Dietary patterns that are considered healthy (eg, the Dietary Approaches to Stop Hypertension diet and Mediterranean diet) may be more successful in reducing typical cardiovascular disease risks compared to dietary patterns considered unhealthy (eg, energy-dense diets such as the typical American diet). This study assessed the effects of a Korean diet, the 2010 Dietary Guidelines for Americans (DGA), and a typical American diet on cardiometabolic risk factors, including lipid levels and blood pressure, in overweight, non-Asian individuals in the United States with elevated low-density lipoprotein cholesterol. The study was a three-period crossover, controlled-feeding study from January 2012 to May 2012. Thirty-one subjects were randomly allocated to one of six possible sequential orders for consuming the three diets for 4 weeks, each separated by a 10-day break. Data analysis included 27 subjects on the Korean diet periods and 29 in the DGA and typical American diet periods. Subjects remained weight stable. Lipid profile, blood pressure, insulin, glucose, and 24-hour urinary sodium were determined at baseline and at the end of each diet period. The additive main effects multiplicative interactions model was used to test for a subject by diet interaction. Differences among diets were determined using a mixed-models procedure (PROC MIXED) with random intercept for each subject. Total cholesterol and low-density lipoprotein cholesterol significantly decreased on Korean (P<0.0001 and P<0.01, respectively) and DGA (P<0.01 and P<0.05, respectively) diets, but not on the typical American diet. Although an unfavorable outcome, high-density lipoprotein cholesterol significantly decreased on all three diets (Korean: P<0.0001; DGA: P<0.0001; typical American: P<0.05). No diet had a significant effect on serum triglycerides, but a slight increase in triglycerides in the Korean and decrease in the DGA resulted in a significant difference between these two diets (P<0.01). 
All three diets caused modest decreases in systolic and diastolic blood pressure, which reached statistical significance for DGA only (P<0.05 and P<0.01, respectively). No diet had a significant effect on fasting insulin, whereas fasting glucose decreased significantly on the Korean (P<0.01) and typical American (P<0.05) diets only. Urinary sodium output decreased significantly on DGA (P<0.0001). After a 4-week feeding period, the Korean and DGA diet patterns resulted in positive changes in cardiovascular disease risk factors. Copyright © 2015 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournassat, C.; Tinnacher, R. M.; Grangeon, S.
The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites (‘spillover’ effect).
A Computer Model for Analyzing Volatile Removal Assembly
NASA Technical Reports Server (NTRS)
Guo, Boyun
2010-01-01
A computer model simulates reactional gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which is dissolved in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on pressure and temperature in the reactor and composition and flow rate of the influent.
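The gas-to-liquid mass-transfer term described above is commonly written as a driving-force expression, dC/dt = kLa (Csat - C). A minimal sketch with assumed parameter values (kLa, Csat, and the time horizon are illustrative placeholders, not VRA design numbers):

```python
# Two-film gas-to-liquid mass-transfer sketch: the dissolved concentration C
# relaxes toward the saturation value Csat at a rate set by kLa.
kLa = 0.02            # volumetric mass-transfer coefficient, 1/s (assumed)
Csat = 8.0e-3         # O2 saturation concentration, kg/m^3 (assumed)
C, dt = 0.0, 1.0      # initial dissolved O2 and time step, s

history = []
for _ in range(600):  # 10 minutes of simulated time, explicit Euler
    C += dt * kLa * (Csat - C)
    history.append(C)
print(history[-1])    # approaches Csat as the liquid saturates
```

In the full reactor model this source term is coupled to the combustion kinetics and the two-phase flow field, rather than run in isolation as here.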
Structural analysis of a Petri net model of oxidative stress in atherosclerosis.
Kozak, Adam; Formanowicz, Dorota; Formanowicz, Piotr
2018-06-01
Atherosclerosis is a complex process in which sub-endothelial plaques accumulate and decrease the lumen of the blood vessels. This disorder affects people of all ages, but its progression is asymptomatic for many years. It is regulated by many typical and atypical factors including the immune system response, a chronic kidney disease, a diet rich in lipids, a local inflammatory process and a local oxidative stress, which is one of the key factors here. In this study, a Petri net model of atherosclerosis regulation is presented. This model also includes some information about stoichiometric relationships between its components and covers all mentioned factors. For the model, a structural analysis based on invariants was made and biological conclusions are presented. Since the model contains inhibitor arcs, a heuristic method for analysis of such cases is presented. This method can be used to extend the concept of feasible t-invariants.
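The invariant-based structural analysis mentioned here boils down to null-space computations on the net's incidence matrix. A minimal sketch on a toy net (the real model is far larger and needs integer, non-negative invariant algorithms rather than plain SVD):

```python
import numpy as np

# Toy two-place, two-transition cyclic Petri net (illustrative only):
# rows are places, columns are transitions.
C = np.array([
    [-1.0,  1.0],   # P1: consumed by T1, produced by T2
    [ 1.0, -1.0],   # P2: produced by T1, consumed by T2
])

# t-invariants are non-negative vectors x with C @ x = 0: firing every
# transition x[i] times returns the net to its initial marking.
_, s, vt = np.linalg.svd(C)
x = vt[-1]            # right-singular vector for the zero singular value
x = x / x.min()       # rescale to the smallest positive integer form
print(x)              # the cycle T1, T2 is the single minimal t-invariant
```

For this net the null space is one-dimensional, so the minimal t-invariant is recovered directly; general nets require dedicated algorithms (e.g. Farkas-style enumeration) to obtain all minimal invariants.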
The outflow structure of GW170817 from late-time broad-band observations
NASA Astrophysics Data System (ADS)
Troja, E.; Piro, L.; Ryan, G.; van Eerten, H.; Ricci, R.; Wieringa, M. H.; Lotti, S.; Sakamoto, T.; Cenko, S. B.
2018-07-01
We present our broad-band study of GW170817 from radio to hard X-rays, including NuSTAR and Chandra observations up to 165 d after the merger, and a multimessenger analysis including LIGO constraints. The data are compared with predictions from a wide range of models, providing the first detailed comparison between non-trivial cocoon and jet models. Homogeneous and power-law shaped jets, as well as simple cocoon models are ruled out by the data, while both a Gaussian shaped jet and a cocoon with energy injection can describe the current data set for a reasonable range of physical parameters, consistent with the typical values derived from short GRB afterglows. We propose that these models can be unambiguously discriminated by future observations measuring the post-peak behaviour, with Fν ∝ t^-1.0 for the cocoon and Fν ∝ t^-2.5 for the jet model.
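The proposed post-peak discriminator is easy to quantify: under the quoted decay laws, the ratio of cocoon flux to jet flux grows as (t/t_peak)^1.5. A small sketch (the slopes are the abstract's quoted values; the epoch grid is arbitrary):

```python
# Compare the two post-peak decay laws: F ~ t**-1.0 (cocoon with energy
# injection) versus F ~ t**-2.5 (Gaussian jet), normalized at the peak.
def flux_ratio(t_over_tpeak, slope):
    return t_over_tpeak ** slope

for x in (2.0, 3.0, 5.0):
    cocoon = flux_ratio(x, -1.0)
    jet = flux_ratio(x, -2.5)
    print(x, cocoon / jet)   # how many times faster the jet fades
```

By three times the peak epoch the jet prediction is already roughly a factor of 5 fainter than the cocoon one, which is why the two scenarios separate quickly in late-time monitoring.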
Manual for a workstation-based generic flight simulation program (LaRCsim), version 1.4
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce
1995-01-01
LaRCsim is a set of ANSI C routines that implement a full set of equations of motion for a rigid-body aircraft in atmospheric and low-earth orbital flight, suitable for pilot-in-the-loop simulations on a workstation-class computer. All six rigid-body degrees of freedom are modeled. The modules provided include calculations of the typical aircraft rigid-body simulation variables, earth geodesy, gravity and atmospheric models, and support several data recording options. Features/limitations of the current version include English units of measure, a 1962 atmosphere model in cubic spline function lookup form, ranging from sea level to 75,000 feet, and a rotating oblate spheroidal earth model, with aircraft C.G. coordinates in both geocentric and geodetic axes. Angular integrations are done using quaternion state variables. Vehicle X-Z symmetry is assumed.
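Quaternion attitude integration of the kind LaRCsim uses can be sketched as follows (a generic q̇ = ½ q⊗ω scheme with renormalization; the actual LaRCsim implementation and integrator differ):

```python
import math

def quat_mult(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def integrate_attitude(q, pqr, dt):
    # q_dot = 0.5 * q * (0, p, q_rate, r): explicit Euler step followed by
    # renormalization to keep q a unit quaternion despite integrator drift
    w = (0.0,) + pqr
    qd = quat_mult(q, w)
    q = tuple(qi + 0.5 * dt * qdi for qi, qdi in zip(q, qd))
    n = math.sqrt(sum(qi * qi for qi in q))
    return tuple(qi / n for qi in q)

# Constant roll rate of 0.1 rad/s for 100 steps of 10 ms -> ~0.1 rad of roll
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = integrate_attitude(q, (0.1, 0.0, 0.0), 0.01)
roll = 2.0 * math.atan2(q[1], q[0])
print(roll)
```

The per-step renormalization keeps the state on the rotation group, which is the practical reason quaternion state variables avoid the singularities of Euler-angle integration.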
Numerical models of cell death in RF ablation with monopolar and bipolar probes
NASA Astrophysics Data System (ADS)
Bright, Benjamin M.; Pearce, John A.
2013-02-01
Radio frequency (RF) is used clinically to treat unresectable tumors. Finite element modeling has proven useful in treatment planning and applicator design. Typically, isotherms in the mid-50s °C have been used as the parameter of assessment in these models. We compare and contrast isotherms for multiple known Arrhenius thermal damage predictors including collagen denaturation, vascular disruption, liver coagulation and cell death. Models for RITA probe geometries are included in the study. Comparison to isotherms is sensible when the activation time is held constant, but varies considerably when heating times vary. The purpose of this paper is to demonstrate the importance of looking at specific processes and keeping track of the methods used to derive the Arrhenius coefficients in order to study the extremely complex cell death processes due to thermal therapies.
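An Arrhenius damage predictor of the kind compared in these models integrates a temperature-dependent rate, Ω(t) = ∫ A exp(-Ea/RT(τ)) dτ, with Ω = 1 conventionally marking 63.2% damage. A sketch at constant temperature (the coefficients A and Ea below are illustrative placeholders, not the published values for any specific process):

```python
import math

A = 7.39e39    # frequency factor, 1/s (assumed for illustration)
Ea = 2.577e5   # activation energy, J/mol (assumed for illustration)
R = 8.314      # gas constant, J/(mol*K)

def damage(T_celsius, seconds):
    # At constant temperature the damage integral reduces to rate * time;
    # Omega = 1 corresponds to a 63.2% damaged fraction
    T = T_celsius + 273.15
    rate = A * math.exp(-Ea / (R * T))
    return rate * seconds

for temp in (50, 55, 60):
    omega = damage(temp, 60.0)          # one minute of heating
    dead = 1.0 - math.exp(-omega)       # damaged fraction
    print(temp, dead)
```

Because Ω is exponential in -Ea/RT, a few degrees Celsius change the damage fraction dramatically, which is why a fixed isotherm only approximates a process-specific Arrhenius prediction when heating times are held equal.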
Huckle, Taisia; Huakau, John; Sweetsur, Paul; Huisman, Otto; Casswell, Sally
2008-10-01
This study examines the relationship between physical, socio-economic and social environments and alcohol consumption and drunkenness among a general population sample of drinkers aged 12-17 years. DESIGN, SETTING, PARTICIPANTS AND MEASURES: The study was conducted in Auckland, New Zealand. The design comprised two components: (i) environmental measures including alcohol outlet density, a locality-based measure of willingness to sell alcohol (derived from purchase surveys of outlets) and a locality-based neighbourhood deprivation measure calculated routinely in New Zealand (known as NZDEP); and (ii) a random telephone survey to collect individual-level information from respondents aged 12-17 years including ethnicity, frequency of alcohol supplied socially (by parents, friends and others), young person's income, frequency of exposure to alcohol advertising, recall of brands of alcohol and self-reported purchase from alcohol outlets. A multi-level model was fitted to predict typical-occasion quantity, frequency of drinking and drunkenness in drinkers aged 12-17 years. Typical-occasion quantity was predicted by: frequency of social supply (by parents, friends and others); ethnicity and outlet density; and self-reported purchasing approached significance. NZDEP was correlated highly with outlet density so could not be analysed in the same model. In a separate model, NZDEP was associated with quantity consumed on a typical drinking occasion. Annual frequency was predicted by: frequency of social supply of alcohol, self-reported purchasing from alcohol outlets and ethnicity. Feeling drunk was predicted by frequency of social supply of alcohol, self-reported purchasing from alcohol outlets and ethnicity; outlet density approached significance. Age and gender also had effects in the models, but retailers' willingness to sell to underage patrons had no effects on consumption, nor did the advertising measures. 
The young person's income was influential on typical-occasion quantity once deprivation was taken into account. Alcohol outlet density was associated with quantities consumed among teenage drinkers in this study, as was neighbourhood deprivation. Supply by family, friends and others also predicted quantities consumed among underage drinkers and both social supply and self-reported purchase were associated with frequency of drinking and drunkenness. The ethnic status of young people also had an effect on consumption.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruzic, Jamie J.; Evans, T. Matthew; Greaney, P. Alex
The report describes the development of a discrete element method (DEM) based modeling approach to quantitatively predict deformation and failure of typical nickel based superalloys. A series of experimental data, including microstructure and mechanical property characterization at 600°C, was collected for a relatively simple, model solid solution Ni-20Cr alloy (Nimonic 75) to determine inputs for the model and provide data for model validation. Nimonic 75 was considered ideal for this study because it is a certified tensile and creep reference material. A series of new DEM modeling approaches were developed to capture the complexity of metal deformation, including cubic elastic anisotropy and plastic deformation both with and without strain hardening. Our model approaches were implemented into a commercially available DEM code, PFC3D, that is commonly used by engineers. It is envisioned that once further developed, this new DEM modeling approach can be adapted to a wide range of engineering applications.
The Importance of Modelling in the Teaching and Popularization of Science.
ERIC Educational Resources Information Center
Giordan, Andre
1991-01-01
Discusses the epistemology and typical applications of learning models focusing on practical methods to operationally introduce the distinctive, allosteric models into the educational environment. Allosteric learning models strive to minimize the characteristic resistance that learners typically exhibit when confronted with the need to reorganize or…
Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.
Drop, Frank M; Pool, Daan M; van Paassen, Marinus René M; Mulder, Max; Bülthoff, Heinrich H
2018-01-01
Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
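The core of the procedure, penalized model-order selection, can be sketched for a plain AR model family (synthetic data and the penalty weight of 2 are illustrative; the paper's models are ARX with separate feedforward and feedback branches, and the weighting is tuned via simulations with a hypothesized HC model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a 2-parameter AR process plus noise
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t-1] - 0.35 * y[t-2] + rng.normal(scale=0.1)

def fit_ar(y, order):
    # Least-squares fit of an AR(order) model; returns residual sum of squares
    Y = y[order:]
    X = np.column_stack([y[order - i:len(y) - i] for i in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.sum((Y - X @ coef) ** 2)

def bic(y, order, weight=1.0):
    # BIC with an adjustable complexity penalty; weight > 1 mimics the
    # modified criterion's stronger penalty against over-parameterization
    m = len(y) - order
    rss = fit_ar(y, order)
    return m * np.log(rss / m) + weight * order * np.log(m)

scores = {k: bic(y, k, weight=2.0) for k in range(1, 6)}
best = min(scores, key=scores.get)
print(best, scores)
```

With the extra weight on the complexity term, fit improvements that are within noise no longer justify additional parameters, which is how the modified criterion suppresses false-positive detection of a richer (e.g. feedforward) structure.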
Key technique study and application of infrared thermography in hypersonic wind tunnel
NASA Astrophysics Data System (ADS)
LI, Ming; Yang, Yan-guang; Li, Zhi-hui; Zhu, Zhi-wei; Zhou, Jia-sui
2014-11-01
The solutions to some key techniques for applying infrared thermography in a hypersonic wind tunnel are studied, such as temperature measurement at large viewing angles, the correspondence between model spatial coordinates and coordinates in the infrared image, and the measurement uncertainty analysis of the test data. Typical results from hypersonic wind tunnel tests are presented, including a comparison of heat-transfer rates on a thin-skin flat-plate model with a wedge, measured with both infrared thermography and thermocouples; an experimental study of the heating effect on the flat-plate model impinged by plume flow; and the aerodynamic heating on the lift model.
Energy-saving management modelling and optimization for lead-acid battery formation process
NASA Astrophysics Data System (ADS)
Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.
2017-11-01
In this context, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model with the objective of minimizing the formation electricity cost in a single period is established. This optimization model considers several related constraints, together with two influencing factors: the conversion efficiency of the IGBT charge-and-discharge machine and the time-of-use electricity price. An example simulation using a PSO algorithm to solve this mathematical model is shown, and the proposed optimization strategy is shown to be effective and instructive for energy-saving and efficiency optimization in battery production industries.
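The single-period cost minimization described here reduces, in its simplest linear form, to an LP over hourly grid draws. A sketch with made-up tariff, efficiency, and capacity numbers (the paper solves a richer model with PSO; a linear solver suffices for this stripped-down version):

```python
from scipy.optimize import linprog

# Time-of-use tariff over 8 hours (illustrative prices, $/kWh) and a charger
# with 90% conversion efficiency (assumed, standing in for the IGBT machine)
price = [0.30, 0.30, 0.12, 0.12, 0.12, 0.25, 0.30, 0.30]
eff = 0.90
required_kwh = 40.0   # energy the formation step must deliver to the batteries
max_kw = 12.0         # charger power limit per hour

# Decision variables: grid energy drawn in each hour; minimize total cost
# subject to the delivered energy (eff * total draw) meeting the requirement
res = linprog(
    c=price,
    A_eq=[[eff] * len(price)],
    b_eq=[required_kwh],
    bounds=[(0.0, max_kw)] * len(price),
    method="highs",
)
print(res.x, res.fun)
```

The solver pushes as much energy as the power limit allows into the cheapest tariff hours and spills the remainder into the next-cheapest hour, which is the qualitative behavior the efficiency management method exploits.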
Langley's CSI evolutionary model: Phase O
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Elliott, Kenny B.; Horta, Lucas G.; Bailey, Jim P.; Bruner, Anne M.; Sulla, Jeffrey L.; Won, John; Ugoletti, Roberto M.
1991-01-01
A testbed for the development of Controls Structures Interaction (CSI) technology to improve space science platform pointing is described. The evolutionary nature of the testbed will permit the study of global line-of-sight pointing in phases 0 and 1, whereas, multipayload pointing systems will be studied beginning with phase 2. The design, capabilities, and typical dynamic behavior of the phase 0 version of the CSI evolutionary model (CEM) is documented for investigator both internal and external to NASA. The model description includes line-of-sight pointing measurement, testbed structure, actuators, sensors, and real time computers, as well as finite element and state space models of major components.
NASA Astrophysics Data System (ADS)
Benettin, G.; Pasquali, S.; Ponno, A.
2018-05-01
FPU models, in dimension one, are perturbations either of the linear model or of the Toda model; perturbations of the linear model include the usual β-model, perturbations of Toda include the usual α+β model. In this paper we explore and compare two families, or hierarchies, of FPU models, closer and closer to either the linear or the Toda model, by computing numerically, for each model, the maximal Lyapunov exponent χ. More precisely, we consider statistically typical trajectories and study the asymptotics of χ for large N (the number of particles) and small ε (the specific energy E/N), and find, for all models, asymptotic power laws χ ≃ Cε^a, with C and a depending on the model. The asymptotics turns out to be, in general, rather slow, and producing accurate results requires a great computational effort. We also revisit and extend the analytic computation of χ introduced by Casetti, Livi and Pettini, originally formulated for the β-model. With great evidence the theory extends successfully to all models of the linear hierarchy, but not to models close to Toda.
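Extracting the exponent a from such data is a log-log regression. A sketch on synthetic points (a = 2 and C = 0.5 are made-up ground-truth values, not results from the paper):

```python
import numpy as np

# Recover the exponent a in chi ~ C * eps**a from noisy synthetic measurements
rng = np.random.default_rng(1)
eps = np.logspace(-4, -1, 12)
chi = 0.5 * eps**2.0 * np.exp(rng.normal(scale=0.05, size=eps.size))

# Linear regression in log-log coordinates: log chi = log C + a * log eps
a, logC = np.polyfit(np.log(eps), np.log(chi), 1)
print(a, np.exp(logC))
```

In practice the slow asymptotics mentioned in the abstract means the fitted exponent stabilizes only once ε is small enough for the power-law regime to dominate, which is where the computational cost comes from.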
ERIC Educational Resources Information Center
Lee, HwaYoung; Beretvas, S. Natasha
2014-01-01
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
ERIC Educational Resources Information Center
Brückner, Sebastian; Pellegrino, James W.
2016-01-01
The Standards for Educational and Psychological Testing indicate that validation of assessments should include analyses of participants' response processes. However, such analyses typically are conducted only to supplement quantitative field studies with qualitative data, and seldom are such data connected to quantitative data on student or item…
ERIC Educational Resources Information Center
Adamo, Elyse K.; Wu, Jenny; Wolery, Mark; Hemmeter, Mary Louise; Ledford, Jennifer R.; Barton, Erin E.
2015-01-01
Children with Down syndrome may be at increased risk of problems associated with inactivity. Early intervention to increase physical activity may lead to increased participation in typical activities and long-term increases in quality of life (e.g., decreased likelihood of obesity-related illness). A multi-component intervention, including video…
ERIC Educational Resources Information Center
Koutsouris, George; Norwich, Brahm; Fujita, Taro; Ralph, Thomas; Adlam, Anna; Milton, Fraser
2017-01-01
This article presents an evaluation of distance technology used in a novel Lesson Study (LS) approach involving a dispersed LS team for inter-professional purposes. A typical LS model with only school teachers as team members was modified by including university-based lecturers with the school-based teachers, using video-conferencing and online…
Cascade model of gamma-ray bursts: Power-law and annihilation-line components
NASA Technical Reports Server (NTRS)
Harding, A. K.; Sturrock, P. A.; Daugherty, J. K.
1988-01-01
If, in a neutron star magnetosphere, an electron is accelerated to an energy of 10^11 or 10^12 eV by an electric field parallel to the magnetic field, motion of the electron along the curved field line leads to a cascade of gamma rays and electron-positron pairs. This process is believed to occur in radio pulsars and gamma-ray burst sources. Results are presented from numerical simulations of the radiation and photon-annihilation pair-production processes, using a computer code previously developed for the study of radio pulsars. A range of values was considered for the initial energy of the primary electron, the initial injection position, and the magnetic dipole moment of the neutron star. The resulting spectra were found to exhibit complex forms that are typically power law over a substantial range of photon energy, and typically include a dip in the spectrum near the electron gyro-frequency at the injection point. The results of a number of models are compared with data for the 5 March 1979 gamma-ray burst. A good fit was found to the gamma-ray part of the spectrum, including the equivalent width of the annihilation line.
Dust Plume Modeling at Fort Bliss: Move-Out Operations, Combat Training and Wind Erosion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Elaine G.; Rishel, Jeremy P.; Rutz, Frederick C.
2006-09-29
The potential for air-quality impacts from heavy mechanized vehicles operating in the training ranges and on the unpaved main supply routes at Fort Bliss was investigated. This report details efforts by the staff of Pacific Northwest National Laboratory for the Fort Bliss Directorate of Environment in this investigation. Dust emission and dispersion from typical activities occurring on the installation, including move-outs and combat training, were simulated using the atmospheric modeling system DUSTRAN. Major assumptions associated with designing specific modeling scenarios are summarized, and results from the simulations are presented.
The Origin and Evolution of the Behavior Analysis Program at the University of Nevada, Reno.
Hayes, Linda J; Houmanfar, Ramona A; Ghezzi, Patrick M; Williams, W Larry; Locey, Matthew; Hayes, Steven C
2016-05-01
The origins of the Behavior Analysis program at the University of Nevada, Reno, from its beginnings as a self-capitalized model through its transition to a more typical graduate program, are described. Details of the original proposal to establish the program and of the funding model are given. Some of the unusual features of a program executed in this way are discussed, along with problems engendered by the model, as is the diversification of faculty interests over time. The current status of the program, after 25 years of operation, is presented.
Dissipative quantum hydrodynamics model of x-ray Thomson scattering in dense plasmas
NASA Astrophysics Data System (ADS)
Diaw, Abdourahmane; Murillo, Michael
2017-10-01
X-ray Thomson scattering (XRTS) provides detailed diagnostic information about dense plasma experiments. The inferences made rely on an accurate model for the form factor, which is typically expressed in terms of a well-known response function. Here, we develop an alternate approach based on quantum hydrodynamics using a viscous form of dynamical density functional theory. This approach is shown to include the equation of state self-consistently, including sum rules, as well as irreversibility arising from collisions. This framework is used to generate a model for the scattering spectrum, and it offers an avenue for measuring hydrodynamic properties, such as transport coefficients, using XRTS. This work was supported by the Air Force Office of Scientific Research (Grant No. FA9550-12-1-0344).
Infant fMRI: A Model System for Cognitive Neuroscience.
Ellis, Cameron T; Turk-Browne, Nicholas B
2018-05-01
Our understanding of the typical human brain has benefitted greatly from studying different kinds of brains and their associated behavioral repertoires, including animal models and neuropsychological patients. This same comparative perspective can be applied to early development - the environment, behavior, and brains of infants provide a model system for understanding how the mature brain works. This approach requires noninvasive methods for measuring brain function in awake, behaving infants. fMRI is becoming increasingly viable for this purpose, with the unique ability to precisely measure the entire brain, including both cortical and subcortical structures. Here we discuss potential lessons from infant fMRI for several domains of adult cognition and consider the challenges of conducting such research and how they might be mitigated. Copyright © 2018 Elsevier Ltd. All rights reserved.
Probabilistic analysis for fatigue strength degradation of materials
NASA Technical Reports Server (NTRS)
Royce, Lola
1989-01-01
This report presents the results of the first year of a research program conducted for NASA-LeRC by the University of Texas at San Antonio. The research included development of methodology that provides a probabilistic treatment of lifetime prediction of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Linear elastic fracture mechanics is utilized in the latter model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs, RANDOM2, RANDOM3, and RANDOM4. These programs determine the random lifetime of an engine component, in mechanical load cycles, to reach a critical fatigue strength or crack size. The material considered was a cast nickel base superalloy, one typical of those used in the Space Shuttle Main Engine.
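The fatigue crack growth model mentioned above builds on linear elastic fracture mechanics; a standard deterministic kernel of such models is the Paris law, da/dN = C(ΔK)^m with ΔK = Δσ√(πa). The sketch below integrates it numerically to obtain cycles-to-critical-crack-size. It is an illustration only, not the RANDOM2/3/4 codes, and all constants are hypothetical:

```python
import numpy as np

def cycles_to_critical(a0, ac, dsigma, paris_c, paris_m, n_steps=10001):
    """Cycles for a crack to grow from size a0 to ac under the Paris law
    da/dN = paris_c * dK**paris_m, with dK = dsigma * sqrt(pi * a).
    Integrates dN = da / (paris_c * dK**m) by the trapezoidal rule."""
    a = np.linspace(a0, ac, n_steps)
    dk = dsigma * np.sqrt(np.pi * a)
    dn_da = 1.0 / (paris_c * dk**paris_m)   # cycles per unit crack growth
    h = a[1] - a[0]
    return float(((dn_da[:-1] + dn_da[1:]) / 2.0).sum() * h)

# Hypothetical numbers: a 1 mm flaw growing to 10 mm under a 100 MPa
# stress range (units chosen so paris_c is in m/cycle per (MPa*sqrt(m))^m)
N = cycles_to_critical(a0=1e-3, ac=1e-2, dsigma=100.0,
                       paris_c=1e-11, paris_m=3.0)   # ~7.8e5 cycles
```

In a probabilistic treatment such as the one described, the inputs (initial crack size, Paris constants, stress range) would be sampled from distributions and the cycle count collected over many simulations.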
NASA Astrophysics Data System (ADS)
Topping, David; Alibay, Irfan; Bane, Michael
2017-04-01
To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and subsequent mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henry's law coefficients; activity coefficients; diffusion coefficients and reaction rates. Current gas phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this can often be used as justification for neglecting computationally expensive process descriptions. Indeed, it remains an open question whether we can quantify the true sensitivity to uncertainties in molecular properties: even at the single-aerosol-particle level, it has been impossible to embed fully coupled representations of process-level knowledge for all possible compounds, and models typically rely on heavily parameterised descriptions. Relying on emerging numerical frameworks, and designed for the changing landscape of high-performance computing (HPC), in this study we focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method. Activity coefficients are often neglected under the largely untested hypothesis that they are simply too computationally expensive to include in dynamic frameworks. We present results demonstrating increased computational efficiency for a range of typical scenarios, including a profiling of the energy use resulting from reliance on such computations. As the landscape of HPC changes, the latter aspect is important to consider in future applications.
Models of unit operations used for solid-waste processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, G.M.; Glaub, J.C.; Diaz, L.F.
1984-09-01
This report documents the unit-operations models that have been developed for typical refuse-derived-fuel (RDF) processing systems. These models, which represent the mass balances, energy requirements, and economics of the unit operations, are derived, where possible, from basic principles. Empiricism has been invoked where a governing theory has yet to be developed. Field test data and manufacturers' information, where available, supplement the analytical development of the models. A literature review has also been included for the purpose of compiling and discussing in one document the available information pertaining to the modeling of front-end unit operations. Separate analyses have been done for each task.
Confronting GRB prompt emission with a model for subphotospheric dissipation
Ahlgren, Björn; Larsson, Josefin; Nymark, Tanja; ...
2015-09-16
The origin of the prompt emission in gamma-ray bursts (GRBs) is still an unsolved problem, and several different mechanisms have been suggested. We fit Fermi GRB data with a photospheric emission model which includes dissipation of the jet kinetic energy below the photosphere. The resulting spectra are dominated by Comptonization and contain no significant contribution from synchrotron radiation. In order to fit the data, we span a physically motivated part of the model's parameter space and create DREAM (Dissipation with Radiative Emission as A table Model), a table model for XSPEC. Here, we show that this model can describe different kinds of GRB spectra, including GRB 090618, representing a typical Band-function spectrum, and GRB 100724B, illustrating a double-peaked spectrum previously fitted with a Band+blackbody model, suggesting that they originate from a similar scenario. We also suggest that the main difference between these two types of bursts is the optical depth at the dissipation site.
Testable solution of the cosmological constant and coincidence problems
NASA Astrophysics Data System (ADS)
Shaw, Douglas J.; Barrow, John D.
2011-02-01
We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified to a field that can take many possible values. The observed value of Λ ≈ (9.3 Gyr)^-2 [≈ 10^-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056 (ζ_b/0.5), where ζ_b ~ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^-1/2 and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters, does not require the introduction of new dynamical scalar fields or modifications to general relativity, and can be tested by astronomical observations in the near future.
Development of a Common Research Model for Applied CFD Validation Studies
NASA Technical Reports Server (NTRS)
Vassberg, John C.; Dehaan, Mark A.; Rivers, S. Melissa; Wahls, Richard A.
2008-01-01
The development of a wing/body/nacelle/pylon/horizontal-tail configuration for a common research model is presented, with focus on the aerodynamic design of the wing. Here, a contemporary transonic supercritical wing design is developed with aerodynamic characteristics that are well behaved and of high performance for configurations with and without the nacelle/pylon group. The horizontal tail is robustly designed for dive Mach number conditions and is suitably sized for typical stability and control requirements. The fuselage is representative of a wide-body commercial transport aircraft; it includes a wing-body fairing, as well as a scrubbing seal for the horizontal tail. The nacelle is a single-cowl, high-bypass-ratio, flow-through design with an exit area sized to achieve a natural unforced mass-flow ratio typical of commercial aircraft engines at cruise. The simplicity of this un-bifurcated nacelle geometry will facilitate grid generation efforts in subsequent CFD validation exercises. Detailed aerodynamic performance data have been generated for this model; however, this information is presented in such a manner as not to bias CFD predictions planned for the fourth AIAA CFD Drag Prediction Workshop, which incorporates this common research model into its blind test cases. The CFD results presented include wing pressure distributions with and without the nacelle/pylon, ML/D trend lines, and drag-divergence curves; the design point for the wing/body configuration is within 1% of its maximum ML/D. Plans to test the common research model in the National Transonic Facility and the Ames 11-ft wind tunnels are also discussed.
NASA Astrophysics Data System (ADS)
Zhu, Jie; Sun, Ge; Li, Wenhong; Zhang, Yu; Miao, Guofang; Noormets, Asko; McNulty, Steve G.; King, John S.; Kumar, Mukesh; Wang, Xuan
2017-12-01
The southeastern United States hosts extensive forested wetlands, providing ecosystem services including carbon sequestration, water quality improvement, groundwater recharge, and wildlife habitat. However, these wetland ecosystems are dependent on local climate and hydrology, and are therefore at risk due to climate and land use change. This study develops site-specific empirical hydrologic models for five forested wetlands with different characteristics by analyzing long-term observed meteorological and hydrological data. These wetlands represent typical cypress ponds/swamps, Carolina bays, pine flatwoods, drained pocosins, and natural bottomland hardwood ecosystems. The validated empirical models are then applied at each wetland to predict future water table changes using climate projections from 20 general circulation models (GCMs) participating in Coupled Model Inter-comparison Project 5 (CMIP5) under the Representative Concentration Pathways (RCPs) 4.5 and 8.5 scenarios. We show that combined future changes in precipitation and potential evapotranspiration would significantly alter wetland hydrology including groundwater dynamics by the end of the 21st century. Compared to the historical period, all five wetlands are predicted to become drier over time. The mean water table depth is predicted to drop by 4 to 22 cm in response to the decrease in water availability (i.e., precipitation minus potential evapotranspiration) by the year 2100. Among the five examined wetlands, the depressional wetland in hot and humid Florida appears to be most vulnerable to future climate change. This study provides quantitative information on the potential magnitude of wetland hydrological response to future climate change in typical forested wetlands in the southeastern US.
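As a rough illustration of the kind of site-specific empirical hydrologic model described, one can regress observed water table depth on water availability (precipitation minus PET) and then project the shift under a drier climate. Everything below is synthetic and hypothetical, meant only to show the shape of the calculation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series standing in for long-term observations:
# water availability (precipitation minus PET, mm) and water table
# depth (cm below surface, so larger = drier)
p_minus_pet = rng.normal(10.0, 40.0, size=240)
wtd = 50.0 - 0.2 * p_minus_pet + rng.normal(0.0, 2.0, size=240)

# Site-specific empirical model: wtd = b0 + b1 * (P - PET)
b1, b0 = np.polyfit(p_minus_pet, wtd, 1)

# Projected change in mean water table depth if mean water availability
# drops by 50 mm under a drier future climate
delta_wtd = b1 * (-50.0)   # positive -> water table drops (gets deeper)
```

The study's actual models are site-specific and driven by GCM projections; this sketch only shows how a fitted sensitivity coefficient translates a climate-driven change in water availability into a water table change.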
Liu, Yun; Li, Hong; Sun, Sida; Fang, Sheng
2017-09-01
An enhanced air dispersion modelling scheme is proposed to cope with the building layout and complex terrain of a typical Chinese nuclear power plant (NPP) site. In this modelling, the California Meteorological Model (CALMET) and the Stationary Wind Fit and Turbulence (SWIFT) are coupled with the Risø Mesoscale PUFF model (RIMPUFF) for refined wind field calculation. The near-field diffusion coefficient correction scheme of the Atmospheric Relative Concentrations in the Building Wakes Computer Code (ARCON96) is adopted to characterize dispersion in building arrays. The proposed method is evaluated by a wind tunnel experiment that replicates the typical Chinese NPP site. For both wind speed/direction and air concentration, the enhanced modelling predictions agree well with the observations. The fraction of the predictions within a factor of 2 and 5 of observations exceeds 55% and 82% respectively in the building area and the complex terrain area. This demonstrates the feasibility of the new enhanced modelling for typical Chinese NPP sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
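The factor-of-2 and factor-of-5 agreement statistics quoted above (FAC2/FAC5) are standard dispersion-model evaluation metrics; a minimal sketch of how they are computed (the toy arrays are hypothetical):

```python
import numpy as np

def fac_n(pred, obs, n):
    """Fraction of predictions within a factor of n of the observations
    (the FAC2/FAC5 metrics common in dispersion-model evaluation).
    Assumes strictly positive concentrations."""
    pred = np.asarray(pred, float)
    obs = np.asarray(obs, float)
    ratio = pred / obs
    return float(np.mean((ratio >= 1.0 / n) & (ratio <= n)))

# Toy predicted vs. observed concentrations (arbitrary units)
obs = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.5, 5.0, 4.2, 50.0])

fac2 = fac_n(pred, obs, 2)   # 0.5: two of four within a factor of 2
fac5 = fac_n(pred, obs, 5)   # 0.75: three of four within a factor of 5
```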
Novel Approach to Simulate Sleep Apnea Patients for Evaluating Positive Pressure Therapy Devices.
Isetta, Valentina; Montserrat, Josep M; Santano, Raquel; Wimms, Alison J; Ramanan, Dinesh; Woehrle, Holger; Navajas, Daniel; Farré, Ramon
2016-01-01
Bench testing is a useful method to characterize the response of different automatic positive airway pressure (APAP) devices under well-controlled conditions. However, previous models did not consider the diversity of obstructive sleep apnea (OSA) patients' characteristics and phenotypes. The objective of this proof-of-concept study was to design a new bench test for realistically simulating an OSA patient's night, and to implement a one-night example of a typical female phenotype for comparing responses to several currently-available APAP devices. We developed a novel approach aimed at replicating a typical night of sleep which includes different disturbed breathing events, disease severities, sleep/wake phases, body postures and respiratory artefacts. The simulated female OSA patient example that we implemented included periods of wake, light sleep and deep sleep with positional changes and was connected to ten different APAP devices. Flow and pressure readings were recorded; each device was tested twice. The new approach for simulating female OSA patients effectively combined a wide variety of disturbed breathing patterns to mimic the response of a predefined patient type. There were marked differences in response between devices; only three were able to overcome flow limitation to normalize breathing, and only five devices were associated with a residual apnea-hypopnea index of <5/h. In conclusion, bench tests can be designed to simulate specific patient characteristics, and typical stages of sleep, body position, and wake. Each APAP device behaved differently when exposed to this controlled model of a female OSA patient, and should lead to further understanding of OSA treatment.
Schipper, Aafke M; Posthuma, Leo; de Zwart, Dick; Huijbregts, Mark A J
2014-12-16
Quantitative relationships between species richness and single environmental factors, also called species sensitivity distributions (SSDs), are helpful to understand and predict biodiversity patterns, identify environmental management options and set environmental quality standards. However, species richness is typically dependent on a variety of environmental factors, implying that it is not straightforward to quantify SSDs from field monitoring data. Here, we present a novel and flexible approach to solve this, based on the method of stacked species distribution modeling. First, a species distribution model (SDM) is established for each species, describing its probability of occurrence in relation to multiple environmental factors. Next, the predictions of the SDMs are stacked along the gradient of each environmental factor with the remaining environmental factors at fixed levels. By varying those fixed levels, our approach can be used to investigate how field-based SSDs for a given environmental factor change in relation to changing confounding influences, including for example optimal, typical, or extreme environmental conditions. This provides an asset in the evaluation of potential management measures to reach good ecological status.
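The stacking procedure can be sketched in a few lines: given per-species SDMs (here, toy logistic models with hypothetical coefficients rather than models fitted to monitoring data), predictions are summed along the gradient of one factor while confounding factors are held at chosen fixed levels:

```python
import numpy as np

def sdm(x1, x2, b0, b1, b2):
    """A fitted single-species SDM: occurrence probability as a
    logistic function of two environmental factors."""
    z = b0 + b1 * x1 + b2 * x2
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients for three species, as if estimated from
# field monitoring data; each tuple is (b0, b1, b2)
species = [(0.5, -2.0, -1.0), (1.0, -1.0, -0.5), (0.0, -3.0, -2.0)]

def expected_richness(x1, x2_fixed):
    """Stack the per-species predictions along the x1 gradient,
    holding the confounding factor x2 at a fixed level."""
    return sum(sdm(x1, x2_fixed, *b) for b in species)

x1 = np.linspace(0.0, 3.0, 50)                      # stressor gradient
rich_typical = expected_richness(x1, x2_fixed=0.0)  # typical conditions
rich_extreme = expected_richness(x1, x2_fixed=1.0)  # degraded conditions
```

Varying `x2_fixed` reproduces the paper's idea of examining how the field-based SSD for one factor changes under optimal, typical, or extreme levels of the confounders.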
Ball Aerospace Advances in 35 K Cooling-The SB235E Cryocooler
NASA Astrophysics Data System (ADS)
Lock, J. S.; Glaister, D. S.; Gully, W.; Hendershott, P.; Marquardt, E.
2008-03-01
This paper describes the design, development, testing and performance of the Ball Aerospace & Technologies Corp. SB235E, a two-stage long-life space cryocooler optimized for two cooling loads. The SB235E model is designed to provide simultaneous cooling at 35 K (typically for HgCdTe detectors) and 85 K (typically for optics). The SB235E is a higher-capacity derivative of the SB235. Initial testing of the SB235E has shown performance of 2.13 W at 35 K and 8.14 W at 85 K for 200 W of input power at a 289 K rejection temperature. These data equate to a Carnot efficiency of 0.175, nearly twice that of other published space cryocooler data. Qualification testing has been completed, including full performance mapping and vibration export. Performance maps, with the cold-stage temperature varying from 20 K to 80 K and the mid-stage temperature varying from 85 K to 175 K, are presented. Two engineering models of the SB235E are currently in build.
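The quoted 0.175 figure is consistent with a fraction-of-Carnot calculation for the two stages combined: the ideal (Carnot) input power needed to lift each load, summed, divided by the actual input power. A quick check, assuming this is the metric intended:

```python
def fraction_of_carnot(loads, power_in, t_reject):
    """Fraction-of-Carnot efficiency for a multi-stage cryocooler.

    loads: iterable of (Q_watts, T_cold_kelvin) pairs; each load's ideal
    input power is Q * (T_reject - T_cold) / T_cold, per the Carnot COP.
    """
    ideal = sum(q * (t_reject - t) / t for q, t in loads)
    return ideal / power_in

# Figures reported for the SB235E: 2.13 W at 35 K plus 8.14 W at 85 K,
# for 200 W input at a 289 K rejection temperature
eff = fraction_of_carnot([(2.13, 35.0), (8.14, 85.0)], 200.0, 289.0)
# eff ~ 0.175, matching the quoted value
```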
NASA Astrophysics Data System (ADS)
McIntyre, N.; Keir, G.
2014-12-01
Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...
2017-08-29
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
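Comparing a model's saliency map against eye-tracking data is often done with simple map-level metrics; one common choice is the Pearson correlation coefficient ('CC'). A minimal sketch (this is a generic metric, not the specific evaluation protocol of the paper):

```python
import numpy as np

def saliency_cc(saliency_map, fixation_map):
    """Pearson correlation ('CC' metric) between a model saliency map
    and an empirical fixation-density map from eye tracking."""
    s = np.asarray(saliency_map, float).ravel()
    f = np.asarray(fixation_map, float).ravel()
    s = (s - s.mean()) / s.std()   # z-score both maps, then the mean
    f = (f - f.mean()) / f.std()   # of the product is the correlation
    return float(np.mean(s * f))

# Toy 2x2 maps: saliency mass exactly where viewers fixated, and inverted
fix = np.array([[0.0, 1.0],
                [0.0, 0.0]])
good = saliency_cc(fix, fix)        # perfect agreement -> 1.0
bad = saliency_cc(1.0 - fix, fix)   # perfect disagreement -> -1.0
```

Other common saliency metrics (AUC, NSS, KL divergence) follow the same pattern of scoring a predicted map against fixation data.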
Typical and atypical metastatic sites of recurrent endometrial carcinoma
Krajewski, Katherine M.; Jagannathan, Jyothi; Giardino, Angela; Berlin, Suzanne; Ramaiya, Nikhil
2013-01-01
The purpose of this article is to illustrate the imaging findings of typical and atypical metastatic sites of recurrent endometrial carcinoma. Typical sites include local pelvic recurrence, pelvic and para-aortic nodes, peritoneum, and lungs. Atypical sites include extra-abdominal lymph nodes, liver, adrenals, brain, bones and soft tissue. It is important for radiologists to recognize the typical and atypical sites of metastases in patients with recurrent endometrial carcinoma to facilitate earlier diagnosis and treatment. PMID:23545091
Ghosal, Ratna; Sorensen, Peter W
2016-06-01
Male-typical reproductive behaviors vary greatly between different species of fishes, with androgens playing a variety of roles that appear especially important in the gonochorist cypriniform fishes. The goldfish is an important model for the cypriniformes, and while it is clear that male goldfish are fully feminized by prostaglandin F2α (PGF2α), it is not clear whether females will exhibit normal levels of male-typical reproductive behaviors, as well as olfactory function, when treated with androgens. To answer this question, we exposed sexually regressed adult female goldfish to several types of androgen and monitored their tendencies to court (inspect females) and mate (spawn, or attempt to release gametes) while monitoring their olfactory sensitivity until changes in these attributes were maximized. Untreated adult males (intact) were included to determine the extent of masculinization. Treatments included the natural androgens 11-ketotestosterone and testosterone (KT and T), administered via capsules (KT+T-implanted fish); the artificial androgen methyltestosterone (MT), administered via capsules (MT-C); and MT administered in the fishes' water (MT-B). Male-typical olfactory sensitivity to a pheromone (15-keto-PGF2α) increased in all androgen-treated groups and by week 6 was fully equivalent to that of males. Male-typical courtship behavior increased in all androgen-treated groups, although slowly, and only MT-B females came to exhibit levels equivalent to those of males after 18 weeks. In contrast, male-typical mating activity increased only slightly, with MT-B females reaching levels one-third those of males after 30 weeks. We conclude that while androgens fully masculinize olfactory sensitivity and courtship behavior in goldfish, mating behavior is controlled by a different neuroendocrine mechanism(s) that has yet to be fully elucidated. Copyright © 2016 Elsevier Inc. All rights reserved.
Stellar Collisions and Blue Straggler Stars in Dense Globular Clusters
NASA Astrophysics Data System (ADS)
Chatterjee, Sourav; Rasio, Frederic A.; Sills, Alison; Glebbeek, Evert
2013-11-01
Blue straggler stars (BSSs) are abundantly observed in all Galactic globular clusters (GGCs) where data exist. However, observations alone cannot reveal the relative importance of various formation channels or the typical formation times for this well-studied population of anomalous stars. Using a state-of-the-art Hénon-type Monte Carlo code that includes all relevant physical processes, we create 128 models with properties typical of the observed GGCs. These models include realistic numbers of single and binary stars, use observationally motivated initial conditions, and span large ranges in central density, concentration, binary fraction, and mass. Their properties can be directly compared with those of observed GGCs. We can easily identify the BSSs in our models and determine their formation channels and birth times. We find that for central densities above ~10^3 M_⊙ pc^-3, the dominant formation channel is stellar collisions, while for lower-density clusters, mass transfer in binaries provides a significant contribution (up to 60% in our models). The majority of these collisions are binary-mediated, occurring during three-body and four-body interactions. As a result, a strong correlation between the specific frequency of BSSs and the binary fraction in a cluster can be seen in our models. We find that the number of BSSs in the core shows only a weak correlation with the collision rate estimator Γ traditionally used by observers, in agreement with the latest Hubble Space Telescope Advanced Camera for Surveys data. Using an idealized "full mixing" prescription for collision products, our models indicate that the BSSs observed today may have formed several Gyr ago. However, denser clusters tend to have younger (~1 Gyr) BSSs.
Arruda, Andréia Gonçalves; Friendship, Robert; Carpenter, Jane; Greer, Amy; Poljak, Zvonimir
2016-01-01
The objective of this study was to develop a discrete-event agent-based stochastic model to explore the likelihood of the occurrence of porcine reproductive and respiratory syndrome (PRRS) outbreaks in swine herds with different PRRS control measures in place. The control measures evaluated included vaccination with a modified-live attenuated vaccine and live-virus inoculation of gilts, and both were compared to a baseline scenario with no control measures in place. A typical North American 1,000-sow farrow-to-wean swine herd was used as a model, with production and disease parameters estimated from the literature and expert opinion. The model constructed herein was able to capture not only individual-animal heterogeneity in immunity to and shedding of the PRRS virus, but also the dynamic animal flow and contact structure typical of such herds under field conditions. The model outcomes included the maximum number of females infected per simulation, the time at which that maximum occurred, and the incidence of infected weaned piglets during the first year after challenge-virus introduction. Results showed that the baseline scenario produced a larger percentage of simulations resulting in outbreaks compared to the control scenarios, and interestingly some of the outbreaks occurred long after virus introduction. The live-virus inoculation scenario showed promising results, with fewer simulations resulting in outbreaks than the other scenarios, but the negative impacts of maintaining a PRRS-positive population should be considered. Finally, under the assumptions of the current model, neither of the control strategies prevented the infection from spreading to the piglet population, which highlights the importance of maintaining internal biosecurity practices at the farrowing-room level.
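Agent-based herd models of this kind reduce, in caricature, to stochastic transmission chains. The sketch below uses a simple Reed-Frost chain-binomial model (not the authors' model; herd size, transmission probabilities, and the outbreak threshold are all hypothetical) to show how a control measure that lowers transmission shifts the probability of an outbreak:

```python
import random

def outbreak_probability(n_animals, p_transmit, threshold, n_runs=500, seed=1):
    """Reed-Frost chain-binomial sketch: one infectious animal is
    introduced; each generation, every susceptible escapes infection
    with probability (1 - p_transmit)**(number infectious).  Returns
    the fraction of runs whose total infections reach `threshold`."""
    rng = random.Random(seed)
    outbreaks = 0
    for _ in range(n_runs):
        s, i, total = n_animals - 1, 1, 1
        while i > 0 and s > 0:
            p_escape = (1.0 - p_transmit) ** i
            new_i = sum(1 for _ in range(s) if rng.random() > p_escape)
            s -= new_i
            total += new_i
            i = new_i
        if total >= threshold:
            outbreaks += 1
    return outbreaks / n_runs

# Hypothetical 200-animal herd: baseline vs. a control measure that
# halves the per-contact transmission probability
p_base = outbreak_probability(200, 0.010, threshold=20)
p_ctrl = outbreak_probability(200, 0.005, threshold=20)
```

The full agent-based model additionally tracks parity, animal flow, and waning immunity, which is what lets it capture the delayed outbreaks reported above.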
Wang, Yawei; Wang, Lizhen; Du, Chengfei; Mo, Zhongjun; Fan, Yubo
2016-06-01
In contrast to numerous studies on the static or quasi-static stiffness of cervical spine segments, very few investigations of their dynamic stiffness have been published. Currently, scale factors and estimated coefficients are typically used in multi-body models to include viscoelastic properties and damping effects, while viscoelastic properties of some tissues are unavailable for establishing finite element models. Because the dynamic stiffness of cervical spine segments in these models is difficult to validate owing to the lack of experimental data, we tried to gain insight into current modeling methods by studying dynamic stiffness differences between these models. A finite element model and a multi-body model of the C6-C7 segment were developed using available material data and typical modeling technologies. These two models were validated against quasi-static response data of the C6-C7 cervical spine segment. Dynamic stiffness differences were investigated by controlling motions of the C6 vertebra at different rates and then comparing the reaction forces or moments. Validation results showed that both the finite element model and the multi-body model could generate reasonable responses under quasi-static loads, but the finite element segment model exhibited more nonlinear characteristics. Dynamic response investigations indicated that the dynamic stiffness of this finite element model might be underestimated because of the absence of the dynamic stiffening effect and damping effects of the annulus fibrosus, while the representation of these effects also needs to be improved in the current multi-body model. Copyright © 2015 John Wiley & Sons, Ltd.
Animal models of the non-motor features of Parkinson’s disease
McDowell, Kimberly; Chesselet, Marie-Françoise
2012-01-01
The non-motor symptoms (NMS) of Parkinson’s disease (PD) occur in roughly 90% of patients, have a profound negative impact on their quality of life, and often go undiagnosed. NMS typically involve many functional systems, and include sleep disturbances, neuropsychiatric and cognitive deficits, and autonomic and sensory dysfunction. The development and use of animal models have provided valuable insight into the classical motor symptoms of PD over the past few decades. Toxin-induced models provide a suitable approach to study aspects of the disease that derive from the loss of nigrostriatal dopaminergic neurons, a cardinal feature of PD. This also includes some NMS, primarily cognitive dysfunction. However, several NMS poorly respond to dopaminergic treatments, suggesting that they may be due to other pathologies. Recently developed genetic models of PD are providing new ways to model these NMS and identify their mechanisms. This review summarizes the current available literature on the ability of both toxin-induced and genetically-based animal models to reproduce the NMS of PD. PMID:22236386
A Model of Family and Child Functioning in Siblings of Youth with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Tudor, Megan E.; Rankin, James; Lerner, Matthew D.
2018-01-01
The potential clinical needs of typically developing (TD) siblings of youth with autism spectrum disorder (ASD) remain disputed. A total of 239 mothers of youth aged 6-17, including one youth with ASD (M = 11.14 years; simplex families) and at least one other youth (M = 11.74 years) completed online standardized measures of various familial…
Review of Interorganizational Trust Models
2010-09-01
rooted in common values, including a common concept of moral obligation. This type of trust typically takes a long time to develop, and is the type of...but the research results proved relatively meager. Although we found numerous trust models...
Incorporating Non-Linear Sorption into High Fidelity Subsurface Reactive Transport Models
NASA Astrophysics Data System (ADS)
Matott, L. S.; Rabideau, A. J.; Allen-King, R. M.
2014-12-01
A variety of studies, including multiple NRC (National Research Council) reports, have stressed the need for simulation models that can provide realistic predictions of contaminant behavior during the groundwater remediation process, most recently highlighting the specific technical challenges of "back diffusion and desorption in plume models". For a typically sized remediation site, a minimum of about 70 million grid cells is required to achieve the desired cm-level resolution of the low-permeability lenses responsible for driving the back-diffusion phenomenon. Such discretization is nearly three orders of magnitude finer than is typically seen in modeling practice using public domain codes like RT3D (Reactive Transport in Three Dimensions). Consequently, various extensions have been made to the RT3D code to support efficient modeling of recently proposed dual-mode non-linear sorption processes (e.g. Polanyi with linear partitioning) at high-fidelity grid resolutions. These extensions have facilitated the development of exploratory models in which contaminants are introduced into an aquifer via an extended multi-decade "release period" and allowed to migrate under natural conditions for centuries. These realistic simulations of contaminant loading and migration provide a high-fidelity representation of the underlying diffusion and sorption processes that control remediation. Coupling such models with decision support processes is expected to facilitate improved long-term management of complex remediation sites that have proven intractable to conventional remediation strategies.
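A dual-mode isotherm of the "Polanyi with linear partitioning" type mentioned above can be sketched as a linear partitioning term plus a Polanyi-Dubinin adsorption term; the functional details below follow one common parameterization, and every numerical value is invented for illustration, with no site-specific meaning.

```python
import math

def dual_mode_sorbed(c, kd=0.5, q0=100.0, a=-0.02, b=1.5, sw=1100.0):
    """Sketch of a dual-mode sorption isotherm: sorbed concentration =
    linear partitioning (kd * c) plus a Polanyi-Dubinin adsorption term
    in which the effective sorption potential is expressed through
    log10(sw / c), sw being the aqueous solubility (same units as c)."""
    eps = math.log10(sw / c)             # dimensionless sorption potential
    q_ads = q0 * 10.0 ** (a * eps ** b)  # adsorption-domain contribution
    return kd * c + q_ads

q1 = dual_mode_sorbed(1.0)
q2 = dual_mode_sorbed(10.0)
```

The isotherm is non-linear: the sorbed-to-aqueous ratio q/c rises as concentration falls, which is part of why desorption tails during remediation are so persistent.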
NASA Astrophysics Data System (ADS)
Katzav, Joel
2014-05-01
I bring out the limitations of four important views of what the target of useful climate model assessment is. Three of these views are drawn from philosophy. They include the views of Elisabeth Lloyd and Wendy Parker, and an application of Bayesian confirmation theory. The fourth view I criticise is based on the actual practice of climate model assessment. In bringing out the limitations of these four views, I argue that an approach to climate model assessment that neither demands too much of such assessment nor threatens to be unreliable will, in typical cases, have to aim at something other than the confirmation of claims about how the climate system actually is. This means, I suggest, that the Intergovernmental Panel on Climate Change's (IPCC's) focus on establishing confidence in climate model explanations and predictions is misguided. So too, it means that standard epistemologies of science with pretensions to generality, e.g., Bayesian epistemologies, fail to illuminate the assessment of climate models. I go on to outline a view that neither demands too much nor threatens to be unreliable, a view according to which useful climate model assessment typically aims to show that certain climatic scenarios are real possibilities and, when the scenarios are determined to be real possibilities, partially to determine how remote they are.
Dynamic Emulation Modelling (DEMo) of large physically-based environmental models
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.
2012-12-01
In environmental modelling, large, spatially-distributed, physically-based models are widely adopted to describe the dynamics of physical, social and economic processes. Such an accurate process characterization comes, however, at a price: the computational requirements of these models are considerably high and prevent their use in any problem requiring hundreds or thousands of model runs to be satisfactorily solved. Typical examples include optimal planning and management, data assimilation, inverse modelling and sensitivity analysis. An effective approach to overcome this limitation is to perform a top-down reduction of the physically-based model by identifying a simplified, computationally efficient emulator, constructed from and then used in place of the original model in highly resource-demanding tasks. The underlying idea is that not all the process details in the original model are equally important and relevant to the dynamics of the outputs of interest for the type of problem considered. Emulation modelling has been successfully applied in many environmental applications; however, most of the literature considers non-dynamic emulators (e.g. metamodels, response surfaces and surrogate models), where the original dynamical model is reduced to a static map between the input and the output of interest. In this study we focus on Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original physically-based model, with consequent advantages in a wide variety of problem areas. In particular, we propose a new data-driven DEMo approach that combines the many advantages of data-driven modelling in representing complex, non-linear relationships, but preserves the state-space representation typical of process-based models, which is both particularly effective in some applications (e.g. 
optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the credibility of the model to stakeholders and decision-makers. Numerical results from the application of the approach to the reduction of 3D coupled hydrodynamic-ecological models in several real world case studies, including Marina Reservoir (Singapore) and Googong Reservoir (Australia), are illustrated.
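As a minimal sketch of the dynamic-emulation idea (scalar state, linear structure; the "original model" below is just a stand-in for an expensive simulator), one can fit x[t+1] ≈ a·x[t] + b·u[t] to simulated trajectories by least squares, then run the cheap recursion in place of the full model:

```python
import math

def fit_linear_emulator(xs, us):
    """Least-squares fit of the scalar state-space emulator
    x[t+1] ~ a*x[t] + b*u[t], solving the 2x2 normal equations directly."""
    sxx = sxu = suu = sxy = suy = 0.0
    for t in range(len(xs) - 1):
        x, u, y = xs[t], us[t], xs[t + 1]
        sxx += x * x; sxu += x * u; suu += u * u
        sxy += x * y; suy += u * y
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - suy * sxu) / det
    b = (sxx * suy - sxu * sxy) / det
    return a, b

# Stand-in for the expensive physically-based model: x[t+1] = 0.9 x[t] + 0.5 u[t]
us = [math.sin(0.3 * t) for t in range(200)]
xs = [0.0]
for t in range(199):
    xs.append(0.9 * xs[-1] + 0.5 * us[t])

a, b = fit_linear_emulator(xs, us)
```

Because the emulator keeps a state variable, it can be iterated forward in closed loop, which is what makes it usable for data assimilation and optimal management rather than only for static input-output mapping.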
Goldstein, Benjamin A.; Navar, Ann Marie; Carter, Rickey E.
2017-01-01
Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors which operate in the same way on everyone, and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for the development of risk prediction models. Typically presented as black-box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider the problem of predicting mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to the diffuse field of machine learning. PMID:27436868
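For concreteness, the "traditional" regression baseline that such reviews start from can be sketched in a few lines of plain Python; the single "laboratory marker" feature and all numbers below are synthetic, not the study's EHR data.

```python
import math
import random

def train_logistic(data, lr=0.5, epochs=2000):
    """Gradient-descent logistic regression with one feature plus an
    intercept, minimizing the log loss over (x, label) pairs."""
    w = b = 0.0
    n = len(data)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Synthetic cohort: one marker, shifted between survivors (0) and deaths (1)
rng = random.Random(0)
data = [(rng.gauss(-1.0, 0.5), 0) for _ in range(50)] + \
       [(rng.gauss(+1.0, 0.5), 1) for _ in range(50)]
w, b = train_logistic(data)
accuracy = sum(((w * x + b) > 0) == (y == 1) for x, y in data) / len(data)
```

Machine-learning methods enter when many correlated markers, non-linearities, and interactions make this hand-specified functional form inadequate.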
Reliability of four models for clinical gait analysis.
Kainz, Hans; Graham, David; Edwards, Julie; Walsh, Henry P J; Maine, Sheanna; Boyd, Roslyn N; Lloyd, David G; Modenese, Luca; Carty, Christopher P
2017-05-01
Three-dimensional gait analysis (3DGA) has become a common clinical tool for treatment planning in children with cerebral palsy (CP). Many clinical gait laboratories use the conventional gait analysis model (e.g. the Plug-in-Gait model), which uses Direct Kinematics (DK) for joint kinematic calculations, whereas musculoskeletal models, mainly used for research, use Inverse Kinematics (IK). Musculoskeletal IK models have the advantage of enabling additional analyses which might improve clinical decision-making in children with CP. Before any new model can be used in a clinical setting, its reliability has to be evaluated and compared to a commonly used clinical gait model (e.g. the Plug-in-Gait model), which was the purpose of this study. Two testers performed 3DGA in eleven CP and seven typically developing participants on two occasions. Intra- and inter-tester standard deviations (SD) and standard error of measurement (SEM) were used to compare the reliability of two DK models (Plug-in-Gait and a six degrees-of-freedom model solved using Vicon software) and two IK models (two modifications of 'gait2392' solved using OpenSim). All models showed good reliability (mean SEM of 3.0° over all analysed models and joint angles). Variations in joint kinetics were smaller in typically developing than in CP participants. The modified 'gait2392' model, which included all the joint rotations commonly reported in clinical 3DGA, showed reasonably reliable joint kinematic and kinetic estimates, and allows additional musculoskeletal analysis of surgically adjustable parameters, e.g. muscle-tendon lengths, and, therefore, is a suitable model for clinical gait analysis. Copyright © 2017. Published by Elsevier B.V.
Effects of the Canopy and Flux Tube Anchoring on Evaporation Flow of a Solar Flare
NASA Astrophysics Data System (ADS)
Unverferth, John; Longcope, Dana
2018-06-01
Spectroscopic observations of flare ribbons typically show chromospheric evaporation flows, which are subsonic for their high temperatures. This contrasts with many numerical simulations where evaporation is typically supersonic. These simulations typically assume flow along a flux tube with a uniform cross-sectional area. A simple model of the magnetic canopy, however, includes many regions of low magnetic field strength, where flux tubes achieve local maxima in their cross-sectional area. These are analogous to a chamber in a flow tube. We find that one-third of all field lines in a model have some form of chamber through which evaporation flow must pass. Using a one-dimensional isothermal hydrodynamic code, we simulated supersonic flow through an assortment of chambers and found that a subset of solutions exhibit a stationary standing shock within the chamber. These shocked solutions have slower and denser upflows than a flow through a uniform tube would. We use our solution to construct synthetic spectral lines and find that the shocked solutions show higher emission and lower Doppler shifts. When these synthetic lines are combined into an ensemble representing a single canopy cell, the composite line appears slower than expected, even subsonic, due to the outsized contribution from shocked solutions.
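The steady one-dimensional isothermal flow underlying such simulations admits a simple invariant: mass conservation (ρuA = const) combined with the isothermal momentum equation (u du = -c² dρ/ρ) gives F(u; A) = u²/2 - c² ln(uA) = const along the tube. The sketch below is not the authors' code and uses an arbitrary reference state; it solves the invariant for velocity on the smooth supersonic branch by bisection.

```python
import math

def isothermal_velocity(area, c=1.0, u_ref=2.0, a_ref=1.0):
    """Solve u**2/2 - c**2*ln(u*area) = K for u > c (supersonic branch),
    where K is fixed by a reference state (u_ref at area a_ref).
    F is monotone increasing for u > c, so plain bisection suffices."""
    K = u_ref ** 2 / 2 - c ** 2 * math.log(u_ref * a_ref)
    f = lambda u: u ** 2 / 2 - c ** 2 * math.log(u * area) - K
    lo, hi = c, 100.0 * c
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u_uniform = isothermal_velocity(1.0)   # recovers the reference velocity
u_chamber = isothermal_velocity(2.0)   # wider "chamber" cross-section
```

On this smooth branch the supersonic flow speeds up where the tube widens; the standing-shock solutions found in the paper are a different branch of the same area-varying problem, obtained by inserting a shock jump inside the chamber.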
Synaptic damage underlies EEG abnormalities in postanoxic encephalopathy: A computational study.
Ruijter, B J; Hofmeijer, J; Meijer, H G E; van Putten, M J A M
2017-09-01
In postanoxic coma, EEG patterns indicate the severity of encephalopathy and typically evolve in time. We aim to improve the understanding of pathophysiological mechanisms underlying these EEG abnormalities. We used a mean field model comprising excitatory and inhibitory neurons, local synaptic connections, and input from thalamic afferents. Anoxic damage is modeled as aggravated short-term synaptic depression, with gradual recovery over many hours. Additionally, excitatory neurotransmission is potentiated, scaling with the severity of anoxic encephalopathy. Simulations were compared with continuous EEG recordings of 155 comatose patients after cardiac arrest. The simulations agree well with six common categories of EEG rhythms in postanoxic encephalopathy, including typical transitions in time. Plausible results were only obtained if excitatory synapses were more severely affected by short-term synaptic depression than inhibitory synapses. In postanoxic encephalopathy, the evolution of EEG patterns presumably results from gradual improvement of complete synaptic failure, where excitatory synapses are more severely affected than inhibitory synapses. The range of EEG patterns depends on the excitation-inhibition imbalance, probably resulting from long-term potentiation of excitatory neurotransmission. Our study is the first to relate microscopic synaptic dynamics in anoxic brain injury to both typical EEG observations and their evolution in time. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
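A minimal sketch of the short-term synaptic depression mechanism invoked above, written as a standard resource-depletion model in rate form; the parameter values are illustrative, and the paper's full mean-field model contains much more structure.

```python
def depression_trace(rate, tau_rec=0.5, use=0.5, dt=0.001, t_end=2.0):
    """Euler integration of a short-term synaptic depression variable
    x in [0, 1] (fraction of available synaptic resources):
        dx/dt = (1 - x) / tau_rec - use * x * rate
    with presynaptic activity treated as a smooth firing rate (Hz).
    Aggravated depression after anoxia would correspond to, e.g., a
    larger tau_rec (slower recovery); the steady state is
    x* = 1 / (1 + use * rate * tau_rec)."""
    x = 1.0
    trace = [x]
    for _ in range(int(t_end / dt)):
        x += dt * ((1.0 - x) / tau_rec - use * x * rate)
        trace.append(x)
    return trace

x_low = depression_trace(10.0)[-1]    # moderate presynaptic rate
x_high = depression_trace(50.0)[-1]   # stronger drive depletes more
```

The asymmetry the authors needed, excitatory synapses more depressed than inhibitory ones, would correspond here to running the same equation with different parameters for the two synapse classes.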
Engineering evaluation of SSME dynamic data from engine tests and SSV flights
NASA Technical Reports Server (NTRS)
1986-01-01
An engineering evaluation of dynamic data from SSME hot firing tests and SSV flights is summarized. The basic objective of the study is to provide analyses of vibration, strain and dynamic pressure measurements in support of MSFC performance and reliability improvement programs. A brief description of the SSME test program is given and a typical test evaluation cycle reviewed. Data banks generated to characterize SSME component dynamic characteristics are described and statistical analyses performed on these data base measurements are discussed. Analytical models applied to define the dynamic behavior of SSME components (such as turbopump bearing elements and the flight accelerometer safety cut-off system) are also summarized. Appendices are included to illustrate some typical tasks performed under this study.
Kryzak, Lauren A; Jones, Emily A
2017-11-01
The present study taught typically developing (TD) siblings of children with autism spectrum disorders (ASD) social-communicative and self-management skills. The authors hypothesized that the acquisition of self-management skills would support generalization of targeted social-communicative responses. A multiple baseline probe design across sibling dyads was used to decrease exposure to unnecessary sessions in the absence of intervention. Four TD siblings were taught self-management of a social skills curriculum using behavioral skills training, which consisted of instructions, modeling, practice, and subsequent feedback. Results indicated that TD siblings learned to self-manage the social skills curriculum with some generalization across novel settings and over time. Comparisons of social-communicative responses to their typical peers provided some support for the social validity of the intervention outcomes. These results support the use of self-management, when explicitly programming for generalization, which continues to be a key consideration when including TD siblings in interventions with their siblings with ASD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Universal Common Communication Substrate (UCCS) is a low-level communication substrate that exposes high-performance communication primitives while providing network interoperability. It is intended to support multiple upper-layer protocols (ULPs) or programming models including SHMEM, UPC, Titanium, Co-Array Fortran, Global Arrays, MPI, GASNet, and File I/O. It provides various communication operations including one-sided and two-sided point-to-point, collectives, and remote atomic operations. In addition to operations for ULPs, it provides an out-of-band communication channel typically required to wire up communication libraries.
Koslowsky; Solomon; Bleich; Laor
1994-06-01
In one of the few models specific to victims' reactions to traumatic events, it has been proposed that consequences typically include alternating patterns of intrusive and avoidance symptoms. The present exploratory investigation examined the responses of 120 victims who had been evacuated to a hotel after a SCUD missile attack on their home. Analyses using structural equation modeling showed that both psychological states follow stressful stimuli and perceived threat. In addition, results were found to be consistent with a model that posits intrusion as antecedent to anxiety which, in turn, was found to precede a latent outcome measure consisting of psychological, physical, and work functioning.
NASA Technical Reports Server (NTRS)
Free, April M.; Flowers, George T.; Trent, Victor S.
1993-01-01
Auxiliary bearings are a critical feature of any magnetic bearing system. They protect the soft iron core of the magnetic bearing during an overload or failure. An auxiliary bearing typically consists of a rolling element bearing or bushing with a clearance gap between the rotor and the inner race of the support. The dynamics of such systems can be quite complex. It is desired to develop a rotordynamic model and assess the dynamic behavior of a magnetic bearing rotor system which includes the effects of auxiliary bearings. Of particular interest are the effects of introducing sideloading into such a system during failure of the magnetic bearing. A model is developed from an experimental test facility and a number of simulation studies are performed. These results are presented and discussed.
The role of stand history in assessing forest impacts
Dale, V.H.; Doyle, T.W.
1987-01-01
Air pollution, harvesting practices, and natural disturbances can affect the growth of trees and forest development. To make predictions about anthropogenic impacts on forests, we need to understand how these factors affect tree growth. In this study the effect of disturbance history on tree growth and stand structure was examined by using a computer model of forest development. The model was run under the climatic conditions of east Tennessee, USA, and the results compared to stand structure and tree growth data from a yellow poplar-white oak forest. Basal area growth and forest biomass were more accurately projected when rough approximations of the thinning and fire history typical of the measured plots were included in the simulation model. Stand history can influence tree growth rates and forest structure and should be included in any attempt to assess forest impacts.
Hull, Laura; Mandy, William; Petrides, K V
2017-08-01
Studies assessing sex/gender differences in autism spectrum conditions often fail to include typically developing control groups. It is, therefore, unclear whether observed sex/gender differences reflect those found in the general population or are particular to autism spectrum conditions. A systematic search identified articles comparing behavioural and cognitive characteristics in males and females with and without an autism spectrum condition diagnosis. A total of 13 studies were included in meta-analyses of sex/gender differences in core autism spectrum condition symptoms (social/communication impairments and restricted/repetitive behaviours and interests) and intelligence quotient. A total of 20 studies were included in a qualitative review of sex/gender differences in additional autism spectrum condition symptoms. For core traits and intelligence quotient, sex/gender differences were comparable in autism spectrum conditions and typical samples. Some additional autism spectrum condition symptoms displayed different patterns of sex/gender differences in autism spectrum conditions and typically developing groups, including measures of executive function, empathising and systemising traits, internalising and externalising problems and play behaviours. Individuals with autism spectrum conditions display typical sex/gender differences in core autism spectrum condition traits, suggesting that diagnostic criteria based on these symptoms should take into account typical sex/gender differences. However, awareness of associated autism spectrum condition symptoms should include the possibility of different male and female phenotypes, to ensure those who do not fit the 'typical' autism spectrum condition presentation are not missed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, H.H.M.; Chen, C.H.S.
1990-04-16
An assessment of the seismic hazard that exists along the major crude oil pipeline running through the New Madrid seismic zone from southeastern Louisiana to Patoka, Illinois is presented in this report. An 1811-1812 type New Madrid earthquake with moment magnitude 8.2 is assumed to occur at three locations where large historical earthquakes have occurred. Six pipeline crossings of the major rivers in West Tennessee are chosen as the sites for hazard evaluation because of the liquefaction potential at these sites. A seismologically-based model is used to predict the bedrock accelerations. Uncertainties in three model parameters, i.e., stress parameter, cutoff frequency, and strong-motion duration, are included in the analysis. Each parameter is represented by three typical values. From the combination of these typical values, a total of 27 earthquake time histories can be generated for each selected site due to an 1811-1812 type New Madrid earthquake occurring at a postulated seismic source.
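The 27 time histories per site follow from taking the full factorial of three typical values for each of the three uncertain parameters; a sketch (the numerical values here are placeholders, not the study's):

```python
from itertools import product

# Three typical values per uncertain parameter (placeholder numbers)
stress_parameter = [50.0, 100.0, 200.0]        # bars
cutoff_frequency = [10.0, 15.0, 20.0]          # Hz
strong_motion_duration = [20.0, 30.0, 40.0]    # s

# One synthetic bedrock time history per combination, per site
combos = list(product(stress_parameter, cutoff_frequency, strong_motion_duration))
```

This 3 x 3 x 3 design is a simple way to propagate parameter uncertainty without random sampling.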
Cournot games with network effects for electric power markets
NASA Astrophysics Data System (ADS)
Spezia, Carl John
The electric utility industry is moving from regulated monopolies with protected service areas to an open market with many wholesale suppliers competing for consumer load. This market is typically modeled as a Cournot oligopoly, in which suppliers compete by selecting profit-maximizing quantities. The classical Cournot model can produce multiple solutions when the problem includes typical power system constraints. This work presents a mathematical programming formulation of oligopoly that produces unique solutions when constraints limit the supplier outputs. The formulation casts the game as a supply maximization problem with power system physical limits and supplier incremental profit functions as constraints. The formulation gives Cournot solutions identical to other commonly used algorithms when suppliers operate within the constraints. Numerical examples demonstrate the feasibility of the theory. The results show that the maximization formulation gives system operators more transmission capacity when compared to the actions of suppliers in a classical constrained Cournot game. The results also show that the profitability of suppliers in constrained networks depends on their location relative to the consumers' load concentration.
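In the unconstrained symmetric case the classical Cournot game referred to above has a unique closed-form equilibrium, which a sequential best-response iteration recovers; the demand and cost numbers below are illustrative.

```python
def cournot_equilibrium(n=3, a=100.0, b=1.0, c=10.0, iters=500):
    """Symmetric Cournot oligopoly with inverse demand P = a - b*Q and
    constant marginal cost c.  Firm i's best response to the others'
    total output Q_others is q_i = (a - c - b*Q_others) / (2b); iterating
    best responses in place converges to q* = (a - c) / (b*(n + 1))."""
    q = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            q_others = sum(q) - q[i]
            q[i] = max(0.0, (a - c - b * q_others) / (2.0 * b))
    return q

q = cournot_equilibrium()
```

Adding transmission limits and other power-system constraints is exactly where this simple fixed point can stop being unique, which is what motivates the maximization formulation of the dissertation.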
Buck, Jacalyn; Loversidge, Jacqueline; Chipps, Esther; Gallagher-Ford, Lynn; Genter, Lynne; Yen, Po-Yin
2018-05-01
The aims of this study were to describe nurses' perceptions of nursing activities and analyze for consistency with top-of-license (TOL) practice. The Advisory Board Company expert panel proposed 8 TOL core nursing responsibilities representing practice at its potential. Thus far, no empirical work has examined nursing practices relative to TOL, from staff nurses' points of view. This qualitative study used focus groups to explore perceptions of typical nursing activities. We analyzed activities for themes that described nurses' work during typical shifts. Nurses' full scope of work included TOL-consistent categories, as well as categories that did not exemplify TOL practice, such as nonnursing care. A proposed model was developed, which depicts nurses' total scope of work, inclusive of all activity categories. In addition, hindrances to TOL practice were also identified. Findings from this study can inform leadership imperatives and the development of innovative, sustainable nursing practice models that support nursing practice at TOL.
Vibrations and structureborne noise in space station
NASA Technical Reports Server (NTRS)
Vaicaitis, R.
1985-01-01
Theoretical models were developed that are capable of predicting structural response and noise transmission due to random point mechanical loads. Fiber-reinforced composite and aluminum materials were considered. Cylindrical shells and circular plates were taken as typical representatives of structural components for space station habitability modules. Analytical formulations include double-wall and single-wall constructions. Pressurized and unpressurized models were considered. Parametric studies were conducted to determine the effect on structural response and noise transmission of fiber orientation, point load location, damping in the core and the main load-carrying structure, pressurization, interior acoustic absorption, etc. These analytical models could serve as preliminary tools for assessing noise-related problems in space station applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neymark, J.; Kennedy, M.; Judkoff, R.
This report documents a set of diagnostic analytical verification cases for testing the ability of whole building simulation software to model the air distribution side of typical heating, ventilating and air conditioning (HVAC) equipment. These cases complement the unitary equipment cases included in American National Standards Institute (ANSI)/American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, which test the ability to model the heat-transfer fluid side of HVAC equipment.
Modeling of Inhomogeneous Compressible Turbulence Using a Two-Scale Statistical Theory
NASA Technical Reports Server (NTRS)
Hamba, Fujihiro
1996-01-01
Turbulence modeling plays an important role in the study of high-speed flows in engineering and aerodynamic problems; these include flows in supersonic combustion engines and over hypersonic transport aircraft. The enhancement of the kinetic energy dissipation by the dilatational terms is one of the typical compressibility effects. Zeman (1990) and Sarkar et al. (1991) proposed that the dilatation dissipation is proportional to the solenoidal dissipation and is a function of the turbulent Mach number. Sarkar (1992) also modeled the pressure-dilatation correlation using the turbulent Mach number. Zeman (1991) related the correlation to the rate of change of the pressure variance.
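In commonly quoted form (coefficients and exact definitions vary between the cited papers, so take this as a schematic), the Sarkar-type closure writes the dilatational dissipation as a turbulent-Mach-number correction to the solenoidal dissipation:

```latex
\varepsilon_d = \alpha_1 \, M_t^{2} \, \varepsilon_s ,
\qquad
M_t = \frac{\sqrt{2k}}{a} ,
```

where \(\varepsilon_s\) is the solenoidal dissipation, \(k\) the turbulent kinetic energy, \(a\) the mean speed of sound, and \(\alpha_1\) an O(1) model coefficient; Zeman's model replaces the \(M_t^2\) factor with a different function of \(M_t\).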
NDARC: NASA Design and Analysis of Rotorcraft. Appendix 5; Theory
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2017-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC: NASA Design and Analysis of Rotorcraft. Appendix 3; Theory
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts.
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft - Input, Appendix 2
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts.
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tilt-rotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft. Appendix 6; Input
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2017-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, Wayne R.
2009-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool intended to support both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility; a hierarchy of models; and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts.
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter; tandem helicopter; coaxial helicopter; and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC - NASA Design and Analysis of Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2015-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft Theory Appendix 1
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments
ERIC Educational Resources Information Center
Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.
2009-01-01
The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…
Bronchoscopic cryotherapy treatment of isolated endoluminal typical carcinoid tumor.
Bertoletti, Laurent; Elleuch, Rami; Kaczmarek, David; Jean-François, Rita; Vergnon, Jean Michel
2006-11-01
Bronchial typical carcinoid tumors are rare. The "gold standard" treatment is surgery, but there is literature to support bronchoscopic therapy with curative intent. Based on the efficacy of cryotherapy for in situ lung cancer, we studied the safety and efficacy of rigid bronchoscopic treatment with cryotherapy on isolated endoluminal typical carcinoid tumors. All the patients from the Department of Pulmonary Diseases and Thoracic Oncology of St. Etienne University Hospital (France), and of Hôpital Notre Dame, University Hospital of Montreal referred with typical carcinoid were screened. Inclusion criteria included the following: proven typical carcinoid, strictly endoluminal disease amenable to bronchoscopic therapy, and no evidence of lymph node invasion. All patients had a complete removal of the tumor, and all patients received cryotherapy to the implantation base. Twenty-nine patients were screened, and 18 were included. Mean age was 47 years, and study population included 11 women. Median follow-up was 55 months. There was a single recurrence 7 years after the initial bronchoscopic treatment. Cryotherapy is a safe and effective adjunct to endobronchial mechanical resection of typical carcinoids. Unlike other adjuncts that have been proposed, cryotherapy is not associated with long-term complications including bronchial stenosis.
Modelling Parameters Characterizing Selected Water Supply Systems in Lower Silesia Province
NASA Astrophysics Data System (ADS)
Nowogoński, Ireneusz; Ogiołda, Ewa
2017-12-01
The work presents issues of modelling water supply systems in the context of basic parameters characterizing their operation. In addition to typical parameters, such as water pressure and flow rate, assessing the age of the water is important as a measure of the quality of the distributed medium. The analysis was based on two facilities, including one with a diverse spectrum of consumers spanning residential housing and industry. The simulations carried out indicate the possibility of water quality degradation as a result of excessively long storage periods in the water supply network. The irregularity of water use is also important, especially when supplying various kinds of consumers (in the analysed case, mining companies).
NASA Technical Reports Server (NTRS)
Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.
1990-01-01
Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
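The numerical diffusion that moment-conserving schemes like Prather's avoid can be seen in a minimal experiment. The sketch below is not Prather's scheme; it advects a square pulse once around a periodic 1-D grid with first-order upwind differencing, which conserves mass but visibly smears the profile, whereas the exact solution would return the initial pulse unchanged.

```python
import numpy as np

# Illustration (not Prather's scheme): first-order upwind advection of a
# square pulse on a periodic 1-D grid. After one full circuit the exact
# solution reproduces the initial profile; the upwind scheme smears it.
n = 100
q0 = np.zeros(n)
q0[40:60] = 1.0              # square pulse, 20 cells wide
c = 0.5                      # Courant number u*dt/dx
q = q0.copy()
for _ in range(int(n / c)):  # one full circuit of the periodic domain
    q = q - c * (q - np.roll(q, 1))

mass_error = abs(q.sum() - q0.sum())  # upwind conserves mass exactly...
peak = q.max()                        # ...but diffuses the peak below 1
print(mass_error, peak)
```

A fourth-order finite-difference scheme reduces this smearing but introduces dispersive ripples; Prather's scheme, by transporting the low-order moments of the in-cell distribution, preserves sharp profiles with essentially no diffusion, which is the comparison the abstract reports.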
Global embeddings for branes at toric singularities
NASA Astrophysics Data System (ADS)
Balasubramanian, Vijay; Berglund, Per; Braun, Volker; García-Etxebarria, Iñaki
2012-10-01
We describe how local toric singularities, including the Toric Lego construction, can be embedded in compact Calabi-Yau manifolds. We study in detail the addition of D-branes, including non-compact flavor branes as typically used in semi-realistic model building. The global geometry provides constraints on allowable local models. As an illustration of our discussion we focus on D3 and D7-branes on (the partially resolved) ( dP 0)3 singularity, its embedding in a specific Calabi-Yau manifold as a hypersurface in a toric variety, the related type IIB orientifold compactification, as well as the corresponding F-theory uplift. Our techniques generalize naturally to complete intersections, and to a large class of F-theory backgrounds with singularities.
CHROMOSPHERIC MODELS AND THE OXYGEN ABUNDANCE IN GIANT STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupree, A. K.; Avrett, E. H.; Kurucz, R. L., E-mail: dupree@cfa.harvard.edu
Realistic stellar atmospheric models of two typical metal-poor giant stars in Omega Centauri, which include a chromosphere (CHR), influence the formation of optical lines of O I: the forbidden lines (λ6300, λ6363) and the infrared triplet (λλ7771−7775). One-dimensional semi-empirical non-local thermodynamic equilibrium (non-LTE) models are constructed based on observed Balmer lines. A full non-LTE formulation is applied for evaluating the line strengths of O I, including photoionization by the Lyman continuum and photoexcitation by Lyα and Lyβ. Chromospheric (CHR) models yield forbidden oxygen transitions that are stronger than those in radiative/convective equilibrium (RCE) models. The triplet oxygen lines from high levels also appear stronger than those produced in an RCE model. The inferred oxygen abundance from realistic CHR models for these two stars is decreased by factors of ∼3 as compared to values derived from RCE models. A lower oxygen abundance suggests that intermediate-mass AGB stars contribute to the observed abundance pattern in globular clusters. A change in the oxygen abundance of metal-poor field giants could affect models of deep mixing episodes on the red giant branch. Changes in the oxygen abundance can impact other abundance determinations that are critical to astrophysics, including chemical tagging techniques and galactic chemical evolution.
Humanitarian response: improving logistics to save lives.
McCoy, Jessica
2008-01-01
Each year, millions of people worldwide are affected by disasters, underscoring the importance of effective relief efforts. Many highly visible disaster responses have been inefficient and ineffective. Humanitarian agencies typically play a key role in disaster response (eg, procuring and distributing relief items to an affected population, assisting with evacuation, providing healthcare, assisting in the development of long-term shelter), and thus their efficiency is critical for a successful disaster response. The field of disaster and emergency response modeling is well established, but the application of such techniques to humanitarian logistics is relatively recent. This article surveys models of humanitarian response logistics and identifies promising opportunities for future work. Existing models analyze a variety of preparation and response decisions (eg, warehouse location and the distribution of relief supplies), consider both natural and manmade disasters, and typically seek to minimize cost or unmet demand. Opportunities to enhance the logistics of humanitarian response include the adaptation of models developed for general disaster response; the use of existing models, techniques, and insights from the literature on commercial supply chain management; the development of working partnerships between humanitarian aid organizations and private companies with expertise in logistics; and the consideration of behavioral factors relevant to a response. Implementable, realistic models that support the logistics of humanitarian relief can improve the preparation for and the response to disasters, which in turn can save lives.
On land-use modeling: A treatise of satellite imagery data and misclassification error
NASA Astrophysics Data System (ADS)
Sandler, Austin M.
Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be used when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land-use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and application indicate that ignoring misclassification will lead to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
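The mechanics of misclassification bias and its correction can be sketched in the simplest (two-class, aggregate-share) setting, leaving aside the discrete-choice regression of the dissertation. Observed class shares are the true shares pushed through a misclassification matrix; if that matrix is known (e.g. from a ground-truthed accuracy assessment), inverting it recovers the true shares. The error rates below are illustrative, not from the study.

```python
import numpy as np

# Two land-use classes. M[i, j] = P(observed class i | true class j).
# Even small off-diagonal entries bias the observed shares; solving the
# linear system M @ p_true = p_obs undoes the bias when M is known.
M = np.array([[0.99, 0.05],    # 1% of class-0 pixels misread, 5% of class-1
              [0.01, 0.95]])
p_true = np.array([0.7, 0.3])  # true land-use shares
p_obs = M @ p_true             # what the imagery reports: [0.708, 0.292]

p_corrected = np.linalg.solve(M, p_obs)
print(p_obs)
print(p_corrected)             # recovers [0.7, 0.3]
```

In the regression setting the same idea applies to the likelihood rather than to raw shares, which is the adaptation from the epidemiology literature that the abstract describes.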
Common modeling system for digital simulation
NASA Technical Reports Server (NTRS)
Painter, Rick
1994-01-01
The Joint Modeling and Simulation System (J-MASS) is a tri-service investigation into a common modeling framework for the development of digital models. The basis for the success of this framework is an X-window-based, open-systems architecture and an object-based/oriented, standard-interface approach to digital model construction, configuration, execution, and post-processing. For years, Department of Defense (DOD) agencies have produced various weapon systems/technologies and, typically, digital representations of those systems/technologies. These digital representations (models) have also been developed for other reasons, such as studies and analysis or Cost Effectiveness Analysis (COEA) tradeoffs. Unfortunately, there have been no Modeling and Simulation (M&S) standards, guidelines, or efforts toward commonality in DOD M&S. In the typical scenario, an organization hires a contractor to build hardware, and in doing so a digital model may be constructed. Until recently, this model was not even obtained by the organization. Even if it was procured, it was on a unique platform, in a unique language, with unique interfaces, with the result being unique maintenance requirements. Additionally, the constructors of the model expended more effort in writing the 'infrastructure' of the model/simulation (e.g., user interface, database/database management system, data journalizing/archiving, graphical presentations, environment characteristics, other components in the simulation) than in producing the model of the desired system. Other side effects include duplication of effort, varying assumptions, lack of credibility/validation, and decentralization in policy and execution. J-MASS provides the infrastructure, standards, toolset, and architecture that permit M&S developers and analysts to concentrate on their area of interest.
Ball Aerospace Long Life, Low Temperature Space Cryocoolers
NASA Astrophysics Data System (ADS)
Glaister, D. S.; Gully, W.; Marquardt, E.; Stack, R.
2004-06-01
This paper describes the development, qualification, characterization testing and performance at Ball Aerospace of long life, low temperature (from 4 to 35 K) space cryocoolers. For over a decade, Ball has built long life (>10 year), multi-stage Stirling and Joule-Thomson (J-T) cryocoolers for space applications, with specific performance and design features for low temperature operation. As infrared space missions have continually pushed for operation at longer wavelengths, the applications for these low temperature cryocoolers have increased. The Ball cryocooler technologies have culminated in the flight qualified SB235 Cryocooler and the in-development 6 K NASA/JPL ACTDP (Advanced Cryocooler Technology Development Program) Cryocooler. The SB235 and its model derivative SB235E are 2-stage coolers designed to provide simultaneous cooling at 35 K (typically, for Mercury Cadmium Telluride or MCT detectors) and 100 K (typically, for the optics) and were baselined for the Raytheon SBIRS Low Track Sensor. The Ball ACTDP cooler is a hybrid Stirling/J-T cooler that has completed its preliminary design with an Engineering Model to be tested in 2005. The ACTDP cooler provides simultaneous cooling at 6 K (typically, for either doped Si detectors or as a sub-Kelvin precooler) and 18 K (typically, for optics or shielding). The ACTDP cooler is under development for the NASA JWST (James Webb Space Telescope), TPF (Terrestrial Planet Finder), and Con-X (Constellation X-Ray) missions. Both the SB235 and ACTDP Coolers are highly leveraged off previous Ball space coolers including multiple life test and flight units.
Intrinsic ethics regarding integrated assessment models for climate management.
Schienke, Erich W; Baum, Seth D; Tuana, Nancy; Davis, Kenneth J; Keller, Klaus
2011-09-01
In this essay we develop and argue for the adoption of a more comprehensive model of research ethics than is included within current conceptions of responsible conduct of research (RCR). We argue that our model, which we label the ethical dimensions of scientific research (EDSR), is a more comprehensive approach to encouraging ethically responsible scientific research compared to the currently typically adopted approach in RCR training. This essay focuses on developing a pedagogical approach that enables scientists to better understand and appreciate one important component of this model, what we call intrinsic ethics. Intrinsic ethical issues arise when values and ethical assumptions are embedded within scientific findings and analytical methods. Through a close examination of a case study and its application in teaching, namely, evaluation of climate change integrated assessment models, this paper develops a method and case for including intrinsic ethics within research ethics training to provide scientists with a comprehensive understanding and appreciation of the critical role of values and ethical choices in the production of research outcomes.
NASA Technical Reports Server (NTRS)
Bast, Callie C.; Boyce, Lola
1995-01-01
The development of methodology for a probabilistic material strength degradation is described. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep and thermal fatigue. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing predictions of high-cycle mechanical fatigue and high temperature effects with experiments are presented. Results from this limited verification study strongly supported that material degradation can be represented by randomized multifactor interaction models.
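A randomized multifactor strength-degradation equation of the kind the abstract describes is commonly written as a product over effects, S/S0 = prod_i ((A_iu - A_i)/(A_iu - A_i0))**a_i, where A_i is the current value of effect i, A_iu its ultimate value, A_i0 a reference value, and a_i an empirical exponent; "randomized" means the inputs are sampled from distributions. The sketch below uses this generic form with illustrative values; the effect values, distributions, and exponents are hypothetical, not PROMISS inputs.

```python
import random

# Generic multifactor-interaction strength ratio:
#   S / S0 = prod_i ((A_iu - A_i) / (A_iu - A_i0)) ** a_i
# Each tuple is (current, ultimate, reference, exponent) for one effect.
def strength_ratio(effects):
    r = 1.0
    for a_cur, a_ult, a_ref, expo in effects:
        r *= ((a_ult - a_cur) / (a_ult - a_ref)) ** expo
    return r

random.seed(0)
samples = []
for _ in range(1000):
    temp = random.gauss(800.0, 25.0)  # illustrative temperature effect, K
    effects = [(temp, 1500.0, 300.0, 0.5),   # high temperature
               (6.0, 9.0, 0.0, 0.25)]        # log10 fatigue cycles
    samples.append(strength_ratio(effects))

mean_ratio = sum(samples) / len(samples)
print(mean_ratio)  # lifetime strength as a fraction of S0
```

Collecting many such samples and sorting them yields exactly the cumulative distribution functions of lifetime strength that the abstract refers to.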
Consistent parameter fixing in the quark-meson model with vacuum fluctuations
NASA Astrophysics Data System (ADS)
Carignano, Stefano; Buballa, Michael; Elkamhawy, Wael
2016-08-01
We revisit the renormalization prescription for the quark-meson model in an extended mean-field approximation, where vacuum quark fluctuations are included. At a given cutoff scale the model parameters are fixed by fitting vacuum quantities, typically including the sigma-meson mass mσ and the pion decay constant fπ. In most publications the latter is identified with the expectation value of the sigma field, while for mσ the curvature mass is taken. When quark loops are included, this prescription is however inconsistent, and the correct identification involves the renormalized pion decay constant and the sigma pole mass. In the present article we investigate the influence of the parameter-fixing scheme on the phase structure of the model at finite temperature and chemical potential. Despite large differences between the model parameters in the two schemes, we find that in homogeneous matter the effect on the phase diagram is relatively small. For inhomogeneous phases, on the other hand, the choice of the proper renormalization prescription is crucial. In particular, we show that if renormalization effects on the pion decay constant are not considered, the model does not even present a well-defined renormalized limit when the cutoff is sent to infinity.
Effects of slope smoothing in river channel modeling
NASA Astrophysics Data System (ADS)
Kim, Kyungmin; Liu, Frank; Hodges, Ben R.
2017-04-01
In extending dynamic river modeling with the 1D Saint-Venant equations from a single reach to a large watershed, there are critical questions as to how much bathymetric knowledge is necessary and how it should be represented parsimoniously. The ideal model will include the detail necessary to provide realism, but not include extraneous detail that should not exert a control on a 1D (cross-section averaged) solution. In a Saint-Venant model, the overall complexity of the river channel morphometry is typically abstracted into metrics for the channel slope, cross-sectional area, hydraulic radius, and roughness. In stream segments where cross-section surveys are closely spaced, it is not uncommon to have sharp changes in slope or even negative values (where a positive slope is the downstream direction). However, solving river flow with the Saint-Venant equations requires a degree of smoothness in the equation parameters; with directly measured channel slopes, the equation set may not be Lipschitz continuous. The results of non-smoothness are typically extended computational time to converge solutions (or complete failure to converge) and/or numerical instabilities under transient conditions. We have investigated using cubic splines to smooth the bottom slope and ensure always-positive reference slopes within a 1D model. This method has been implemented in the Simulation Program for River Networks (SPRNT) and is compared to the standard HEC-RAS river solver. It is shown that the reformulation of the reference slope is both in keeping with the underlying derivation of the Saint-Venant equations and provides practical numerical stability without altering the realism of the simulation. This research was supported in part by the National Science Foundation under grant number CCF-1331610.
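The slope-conditioning step can be illustrated with a simplified stand-in for the cubic-spline treatment described above: smooth a surveyed bed-slope series and then enforce a small positive floor so the reference slope is always positive. The moving-average smoother, window width, and floor value below are illustrative choices, not the SPRNT implementation.

```python
import numpy as np

# Simplified stand-in for the cubic-spline slope smoothing: a moving
# average removes sharp cross-section-to-cross-section jumps, and a small
# positive floor guarantees an always-positive reference slope.
def smooth_positive_slope(slopes, window=5, floor=1e-5):
    kernel = np.ones(window) / window
    padded = np.pad(slopes, window // 2, mode="edge")
    smoothed = np.convolve(padded, kernel, mode="valid")
    return np.maximum(smoothed, floor)

# Survey-derived slopes with a sharp jump and one adverse (negative) value.
raw = np.array([2e-4, 1.8e-4, -5e-5, 3e-4, 2.5e-4, 2.2e-4, 2.1e-4])
ref = smooth_positive_slope(raw)
print(ref.min() > 0.0)             # always positive
print(np.ptp(ref) < np.ptp(raw))   # sharp variation reduced
```

The payoff of this conditioning, per the abstract, is numerical: a Lipschitz-continuous slope profile lets transient Saint-Venant solutions converge without the instabilities that raw surveyed slopes can trigger.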
Testable solution of the cosmological constant and coincidence problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, Douglas J.; Barrow, John D.
2011-02-15
We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified to a field that can take many possible values. The observed value of Λ ≈ (9.3 Gyr)^(-2) [≈ 10^(-120) in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056 (ζ_b/0.5), where ζ_b ≈ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^(-1/2) and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnaturally small parameters, does not require the introduction of new dynamical scalar fields or modifications to general relativity, and can be tested by astronomical observations in the near future.
Effect of end-ring stiffness on buckling of pressure-loaded stiffened conical shells
NASA Technical Reports Server (NTRS)
Davis, R. C.; Williams, J. G.
1977-01-01
Buckling studies were conducted on truncated 120 deg conical shells having large end rings and many interior reinforcing rings that are typical of aeroshells used as spacecraft decelerators. Changes in base-end-ring stiffness were accomplished by simply machining away a portion of the base ring between successive buckling tests. Initial imperfection measurements from the test cones were included in the analytical model.
ERIC Educational Resources Information Center
Tsai, Hsiao-Wei Joy; Cebula, Katie; Fletcher-Watson, Sue
2017-01-01
The influence of the broader autism phenotype (BAP) on the adjustment of siblings of children with autism has previously been researched mainly in Western cultures. The present research evaluated a diathesis-stress model of sibling adjustment using a questionnaire study including 80 and 75 mother-typically developing sibling dyads in Taiwan and…
Unfurlable satellite antennas - A review
NASA Technical Reports Server (NTRS)
Roederer, Antoine G.; Rahmat-Samii, Yahia
1989-01-01
A review of unfurlable satellite antennas is presented. Typical application requirements for future space missions are first outlined. Then, U.S. and European mesh and inflatable antenna concepts are described. Precision deployables using rigid panels or petals are not included in the survey. RF modeling and performance analysis of gored or faceted mesh reflector antennas are then reviewed. Finally, both on-ground and in-orbit RF test techniques for large unfurlable antennas are discussed.
Evaluating white LEDs for outdoor landscape lighting application
NASA Astrophysics Data System (ADS)
Shakir, Insiya; Narendran, Nadarajah
2002-11-01
A laboratory experiment was conducted to understand the acceptability of different white light-emitting diodes (LEDs) for outdoor landscape lighting. The study used a scaled model setup. The scene was designed to replicate the exterior of a typical upscale suburban restaurant, including the exterior facade of the building, an approach with steps, and a garden. The lighting was designed to replicate light levels commonly found in nighttime outdoor conditions. The model had a central dividing partition with symmetrical scenes on both sides for side-by-side evaluations of two scenes with different light sources. While maintaining equal luminance levels and distribution between the two scenes, four types of light sources were evaluated: halogen, a phosphor white LED, and two white light systems using RGB LEDs. These light sources were tested by comparing two sources at a time placed side by side and by individual assessment of each lighting condition. The results showed that the RGB LEDs performed as well as or better than the most widely used halogen light source in this setting. A majority of the subjects found slightly dimmer ambient lighting to be more typical for restaurants and therefore found the RGB LED and halogen light sources to be more inviting. The phosphor white LEDs made the space look brighter; however, a majority of the subjects disliked them.
NASA Astrophysics Data System (ADS)
Pouliaris, Christos; Schumann, Philipp; Danneberg, Nils-Christian; Foglia, Laura; Kallioras, Andreas; Schüth, Christoph
2015-04-01
Groundwater management in arid areas has become a major issue worldwide, and it is expected to be exacerbated by climate change. Low annual precipitation and high evaporation potential are the key features of these areas, with additional pressure added to the system by abstractions for irrigation and water supply. Typical examples of such scenarios exist in the Mediterranean area, where drought and water scarcity, especially in the warm period of the hydrological year, give rise to major management issues in coastal areas. Among the different solutions, the implementation of Managed Aquifer Recharge (MAR) schemes has been suggested in the EU FP7 project MARSOL (Demonstrating Managed Aquifer Recharge as a Solution to Water Scarcity and Drought). In the project, different sites across the Mediterranean are tested to investigate the viability of various MAR techniques in different hydrological systems facing qualitative and quantitative deterioration of their groundwater resources. The coastal hydrosystem of Lavrion was selected for its typical Mediterranean characteristics (climatic, hydrologic, hydrogeological, geological, etc.), all within a rather small area (< 50 km²), which render it a reference site for hydrologic modeling applications. It consists of a set of aquifer layers (karstified limestone and alluvial) which are hydraulically connected to the sea, and an ephemeral torrent (wadi) that flows through a typical small Mediterranean alluvial valley. The major water resources problems of the area are mainly qualitative issues of the groundwater, specifically: (i) seawater intrusion, (ii) nitrate contamination, and (iii) heavy metal pollution due to past and recent mining and metallurgical activities. The modelling approach will include the development of three distinct models that will be integrated.
The aim is to depict how systems with characteristics like those mentioned above perform, and which scenarios can be applied, in order to identify the most viable MAR strategy (with respect to water budget) for the specific area. Meteorological data, field data, and site investigations provide the input data for all the different models. The field activities already conducted included: an inventory of all existing pumping wells; the development of a monitoring network for qualitative and quantitative environmental data acquisition at different scales and hydrologic zones; installation of multi-level piezometers for tailored monitoring of the seawater wedge; and geophysical surveys for subsurface characterization. The combination of literature review and field investigations led to the development of the conceptual model of the area, along with the delineation of the spatial extent of each model. The hydraulic connections between the two aquifers, the surface water system, and the sea have been identified, and the upcoming activities aim to quantify them and include them in the models under development. The groundwater chemical characteristics have been examined, with results showing the major influence of seawater intrusion. All the data mentioned above are used for the development of the integrated hydrological model of the Lavrion area.
Engelman, Catherine A.; Grant, William E.; Mora, Miguel A.; Woodin, Marc
2012-01-01
We describe an ecotoxicological model that simulates the sublethal and lethal effects of chronic, low-level, chemical exposure on birds wintering in agricultural landscapes. Previous models estimating the impact on wildlife of chemicals used in agro-ecosystems typically have not included the variety of pathways, including both dermal and oral, by which individuals are exposed. The present model contains four submodels simulating (1) foraging behavior of individual birds, (2) chemical applications to crops, (3) transfers of chemicals among soil, insects, and small mammals, and (4) transfers of chemicals to birds via ingestion and dermal exposure. We demonstrate use of the model by simulating the impacts of a variety of commonly used herbicides, insecticides, growth regulators, and defoliants on western burrowing owls (Athene cunicularia hypugaea) that winter in agricultural landscapes in southern Texas, United States. The model generated reasonable movement patterns for each chemical through soil, water, insects, and rodents, as well as into the owl via consumption and dermal absorption. Sensitivity analysis suggested model predictions were sensitive to uncertainty associated with estimates of chemical half-lives in birds, soil, and prey, sensitive to parameters associated with estimating dermal exposure, and relatively insensitive to uncertainty associated with details of chemical application procedures (timing of application, amount of drift). Nonetheless, the general trends in chemical accumulations and the relative impacts of the various chemicals were robust to these parameter changes. Simulation results suggested that insecticides posed a greater potential risk to owls of both sublethal and lethal effects than did herbicides, defoliants, and growth regulators under crop scenarios typical of southern Texas, and that use of multiple indicators, or endpoints, provided a more accurate assessment of risk due to agricultural chemical exposure.
The model should prove useful in helping prioritize the chemicals and transfer pathways targeted in future studies and also, as these new data become available, in assessing the relative danger to other birds of exposure to different types of agricultural chemicals.
Understanding and quantifying foliar temperature acclimation for Earth System Models
NASA Astrophysics Data System (ADS)
Smith, N. G.; Dukes, J.
2015-12-01
Photosynthesis and respiration on land are the two largest carbon fluxes between the atmosphere and Earth's surface. The parameterization of these processes represents a major uncertainty in the terrestrial component of the Earth System Models used to project future climate change. Research has shown that much of this uncertainty is due to the parameterization of the temperature responses of leaf photosynthesis and autotrophic respiration, which are typically based on short-term empirical responses. Here, we show that including longer-term responses to temperature, such as temperature acclimation, can help to reduce this uncertainty and improve model performance, leading to drastic changes in future land-atmosphere carbon feedbacks across multiple models. However, these acclimation formulations have many flaws, including an underrepresentation of many important global flora. In addition, these parameterizations were done using multiple studies that employed differing methodology. As such, we used a consistent methodology to quantify the short- and long-term temperature responses of maximum Rubisco carboxylation (Vcmax), maximum rate of Ribulose-1,5-bisphosphate regeneration (Jmax), and dark respiration (Rd) in multiple species representing each of the plant functional types used in global-scale land surface models. Short-term temperature responses of each process were measured in individuals acclimated for 7 days at one of 5 temperatures (15-35°C). The comparison of short-term curves in plants acclimated to different temperatures was used to evaluate long-term responses. Our analyses indicated that the instantaneous response of each parameter was highly sensitive to the temperature at which they were acclimated. However, we found that this sensitivity was larger in species whose leaves typically experience a greater range of temperatures over the course of their lifespan.
These data indicate that models using previous acclimation formulations are likely simulating leaf carbon exchange responses to future warming incorrectly. Therefore, our data, if used to parameterize large-scale models, are likely to provide an even greater improvement in model performance, resulting in more reliable projections of future carbon-climate feedbacks.
NASA Astrophysics Data System (ADS)
Koster, Randal D.; Eagleson, Peter S.; Broecker, Wallace S.
1988-03-01
A capability is developed for monitoring tracer water movement in the three-dimensional Goddard Institute for Space Science Atmospheric General Circulation Model (GCM). A typical experiment with the tracer water model follows water evaporating from selected grid squares and determines where this water first returns to the Earth's surface as precipitation or condensate, thereby providing information on the lateral scales of hydrological transport in the GCM. Through a comparison of model results with observations in nature, inferences can be drawn concerning real world water transport. Tests of the tracer water model include a comparison of simulated and observed vertically-integrated vapor flux fields and simulations of atomic tritium transport from the stratosphere to the oceans. The inter-annual variability of the tracer water model results is also examined.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Eagleson, Peter S.; Broecker, Wallace S.
1988-01-01
A capability is developed for monitoring tracer water movement in the three-dimensional Goddard Institute for Space Science Atmospheric General Circulation Model (GCM). A typical experiment with the tracer water model follows water evaporating from selected grid squares and determines where this water first returns to the Earth's surface as precipitation or condensate, thereby providing information on the lateral scales of hydrological transport in the GCM. Through a comparison of model results with observations in nature, inferences can be drawn concerning real world water transport. Tests of the tracer water model include a comparison of simulated and observed vertically-integrated vapor flux fields and simulations of atomic tritium transport from the stratosphere to the oceans. The inter-annual variability of the tracer water model results is also examined.
Recent transonic unsteady pressure measurements at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Sandford, M. C.; Ricketts, R. H.; Hess, R. W.
1985-01-01
Four semispan wing model configurations were studied in the Transonic Dynamics Tunnel (TDT). The first model had a clipped delta planform with a circular arc airfoil, the second model had a high aspect ratio planform with a supercritical airfoil, the third model had a rectangular planform with a supercritical airfoil, and the fourth model had a high aspect ratio planform with a supercritical airfoil. To generate unsteady flow, the first and third models were equipped with pitch oscillation mechanisms, and the first, second, and fourth models were equipped with control surface oscillation mechanisms. The fourth model was similar in planform and airfoil shape to the second model, but it is the only one of the four models that had an elastic wing structure. The unsteady pressure studies of the four models are described and some typical results for each model are presented. Comparisons of selected experimental data with analytical results are also included.
Evaluating scaling models in biology using hierarchical Bayesian approaches
Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S
2009-01-01
Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
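The elementary step behind such comparisons is estimating an allometric exponent b in y = a·x^b from data, typically by least squares on log-transformed values, and then asking whether model-predicted exponents (e.g. elastic-similarity or fractal-network values) are consistent with it. The sketch below is illustrative only and uses synthetic data generated with a known exponent; it is not the hierarchical Bayesian machinery of the paper.

```python
# Illustrative sketch: estimate the scaling exponent b in y = a * x**b
# by ordinary least squares on log-log data. Synthetic data with a known
# exponent (0.75) and small multiplicative noise stand in for real
# allometric measurements.
import math
import random

def fit_scaling_exponent(x, y):
    """Return (a, b) from the log-log least-squares fit of y = a * x**b."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly)) /
         sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

random.seed(0)
x = [random.uniform(1.0, 100.0) for _ in range(200)]
y = [2.0 * v ** 0.75 * math.exp(random.gauss(0.0, 0.05)) for v in x]
a, b = fit_scaling_exponent(x, y)
```

With this sample size and noise level, the fitted exponent recovers the generating value of 0.75 closely; the hierarchical Bayesian approach in the paper generalizes this single fit to many species-level exponents simultaneously.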
NASA Astrophysics Data System (ADS)
Hobley, Daniel E. J.; Adams, Jordan M.; Nudurupati, Sai Siddhartha; Hutton, Eric W. H.; Gasparini, Nicole M.; Istanbulluoglu, Erkan; Tucker, Gregory E.
2017-01-01
The ability to model surface processes and to couple them to both subsurface and atmospheric regimes has proven invaluable to research in the Earth and planetary sciences. However, creating a new model typically demands a very large investment of time, and modifying an existing model to address a new problem typically means the new work is constrained to its detriment by model adaptations for a different problem. Landlab is an open-source software framework explicitly designed to accelerate the development of new process models by providing (1) a set of tools and existing grid structures - including both regular and irregular grids - to make it faster and easier to develop new process components, or numerical implementations of physical processes; (2) a suite of stable, modular, and interoperable process components that can be combined to create an integrated model; and (3) a set of tools for data input, output, manipulation, and visualization. A set of example models built with these components is also provided. Landlab's structure makes it ideal not only for fully developed modelling applications but also for model prototyping and classroom use. Because of its modular nature, it can also act as a platform for model intercomparison and epistemic uncertainty and sensitivity analyses. Landlab exposes a standardized model interoperability interface, and is able to couple to third-party models and software. Landlab also offers tools to allow the creation of cellular automata, and allows native coupling of such models to more traditional continuous differential equation-based modules. We illustrate the principles of component coupling in Landlab using a model of landform evolution, a cellular ecohydrologic model, and a flood-wave routing model.
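The component-coupling idea the abstract describes can be sketched generically: components share fields attached to a common grid object, each exposes a step method, and a driver script composes them freely. The sketch below is plain Python illustrating the pattern only; it is not Landlab's actual API, and the grid class, field name, and toy components are made up.

```python
# Illustrative component-coupling pattern (not Landlab's API): components
# read/write shared grid fields and expose run_one_step(), so a driver
# can mix and match them without the components knowing about each other.

class Grid:
    def __init__(self, n_nodes):
        self.n_nodes = n_nodes
        self.fields = {}

    def add_zeros(self, name):
        self.fields[name] = [0.0] * self.n_nodes
        return self.fields[name]

class Uplifter:
    """Toy component: raises every node at a fixed rate."""
    def __init__(self, grid, rate=0.001):
        self.z = grid.fields["topographic__elevation"]
        self.rate = rate

    def run_one_step(self, dt):
        for i in range(len(self.z)):
            self.z[i] += self.rate * dt

class Diffuser:
    """Toy component: explicit linear diffusion on a 1D line of nodes."""
    def __init__(self, grid, kappa=0.01):
        self.z = grid.fields["topographic__elevation"]
        self.kappa = kappa

    def run_one_step(self, dt):
        z = self.z
        new = z[:]
        for i in range(1, len(z) - 1):  # boundaries held fixed
            new[i] = z[i] + self.kappa * dt * (z[i - 1] - 2 * z[i] + z[i + 1])
        z[:] = new

grid = Grid(n_nodes=5)
z = grid.add_zeros("topographic__elevation")
z[2] = 1.0                              # a single bump
components = [Uplifter(grid), Diffuser(grid)]
for _ in range(10):                     # the driver just loops over components
    for comp in components:
        comp.run_one_step(dt=1.0)
```

After ten coupled steps the bump has diffused outward while the whole profile has been uplifted, illustrating how independent process components interact only through the shared field.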
Nichols, Jessica N; Deshane, Alok S; Niedzielko, Tracy L; Smith, Cory D; Floyd, Candace L
2016-02-01
Mild traumatic brain injury (mTBI) accounts for the majority of all brain injuries and affected individuals typically experience some extent of cognitive and/or neuropsychiatric deficits. Given that repeated mTBIs often result in worsened prognosis, the cumulative effect of repeated mTBIs is an area of clinical concern and on-going pre-clinical research. Animal models are critical in elucidating the underlying mechanisms of single and repeated mTBI-associated deficits, but the neurobehavioral sequelae produced by these models have not been well characterized. Thus, we sought to evaluate the behavioral changes incurred after single and repeated mTBIs in mice utilizing a modified impact-acceleration model. Mice in the mTBI group received 1 impact while the repeated mTBI group received 3 impacts with an inter-injury interval of 24 h. Classic behavior evaluations included the Morris water maze (MWM) to assess learning and memory, elevated plus maze (EPM) for anxiety, and forced swim test (FST) for depression/helplessness. Additionally, species-typical behaviors were evaluated with the marble-burying and nestlet shredding tests to determine motivation and apathy. Non-invasive vibration platforms were used to examine sleep patterns post-mTBI. We found that the repeated mTBI mice demonstrated deficits in MWM testing and poorer performance on species-typical behaviors. While neither single nor repeated mTBI affected behavior in the EPM or FST, sleep disturbances were observed after both single and repeated mTBI. Here, we conclude that behavioral alterations shown after repeated mTBI resemble several of the deficits or disturbances reported by patients, thus demonstrating the relevance of this murine model to study repeated mTBIs.
First Higher-Multipole Model of Gravitational Waves from Spinning and Coalescing Black-Hole Binaries
NASA Astrophysics Data System (ADS)
London, Lionel; Khan, Sebastian; Fauchon-Jones, Edward; García, Cecilio; Hannam, Mark; Husa, Sascha; Jiménez-Forteza, Xisco; Kalaghatgi, Chinmay; Ohme, Frank; Pannarale, Francesco
2018-04-01
Gravitational-wave observations of binary black holes currently rely on theoretical models that predict the dominant multipoles (ℓ=2, |m|=2) of the radiation during inspiral, merger, and ringdown. We introduce a simple method to include the subdominant multipoles to binary black hole gravitational waveforms, given a frequency-domain model for the dominant multipoles. The amplitude and phase of the original model are appropriately stretched and rescaled using post-Newtonian results (for the inspiral), perturbation theory (for the ringdown), and a smooth transition between the two. No additional tuning to numerical-relativity simulations is required. We apply a variant of this method to the nonprecessing PhenomD model. The result, PhenomHM, constitutes the first higher-multipole model of spinning and coalescing black-hole binaries, and currently includes the (ℓ,|m|) = (2,2), (3,3), (4,4), (2,1), (3,2), (4,3) radiative moments. Comparisons with numerical-relativity waveforms demonstrate that PhenomHM is more accurate than dominant-multipole-only models for all binary configurations, and typically improves the measurement of binary properties.
London, Lionel; Khan, Sebastian; Fauchon-Jones, Edward; García, Cecilio; Hannam, Mark; Husa, Sascha; Jiménez-Forteza, Xisco; Kalaghatgi, Chinmay; Ohme, Frank; Pannarale, Francesco
2018-04-20
Gravitational-wave observations of binary black holes currently rely on theoretical models that predict the dominant multipoles (ℓ=2,|m|=2) of the radiation during inspiral, merger, and ringdown. We introduce a simple method to include the subdominant multipoles to binary black hole gravitational waveforms, given a frequency-domain model for the dominant multipoles. The amplitude and phase of the original model are appropriately stretched and rescaled using post-Newtonian results (for the inspiral), perturbation theory (for the ringdown), and a smooth transition between the two. No additional tuning to numerical-relativity simulations is required. We apply a variant of this method to the nonprecessing PhenomD model. The result, PhenomHM, constitutes the first higher-multipole model of spinning and coalescing black-hole binaries, and currently includes the (ℓ,|m|)=(2,2),(3,3),(4,4),(2,1),(3,2),(4,3) radiative moments. Comparisons with numerical-relativity waveforms demonstrate that PhenomHM is more accurate than dominant-multipole-only models for all binary configurations, and typically improves the measurement of binary properties.
Kapoor, Abhijeet; Travesset, Alex
2014-03-01
We develop an intermediate-resolution model, where the backbone is modeled with atomic resolution but the side chain with a single bead, by extending our previous model (Proteins (2013) DOI: 10.1002/prot.24269) to properly include proline, preproline residues, and backbone rigidity. Starting from random configurations, the model properly folds 19 proteins (including a mutant 2A3D sequence) into native states containing β sheet, α helix, and mixed α/β structure. As a further test, the stability of H-RAS (a 169-residue protein, critical in many signaling pathways) is investigated: the protein is stable, with excellent agreement with experimental B-factors. Despite the fact that proteins containing only α helices fold to their native state at lower backbone rigidity, and other limitations that we discuss thoroughly, the model provides a reliable description of the dynamics as compared with all-atom simulations, yet does not constrain secondary structures as is typically the case in more coarse-grained models. Further implications are described.
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.
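The "multiple criteria" point can be made concrete with two standard cost-estimation accuracy measures: mean magnitude of relative error (MMRE) and PRED(30), the fraction of estimates within 30% of actuals. A model can look acceptable on one criterion and poor on the other, which is why relying on a single criterion is risky. The effort figures below are made-up examples, not data from the paper.

```python
# Illustrative sketch: two common evaluation criteria for effort models.
# MMRE penalizes average relative error; PRED(30) counts how often the
# estimate lands within 30% of the actual effort.

def mmre(actual, predicted):
    """Mean magnitude of relative error."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.30):
    """Fraction of estimates within `level` of the actual value."""
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) / a <= level)
    return hits / len(actual)

actual    = [100.0, 250.0, 40.0, 600.0]   # person-months, hypothetical
predicted = [ 90.0, 300.0, 40.0, 900.0]
```

Here three of four estimates are within 30% (PRED(30) = 0.75), yet the single 50% miss drags MMRE up to 0.2, showing how the two criteria give different views of the same model.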
Numerical Modeling of a Vortex Stabilized Arcjet. Ph.D. Thesis, 1991 Final Report
NASA Technical Reports Server (NTRS)
Pawlas, Gary E.
1992-01-01
Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. This dissertation describes the equations governing flow through a constricted arcjet thruster. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single fluid plasma temperature model. An order of magnitude analysis reveals the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is employed in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split, and Gauss-Seidel line relaxation is used to accelerate convergence. Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures. 
As the level of swirl and viscosity in the flowfield increased, the mass flow rate and thrust decreased. The technique was used to predict the flow through a typical arcjet thruster geometry. Results indicate that swirl and viscosity play an important role in the complex geometry of an arcjet.
Numerical modeling of a vortex stabilized arcjet
NASA Astrophysics Data System (ADS)
Pawlas, Gary E.
1992-11-01
Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. This dissertation describes the equations governing flow through a constricted arcjet thruster. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single fluid plasma temperature model. An order of magnitude analysis reveals the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is employed in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split, and Gauss-Seidel line relaxation is used to accelerate convergence. Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures. 
As the level of swirl and viscosity in the flowfield increased, the mass flow rate and thrust decreased.
Diagnostic criteria for cryopyrin-associated periodic syndrome (CAPS).
Kuemmerle-Deschner, Jasmin B; Ozen, Seza; Tyrrell, Pascal N; Kone-Paut, Isabelle; Goldbach-Mansky, Raphaela; Lachmann, Helen; Blank, Norbert; Hoffman, Hal M; Weissbarth-Riedel, Elisabeth; Hugle, Boris; Kallinich, Tilmann; Gattorno, Marco; Gul, Ahmet; Ter Haar, Nienke; Oswald, Marlen; Dedeoglu, Fatma; Cantarini, Luca; Benseler, Susanne M
2017-06-01
Cryopyrin-associated periodic syndrome (CAPS) is a rare, heterogeneous disease entity associated with NLRP3 gene mutations and increased interleukin-1 (IL-1) secretion. Early diagnosis and rapid initiation of IL-1 inhibition prevent organ damage. The aim of the study was to develop and validate diagnostic criteria for CAPS. An innovative process was followed, including interdisciplinary team building; item generation through review of CAPS registries, systematic literature review, and expert surveys; consensus conferences for item refinement; and item reduction and weighting using 1000Minds decision software. The resulting CAPS criteria were tested in large cohorts of CAPS cases and controls using correspondence analysis. Diagnostic models were explored using sensitivity analyses. The international team included 16 experts. Systematic literature and registry review identified 33 CAPS-typical items; the consensus conferences reduced these to 14. 1000Minds exercises ranked variables based on importance for the diagnosis. Correspondence analysis determined variables consistently associated with the diagnosis of CAPS using 284 cases and 837 controls. Seven variables were significantly associated with CAPS (p<0.001). The best diagnostic model included: raised inflammatory markers (C-reactive protein/serum amyloid A) plus at least two of six CAPS-typical symptoms: urticaria-like rash, cold-triggered episodes, sensorineural hearing loss, musculoskeletal symptoms, chronic aseptic meningitis, and skeletal abnormalities. Sensitivity was 81%, specificity 94%. The model performed well for all CAPS subtypes and regardless of NLRP3 mutation. The novel approach integrated traditional methods of evidence synthesis with expert consensus, web-based decision tools, and innovative statistical methods, and may serve as a model for other rare diseases. These criteria will enable a rapid diagnosis for children and adults with CAPS. Published by the BMJ Publishing Group Limited.
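The published diagnostic model is simple enough to express as a rule: raised inflammatory markers (CRP/SAA) plus at least two of the six CAPS-typical symptoms. The symptom names below follow the abstract; the function and its string-based encoding are illustrative only, not a clinical tool.

```python
# Illustrative encoding of the CAPS diagnostic model from the abstract:
# raised CRP/SAA plus >= 2 of 6 CAPS-typical symptoms.

CAPS_SYMPTOMS = {
    "urticaria-like rash",
    "cold-triggered episodes",
    "sensorineural hearing loss",
    "musculoskeletal symptoms",
    "chronic aseptic meningitis",
    "skeletal abnormalities",
}

def meets_caps_criteria(raised_inflammatory_markers, symptoms):
    """True if raised CRP/SAA and at least two CAPS-typical symptoms."""
    typical = set(symptoms) & CAPS_SYMPTOMS
    return bool(raised_inflammatory_markers) and len(typical) >= 2
```

Note that the criteria are conjunctive: symptoms alone, however many, do not satisfy the rule without the inflammatory-marker component.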
Detection of fatty product falsifications using a portable near infrared spectrometer
NASA Astrophysics Data System (ADS)
Kalinin, A. V.; Krasheninnikov, V. N.
2017-01-01
The spread of counterfeit fatty-oil foods motivates the development of a portable, operational analyzer of typical fatty acids (FA), which may be a near-infrared (NIR) spectrometer. In this work, calibration models for predicting the named FA were built from the spectra of an FT-NIR spectrometer for different absorption bands of the FA. The best parameters were obtained for the 1.0-1.8 μm wavelength sub-band, which includes the 2nd and 3rd overtones of C-H stretching vibrations (near 1.7 and 1.2 μm) and the combination band (1.42 μm). The applicability of a portable spectrometer based on a linear NIR array photosensor to quality analysis of spread, butter and fish oil by the typical FA has been tested.
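The calibration step described above, regressing fatty-acid content on absorbance in the 1.0-1.8 μm sub-band, can be sketched with ordinary least squares on synthetic spectra. This is only a minimal stand-in for the FT-NIR calibration models of the abstract: the wavelength grid, band shape, concentrations and noise level below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic absorbance spectra over the 1.0-1.8 um sub-band (hypothetical grid).
wavelengths = np.linspace(1.0, 1.8, 20)                 # micrometres
n_samples = 40

# Invented fatty-acid concentrations and a made-up absorption band near
# 1.42 um (the combination band mentioned in the abstract).
fa_true = rng.uniform(10.0, 40.0, n_samples)            # e.g. % of one FA
band = np.exp(-((wavelengths - 1.42) / 0.05) ** 2)
X = np.outer(fa_true, band) + 0.01 * rng.standard_normal((n_samples, wavelengths.size))

# Least-squares calibration (a crude stand-in for PLS-style NIR calibration).
A = np.column_stack([X, np.ones(n_samples)])            # spectra + intercept
coef, *_ = np.linalg.lstsq(A, fa_true, rcond=None)

fa_pred = A @ coef
rmse = np.sqrt(np.mean((fa_pred - fa_true) ** 2))
print(f"calibration RMSE: {rmse:.4f} % FA")
```

In practice a partial-least-squares model with cross-validation, rather than plain least squares evaluated on its own training set, would be used to avoid overfitting the spectral noise.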
Demonstration of reduced-order urban scale building energy models
Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew; ...
2017-09-08
The aim of this study is to demonstrate a developed framework to rapidly create urban scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exterior and thermal zones. These urban scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. The OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% with the selection of different thermal zones. In addition, to assess complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify complexity of the building energy models. Consequently, this study empowers building energy modelers to quantify their building energy model systematically in order to report the model complexity alongside the building energy model accuracy. An exhaustive analysis on four university campuses suggests that urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefit of this developed framework is the opportunity for different models, including airflow and solar radiation models, to share the same exterior representation, allowing a unified exchange of data. Altogether, the results of this study have implications for large-scale modeling of buildings in support of urban energy consumption analyses or assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.
The Hydrofacies Approach and Why σ²_lnK < 5-10 Is Unlikely
NASA Astrophysics Data System (ADS)
Fogg, G. E.
2004-12-01
When heterogeneity of geologic systems is characterized in terms of hydrofacies rather than solely based on K measurements, the resulting flow and transport models typically contain not only aquifer materials but also significant volumes (10-70%) of aquitard materials. This leads to a clear, heuristic rationale for σ²_lnK commonly exceeding 5 to 10, contradicting published data on σ²_lnK. I will explain the inconsistencies between commonly held assumptions of low (<1-2) σ²_lnK and abundant geologic and hydrologic field data that indicate substantially larger values. The K data commonly cited in support of the low σ²_lnK assumption have been misinterpreted because of unintentional, biased sampling. Geologic fundamentals and field data indicate that σ²_lnK is commonly >10 and can easily exceed 20 in typical sedimentary deposits (not surficial soils) at spatial scales on the order of 10¹ to 10² m. The presence of large σ²_lnK can be paramount in transport models and is often requisite for modeling observed transport phenomena such as preferential flow, extreme tailing, difficult remediation (including frequent pump-and-treat failure), and significant, unanticipated mixing of groundwater ages.
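The sampling-bias argument can be shown with a small numerical illustration: when a deposit is a mixture of aquifer and aquitard facies, the variance of ln K over the whole deposit is dominated by the between-facies contrast, while sampling only aquifer material recovers a much smaller value. The facies fractions and K values below are hypothetical, not field data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hydrofacies model: 40% aquifer sand, 60% aquitard mud.
# The geometric-mean K values (m/s) are illustrative, not site data.
n = 100_000
is_sand = rng.random(n) < 0.40
ln_k = np.where(is_sand,
                rng.normal(np.log(1e-4), 1.0, n),   # sand: K around 1e-4 m/s
                rng.normal(np.log(1e-9), 1.0, n))   # mud:  K around 1e-9 m/s

var_full = ln_k.var()                # variance over the whole deposit
var_sand_only = ln_k[is_sand].var()  # biased sampling of aquifer material only

print(f"sigma^2_lnK, all facies: {var_full:.1f}")
print(f"sigma^2_lnK, sand only:  {var_sand_only:.1f}")
```

The full-deposit variance is driven by the roughly 11.5-natural-log contrast between the two facies means, easily exceeding 20, while the sand-only estimate stays near the within-facies value of 1.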
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
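The contrast between local and global sensitivity analysis can be illustrated with a toy outcome function: sample the whole parameter cube rather than a neighborhood of one setting, then rank parameters by a crude global index (squared correlation with the outcome). The three parameters and the model below are invented stand-ins, not part of ENISI; Sobol indices would be the refined version of this ranking.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for an ABM outcome: "bacterial load" as a nonlinear function
# of three hypothetical parameters (names and form are invented).
def model(theta):
    k_infect, k_clear, k_interact = theta.T
    return k_infect**2 + 0.1 * k_clear + k_infect * k_interact

# Global design: sample the whole parameter cube, not a local neighborhood.
n = 5000
theta = rng.uniform(0.0, 1.0, (n, 3))
y = model(theta)

# Rank parameters by squared correlation with the outcome, a crude global
# sensitivity index.
scores = np.array([np.corrcoef(theta[:, j], y)[0, 1] ** 2 for j in range(3)])
ranking = np.argsort(scores)[::-1]
print("importance ranking (parameter indices):", ranking)
```

A one-at-a-time local analysis around a point with small `k_infect` would miss the dominant quadratic and interaction terms that the global design exposes.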
An integrated biomechanical modeling approach to the ergonomic evaluation of drywall installation.
Yuan, Lu; Buchholz, Bryan; Punnett, Laura; Kriebel, David
2016-03-01
Three different methodologies (work sampling, computer simulation, and biomechanical modeling) were integrated to study the physical demands of drywall installation. PATH (Posture, Activity, Tools, and Handling), a work-sampling based method, was used to quantify the percent of time that the drywall installers were conducting different activities with different body segment (trunk, arm, and leg) postures. Utilizing Monte-Carlo simulation to convert the categorical PATH data into continuous variables as inputs for the biomechanical models, the required muscle contraction forces and joint reaction forces at the low back (L4/L5) and shoulder (glenohumeral and sternoclavicular joints) were estimated for a typical eight-hour workday. To demonstrate the robustness of this modeling approach, a sensitivity analysis was conducted to examine the impact of some quantitative assumptions that had been made to facilitate the modeling approach. The results indicated that the modeling approach was most sensitive to the distribution of work cycles for a typical eight-hour workday and to the distribution and values of the Euler angles that are used to determine the "shoulder rhythm." Other assumptions, including the distribution of trunk postures, did not appear to have a significant impact on the model outputs. It was concluded that the integrated approach may provide an applicable examination of physical loads during non-routine construction work, especially for those operations/tasks that have certain patterns/sequences for the workers to follow. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
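The Monte-Carlo step described above, turning categorical work-sampling data into continuous biomechanical inputs, can be sketched as follows. The posture categories, time shares, and angle ranges below are hypothetical, not the study's actual PATH data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical PATH work-sampling result: fraction of time in each trunk
# posture category (values illustrative).
posture_share = {"neutral": 0.55, "flexed": 0.30, "severely_flexed": 0.15}
# Assumed continuous flexion-angle range (degrees) for each category.
angle_range = {"neutral": (0, 20), "flexed": (20, 45), "severely_flexed": (45, 90)}

# Monte-Carlo step: draw a category with its observed probability, then draw
# a continuous angle uniformly within that category's assumed range.
def sample_trunk_angles(n):
    cats = rng.choice(list(posture_share), size=n, p=list(posture_share.values()))
    lo = np.array([angle_range[c][0] for c in cats], dtype=float)
    hi = np.array([angle_range[c][1] for c in cats], dtype=float)
    return rng.uniform(lo, hi)

angles = sample_trunk_angles(10_000)
print(f"mean simulated trunk flexion: {angles.mean():.1f} deg")
```

The sampled angles would then feed a biomechanical model to estimate joint reaction forces over a simulated workday; the sensitivity analysis in the abstract amounts to repeating this with different assumed category ranges and work-cycle distributions.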
Axisymmetric whole pin life modelling of advanced gas-cooled reactor nuclear fuel
NASA Astrophysics Data System (ADS)
Mella, R.; Wenman, M. R.
2013-06-01
Thermo-mechanical contributions to pellet-clad interaction (PCI) in advanced gas-cooled reactors (AGRs) are modelled in the ABAQUS finite element (FE) code. User-supplied subroutines permit the modelling of the non-linear behaviour of AGR fuel through life. Through utilisation of ABAQUS's well-developed pre- and post-processing ability, the behaviour of the axially constrained steel-clad fuel was modelled. The 2D axisymmetric model includes thermo-mechanical behaviour of the fuel with time- and condition-dependent material properties. Pellet-cladding gap dynamics and thermal behaviour are also modelled. The model treats heat-up as a fully coupled temperature-displacement study. Dwell time and direct power cycling were applied to model the impact of online refuelling, a key feature of the AGR. The model includes the visco-plastic behaviour of the fuel under the stress and irradiation conditions within an AGR core and a non-linear heat transfer model. A multiscale fission gas release model is applied to compute pin pressure; this model is coupled to the PCI gap model through an explicit fission gas inventory code. Whole-pin, whole-life models are able to show the impact of the fuel on all segments of cladding, including weld end caps and cladding-pellet locking mechanisms (unique to AGR fuel). The development of this model in a commercial FE package shows that a potentially verified and future-proof fuel performance code can be created and used. The usability of an FE-based fuel performance code would be an enhancement over past codes. Pre- and post-processors have lowered the entry barrier for the development of a fuel performance model, permitting the modelling of complicated systems. A typical 5-year axisymmetric model runs in less than one hour on a single-core workstation. The current model implements: non-linear fuel thermal behaviour, including a complex description of heat flow in the fuel coupled with a variety of different FE and finite difference models; non-linear mechanical behaviour of the fuel and cladding, including fuel creep and swelling and cladding creep and plasticity, each with dependencies on a variety of different properties; a fission gas release model which takes inputs from first-principles calculations; explicitly integrated inventory calculations performed in a coupled manner; and the freedom to model steady-state and transient behaviour using implicit time integration. The whole-pin geometry is considered over an entire typical fuel life. Examination of normal operation and a subsequent transient, chosen for software demonstration purposes, showed that ABAQUS may be a sufficiently flexible platform on which to develop a complete and verified fuel performance code. The importance and effectiveness of the geometry of the fuel spacer pellets was characterised. The fuel's performance under normal conditions (high friction, no power spikes) would not suggest serious degradation of the cladding within the fuel's life. Large plastic strains were found when pellet bonding was strong; these would appear at all pellet-cladding triple points and at all interfaces between pellet radial cracks and the cladding, showing a possible axial direction for cracks forming from ductility exhaustion.
NASA Technical Reports Server (NTRS)
Mulcay, W.; Rose, R.
1979-01-01
Aerodynamic characteristics obtained in a rotational flow environment utilizing a rotary balance located in the Langley spin tunnel are presented in plotted form for a 1/5-scale, single-engine, high-wing, general aviation airplane model. The configurations tested included various tail designs and fuselage shapes. Data are presented without analysis for an angle-of-attack range of 8 to 90 degrees and for clockwise and counter-clockwise rotations covering an Ωb/2V range from 0 to 0.85.
NASA Astrophysics Data System (ADS)
Zhu, Na
This thesis presents an overview of previous research on the dynamic characteristics and energy performance of buildings with integrated PCMs. Research on buildings using PCMs both with and without air-conditioning is reviewed, and, because of the particular interest in using PCMs for free cooling and peak-load shifting, research efforts on those two subjects are reviewed separately. A simplified physical dynamic model of building structures integrated with SSPCM (shape-stabilized phase change material) is developed and validated in this study. The simplified physical model represents the wall by 3 resistances and 2 capacitances and the PCM layer by 4 resistances and 2 capacitances, respectively; the key issue is the parameter identification of the model. The thesis also studies the thermodynamic characteristics of buildings enhanced by PCM, the impacts of PCM on building cooling load and peak cooling demand in different climates and seasons, and the optimal operation and control strategies for reducing energy consumption and energy cost by lowering air-conditioning energy consumption and peak load. An office building floor with a typical variable air volume (VAV) air-conditioning system is simulated as the reference building in the comparison study, and its envelope is further enhanced by integrating the PCM layers. The building system is tested in two Chinese cities with distinct typical climates, Hong Kong and Beijing. The cold charge and discharge processes, the operation and control strategies of night ventilation, and the air-temperature set-point reset strategy for minimizing energy consumption and electricity cost are studied.
The thesis presents the simulation test platform, test results on the cold storage and discharge processes, and the air-conditioning energy consumption and demand-reduction potentials in typical air-conditioning seasons in typical Chinese cities, as well as the impacts of the operation and control strategies.
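The simplified wall model described above (3 resistances, 2 capacitances) can be sketched as an explicit-Euler thermal network; the PCM layer's 4R2C network would extend this with a temperature-dependent capacitance around the phase-change range. All parameter values below are illustrative, not identified from the thesis data.

```python
# Minimal 3R2C wall sketch: two capacitance nodes linked by three resistances
# (outdoor -R1- C1 -R2- C2 -R3- indoor). Parameter values are hypothetical.
R1, R2, R3 = 0.05, 0.10, 0.05      # K/W (illustrative)
C1, C2 = 5e4, 5e4                  # J/K

T_out, T_in = 35.0, 24.0           # deg C boundary temperatures
T1, T2 = 24.0, 24.0                # initial node temperatures
dt = 10.0                          # s

# Explicit Euler march to steady state.
for _ in range(100_000):
    q1 = (T_out - T1) / R1
    q2 = (T1 - T2) / R2
    q3 = (T2 - T_in) / R3
    T1 += dt * (q1 - q2) / C1
    T2 += dt * (q2 - q3) / C2

q_wall = (T2 - T_in) / R3          # heat gain delivered to the zone
print(f"steady-state wall heat gain: {q_wall:.1f} W")
```

At steady state the heat gain must equal the temperature difference over the total resistance, (35 − 24)/0.2 = 55 W, which gives a quick sanity check on the parameter identification.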
Development of a category 2 approach system model
NASA Technical Reports Server (NTRS)
Johnson, W. A.; Mcruer, D. T.
1972-01-01
An analytical model is presented which provides, as its primary output, the probability of a successful Category II approach. Typical applications are included using several example systems (manual and automatic) which are subjected to random gusts and deterministic wind shear. The primary purpose of the approach system model is to establish a structure containing the system elements, command inputs, disturbances, and their interactions in an analytical framework so that the relative effects of changes in the various system elements on precision of control and available margins of safety can be estimated. The model is intended to provide insight for the design and integration of suitable autopilot, display, and navigation elements; and to assess the interaction of such elements with the pilot/copilot.
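The model's primary output, the probability of a successful Category II approach, can be illustrated with a toy Monte-Carlo run: propagate a deterministic shear offset plus random gust dispersion to the decision height and count the runs that stay within an allowable window. All numbers below are invented for illustration and are not from the report.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy sketch of the model's core idea: random gusts plus a deterministic
# wind shear acting on a closed-loop approach. Numbers are hypothetical.
n_runs = 20_000
shear_offset = 2.0                   # m, deterministic shear-induced bias
gust_sigma = 4.0                     # m, 1-sigma gust-induced dispersion

# Glideslope deviation at the decision height for each simulated approach.
deviation = shear_offset + gust_sigma * rng.standard_normal(n_runs)

window = 12.0                        # m, allowable deviation (safety margin)
p_success = np.mean(np.abs(deviation) < window)
print(f"P(successful Category II approach) ~ {p_success:.3f}")
```

Varying `shear_offset`, `gust_sigma`, and `window` mimics the report's purpose of estimating how changes in system elements and disturbances affect precision of control and the available margins of safety.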
Illustrative visualization of 3D city models
NASA Astrophysics Data System (ADS)
Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian
2005-03-01
This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information, complementing visual interfaces based on the Virtual Reality paradigm and offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
Learning Setting-Generalized Activity Models for Smart Spaces
Cook, Diane J.
2011-01-01
The data mining and pervasive computing technologies found in smart homes offer unprecedented opportunities for providing context-aware services, including health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to provide these services, smart environment algorithms need to recognize and track activities that people normally perform as part of their daily routines. However, activity recognition has typically involved gathering and labeling large amounts of data in each setting to learn a model for activities in that setting. We hypothesize that generalized models can be learned for common activities that span multiple environment settings and resident types. We describe our approach to learning these models and demonstrate the approach using eleven CASAS datasets collected in seven environments. PMID:21461133
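The setting-generalization hypothesis can be sketched with a toy experiment: pool labelled activity data from several synthetic "homes", fit one model, and test it in an unseen home. The features, activities, and nearest-centroid classifier below are illustrative stand-ins for the CASAS data and the paper's actual learners.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for multi-home sensor data: each activity yields a
# generic feature vector (duration, motion events, hour of day), with a
# small per-home offset. Feature scheme and numbers are invented.
def make_home(offset, n=200):
    labels = rng.integers(0, 2, n)                     # 0 = "sleep", 1 = "cook"
    centers = np.array([[8.0, 1.0, 2.0], [0.5, 9.0, 18.0]])
    X = centers[labels] + offset + rng.standard_normal((n, 3))
    return X, labels

train_homes = [make_home(rng.normal(0, 0.3, 3)) for _ in range(6)]
X_test, y_test = make_home(rng.normal(0, 0.3, 3))      # unseen seventh home

# Generalized model: class centroids pooled over all training settings.
X_all = np.vstack([X for X, _ in train_homes])
y_all = np.concatenate([y for _, y in train_homes])
centroids = np.array([X_all[y_all == c].mean(axis=0) for c in (0, 1)])

pred = np.argmin(((X_test[:, None, :] - centroids) ** 2).sum(-1), axis=1)
acc = np.mean(pred == y_test)
print(f"accuracy in unseen setting: {acc:.2f}")
```

When per-home variation is small relative to between-activity differences, the pooled model transfers to the unseen setting without any labelled data from it, which is the paper's central claim in miniature.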
Disk flexibility effects on the rotordynamics of the SSME high pressure turbopumps
NASA Technical Reports Server (NTRS)
Flowers, George T.
1990-01-01
Rotordynamical analyses are typically performed using rigid disk models. Studies of rotor models in which the effects of disk flexibility were included indicate that it may be an important effect for many systems. This issue is addressed with respect to the Space Shuttle Main Engine high pressure turbopumps. Finite element analyses were performed for simplified free-free flexible disk rotor models, and the modes and frequencies were compared to those of a rigid disk model. Equations were developed to account for disk flexibility in rotordynamical analysis. Simulation studies were conducted to assess the influence of disk flexibility on the HPOTP. Some recommendations are given as to the importance of disk flexibility and how this project should proceed.
Models of stochastic gene expression
NASA Astrophysics Data System (ADS)
Paulsson, Johan
2005-06-01
Gene expression is an inherently stochastic process: Genes are activated and inactivated by random association and dissociation events, transcription is typically rare, and many proteins are present in low numbers per cell. The last few years have seen an explosion in the stochastic modeling of these processes, predicting protein fluctuations in terms of the frequencies of the probabilistic events. Here I discuss commonalities between theoretical descriptions, focusing on a gene-mRNA-protein model that includes most published studies as special cases. I also show how expression bursts can be explained as simplistic time-averaging, and how generic approximations can allow for concrete interpretations without requiring concrete assumptions. Measures and nomenclature are discussed to some extent and the modeling literature is briefly reviewed.
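The gene-mRNA-protein model discussed here is commonly simulated with Gillespie's stochastic simulation algorithm. A minimal sketch of the two-stage version (transcription, mRNA decay, translation, protein decay) with illustrative rates:

```python
import random

random.seed(6)

# Two-stage expression model: mRNA made at rate k_m, degraded at g_m;
# protein made at k_p per mRNA, degraded at g_p. Rates are illustrative.
k_m, g_m, k_p, g_p = 2.0, 1.0, 10.0, 0.1

def simulate(t_end=200.0):
    t, m, p = 0.0, 0, 0
    while True:
        r1, r2, r3, r4 = k_m, g_m * m, k_p * m, g_p * p
        total = r1 + r2 + r3 + r4
        t += random.expovariate(total)       # time to next reaction
        if t > t_end:
            return m, p
        u = random.random() * total          # pick which reaction fired
        if u < r1:              m += 1       # transcription
        elif u < r1 + r2:       m -= 1       # mRNA decay
        elif u < r1 + r2 + r3:  p += 1       # translation
        else:                   p -= 1       # protein decay

samples = [simulate() for _ in range(100)]
mean_m = sum(m for m, _ in samples) / len(samples)
mean_p = sum(p for _, p in samples) / len(samples)
print(f"mean mRNA ~ {mean_m:.1f} (theory {k_m / g_m:.1f}), "
      f"mean protein ~ {mean_p:.0f} (theory {k_m * k_p / (g_m * g_p):.0f})")
```

At steady state the means should approach k_m/g_m mRNAs and k_m·k_p/(g_m·g_p) proteins, so the printed values can be checked against theory; the run-to-run spread is the protein fluctuation that the reviewed models predict analytically.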
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. 
As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues, including parallelization and problem dimension reduction.
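The Pareto-optimality criterion at the heart of PMOGO can be sketched directly: given candidate models scored on two objectives (say, data misfit and a regularization term), keep every model that is not dominated in both objectives simultaneously. The candidate scores below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic objective values for 300 candidate models: data misfit and
# model roughness, both to be minimized.
n = 300
F = np.column_stack([rng.uniform(0.0, 1.0, n),    # misfit
                     rng.uniform(0.0, 1.0, n)])   # roughness

def pareto_mask(F):
    keep = np.ones(len(F), dtype=bool)
    for i, f in enumerate(F):
        # dominated if another point is <= in both objectives and < in one
        dominated = np.all(F <= f, axis=1) & np.any(F < f, axis=1)
        keep[i] = not dominated.any()
    return keep

front = F[pareto_mask(F)]
print(f"{len(front)} Pareto-optimal models out of {n}")
```

The surviving suite traces the trade-off curve between fitting the data and keeping the model simple, which is exactly the assessment a single weighted-sum minimization would collapse into one point.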
Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A
2017-01-21
Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms, both odd- and even-order, for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26 cm diameter spherical volume of this gradient system, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th, including either both even- and odd-order terms or odd-order terms only. The results showed that the model coefficients of this gradient system can be successfully estimated.
The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients.
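The calibration idea, solving for field-model coefficients that minimize the error between true and estimated fiducial positions, can be sketched in one dimension with a Legendre-polynomial distortion model. The coefficients, grid, and noise level are invented; the real procedure is iterative and three-dimensional, whereas this linear toy is solved in a single least-squares step. Note the even-order terms, analogous to those required by the asymmetric gradient.

```python
import numpy as np

rng = np.random.default_rng(8)

# 1-D toy: the field maps position x to a distorted readout
# x_meas = x + sum_k c_k * P_k(x/L). Coefficient values are invented;
# even-order terms (k = 2, 4) are deliberately included.
true_c = {2: 0.004, 3: -0.010, 4: 0.002}

L = 0.13                                      # 26 cm DSV half-width (m)
x_true = np.linspace(-L, L, 40)               # known fiducial positions
basis = np.polynomial.legendre.legvander(x_true / L, 5)   # orders 0..5
x_meas = x_true + sum(c * basis[:, k] for k, c in true_c.items())
x_meas += 1e-5 * rng.standard_normal(x_true.size)         # position noise

# Calibration: least-squares fit of the coefficients from the fiducials.
coef, *_ = np.linalg.lstsq(basis, x_meas - x_true, rcond=None)
resid = x_meas - x_true - basis @ coef
rmse_mm = 1e3 * np.sqrt(np.mean(resid ** 2))
print(f"residual RMSE after correction: {rmse_mm:.3f} mm")
```

Dropping the even-order columns from `basis` would leave the k = 2 and k = 4 distortion unexplained, the 1-D analogue of why odd-only models fail for an asymmetric gradient.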
Armstrong, Edward P
2010-04-01
Vaccines have demonstrated cost-effectiveness in managed care through the prevention of disease. As new vaccines for previously untargeted conditions are developed, pharmacoeconomic modeling is becoming even more critical for the quantification of value in the health care industry. Two recently developed vaccines aimed at prevention of infection from human papillomavirus (HPV) types 16 and 18 have proven to be highly efficacious. HPV 16 and 18 are the 2 most common oncogenic strains of HPV and are responsible for 70% of cervical cancer cases worldwide. Persistent infection with an oncogenic HPV type is a known cause of cervical cancer. Therefore, prevention of cervical cancer via HPV vaccination may have a significant financial impact. To qualitatively review existing mathematical models of the cost-effectiveness of prophylactic HPV vaccination, with an emphasis on the impact on managed care in the United States. Mathematical models of the cost-effectiveness of HPV vaccination based on U.S. data were reviewed. A search of the PubMed database was conducted using the search terms "HPV," "vaccine," and "cost-effectiveness" for articles published before February 22, 2010. Studies employing mathematical models to estimate the cost-effectiveness of HPV vaccination in healthy subjects from the United States were included. Models based on data or populations from outside of the United States were excluded. Outcomes were measured with incremental cost-effectiveness ratios (ICERs), typically in units of quality-adjusted life expectancy (quality-adjusted life years [QALYs] gained). Most studies included in this review modeled vaccination of a cohort or population of females aged 12 years. Assessment of catch-up vaccination in females (through ages 24 to 26 years) was included in two reports. One study examined vaccination in older females (aged 35, 40, and 45 years).
Models typically compared a strategy of HPV vaccination with the current practice of cervical screening (sampling of cervical cells for disease detection) alone. 11 studies of cost-effectiveness modeling of HPV vaccination were included in this review. A direct quantitative comparison of model results is challenging due to the utilization of different model types as well as differences in variables selected within the same model type. Each model produced a range of cost-effectiveness ratios, dependent on variables included in sensitivity analyses and model assumptions. Sensitivity analyses revealed the lowest ICER to be $997 per QALY gained and the highest ICER to be $12,749,000 per QALY gained. This enormous range highlights the need to clarify what model assumptions are being made. The 2 studies that included modeling of catch-up vaccination scenarios in females older than age 12 years also produced a wide range of ICERs. One study, assuming 90% efficacy, 100% coverage, and lifelong immunity, modeled catch-up vaccination in all females aged 12 to 24 years and yielded an ICER of $4,666 per QALY. If the duration of protection was limited to 10 years, then costs increased to $21,121 per QALY. The other study modeling catch-up HPV vaccination assumed 100% efficacy, 75% coverage, and lifelong immunity. ICERs in this study for outcomes relating to cervical cancer ranged from $43,600 per QALY in the base model vaccinating only 12 year olds with no catch-up vaccination, to $152,700 in a model including catch-up vaccination through age 26 years. Although catch-up to age 21 years resulted in a cost of $120,400 per QALY, the ICER decreased to $101,300 per QALY if model outcomes related to prevention of genital warts were also included. The lone study modeling vaccination in women aged 35 to 45 years resulted in an ICER range of $116,950 to $272,350 per QALY when compared with annual and biennial cytological screening. 
Cost-effectiveness was defined as an ICER at or below $100,000 per QALY gained. All models of female adolescent vaccination were able to produce vaccination strategies that would be cost-effective according to this definition in addition to many strategies that would be cost-prohibitive. Variables influential in determining cost-effectiveness of HPV vaccination included the frequency of accompanying cervical screening, the age at which screening is initiated, vaccination efficacy, duration of vaccine protection, and the age range of females to be vaccinated. The actual effectiveness of HPV vaccination in the female population will also depend on levels of vaccine uptake or coverage and compliance in completing all vaccine doses. Clinical studies have shown HPV vaccination to be highly efficacious and potentially lifesaving if administered to females naive or unexposed to vaccine HPV types. Modeling studies have also shown that HPV vaccination can be cost-effective with an ICER of $100,000 or less per QALY gained if administered to females aged 12 years in the context of cervical screening intervals typically greater than 1 year. Catch-up vaccination through 21 years of age increases the cost per QALY to more than $100,000. Until real-world coverage rates increase, cost-effectiveness modeling of HPV vaccination underestimates the actual cost per QALY.
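The figures above all reduce to the same incremental ratio: extra cost divided by extra quality-adjusted life expectancy. A minimal sketch of the calculation, using hypothetical cohort numbers rather than values from any of the reviewed studies:

```python
def icer(cost_new, cost_comparator, qaly_new, qaly_comparator):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_comparator) / (qaly_new - qaly_comparator)

# Hypothetical per-person numbers (illustrative only): vaccination plus
# screening costs $1,200 more than screening alone and yields 0.024
# additional QALYs, giving $50,000 per QALY gained -- under the $100,000
# threshold used in this review.
ratio = icer(cost_new=2200.0, cost_comparator=1000.0,
             qaly_new=22.424, qaly_comparator=22.400)
print(round(ratio))  # 50000
```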
A physical and economic model of the nuclear fuel cycle
NASA Astrophysics Data System (ADS)
Schneider, Erich Alfred
A model of the nuclear fuel cycle that is suitable for use in strategic planning and economic forecasting is presented. The model, to be made available as a stand-alone software package, requires only a small set of fuel cycle and reactor specific input parameters. Critical design criteria include ease of use by nonspecialists, suppression of errors to within a range dictated by unit cost uncertainties, and limitation of runtime to under one minute on a typical desktop computer. Collision probability approximations to the neutron transport equation that lead to a computationally efficient decoupling of the spatial and energy variables are presented and implemented. The energy dependent flux, governed by coupled integral equations, is treated by multigroup or continuous thermalization methods. The model's output includes a comprehensive nuclear materials flowchart that begins with ore requirements, calculates the buildup of 24 actinides as well as fission products, and concludes with spent fuel or reprocessed material composition. The costs, direct and hidden, of the fuel cycle under study are also computed. In addition to direct disposal and plutonium recycling strategies in current use, the model addresses hypothetical cycles. These include cycles chosen for minor actinide burning and for their low weapons-usable content.
Glynne-Jones, Peter; Mishra, Puja P; Boltryk, Rosemary J; Hill, Martyn
2013-04-01
A finite element based method is presented for calculating the acoustic radiation force on arbitrarily shaped elastic and fluid particles. Importantly for future applications, this development will permit the modeling of acoustic forces on complex structures such as biological cells, and the interactions between them and other bodies. The model is based on a non-viscous approximation, allowing the results from an efficient, numerical, linear scattering model to provide the basis for the second-order forces. Simulation times are of the order of a few seconds for an axi-symmetric structure. The model is verified against a range of existing analytical solutions (typical accuracy better than 0.1%), including those for cylinders, elastic spheres that are of significant size compared to the acoustic wavelength, and spheroidal particles.
Goldstein, Benjamin A; Navar, Ann Marie; Carter, Rickey E
2017-06-14
Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors which operate in the same way on everyone, and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for development of risk prediction models. Typically presented as black box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider trying to predict mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to approach the diffuse field of machine learning. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Cardiology.
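One limitation the review names is that a regression term acts uniformly throughout a marker's range. A minimal sketch of how a tree-style learner relaxes this, using a single decision stump on a hypothetical lab marker (toy data, not from the study's health records):

```python
def best_stump(values, labels):
    """Find the threshold on one lab marker that best separates outcomes.

    Unlike a linear regression term, a stump lets the marker matter only
    beyond a cut point -- one of the non-linearities tree-based learners
    capture automatically.
    """
    best = (None, -1.0)
    for t in sorted(set(values)):
        # accuracy of the rule "predict death if marker >= t"
        correct = sum((v >= t) == bool(y) for v, y in zip(values, labels))
        acc = correct / len(labels)
        if acc > best[1]:
            best = (t, acc)
    return best

# Toy creatinine-like marker values (hypothetical); 1 = died.
marker = [0.8, 0.9, 1.0, 1.1, 2.3, 2.6, 3.0, 3.4]
died   = [0,   0,   0,   0,   1,   1,   1,   1]
thresh, acc = best_stump(marker, died)
print(thresh, acc)  # 2.3 1.0
```

Real tree ensembles (random forests, gradient boosting) combine many such splits across predictors, which is how they model threshold effects and interactions without hand-specifying them.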
Cazon, Aitor; Kelly, Sarah; Paterson, Abby M; Bibb, Richard J; Campbell, R Ian
2017-09-01
Rheumatoid arthritis is a chronic disease affecting the joints. Treatment can include immobilisation of the affected joint with a custom-fitting splint, which is typically fabricated by hand from low temperature thermoplastic, but the approach poses several limitations. This study focused on the evaluation, by finite element analysis, of additive manufacturing techniques for wrist splints in order to improve upon the typical splinting approach. An additive manufactured/3D printed splint, specifically designed to be built using Objet Connex multi-material technology and a virtual model of a typical splint, digitised from a real patient-specific splint using three-dimensional scanning, were modelled in computer-aided design software. Forty finite element analysis simulations were performed in flexion-extension and radial-ulnar wrist movements to compare the displacements and the stresses. Simulations have shown that for low severity loads, the additive manufacturing splint has 25%, 76% and 27% less displacement in the main loading direction than the typical splint in flexion, extension and radial, respectively, while ulnar values were 75% lower in the traditional splint. For higher severity loads, the flexion and extension movements resulted in deflections that were 24% and 60%, respectively, lower in the additive manufacturing splint. However, for higher severity loading, the radial deflection values were very similar in both splints and ulnar movement deflection was higher in the additive manufacturing splint. A physical prototype of the additive manufacturing splint was also manufactured and was tested under normal conditions to validate the finite element analysis data. Results from static tests showed maximum displacements of 3.46, 0.97, 3.53 and 2.51 mm in the flexion, extension, radial and ulnar directions, respectively.
According to these results, the present research argues that, from a technical point of view, the additive manufacturing splint design performs at the same or an even better level in displacement and stress values than the typical low temperature thermoplastic approach and is therefore a feasible approach to splint design and manufacture.
Burns, A.W.
1988-01-01
This report describes an interactive-accounting model used to simulate streamflow, chemical-constituent concentrations and loads, and water-supply operations in a river basin. The model uses regression equations to compute flow from incremental (internode) drainage areas. Conservative chemical constituents (typically dissolved solids) also are computed from regression equations. Both flow and water quality loads are accumulated downstream. Optionally, the model simulates the water use and the simplified groundwater systems of a basin. Water users include agricultural, municipal, industrial, and in-stream users, and reservoir operators. Water users list their potential water sources, including direct diversions, groundwater pumpage, interbasin imports, or reservoir releases, in the order in which they will be used. Direct diversions conform to basinwide water law priorities. The model is interactive, and although the input data exist in files, the user can modify them interactively. A major feature of the model is its color-graphic-output options. This report includes a description of the model, organizational charts of subroutines, and examples of the graphics. Detailed format instructions for the input data, example files of input data, definitions of program variables, and a listing of the FORTRAN source code are attachments to the report. (USGS)
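The node-to-node accounting described above can be sketched as a downstream accumulation of regression-estimated incremental flows. The regression coefficients below are illustrative placeholders, not the fitted USGS values:

```python
def incremental_flow(area_km2, a=0.5, b=0.02):
    """Toy linear regression for internode flow (m^3/s) from incremental
    drainage area. Coefficients a, b are illustrative assumptions."""
    return a + b * area_km2

def route_downstream(areas):
    """Accumulate incremental flows from the headwater node downstream,
    mirroring how the model sums flow (and, analogously, constituent
    loads) node by node."""
    total, profile = 0.0, []
    for area in areas:
        total += incremental_flow(area)
        profile.append(round(total, 2))
    return profile

# Three internode drainage areas, upstream to downstream (km^2):
print(route_downstream([100, 250, 60]))  # [2.5, 8.0, 9.7]
```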
LOSCAR: Long-term Ocean-atmosphere-Sediment CArbon cycle Reservoir Model
NASA Astrophysics Data System (ADS)
Zeebe, R. E.
2011-06-01
The LOSCAR model is designed to efficiently compute the partitioning of carbon between ocean, atmosphere, and sediments on time scales ranging from centuries to millions of years. While a variety of computationally inexpensive carbon cycle models are already available, many are missing a critical sediment component, which is indispensable for long-term integrations. One of LOSCAR's strengths is the coupling of ocean-atmosphere routines to a computationally efficient sediment module. This allows, for instance, adequate computation of CaCO3 dissolution, calcite compensation, and long-term carbon cycle fluxes, including weathering of carbonate and silicate rocks. The ocean component includes various biogeochemical tracers such as total carbon, alkalinity, phosphate, oxygen, and stable carbon isotopes. We have previously published applications of the model tackling future projections of ocean chemistry and weathering, pCO2 sensitivity to carbon cycle perturbations throughout the Cenozoic, and carbon/calcium cycling during the Paleocene-Eocene Thermal Maximum. The focus of the present contribution is the detailed description of the model including numerical architecture, processes and parameterizations, tuning, and examples of input and output. Typical CPU integration times of LOSCAR are of order seconds for several thousand model years on current standard desktop machines. The LOSCAR source code in C can be obtained from the author by sending a request to loscar.model@gmail.com.
NASA Astrophysics Data System (ADS)
Scales, W.; Mahmoudian, A.; Fu, H.; Bordikar, M. R.; Samimi, A.; Bernhardt, P. A.; Briczinski, S. J., Jr.; Kosch, M. J.; Senior, A.; Isham, B.
2014-12-01
There has been significant interest in so-called narrowband Stimulated Electromagnetic Emission (SEE) over the past several years due to recent discoveries at the High Frequency Active Auroral Research Program (HAARP) facility near Gakona, Alaska. Narrowband SEE (NSEE) has been defined as spectral features in the SEE spectrum typically within 1 kHz of the transmitter (or pump) frequency. SEE is due to nonlinear processes leading to re-radiation at frequencies other than the pump wave frequency during heating of the ionospheric plasma with high power HF radio waves. Although NSEE exhibits a richly complex structure, it has now been shown, after a substantial number of observations at HAARP, that NSEE can be grouped into two basic classes. The first class comprises those spectral features, associated with Stimulated Brillouin Scatter (SBS), which typically occur when the pump frequency is not close to electron gyro-harmonic frequencies. Typically, these spectral features are within roughly 50 Hz of the pump wave frequency, where it is to be noted that the O+ ion gyro-frequency is roughly 50 Hz. The second class of spectral features corresponds to the case when the pump wave frequency is typically within roughly 10 kHz of electron gyro-harmonic frequencies. In this case, spectral features ordered by harmonics of ion gyro-frequencies are typically observed, termed Stimulated Ion Bernstein Scatter (SIBS). This presentation will first provide an overview of the recent NSEE experimental observations at HAARP. Both SBS and SIBS observations will be discussed, as well as their relationship to each other. A possible theoretical formulation in terms of parametric decay instabilities and computational modeling will be provided.
Possible applications of NSEE will be pointed out, including triggering diagnostics for artificial ionization layer formation, proton precipitation event diagnostics, electron temperature measurements in the heated volume, and detection of heavy ion species. Finally, the potential for observing such SEE at the European Incoherent Scatter (EISCAT) facility will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John
Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equations on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, with buoyancy treated using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004) including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
Signature analysis of ballistic missile warhead with micro-nutation in terahertz band
NASA Astrophysics Data System (ADS)
Li, Ming; Jiang, Yue-song
2013-08-01
In recent years, the micro-Doppler effect has been proposed as a new technique for signature analysis and extraction of radar targets. The ballistic missile is a typical radar target and has received much attention in current research because of the complexity of its motions. The trajectory of a ballistic missile can generally be divided into three stages: boost phase, midcourse phase and terminal phase. The midcourse phase is the most important phase for radar target recognition and interception. In this stage, the warhead undergoes a typical micro-motion called micro-nutation, which consists of three basic micro-motions: spinning, coning and wiggle. This paper addresses the issue of signature analysis of a ballistic missile warhead in the terahertz band by discussing the micro-Doppler effect. We establish a simplified (cone-shaped) model for the missile warhead, followed by the micro-motion models for spinning, coning and wiggle. Based on the basic formulas of these typical micro-motions, we first derive the theoretical formula of micro-nutation, which is the main micro-motion of the missile warhead. Then, we calculate the micro-Doppler frequency in both X band and terahertz band via these micro-Doppler formulas. Simulations are given to show the superiority of the proposed method for the recognition and detection of radar micro targets in the terahertz band.
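For a single scatterer rotating about the radar line of sight, the peak micro-Doppler shift scales linearly with carrier frequency, which is the basis of the claimed terahertz advantage. A sketch of this standard scaling with illustrative spin parameters (not the paper's specific warhead values):

```python
C = 3.0e8  # speed of light, m/s

def micro_doppler_peak(f_carrier_hz, omega_rad_s, radius_m):
    """Peak micro-Doppler shift of a point scatterer rotating at radius r:
    f_mD = 2 * f_c * omega * r / c  (monostatic radar)."""
    return 2.0 * f_carrier_hz * omega_rad_s * radius_m / C

# Illustrative spin: 2 revolutions per second, scatterer 0.3 m off-axis.
omega = 2.0 * 3.141592653589793 * 2.0
x_band = micro_doppler_peak(10e9, omega, 0.3)     # X band, ~251 Hz
thz    = micro_doppler_peak(0.22e12, omega, 0.3)  # 0.22 THz band
print(round(thz / x_band))  # 22: the shift grows with carrier frequency
```

The 22x larger shift in the terahertz band spreads the micro-motion signature over a wider, more easily resolved frequency span, which is why the micro-Doppler features of small rotating targets are easier to extract there.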
Moody, C T; Baker, B L; Blacher, J
2018-05-10
Despite studies of how parent-child interactions relate to early child language development, few have examined the continued contribution of parenting to more complex language skills through the preschool years. The current study explored how positive and negative parenting behaviours relate to growth in complex syntax learning from child age 3 to age 4 years, for children with typical development or developmental delays (DDs). Participants were children with or without DD (N = 60) participating in a longitudinal study of development. Parent-child interactions were transcribed and coded for parenting domains and child language. Multiple regression analyses were used to identify the contribution of parenting to complex syntax growth in children with typical development or DD. Analyses supported a final model, F(9,50) = 11.90, P < .001, including a significant three-way interaction between positive parenting behaviours, negative parenting behaviours and child delay status. This model explained 68.16% of the variance in children's complex syntax at age 4. Simple two-way interactions indicated differing effects of parenting variables for children with or without DD. Results have implications for understanding of complex syntax acquisition in young children, as well as implications for interventions. © 2018 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Khatri, Raina; Henderson, Charles; Cole, Renée; Froyd, Jeffrey E.; Friedrichsen, Debra; Stanford, Courtney
2016-06-01
[This paper is part of the Focused Collection on Preparing and Supporting University Physics Educators.] The physics education research community has produced a wealth of knowledge about effective teaching and learning of college level physics. Based on this knowledge, many research-proven instructional strategies and teaching materials have been developed and are currently available to instructors. Unfortunately, these intensive research and development activities have failed to influence the teaching practices of many physics instructors. This paper describes interim results of a larger study to develop a model of designing materials for successful propagation. The larger study includes three phases, the first two of which are reported here. The goal of the first phase was to characterize typical propagation practices of education developers, using data from a survey of 1284 National Science Foundation (NSF) principal investigators and focus group data from eight disciplinary groups of NSF program directors. The goal of the second phase was to develop an understanding of successful practice by studying three instructional strategies that have been well propagated. The result of the first two phases is a tentative model of designing for successful propagation, which will be further validated in the third phase through purposeful sampling of additional well-propagated instructional strategies along with typical education development projects. We found that interaction with potential adopters was one of the key missing ingredients in typical education development activities. Education developers often develop a polished product before getting feedback, rely on mass-market communication channels for dissemination, and do not plan for supporting adopters during implementation. The tentative model resulting from this study identifies three key propagation activities: interactive development, interactive dissemination, and support of adopters. 
Interactive development uses significant feedback from potential adopters to develop a strong product suitable for use in many settings. Interactive dissemination uses personal interactions to reach and motivate potential users. Support of adopters is missing from typical propagation practice and is important to reduce the burden of implementation and increase the likelihood of successful adoption.
Infrared weak corrections to strongly interacting gauge boson scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciafaloni, Paolo; Urbano, Alfredo
2010-04-15
We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.
A Combination Therapy of JO-1 and Chemotherapy in Ovarian Cancer Models
2014-12-01
stress, as it was mild and resolved quickly. Other minor histologic changes as stated are unlikely to have been clinically significant...in the hematology data, with WBC count of 25,440 on July 24th. Stress may be playing a role in the leukocytosis (but is not typically associated...multifactorial, with potential factors including nutritional, stress or possibly food intolerances or allergies. Other minor changes are interpreted to
Large-Scale Physical Models of Thermal Remediation of DNAPL Source Zones in Aquitards
2009-05-01
pressure at the bottom of the tank. The higher pressure is reflected in higher measured water levels in external gauges. Figure 63: 3D Cross...than atmospheric. This higher pressure can raise the apparent water level in a sight gauge or external overflow and can even drive more fluid through...the water table. All met or exceeded their goals. Typical turnkey unit costs (including design, permitting, fabrication, mobilization, drilling
2012-06-01
polyolefin layer, typically polypropylene or polyethylene. The separator keeps the anodic and cathodic layers from touching. An internal short-circuit is...be seen that both spot welds and laser welds are used in the construction of the individual cylindrical cell. When constructing larger...manufacturing, to include resistance welding, laser welding, ultrasonic welding, and mechanical joining are detailed in Shawn Lee, S., et al(2010) (9
Arcjet thruster research and technology, phase 2
NASA Technical Reports Server (NTRS)
Yano, Steve E.
1991-01-01
The principal objective of Phase 2 was to produce an engineering model N2H4 arcjet system which met typical performance, lifetime, environmental, and interface specifications required to support a 10-year N-S stationkeeping mission for a communications spacecraft. The system includes an N2H4 arcjet thruster, power conditioning unit (PCU), and the interconnecting power cable assembly. This objective was met with the successful conclusion of an extensive system test series.
To the theory of particle lifting by terrestrial and Martian dust devils
NASA Astrophysics Data System (ADS)
Kurgansky, M. V.
2018-01-01
The combined Rankine vortex model is applied to describe the radial profile of azimuthal velocity in atmospheric dust devils, and a simplified model is proposed of the turbulent surface boundary layer beneath the Rankine vortex periphery, which corresponds to the potential vortex. Based on the results of Burggraf et al. (1971), it is accepted that the radial velocity near the ground in the potential vortex greatly exceeds the azimuthal velocity, which makes tractable the problem of determining the surface shear stress, including the case of the turbulent surface boundary layer. The constructed model explains how the threshold shear velocity for aeolian transport is exceeded in typical dust-devil vortices both on Earth and on Mars.
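The combined Rankine profile used for the azimuthal velocity is simple to state: solid-body rotation inside the core, potential (1/r) decay outside. A sketch with illustrative dust-devil-scale numbers, not values fitted in the paper:

```python
def rankine_azimuthal(r, r_core, v_max):
    """Combined Rankine vortex azimuthal velocity:
    solid-body rotation (v ~ r) inside the core radius,
    potential-vortex decay (v ~ 1/r) outside it."""
    if r <= r_core:
        return v_max * r / r_core
    return v_max * r_core / r

# Illustrative dust devil: 10 m core radius, 15 m/s peak swirl velocity.
print(rankine_azimuthal(5.0, 10.0, 15.0))   # 7.5  (inside the core)
print(rankine_azimuthal(30.0, 10.0, 15.0))  # 5.0  (potential-vortex tail)
```

The boundary-layer analysis in the paper applies beneath the outer, potential-vortex branch of this profile, where the near-surface radial inflow dominates the swirl.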
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1993-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X DataSlice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1992-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X Data Slice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
NASA Technical Reports Server (NTRS)
Mulcay, W. J.; Rose, R. A.
1980-01-01
Aerodynamic characteristics obtained in a helical flow environment utilizing a rotary balance located in the Langley spin tunnel are presented in plotted form for a 1/6-scale, single-engine, low-wing, general aviation model (model C). The configurations tested included the basic airplane and control deflections, wing leading edge and fuselage modification devices, tail designs, and airplane components. Data are presented without analysis for an angle-of-attack range of 8 deg to 90 deg and for clockwise and counterclockwise rotations covering an Ωb/2V range from 0 to 0.9.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kornreich, Drew E; Vaidya, Rajendra U; Ammerman, Curtt N
Integrated Computational Materials Engineering (ICME) is a novel overarching approach to bridge length and time scales in computational materials science and engineering. This approach integrates all elements of multi-scale modeling (including various empirical and science-based models) with materials informatics to provide users the opportunity to tailor material selections based on stringent application needs. Typically, materials engineering has focused on structural requirements (stress, strain, modulus, fracture toughness, etc.) while multi-scale modeling has been science focused (mechanical threshold strength models, grain-size models, solid-solution strengthening models, etc.). Materials informatics (mechanical property inventories), on the other hand, is extensively data focused. All of these elements are combined within the framework of ICME to create an architecture for the development, selection, and design of new composite materials for challenging environments. We propose development of the foundations for applying ICME to composite materials development for nuclear and high-radiation environments (including nuclear-fusion energy reactors, nuclear-fission reactors, and accelerators). We expect to combine all elements of current material models (including thermo-mechanical and finite-element models) into the ICME framework. This will be accomplished through the use of various mathematical modeling constructs. These constructs will allow the integration of constituent models, which in turn would allow us to use the adaptive strengths of a combinatorial scheme (fabrication and computational) for creating new composite materials. A sample problem where these concepts are used is provided in this summary.
Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N
2016-04-01
Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, from our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 fine speciation data, we have observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model that includes non-constant factor loadings changing over time and space, modelled with P-splines penalized via the generalized cross-validation (GCV) criterion. The model is implemented using the Expectation-Maximization (EM) algorithm, and we select the multiple spline smoothing parameters by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.
Modelling groundwater fractal flow with fractional differentiation via Mittag-Leffler law
NASA Astrophysics Data System (ADS)
Ahokposi, D. P.; Atangana, Abdon; Vermeulen, D. P.
2017-04-01
Modelling the flow of groundwater within a network of fractures is perhaps one of the most difficult exercises within the field of geohydrology. This physical problem has attracted the attention of several scientists across the globe. Two different types of differentiation have already been used in attempts to model this problem: classical and fractional differentiation. In this paper, we employed the most recent concept of differentiation, based on the non-local and non-singular kernel called the generalized Mittag-Leffler function, to reshape the model of groundwater fractal flow. We present the existence of a positive solution of the new model. Using the fixed-point approach, we establish the uniqueness of the positive solution. We solve the new model with three different numerical schemes: implicit, explicit and Crank-Nicolson. Experimental data collected from four constant-discharge tests conducted in a typical fractured crystalline rock aquifer of the Northern Limb (Bushveld Complex) in the Limpopo Province (South Africa) are compared with the numerical solutions. It is worth noting that the four boreholes (BPAC1, BPAC2, BPAC3, and BPAC4) are located on faults.
Effects of a Weak Planetary Field on a Model Venus Ionosphere
NASA Astrophysics Data System (ADS)
Luhmann, Janet G.; Ma, Yingjuan; Villarreal, Michaela
2014-05-01
There are a number of attributes of the near-Venus space environment and upper atmosphere that remain mysterious, including occasional large polar magnetic field structures seen on VEX and nightside ionospheric holes seen on PVO. We have been exploring the consequences of a weak global dipole magnetic field of Venus using results of BATS-R-US MHD simulations. An advantage of these models is that they include the effects on a realistic ionosphere. We compare some of the weak magnetosphere's ionospheric properties with the typical unmagnetized ionosphere case. The results show the differences can be quite subtle for dipole fields less than ~10 nT at the equator, as might be expected. Nevertheless the dipole fields do produce distinctive details, especially in the upper regions.
A Comparative Study of Some Dynamic Stall Models
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Kaza, K. R. V.
1987-01-01
Three semi-empirical aerodynamic stall models are compared with respect to their lift and moment hysteresis loop prediction, limit cycle behavior, ease of implementation, and feasibility in developing the parameters required for stall flutter prediction of advanced turbines. For the comparison of aeroelastic response prediction including stall, a typical section model and a plate structural model are considered. The response analysis includes both plunging and pitching motions of the blades. In model A, a correction to the angle of attack is applied when the angle of attack exceeds the static stall angle. In model B, a synthesis procedure is used for angles of attack above static stall angles, and time history effects are accounted for through the Wagner function. In both models the lift and moment coefficients for angles of attack below stall are obtained from tabular data for a given Mach number and angle of attack. In model C, referred to as the ONERA model, the lift and moment coefficients are given in the form of two differential equations, one for angles below stall and the other for angles above stall. The parameters of these equations are nonlinear functions of the angle of attack.
A simple microbial fuel cell model for improvement of biomedical device powering times.
Roxby, Daniel N; Tran, Nham; Nguyen, Hung T
2014-01-01
This study describes a Matlab-based Microbial Fuel Cell (MFC) model for a suspended microbial population in the anode chamber, for use of the MFC in powering biomedical devices. The model contains three main sections: microbial growth, microbial chemical uptake and secretion, and electrochemical modeling. The microbial growth portion is based on a Continuously Stirred Tank Reactor (CSTR) model with substrate and electron acceptors. Microbial stoichiometry is used to determine chemical concentrations and their rates of change and transfer within the MFC. These parameters are then used in the electrochemical modeling to calculate current, voltage, and power. The model was tested for typically exhibited MFC characteristics, including increased electrode distances and surface areas, overpotentials, and operating temperatures. Implantable biomedical devices require long-term powering, which is the main objective for MFCs. Towards this end, our model was tested with different initial substrate and electron acceptor concentrations, revealing that a four-fold increase in concentrations decreased the power output time by 50%. Additionally, the model predicts that for a 35.7% decrease in specific growth rate, a 50% increase in power longevity is possible.
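A minimal sketch of the growth portion described above, assuming a CSTR mass balance with standard Monod kinetics and explicit Euler time-stepping; all parameter values below are illustrative placeholders, not taken from the paper.

```python
def cstr_monod(X0, S0, mu_max, Ks, Y, D, S_in, dt, steps):
    """Euler integration of CSTR biomass (X) and substrate (S) balances
    with Monod growth kinetics:
        dX/dt = (mu - D) X
        dS/dt = D (S_in - S) - (mu / Y) X
    where mu = mu_max * S / (Ks + S) and D is the dilution rate."""
    X, S = X0, S0
    traj = []
    for _ in range(steps):
        mu = mu_max * S / (Ks + S)
        X += dt * (mu - D) * X
        S += dt * (D * (S_in - S) - (mu / Y) * X)
        S = max(S, 0.0)  # substrate cannot go negative
        traj.append((X, S))
    return traj

# hypothetical run: at steady state mu = D, so S* = Ks*D/(mu_max - D)
traj = cstr_monod(X0=0.05, S0=1.0, mu_max=0.4, Ks=0.2, Y=0.5,
                  D=0.1, S_in=1.0, dt=0.1, steps=2000)
```

In a full MFC model the biomass and substrate trajectories would then feed the stoichiometric and electrochemical sections to yield current, voltage, and power.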
Zhang, Hongshen; Chen, Ming
2013-11-01
In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and sustainable development in China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and threats of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The recycling industry was found to respond well to all the factors and to face good development opportunities. A cross-linked strategy analysis for typical exterior parts of China's passenger car industry was then conducted, based on the SWOT analysis strategies and the established SWOT matrix. Finally, based on the aforementioned research, a recycling industry model led by automobile manufacturers is promoted. Copyright © 2013 Elsevier Ltd. All rights reserved.
Elegent—An elastic event generator
NASA Astrophysics Data System (ADS)
Kašpar, J.
2014-03-01
Although elastic scattering of nucleons may look like a simple process, it presents a long-lasting challenge for theory. Due to the missing hard energy scale, perturbative QCD cannot be applied. Instead, many phenomenological/theoretical models have emerged. In this paper we present a unified implementation of some of the most prominent models in a C++ library, moreover extended to account for effects of the electromagnetic interaction. The library is complemented with a number of utilities, for instance programs to sample many distributions of interest in four-momentum transfer squared, t, impact parameter, b, and collision energy √s. These distributions at ISR, Spp̄S, RHIC, Tevatron and LHC energies are available for download from the project web site, both in the form of ROOT files and as PDF figures providing comparisons among the models. The package also includes a tool for Monte-Carlo generation of elastic scattering events, which can easily be embedded in any other program framework. Catalogue identifier: AERT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERT_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 10551 No. of bytes in distributed program, including test data, etc.: 126316 Distribution format: tar.gz Programming language: C++. Computer: Any in principle, tested on x86-64 architecture. Operating system: Any in principle, tested on GNU/Linux. RAM: Strongly depends on the task, but typically below 20 MB Classification: 11.6. External routines: ROOT, HepMC Nature of problem: Monte-Carlo simulation of elastic nucleon-nucleon collisions Solution method: Implementation of some of the most prominent phenomenological/theoretical models, providing a cumulative distribution function that is used for random event generation.
Running time: Strongly depends on the task, but typically below 1 h.
NASA Astrophysics Data System (ADS)
Ortiz, J. P.; Ortega, A. D.; Harp, D. R.; Boukhalfa, H.; Stauffer, P. H.
2017-12-01
Gas transport in unsaturated fractured media plays an important role in a variety of applications, including detection of underground nuclear explosions, transport from volatile contaminant plumes, shallow CO2 leakage from carbon sequestration sites, and methane leaks from hydraulic fracturing operations. Gas breakthrough times are highly sensitive to uncertainties associated with a variety of hydrogeologic parameters, including rock type, fracture aperture, matrix permeability, porosity, and saturation. Furthermore, two simplifying assumptions are typically employed when representing fracture flow and transport: aqueous phase transport is considered insignificant compared to gas phase transport in unsaturated fracture flow regimes, and instantaneous dissolution/volatilization of radionuclide gas is commonly assumed to reduce computational expense. We conduct this research using a twofold approach that combines laboratory gas experimentation and numerical modeling to verify and refine these simplifying assumptions in our current models of gas transport. Using a gas diffusion cell, we are able to measure air pressure transmission through fractured tuff core samples while also measuring Xe gas breakthrough with a mass spectrometer. We can thus create synthetic barometric fluctuations akin to those observed in field tests and measure the associated gas flow through the fracture and matrix pore space for varying degrees of fluid saturation. We then attempt to reproduce the experimental results using numerical models in the PFLOTRAN and FEHM codes to better understand the importance of different parameters and assumptions on gas transport. Our numerical approaches represent both single-phase gas flow with immobile water, as well as full multi-phase transport, in order to test the validity of assuming immobile pore water.
Our approaches also include the ability to simulate the reaction equilibrium kinetics of dissolution/volatilization in order to identify when the assumption of instantaneous equilibrium is reasonable. These efforts will aid us in our application of such models to larger, field-scale tests and improve our ability to predict gas breakthrough times.
Comparative analysis of LWR and FBR spent fuels for nuclear forensics evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Permana, Sidik; Suzuki, Mitsutoshi; Su'ud, Zaki
2012-06-06
Some interesting issues are attributed to the nuclide compositions of spent fuels from thermal reactors as well as fast reactors, such as the potential to reuse them as recycled fuel and the possibility that they could be managed as fuel for destructive devices. In addition, nuclear forensic analysis, which is related to spent fuel compositions, has become an interesting topic for evaluating the origin and composition of spent fuels from their footprints. Spent fuel compositions of different fuel types leave typical footprints from which the origin of those spent fuel compositions can be estimated. Some techniques or methods have been developed based on scientific and technological capabilities, including experimental and modeling (theoretical) aspects of analysis. Nuclear forensic footprints identify typical information about spent fuel compositions, such as enrichment, burnup or irradiation time, reactor type, as well as cooling time, which is related to the age of the spent fuel. This paper evaluates the typical spent fuel compositions of light water reactors (LWR) and fast breeder reactors (FBR) from the viewpoint of nuclear forensic footprints. The established depletion code ORIGEN is adopted to analyze LWR spent fuel (SF) for several burnup constants and decay times. For analyzing the spent fuel compositions of the FBR, coupled codes such as SLAROM, JOINT, and CITATION, with JFS-3-J-3.2R as the nuclear data library, have been adopted. Enriched U-235 fuel of oxide type is used as fresh fuel for the LWR, and a mixed oxide fuel (MOX) for the FBR; the MOX fuel of the FBR comes from the spent fuel of the LWR. Typical spent fuels from both LWR and FBR are compared to distinguish typical SF footprints based on nuclear forensic analysis.
A mathematical model of a lithium/thionyl chloride primary cell
NASA Technical Reports Server (NTRS)
Evans, T. I.; Nguyen, T. V.; White, R. E.
1987-01-01
A 1-D mathematical model for the lithium/thionyl chloride primary cell was developed to investigate methods of improving its performance and safety. The model includes many of the components of a typical lithium/thionyl chloride cell such as the porous lithium chloride film which forms on the lithium anode surface. The governing equations are formulated from fundamental conservation laws using porous electrode theory and concentrated solution theory. The model is used to predict 1-D, time dependent profiles of concentration, porosity, current, and potential as well as cell temperature and voltage. When a certain discharge rate is required, the model can be used to determine the design criteria and operating variables which yield high cell capacities. Model predictions can be used to establish operational and design limits within which the thermal runaway problem, inherent in these cells, can be avoided.
Profiling Campus Administration: A Demographic Survey of Campus Police Chiefs
ERIC Educational Resources Information Center
Linebach, Jared A.; Kovacsiss, Lea M.; Tesch, Brian P.
2011-01-01
Campus law enforcement faces unique challenges, as there are different societal expectations compared to municipal law enforcement. Municipal law enforcement models typically focus on traditionally reactive law and order, while campus law enforcement models typically focus on proactive responses to crime and its deterrence. Stressors experienced…
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
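The composite scaled sensitivity (CSS) statistic cited above can be sketched as follows, using forward finite differences on a hypothetical two-parameter model (a stand-in for TOPKAPI): CSS_j = sqrt((1/N) Σ_i [(∂y_i/∂p_j) p_j √w_i]²), so a well-informed parameter yields a large CSS and a weakly informed one a small CSS.

```python
import numpy as np

def composite_scaled_sensitivities(model, params, weights, eps=1e-6):
    """CSS_j = sqrt( mean_i [ (dy_i/dp_j) * p_j * sqrt(w_i) ]^2 ),
    with derivatives approximated by forward finite differences."""
    y0 = model(params)
    css = []
    for j in range(len(params)):
        p = params.copy()
        h = eps * max(abs(p[j]), 1.0)
        p[j] += h
        # dimensionless scaled sensitivities for parameter j
        dss = (model(p) - y0) / h * params[j] * np.sqrt(weights)
        css.append(float(np.sqrt(np.mean(dss**2))))
    return np.array(css)

# hypothetical response: an identifiable slope p0 and a nearly
# insensitive offset p1, mimicking strong vs. weak parameters
t = np.linspace(0.0, 1.0, 20)
toy = lambda p: p[0] * t + p[1] * 1e-3
css = composite_scaled_sensitivities(toy, np.array([2.0, 3.0]), np.ones(20))
```

Ranking parameters by CSS is what lets an analysis like the one above identify, with only a few dozen model runs, which of the 35 parameters the calibration data actually inform.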
Crowell, Sheila E.; Baucom, Brian R.; Yaptangco, Mona; Bride, Daniel; Hsiao, Ray; McCauley, Elizabeth; Beauchaine, Theodore P.
2014-01-01
Many depressed adolescents experience difficulty regulating their emotions. These emotion regulation difficulties appear to emerge in part from socialization processes within families and then generalize to other contexts. However, emotion dysregulation is typically assessed within the individual, rather than in the social relationships that shape and maintain dysregulation. In this study, we evaluated concordance of physiological and observational measures of emotion dysregulation during interpersonal conflict, using a multilevel actor-partner interdependence model (APIM). Participants were 75 mother-daughter dyads, including 50 depressed adolescents with or without a history of self-injury, and 25 typically developing controls. Behavior dysregulation was operationalized as observed aversiveness during a conflict discussion, and physiological dysregulation was indexed by respiratory sinus arrhythmia (RSA). Results revealed different patterns of concordance for control versus depressed participants. Controls evidenced a concordant partner (between-person) effect, and showed increased physiological regulation during minutes when their partner was more aversive. In contrast, clinical dyad members displayed a concordant actor (within-person) effect, becoming simultaneously physiologically and behaviorally dysregulated. Results inform current understanding of emotion dysregulation across multiple levels of analysis. PMID:24607894
Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.
Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi
2015-02-01
We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
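A sketch of the copula construction described above, assuming an AR(1) latent Gaussian series for the internal dynamics and, for concreteness, an exponential marginal in place of the paper's nonparametric Bayesian marginal: the cdf-inverse-cdf transform x_t = F⁻¹(Φ(z_t)) preserves the Gaussian dependence structure while imposing the target marginal.

```python
import numpy as np
from math import erf, sqrt, log

def std_normal_cdf(z):
    """Phi(z) via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def copula_ar1_exponential(n, phi, rate=1.0, seed=0):
    """Gaussian-copula AR(1) series with an Exp(rate) marginal:
    z_t is stationary AR(1) with unit variance, and
    x_t = F^{-1}(Phi(z_t)) with F^{-1}(u) = -ln(1-u)/rate."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal()
    x = np.empty(n)
    for t in range(n):
        if t:
            z = phi * z + sqrt(1.0 - phi**2) * rng.standard_normal()
        x[t] = -log(1.0 - std_normal_cdf(z)) / rate
    return x

x = copula_ar1_exponential(5000, phi=0.8)
```

The resulting series is positive and right-skewed (exponential marginal) yet strongly autocorrelated, illustrating how the two model components can be specified separately.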
NASA Astrophysics Data System (ADS)
Maples, S.; Fogg, G. E.; Maxwell, R. M.; Liu, Y.
2017-12-01
Civilizations have typically obtained water from natural and constructed surface-water resources throughout most of human history. Only during the last 50-70 years has a significant quantity of water for humans been obtained through pumping from wells. During this short time, alarming levels of groundwater depletion have been observed worldwide, especially in some semi-arid and arid regions that rely heavily on groundwater pumping from clastic sedimentary basins. In order to reverse the negative effects of over-exploitation of groundwater resources, we must transition from treating groundwater mainly as an extractive resource to one in which recharge and subsurface storage are pursued more aggressively. However, this remains a challenge because unlike surface-water reservoirs which are typically replenished over annual timescales, the complex geologic architecture of clastic sedimentary basins impedes natural groundwater recharge rates resulting in decadal or longer timescales for aquifer replenishment. In parts of California's Central Valley alluvial aquifer system, groundwater pumping has outpaced natural groundwater recharge for decades. Managed aquifer recharge (MAR) has been promoted to offset continued groundwater overdraft, but MAR to the confined aquifer system remains a challenge because multiple laterally-extensive silt and clay aquitards limit recharge rates in most locations. Here, we simulate the dynamics of MAR and identify potential recharge pathways in this system using a novel combination of (1) a high-resolution model of the subsurface geologic heterogeneity and (2) a physically-based model of variably-saturated, three-dimensional water flow. 
Unlike most groundwater models, which have coarse spatial resolution that obscures the detailed subsurface geologic architecture of these systems, our high-resolution model can pinpoint specific geologic features and locations that have the potential to 'short-circuit' aquitards and provide orders-of-magnitude greater recharge rates and volumes than would be possible over the rest of the landscape. Our results highlight the importance of capturing detailed geologic heterogeneity and physical processes that are not typically included in groundwater models when evaluating groundwater recharge potential.
NASA Astrophysics Data System (ADS)
Kalinauskaite, Eimante; Murphy, Anthony; McAuley, Ian; Trappe, Neil A.; Bracken, Colm P.; McCarthy, Darragh N.; Doherty, Stephen; Gradziel, Marcin L.; O'Sullivan, Creidhe; Maffei, Bruno; Lamarre, Jean-Michel A.; Ade, Peter A. R.; Savini, Giorgio
2016-07-01
Multimode horn antennas can be utilized as high efficiency feeds for bolometric detectors, providing increased throughput and sensitivity over single mode feeds, while also ensuring good control of beam pattern characteristics. Multimode horns were employed in the highest frequency channels of the European Space Agency Planck Telescope, and have been proposed for future terahertz instrumentation, such as SAFARI for SPICA. The radiation pattern of a multimode horn is affected by the details of the coupling of the higher order waveguide modes to the bolometer, making the modeling more complicated than in the case of a single mode system. A typical cavity coupled bolometer system can be most efficiently simulated using mode matching, typically with smooth walled waveguide modes as the basis, computing an overall scattering matrix for the horn-waveguide-cavity system that includes the power absorption by the absorber. In this paper we present how to include a cavity coupled bolometer, modelled as a thin absorbing film, with particular interest in investigating the cavity configuration for optimizing power absorption. As an example, the possible improvements from offsetting the axis of a cylindrically symmetric absorbing cavity from that of a circular waveguide feeding it (thus trapping more power in the cavity) are discussed. Another issue is the effect on the optical efficiency of the detectors of the presence of any gaps, through which power can escape. To model these effects required that existing in-house mode matching software, which calculates the scattering matrices for axially symmetric waveguide structures, be extended to handle offset junctions and free space gaps. As part of this process the complete software code 'PySCATTER' was developed in Python. The approach can be applied to proposed terahertz systems, such as SPICA-SAFARI.
Snow in Earth System Models: Recent Progress and Future Challenges
NASA Astrophysics Data System (ADS)
Clark, M. P.; Slater, A. G.
2016-12-01
Snow is the most variable of terrestrial boundary conditions. Some 50 million km^2 of the Northern Hemisphere typically sees periods of accumulation and ablation in the annual cycle. The wondrous properties of snow, such as high albedo, thermal insulation, and its ability to act as a water store, make it an integral part of the global climate system. The earliest inclusions of snow within climate models were simple adjustments to albedo and a moisture store. Modern Earth System Models now represent snow through a myriad of model architectures and parameterizations that span a broad range of complexity. Understanding the impacts of modeling decisions upon the simulation of snow and other Earth System components (either directly or via feedbacks) is an ongoing area of research. Snow models are progressing with multi-layer representations and capabilities such as complex albedo schemes that include contaminants. While considerable advances have been made, numerous challenges remain. Simply getting a grasp on the mass of snow (seasonal or permanent) has proved more difficult than expected over the past 30 years. The treatment of snow interactions with vegetation has improved, but the details of vegetation masking and emergence are still limited. Inclusion of blowing snow processes, in terms of transport and sublimation, is typically rare, and sublimation remains a difficult quantity to measure. Contemplation of snow crystal form within models and integration with radiative transfer schemes for better understanding of full spectrum interactions (from UV to long microwave) may simultaneously advance simulation and remote sensing. A series of international modeling experiments and directed field campaigns are planned in the near future with the aim of pushing our knowledge forward.
Lepton flavour violation in RS models with a brane- or nearly brane-localized Higgs
NASA Astrophysics Data System (ADS)
Beneke, M.; Moch, P.; Rohrwild, J.
2016-05-01
We perform a comprehensive study of charged lepton flavour violation in Randall-Sundrum (RS) models in a fully 5D quantum-field-theoretical framework. We consider the RS model with minimal field content and a 'custodially protected' extension, as well as three implementations of the IR-brane localized Higgs field, including the non-decoupling effect of the KK excitations of a narrow bulk Higgs. Our calculation provides the first complete result for the flavour-violating electromagnetic dipole operator in Randall-Sundrum models. It contains three contributions with different dependence on the magnitude of the anarchic 5D Yukawa matrix, which can all be important in certain parameter regions. We study the typical range for the branching fractions of μ → eγ, μ → 3e, μN → eN as well as τ → μγ, τ → 3μ and the electron electric dipole moment by a numerical scan in both the minimal and the custodial RS model. The combination of μ → eγ and μN → eN currently provides the most stringent constraint on the parameter space of the model. A typical lower limit on the KK scale T is around 2 TeV in the minimal model (up to 4 TeV in the bulk Higgs case with large Yukawa couplings), and around 4 TeV in the custodially protected model, which corresponds to a mass of about 10 TeV for the first KK excitations, far beyond the lower limit from the non-observation of direct production at the LHC.
Björnsson, Marcus A; Simonsson, Ulrika S H
2011-01-01
AIMS To describe pain intensity (PI) measured on a visual analogue scale (VAS) and dropout due to request for rescue medication after administration of naproxcinod, naproxen or placebo in 242 patients after wisdom tooth removal. METHODS Non-linear mixed effects modelling was used to describe the plasma concentrations of naproxen, either formed from naproxcinod or from naproxen itself, and their relationship to PI and dropout. Goodness of fit was assessed by simultaneous simulations of PI and dropout. RESULTS Baseline PI for the typical patient was 52.7 mm. The PI was influenced by placebo effects, using an exponential model, and by naproxen concentrations using a sigmoid Emax model. Typical maximal placebo effect was a decrease in PI by 20.2%, with an onset rate constant of 0.237 h⁻¹. EC50 was 0.135 µmol l⁻¹. A Weibull time-to-event model was used for the dropout, where the hazard was dependent on the predicted PI and on the PI at baseline. Since the dropout was not at random, it was necessary to include the simulated dropout in visual predictive checks (VPC) of PI. CONCLUSIONS This model describes the relationship between drug effects, PI and the likelihood of dropout after naproxcinod, naproxen and placebo administration. The model provides an opportunity to describe the effects of other doses or formulations after dental extraction. VPC created by simultaneous simulations of PI and dropout provides a good way of assessing the goodness of fit when there is informative dropout. PMID:21272053
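The combined placebo and sigmoid-Emax structure can be sketched as below. The values of pi0, pmax, kon and ec50 come from the abstract, while emax, gamma and the multiplicative combination of the two effects are assumptions of this sketch, not the authors' fitted model.

```python
import numpy as np

def pain_intensity(t, conc, pi0=52.7, pmax=0.202, kon=0.237,
                   emax=1.0, ec50=0.135, gamma=1.0):
    """Illustrative PI (mm VAS) at time t (h) and naproxen concentration
    conc (umol/l): baseline reduced by an exponential-onset placebo
    effect and a sigmoid Emax drug effect (combined multiplicatively,
    an assumption of this sketch)."""
    placebo = pmax * (1.0 - np.exp(-kon * t))                 # exponential onset
    drug = emax * conc**gamma / (ec50**gamma + conc**gamma)   # sigmoid Emax
    return pi0 * (1.0 - placebo) * (1.0 - drug)
```

In the full model, the predicted PI would in turn drive the Weibull dropout hazard, which is why PI and dropout must be simulated jointly for the visual predictive checks.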
Electrostatic Estimation of Intercalant Jump-Diffusion Barriers Using Finite-Size Ion Models.
Zimmermann, Nils E R; Hannah, Daniel C; Rong, Ziqin; Liu, Miao; Ceder, Gerbrand; Haranczyk, Maciej; Persson, Kristin A
2018-02-01
We report on a scheme for estimating intercalant jump-diffusion barriers that are typically obtained from demanding density functional theory-nudged elastic band calculations. The key idea is to relax a chain of states in the field of the electrostatic potential that is averaged over a spherical volume using different finite-size ion models. For magnesium migrating in typical intercalation materials such as transition-metal oxides, we find that the optimal model is a relatively large shell. This data-driven result parallels typical assumptions made in models based on Onsager's reaction field theory to quantitatively estimate electrostatic solvent effects. Because of its efficiency, our potential of electrostatics-finite ion size (PfEFIS) barrier estimation scheme will enable rapid identification of materials with good ionic mobility.
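A crude sketch of the core idea above: average a gridded electrostatic potential over a spherical shell at each point of a migration path, and read the barrier off the averaged profile. The grid, path and radius here are hypothetical toys; the actual PfEFIS scheme relaxes a chain of states in potentials derived from first-principles charge densities.

```python
import numpy as np

def shell_averaged_barrier(potential, path, radius, spacing=1.0):
    """Average `potential` (3-D periodic grid with `spacing` between
    points) over a spherical shell of the given radius at each integer
    grid point along `path`; return (barrier, profile), where the
    barrier is the profile peak relative to the starting point."""
    nx, ny, nz = potential.shape
    r = int(round(radius / spacing))
    # grid offsets lying on a shell of radius ~r
    offsets = [(i, j, k)
               for i in range(-r, r + 1)
               for j in range(-r, r + 1)
               for k in range(-r, r + 1)
               if abs((i * i + j * j + k * k) ** 0.5 - r) < 0.5]
    profile = []
    for (x, y, z) in path:
        vals = [potential[(x + i) % nx, (y + j) % ny, (z + k) % nz]
                for i, j, k in offsets]
        profile.append(float(np.mean(vals)))
    profile = np.array(profile)
    return profile.max() - profile[0], profile

# toy potential: a cubic "bump" blocking the middle of a straight path
V = np.zeros((16, 16, 16))
V[6:11, 6:11, 6:11] = 1.0
barrier, prof = shell_averaged_barrier(V, [(0, 8, 8), (4, 8, 8), (8, 8, 8)],
                                       radius=2.0)
```

Varying the shell radius is the analogue of varying the finite-size ion model; the paper's finding is that a relatively large shell best reproduces nudged-elastic-band barriers for Mg intercalants.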
Linking Geomechanical Models with Observations of Microseismicity during CCS Operations
NASA Astrophysics Data System (ADS)
Verdon, J.; Kendall, J.; White, D.
2012-12-01
During CO2 injection for the purposes of carbon capture and storage (CCS), injection-induced fracturing of the overburden represents a key risk to storage integrity. Fractures in a caprock provide a pathway along which buoyant CO2 can rise and escape the storage zone. Therefore the ability to link field-scale geomechanical models with field geophysical observations is of paramount importance to guarantee secure CO2 storage. Accurate location of microseismic events identifies where brittle failure has occurred on fracture planes. This is a manifestation of the deformation induced by CO2 injection. As the pore pressure is increased during injection, effective stress is decreased, leading to inflation of the reservoir and deformation of surrounding rocks, which creates microseismicity. The deformation induced by injection can be simulated using finite-element mechanical models. Such a model can be used to predict when and where microseismicity is expected to occur. However, typical elements in a field-scale mechanical model have decameter scales, while the rupture size for a microseismic event is typically of the order of 1 square meter. This means that mapping modeled stress changes to predictions of microseismic activity can be challenging. Where larger scale faults have been identified, they can be included explicitly in the geomechanical model. Where movement is simulated along these discrete features, it can be assumed that microseismicity will occur. However, microseismic events typically occur on fracture networks that are too small to be simulated explicitly in a field-scale model. Therefore, the likelihood of microseismicity occurring must be estimated within a finite element that does not contain explicitly modeled discontinuities.
This can be done in a number of ways, including the use of measures such as the closeness of the stress state to predetermined failure criteria, either for planes with a defined orientation (the Mohr-Coulomb criterion) or for planes with arbitrary orientation (the Fracture Potential). Inelastic deformation may be incorporated within the constitutive models of the mechanical model itself in the form of plastic deformation criteria. Under such a system, yield, plastic deformation, and strain hardening/weakening can be incorporated explicitly into the mechanical model, where the assumption is that the onset of inelastic processes corresponds with the onset of microseismicity within a particular element. Alternatively, an elastic geomechanical model may be used, where the resulting stress states after deformation are post-processed for a microseismicity analysis. In this paper we focus on CO2 injection for CCS and Enhanced Oil Recovery in the Weyburn Field, Canada. We generate field-scale geomechanical models to simulate the response to CO2 injection. We compare observations of microseismicity with the predictions made by the models, showing how geomechanical models can improve interpretation and understanding of microseismic observations, and how microseismic observations can be used to ground-truth models (a model whose predictions match observations can be deemed more reliable than one that does not). By tuning material properties within acceptable ranges, we are able to find models that match microseismic and other geophysical observations most accurately.
Mathematical modelling of the maternal cardiovascular system in the three stages of pregnancy.
Corsini, Chiara; Cervi, Elena; Migliavacca, Francesco; Schievano, Silvia; Hsia, Tain-Yen; Pennati, Giancarlo
2017-09-01
In this study, a mathematical model of the female circulation during pregnancy is presented in order to investigate the hemodynamic response to the cardiovascular changes associated with each trimester of pregnancy. First, a preliminary lumped parameter model of the circulation of a non-pregnant female was developed, including the heart, the systemic circulation with a specific block for the uterine district, and the pulmonary circulation. The model was first tested at rest; then heart rate and vascular resistances were individually varied to verify the correct response to parameter alterations characterising pregnancy. In order to simulate hemodynamics during pregnancy at each trimester, the main changes applied to the model consisted in reducing vascular resistances and simultaneously increasing heart rate and ventricular wall volumes. Overall, reasonable agreement was found between model outputs and in vivo data, with the trends of the cardiac hemodynamic quantities suggesting correct response of the heart model throughout pregnancy. Results were reported for uterine hemodynamics, with flow tracings resembling typical Doppler velocity waveforms at each stage, including pulsatility indexes. Such a model may be used to explore the changes that happen during pregnancy in females with cardiovascular disease. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chang, Ailian; Sun, HongGuang; Zheng, Chunmiao; Lu, Bingqing; Lu, Chengpeng; Ma, Rui; Zhang, Yong
2018-07-01
Fractional-derivative models have been developed recently to interpret various hydrologic dynamics, such as dissolved contaminant transport in groundwater. However, they have not been applied to quantify other fluid dynamics, such as gas transport through complex geological media. This study reviewed previous gas transport experiments conducted in laboratory columns and real-world oil-gas reservoirs and found that gas dynamics exhibit typical sub-diffusive behavior characterized by heavy late-time tailing in the gas breakthrough curves (BTCs), which cannot be effectively captured by classical transport models. Numerical tests and field applications of the time fractional convection-diffusion equation (fCDE) have shown that the fCDE model can capture the observed gas BTCs including their apparent positive skewness. Sensitivity analysis further revealed that the three parameters used in the fCDE model, namely the time index, the convection velocity, and the diffusion coefficient, play different roles in interpreting the delayed gas transport dynamics. In addition, model comparison and analysis showed that the time fCDE model is computationally efficient in practical applications. Therefore, time fractional-derivative models can be conveniently extended to quantify gas transport through natural geological media such as complex oil-gas reservoirs.
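Time-fractional models such as the fCDE rest on a fractional time derivative, and a standard way to discretize it is the Grünwald-Letnikov (GL) sum. The sketch below is not the authors' solver; the grid, fractional order, and test function are illustrative. It computes the GL approximation on a uniform grid and checks it against the known closed form for f(t) = t, whose fractional derivative of order α is t^(1-α)/Γ(2-α).

```python
import math
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights g_k = (-1)^k * binom(alpha, k)."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def gl_fractional_derivative(f_vals, alpha, h):
    """Fractional derivative of order alpha on a uniform grid (step h) via
    the Grünwald-Letnikov sum; with f_vals[0] = 0 this agrees with the
    Caputo form used in time-fractional transport models."""
    n = len(f_vals) - 1
    g = gl_weights(alpha, n)
    out = np.empty_like(f_vals)
    for j in range(n + 1):
        # Convolution of the weights with past function values
        out[j] = h ** (-alpha) * np.dot(g[:j + 1], f_vals[j::-1])
    return out

alpha, h = 0.6, 1e-3
t = np.arange(0.0, 1.0 + h, h)
num = gl_fractional_derivative(t, alpha, h)          # f(t) = t
exact = t ** (1 - alpha) / math.gamma(2 - alpha)     # known closed form
```

The same weight sequence is what a time-fCDE solver convolves with past concentration fields at each step; the slowly decaying weights are the numerical expression of the memory that produces heavy-tailed BTCs.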
Hydrodynamic clustering of droplets in turbulence
NASA Astrophysics Data System (ADS)
Kunnen, Rudie; Yavuz, Altug; van Heijst, Gertjan; Clercx, Herman
2017-11-01
Small, inertial particles are known to cluster in turbulent flows: particles are centrifuged out of eddies and gather in the strain-dominated regions. This so-called preferential concentration is reflected in the radial distribution function (RDF; a quantitative measure of clustering). We study clustering of water droplets in a loudspeaker-driven turbulence chamber. We track the motion of droplets in 3D and calculate the RDF. At moderate scales (a few Kolmogorov lengths) we find the typical power-law scaling of preferential concentration in the RDF. However, at even smaller scales (a few droplet diameters), we encounter a hitherto unobserved additional clustering. We postulate that the additional clustering is due to hydrodynamic interactions, an effect which is typically disregarded in modeling. Using a perturbative expansion of inertial effects in a Stokes-flow description of two interacting spheres, we obtain an expression for the RDF which indeed includes the additional clustering. The additional clustering enhances the collision probability of droplets, which enhances their growth rate due to coalescence. The additional clustering is thus an essential effect in precipitation modeling.
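The radial distribution function used above as the quantitative measure of clustering can be estimated directly from particle positions. A minimal sketch, assuming a periodic cubic box; for uniformly random (unclustered) points, g(r) should be close to 1 at all separations, whereas preferential concentration appears as a power-law rise at small r.

```python
import numpy as np

rng = np.random.default_rng(1)

def radial_distribution(pos, box, bins):
    """Radial distribution function g(r) for points in a periodic cubic box."""
    n = len(pos)
    d = pos[:, None, :] - pos[None, :, :]      # all pair separation vectors
    d -= box * np.round(d / box)               # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(r, bins=bins)
    centers = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 * np.pi * centers ** 2 * np.diff(edges)
    density = n / box ** 3
    # Normalize pair counts by the ideal-gas (uncorrelated) expectation
    return centers, counts / (0.5 * n * density * shell_vol)

# Uniformly random (unclustered) test points: g(r) ~ 1 everywhere
pos = rng.uniform(0.0, 10.0, size=(1000, 3))
r, g = radial_distribution(pos, box=10.0, bins=np.linspace(0.1, 3.0, 30))
```

For tracked droplets, the same estimator evaluated at separations of a few diameters is where the additional hydrodynamic clustering reported above would show up as an excess over the inertial power law.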
Marcus, David K; Fulton, Jessica J; Edens, John F
2013-01-01
Psychopathy or psychopathic personality disorder represents a constellation of traits characterized by superficial charm, egocentricity, irresponsibility, fearlessness, persistent violation of social norms, and a lack of empathy, guilt, and remorse. Factor analyses of the Psychopathic Personality Inventory (PPI) typically yield two factors: Fearless Dominance (FD) and Self-Centered Impulsivity (SCI). Additionally, the Coldheartedness (CH) subscale typically does not load on either factor. The current paper includes a meta-analysis of studies that have examined theoretically important correlates of the two PPI factors and CH. Results suggest that (a) FD and SCI are orthogonal or weakly correlated, (b) each factor predicts distinct (and sometimes opposite) correlates, and (c) the FD factor is not highly correlated with most other measures of psychopathy. This pattern of results raises important questions about the relation between FD and SCI and the role of FD in conceptualizations of psychopathy. Our findings also indicate the need for future studies using the two-factor model of the PPI to conduct moderation analyses to examine potential interactions between FD and SCI in the prediction of important criterion measures.
An investigation on the fuel savings potential of hybrid hydraulic refuse collection vehicles.
Bender, Frank A; Bosse, Thomas; Sawodny, Oliver
2014-09-01
Refuse trucks play an important role in the waste collection process. Due to their typical driving cycle, these vehicles are characterized by large fuel consumption, which strongly affects the overall waste disposal costs. Hybrid hydraulic refuse vehicles offer an interesting alternative to conventional diesel trucks, because they are able to recuperate, store and reuse braking energy. However, the expected fuel savings can vary strongly depending on the driving cycle and the operational mode. Therefore, in order to assess the possible fuel savings, a typical driving cycle was measured in a conventional vehicle run by the waste authority of the City of Stuttgart, and a dynamical model of the considered vehicle was built up. Based on the measured driving cycle and the vehicle model including the hybrid powertrain components, simulations for both the conventional and the hybrid vehicle were performed. Fuel consumption results that indicate savings of about 20% are presented and analyzed in order to evaluate the benefit of hybrid hydraulic vehicles used for refuse collection. Copyright © 2014 Elsevier Ltd. All rights reserved.
Weak lensing calibration of mass bias in the REFLEX+BCS X-ray galaxy cluster catalogue
NASA Astrophysics Data System (ADS)
Simet, Melanie; Battaglia, Nicholas; Mandelbaum, Rachel; Seljak, Uroš
2017-04-01
The use of large, X-ray-selected galaxy cluster catalogues for cosmological analyses requires a thorough understanding of the X-ray mass estimates. Weak gravitational lensing is an ideal method to shed light on such issues, due to its insensitivity to the cluster dynamical state. We perform a weak lensing calibration of 166 galaxy clusters from the REFLEX and BCS cluster catalogue and compare our results to the X-ray masses based on scaled luminosities from that catalogue. To interpret the weak lensing signal in terms of cluster masses, we compare the lensing signal to simple theoretical Navarro-Frenk-White models and to simulated cluster lensing profiles, including complications such as cluster substructure, projected large-scale structure and Eddington bias. We find evidence of underestimation in the X-ray masses, as expected, with
The effect of small-wave modulation on the electromagnetic bias
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto; Kim, Yunjin; Martin, Jan M.
1992-01-01
The effect of the modulation of small ocean waves by large waves on the physical mechanism of the EM bias is examined by conducting a numerical scattering experiment which does not assume the applicability of geometric optics. The modulation effect of the large waves on the small waves is modeled using the principle of conservation of wave action and includes the modulation of gravity-capillary waves. The frequency dependence and magnitude of the EM bias are examined for a simplified ocean spectral model as a function of wind speed. These calculations make it possible to assess the validity of previous assumptions made in the theory of the EM bias, with respect to both scattering and hydrodynamic effects. It is found that the geometric optics approximation is inadequate for predictions of the EM bias at typical radar altimeter frequencies, while the improved scattering calculations provide a frequency dependence of the EM bias which is in qualitative agreement with observation. For typical wind speeds, the EM bias contribution due to small-wave modulation is of the same order as that due to modulation by the nonlinearities of the large-scale waves.
A time-spectral approach to numerical weather prediction
NASA Astrophysics Data System (ADS)
Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai
2018-05-01
Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals
NASA Astrophysics Data System (ADS)
Huerta, E. A.; Gair, Jonathan R.
2009-04-01
We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10 M⊙ compact object captured by a 10^6 M⊙ SMBH at a signal-to-noise ratio of 30) we expect to determine the two masses to within a fractional error of ~10^-4, measure the spin parameter q to ~10^-4.5, and determine the location of the source on the sky and the spin orientation to within 10^-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R < 1 for all ten parameters in the model. For the few systems with larger errors, typically R < 3, and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.
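Fisher-matrix error estimates of the kind reported above follow a generic recipe: build the matrix from parameter derivatives of the model signal, invert it, and read 1-sigma errors off the diagonal of the resulting covariance. Below is a toy sketch for a two-parameter sinusoid in white noise; the waveform, parameter values, and noise level are invented for illustration and are not the EMRI waveform model.

```python
import numpy as np

def fisher_errors(model, theta, t, sigma, eps=1e-6):
    """1-sigma errors from the Fisher matrix for white noise of std sigma:
    F_ij = sum_t (dh/dtheta_i)(dh/dtheta_j) / sigma^2."""
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for i in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[i] = eps * max(1.0, abs(theta[i]))
        # Central finite difference of the model signal w.r.t. parameter i
        derivs.append((model(theta + dp, t) - model(theta - dp, t)) / (2.0 * dp[i]))
    F = np.array([[np.dot(a, b) for b in derivs] for a in derivs]) / sigma ** 2
    return np.sqrt(np.diag(np.linalg.inv(F)))     # marginalized 1-sigma errors

def sinusoid(theta, t):            # hypothetical 2-parameter signal model
    amp, freq = theta
    return amp * np.sin(2.0 * np.pi * freq * t)

t = np.linspace(0.0, 10.0, 1000)
errs = fisher_errors(sinusoid, [1.0, 0.5], t, sigma=0.1)   # [amp_err, freq_err]
```

Even in this toy, the frequency is far better constrained than the amplitude because its derivative grows with observation time; EMRI parameter estimation exploits the same effect over the ~10^5 cycles of a long inspiral.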
McClure, James E.; Berrill, Mark A.; Gray, William G.; ...
2016-09-02
Here, multiphase flow in porous medium systems is typically modeled using continuum mechanical representations at the macroscale in terms of averaged quantities. These models require closure relations to produce solvable forms. One of these required closure relations is an expression relating fluid pressures, fluid saturations, and, in some cases, the interfacial area between the fluid phases and the Euler characteristic. An unresolved question is whether the inclusion of these additional morphological and topological measures can lead to a non-hysteretic closure relation, compared to the hysteretic forms used in traditional models, which typically do not include interfacial areas or the Euler characteristic. We develop a lattice-Boltzmann (LB) simulation approach to investigate the equilibrium states of a two-fluid-phase porous medium system, which include disconnected non-wetting-phase features. The proposed approach is applied to a synthetic medium consisting of 1,964 spheres arranged in a random, non-overlapping, close-packed manner, yielding a total of 42,908 different equilibrium points. This information is evaluated using a generalized additive modeling approach to determine whether a unique function from this family exists that can explain the data. The variance of various model estimates is computed, and we conclude that, except for the limiting behavior close to a single-fluid regime, capillary pressure can be expressed as a deterministic and non-hysteretic function of fluid saturation, interfacial area between the fluid phases, and the Euler characteristic. This work is unique in the methods employed, the size of the data set, the resolution in space and time, the true equilibrium nature of the data, the parameterizations investigated, and the broad set of functions examined. The conclusion of essentially non-hysteretic behavior provides support for an evolving class of two-fluid-phase flow models for porous medium systems.
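As a stand-in for the function-fitting step described above, the sketch below fits capillary pressure as a deterministic function of saturation, interfacial area, and Euler characteristic on synthetic data. It uses an ordinary linear least-squares model with invented coefficients rather than the paper's generalized additive model and LB-generated equilibrium states; the point is only the workflow of testing whether one smooth surface explains the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "equilibrium states": saturation s, interfacial area a, Euler char chi
n = 500
s = rng.uniform(0.2, 0.9, n)
a = rng.uniform(0.1, 1.0, n)
chi = rng.uniform(-1.0, 1.0, n)
pc_true = 2.0 - 1.5 * s + 0.8 * a + 0.3 * chi   # assumed smooth surface
pc_obs = pc_true + rng.normal(0.0, 0.01, n)     # small measurement noise

# Least-squares fit of pc as a deterministic function of (s, a, chi)
X = np.column_stack([np.ones(n), s, a, chi])
coef, *_ = np.linalg.lstsq(X, pc_obs, rcond=None)
resid = pc_obs - X @ coef
r2 = 1.0 - resid.var() / pc_obs.var()           # fraction of variance explained
```

A near-unit R² with structureless residuals is the kind of evidence used to argue that the relation is deterministic and non-hysteretic once interfacial area and the Euler characteristic are included alongside saturation.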
Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones
NASA Astrophysics Data System (ADS)
Mao, X.; Gerhard, J. I.; Barry, D. A.
2005-12-01
The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This four-year, $5.7 million research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. In addition, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values. This includes determining the statistical correlation between parameters typically employed in standard Michaelis-Menten-type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous-phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds.
Overall, this work provides a database of the numerical modelling parameters typically employed for simulating TCE dechlorination relevant for a range of system conditions (e.g, bioaugmented, high TCE concentrations, etc.). The significance of the obtained variability of parameters is illustrated with one-dimensional simulations of enhanced anaerobic bioremediation of residual TCE DNAPL.
A scoping review of malaria forecasting: past work and future directions
Zinszer, Kate; Verma, Aman D; Charland, Katia; Brewer, Timothy F; Brownstein, John S; Sun, Zhuoyu; Buckeridge, David L
2012-01-01
Objectives: There is a growing body of literature on malaria forecasting methods and the objective of our review is to identify and assess methods, including predictors, used to forecast malaria.
Design: Scoping review. Two independent reviewers searched information sources, assessed studies for inclusion and extracted data from each study.
Information sources: Search strategies were developed and the following databases were searched: CAB Abstracts, EMBASE, Global Health, MEDLINE, ProQuest Dissertations & Theses and Web of Science. Key journals and websites were also manually searched.
Eligibility criteria for included studies: We included studies that forecasted incidence, prevalence or epidemics of malaria over time. A description of the forecasting model and an assessment of the forecast accuracy of the model were requirements for inclusion. Studies were restricted to human populations and to autochthonous transmission settings.
Results: We identified 29 different studies that met our inclusion criteria for this review. The forecasting approaches included statistical modelling, mathematical modelling and machine learning methods. Climate-related predictors were used consistently in forecasting models, with the most common predictors being rainfall, relative humidity, temperature and the normalised difference vegetation index. Model evaluation was typically based on a reserved portion of data and accuracy was measured in a variety of ways including mean-squared error and correlation coefficients. We could not compare the forecast accuracy of models from the different studies as the evaluation measures differed across the studies.
Conclusions: Applying different forecasting methods to the same data, exploring the predictive ability of non-environmental variables, including transmission-reducing interventions, and using common forecast accuracy measures will allow malaria researchers to compare and improve models and methods, which should improve the quality of malaria forecasting. PMID:23180505
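The evaluation loop typical of the reviewed studies (fit on one portion of the series, forecast the reserved portion, score with mean-squared error and a correlation coefficient) can be sketched on synthetic data. The rainfall-lag model and every number below are invented for illustration; they are not taken from any of the 29 studies.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly data: cases driven by rainfall two months earlier
months = np.arange(120)
rainfall = 50.0 + 30.0 * np.sin(2.0 * np.pi * months / 12.0) + rng.normal(0.0, 5.0, 120)
cases = 10.0 + 0.4 * np.roll(rainfall, 2) + rng.normal(0.0, 2.0, 120)
cases[:2] = cases[2]                     # overwrite the wrapped-around edge values

# Fit on the first 8 years; reserve the last 2 years for evaluation
X = np.column_stack([np.ones(120), np.roll(rainfall, 2)])
train, test = slice(2, 96), slice(96, 120)
coef, *_ = np.linalg.lstsq(X[train], cases[train], rcond=None)
pred = X[test] @ coef

mse = float(np.mean((pred - cases[test]) ** 2))       # mean-squared error
corr = float(np.corrcoef(pred, cases[test])[0, 1])    # correlation coefficient
```

Reporting both scores on the same reserved period is exactly the kind of common evaluation convention the review argues would make models comparable across studies.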
Running of the scalar spectral index in bouncing cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehners, Jean-Luc; Wilson-Ewing, Edward, E-mail: jean-luc.lehners@aei.mpg.de, E-mail: wilson-ewing@aei.mpg.de
We calculate the running of the scalar index in the ekpyrotic and matter bounce cosmological scenarios, and find that it is typically negative for ekpyrotic models, while it is typically positive for realizations of the matter bounce where multiple fields are present. This can be compared to inflation, where the observationally preferred models typically predict a negative running. The magnitude of the running is expected to be between 10^-4 and 10^-2, leading in some cases to interesting expectations for near-future observations.
NASA Astrophysics Data System (ADS)
DeForest, Craig; Seaton, Daniel B.; Darnell, John A.
2017-08-01
I present and demonstrate a new, general-purpose post-processing technique, "3D noise gating", that can reduce image noise by an order of magnitude or more without effective loss of spatial or temporal resolution in typical solar applications.
Nearly all scientific images are, ultimately, limited by noise. Noise can be direct Poisson "shot noise" from photon counting effects, or introduced by other means such as detector read noise. Noise is typically represented as a random variable (perhaps with location- or image-dependent characteristics) that is sampled once per pixel or once per resolution element of an image sequence. Noise limits many aspects of image analysis, including photometry, spatiotemporal resolution, feature identification, morphology extraction, and background modeling and separation.
Identifying and separating noise from image signal is difficult. The common practice of blurring in space and/or time works because most image "signal" is concentrated in the low Fourier components of an image, while noise is evenly distributed. Blurring in space and/or time attenuates the high spatial and temporal frequencies, reducing noise at the expense of also attenuating image detail. Noise gating exploits the same property -- "coherence" -- that we use to identify features in images, to separate image features from noise.
Processing image sequences through 3D noise gating results in spectacular (more than 10x) improvements in signal-to-noise ratio, while not blurring bright, resolved features in either space or time. This improves most types of image analysis, including feature identification, time sequence extraction, absolute and relative photometry (including differential emission measure analysis), feature tracking, computer vision, correlation tracking, background modeling, cross-scale analysis, visual display/presentation, and image compression.
I will introduce noise gating, describe the method, and show examples from several instruments (including SDO/AIA, SDO/HMI, STEREO/SECCHI, and GOES-R/SUVI) that explore the benefits and limits of the technique.
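A heavily simplified version of the idea can be sketched as a single global hard threshold in 3D Fourier space; the published method instead gates local, apodized neighborhoods, so treat this only as an illustration of why gating removes noise without blurring a coherent moving feature. All array sizes and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def noise_gate(cube, noise_sigma, factor=3.0):
    """Zero Fourier components indistinguishable from the flat noise floor.
    A single global hard gate over the (t, y, x) cube; the published method
    gates local, apodized windows instead."""
    F = np.fft.fftn(cube)
    # White noise of std sigma gives coefficients of magnitude ~ sigma * sqrt(N)
    floor = noise_sigma * np.sqrt(cube.size)
    return np.real(np.fft.ifftn(F * (np.abs(F) > factor * floor)))

# Smooth moving feature plus white noise
t, y, x = np.meshgrid(np.arange(16), np.arange(64), np.arange(64), indexing="ij")
signal = np.exp(-((x - 20 - t) ** 2 + (y - 32) ** 2) / 50.0)
noisy = signal + rng.normal(0.0, 0.2, signal.shape)
clean = noise_gate(noisy, noise_sigma=0.2)

err_before = float(np.mean((noisy - signal) ** 2))
err_after = float(np.mean((clean - signal) ** 2))
```

Because the moving feature is coherent, its energy concentrates in a few strong Fourier components that survive the gate, while the evenly spread noise floor is removed; the error against the true signal drops accordingly.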
ERIC Educational Resources Information Center
Shimron, Joseph; Chernitsky, Roberto
1995-01-01
Investigates changes in the internal structure of semantic categories as a result of cultural transition. Examines typicality shifts in semantic categories of Jewish Argentine immigrants in Israel. Presents a model mapping typicality shift patterns onto acculturation patterns. (HB)
A conflict analysis of 4D descent strategies in a metered, multiple-arrival route environment
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Harris, C. S.
1990-01-01
A conflict analysis was performed on multiple arrival traffic at a typical metered airport. The Flow Management Evaluation Model (FMEM) was used to simulate arrival operations using Denver Stapleton's arrival route structure. Sensitivities of conflict performance to three different 4-D descent strategies (clean-idle Mach/Constant AirSpeed (CAS), constant descent angle Mach/CAS, and energy optimal) were examined for three traffic mixes represented by those found at Denver Stapleton, John F. Kennedy, and typical en route metering (ERM) airports. The Monte Carlo technique was used to generate simulation entry point times. Analysis results indicate that the clean-idle descent strategy offers the best compromise in overall performance. Performance measures primarily include susceptibility to conflict and conflict severity. Fuel usage performance is extrapolated from previous descent strategy studies.
ICT Is Not Participation Is Not Democracy - eParticipation Development Models Revisited
NASA Astrophysics Data System (ADS)
Grönlund, Åke
Several models exist to describe “progress” in eParticipation. These models are typically of the ladder type and share two assumptions: progress is equated with more sophisticated use of technology, and direct democracy is seen as the most advanced democracy model. Neither assumption holds up against democratic theory, and neither is fruitful, as the simplification distorts analysis and hence obscures actual progress made. The models convey an impression of progress, but neither the goal, the path, nor the stakeholders driving the development are clearly understood, presented or evidenced. This paper analyses commonly used models in the light of democratic theory and eParticipation practice, and concludes that all are biased and fail to distinguish between the three dimensions an eParticipation progress model must include: relevance to democracy by any definition; applicability to different processes (capacity building as well as decision making); and measurement of different levels of participation without a direct-democracy bias.
Thinking outside the channel: Modeling nitrogen cycling in networked river ecosystems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Ashley; Poole, Geoffrey C.; Meyer, Judy
2011-01-01
Agricultural and urban development alters nitrogen and other biogeochemical cycles in rivers worldwide. Because such biogeochemical processes cannot be measured empirically across whole river networks, simulation models are critical tools for understanding river-network biogeochemistry. However, limitations inherent in current models restrict our ability to simulate biogeochemical dynamics among diverse river networks. We illustrate these limitations using a river-network model to scale up in situ measures of nitrogen cycling in eight catchments spanning various geophysical and land-use conditions. Our model results provide evidence that catchment characteristics typically excluded from models may control river-network biogeochemistry. Based on our findings, we identify important components of a revised strategy for simulating biogeochemical dynamics in river networks, including approaches to modeling terrestrial-aquatic linkages; hydrologic exchanges between the channel, floodplain/riparian complex, and subsurface waters; and interactions between coupled biogeochemical cycles.
From Prototypes to Caricatures: Geometrical Models for Concept Typicality
ERIC Educational Resources Information Center
Ameel, Eef; Storms, Gert
2006-01-01
In three studies, we investigated to what extent a geometrical representation in a psychological space succeeds in predicting typicality in animal, natural food and artifact concepts and whether contrast categories contribute to the prediction. In Study 1, we compared the predictive value of a family resemblance-based prototype model with a…
ERIC Educational Resources Information Center
Morse, Anthony F.; Cangelosi, Angelo
2017-01-01
Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between…
Modelling Typical Online Language Learning Activity
ERIC Educational Resources Information Center
Montoro, Carlos; Hampel, Regine; Stickler, Ursula
2014-01-01
This article presents the methods and results of a four-year-long research project focusing on the language learning activity of individual learners using online tasks conducted at the University of Guanajuato (Mexico) in 2009-2013. An activity-theoretical model (Blin, 2010; Engeström, 1987) of the typical language learning activity was used to…
Influence of Disorder on DNA Conductance
NASA Technical Reports Server (NTRS)
Adessi, Christophe; Anantram, M. P.; Biegel, Bryan A. (Technical Monitor)
2003-01-01
Disorder along a DNA strand, due to non-uniformity associated with the counter-ion type and location and with variations in rise and twist, is investigated using density functional theory. We then model the conductance through a poly(G) DNA strand by including the influence of disorder. We show that the conductance drops by a few orders of magnitude between typical lengths of 10 and 100 nm. Such a decrease occurs with on-site potential disorder that is larger than 100 meV.
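The qualitative result above (conductance falling by orders of magnitude with length under on-site disorder) is the hallmark of localization in one dimension. It can be illustrated with a tight-binding transfer-matrix toy model rather than the paper's density functional treatment; the hopping, disorder strength, and chain lengths below are illustrative, not DNA parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def transmission(onsite, energy=0.0):
    """Transmission through a 1D tight-binding chain (unit hopping) between
    ideal leads, computed with the transfer-matrix method."""
    n = len(onsite)
    m = np.eye(2, dtype=complex)
    for e in onsite:                 # psi_{j+1} = (E - e_j) psi_j - psi_{j-1}
        m = np.array([[energy - e, -1.0], [1.0, 0.0]]) @ m
    k = np.arccos(energy / 2.0)      # lead dispersion: E = 2 cos k
    # Match plane waves (incident + reflected on the left, transmitted on the
    # right) and solve the 2x2 linear system for the amplitudes (r, t).
    lhs = np.column_stack([
        m @ np.array([np.exp(-1j * k), 1.0]),
        -np.array([np.exp(1j * k * (n + 1)), np.exp(1j * k * n)]),
    ])
    rhs = -(m @ np.array([np.exp(1j * k), 1.0]))
    _, t_amp = np.linalg.solve(lhs, rhs)
    return abs(t_amp) ** 2

def mean_log_T(length, disorder, trials=200):
    """Disorder-averaged log-transmission for uniform on-site disorder."""
    return float(np.mean([np.log(transmission(
        rng.uniform(-disorder / 2.0, disorder / 2.0, length)))
        for _ in range(trials)]))

short_chain = mean_log_T(10, disorder=2.0)
long_chain = mean_log_T(200, disorder=2.0)   # far smaller: localization
```

Without disorder the chain is perfectly transmitting (T = 1); with on-site disorder, the average log-transmission falls roughly linearly with length, which is the exponential conductance drop the abstract reports between 10 and 100 nm.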
Management, Maintenance, and Upkeep of the Baseline COMO III Air Defense Model.
1986-10-20
weapon subsystems. The sensor subsystems include passive, infrared (IR), television, and a nonimaging sensor and observer, typically the vehicle driver...initially scheduled from the enter game event (DGO) and is rescheduled on a cyclic basis. When radar target detection occurs, the optical search process (DG9...one search cycle in elevation by the track radar/gunner's optics. DG1 constantly monitors the radar surveillance search volume and when a higher
2012-09-27
time patients could reach a temperature near 103°F. The fever was typically accompanied by headache, backache, vomiting, and prostration. A...were co-housed with prairie dogs. Infected prairie dogs were sold and distributed across multiple states including Wisconsin, Illinois, Indiana...deletion of C3L from the Congo Basin clade virus reduced morbidity and mortality in prairie dogs infected intranasally (29). Since 1986, passive
Flight testing a V/STOL aircraft to identify a full-envelope aerodynamic model
NASA Technical Reports Server (NTRS)
Mcnally, B. David; Bach, Ralph E., Jr.
1988-01-01
Flight-test techniques are being used to generate a data base for identification of a full-envelope aerodynamic model of a V/STOL fighter aircraft, the YAV-8B Harrier. The flight envelope to be modeled includes hover, transition to conventional flight and back to hover, STOL operation, and normal cruise. Standard V/STOL procedures, such as vertical takeoffs and landings and short takeoffs and landings, are used to gather data in the powered-lift flight regime. Long (3 to 5 min) maneuvers which include a variety of input types are used to obtain large-amplitude control and response excitations. The aircraft is under continuous radar tracking; a laser tracker is used for V/STOL operations near the ground. Tracking data are used with state-estimation techniques to check data consistency and to derive unmeasured variables, for example, angular accelerations. A propulsion model of the YAV-8B's engine and reaction control system is used to isolate aerodynamic forces and moments for model identification. Representative V/STOL flight data are presented. The processing of a typical short takeoff and slow landing maneuver is illustrated.
Modeling Climate Change in the Absence of Climate Change Data. Editorial Comment
NASA Technical Reports Server (NTRS)
Skiles, J. W.
1995-01-01
Practitioners of climate change prediction base many of their future climate scenarios on General Circulation Models (GCM's), each model with differing assumptions and parameter requirements. For representing the atmosphere, GCM's typically contain equations for calculating motion of particles, thermodynamics and radiation, and continuity of water vapor. Hydrology and heat balance are usually included for continents, and sea ice and heat balance are included for oceans. The current issue of this journal contains a paper by Van Blarcum et al. (1995) that predicts runoff from nine high-latitude rivers under a doubled CO2 atmosphere. The paper is important since river flow is an indicator variable for climate change. The authors show that precipitation will increase under the imposed perturbations and that owing to higher temperatures earlier in the year that cause the snow pack to melt sooner, runoff will also increase. They base their simulations on output from a GCM coupled with an interesting water routing scheme they have devised. Climate change models have been linked to other models to predict deforestation.
Multi-model inference for incorporating trophic and climate uncertainty into stock assessments
NASA Astrophysics Data System (ADS)
Ianelli, James; Holsman, Kirstin K.; Punt, André E.; Aydin, Kerim
2016-12-01
Ecosystem-based fisheries management (EBFM) approaches allow a broader and more extensive consideration of objectives than is typically possible with conventional single-species approaches. Ecosystem linkages may include trophic interactions and climate change effects on productivity for the relevant species within the system. Presently, models are evolving to include a comprehensive set of fishery and ecosystem information to address these broader management considerations. The increased scope of EBFM approaches is accompanied by a greater number of plausible models to describe the systems. This can lead to harvest recommendations and biological reference points that differ considerably among models. Model selection for projections (and specific catch recommendations) often occurs through a process that tends to adopt familiar, often simpler, models without considering those that incorporate more complex ecosystem information. Multi-model inference provides a framework that resolves this dilemma by providing a means of including information from alternative, often divergent models to inform biological reference points and possible catch consequences. We apply an example of this approach to data for three species of groundfish in the Bering Sea (walleye pollock, Pacific cod, and arrowtooth flounder) using three models: 1) an age-structured "conventional" single-species model, 2) an age-structured single-species model with temperature-specific weight at age, and 3) a temperature-specific multi-species stock assessment model. The latter two approaches also include consideration of alternative future climate scenarios, adding another dimension to evaluate model projection uncertainty. We show how Bayesian model-averaging methods can be used to incorporate such trophic and climate information to broaden single-species stock assessments by using an EBFM approach that may better characterize uncertainty.
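At its core, the Bayesian model-averaging step described above combines a quantity of interest across candidate models using posterior model weights, with the spread between model estimates adding to each model's own uncertainty. The sketch below illustrates that arithmetic with invented weights and reference points; none of the numbers come from the Bering Sea assessments.

```python
import math
import numpy as np

def model_average(weights, values, within_sd):
    """Bayesian model averaging of a scalar quantity (e.g., a biomass
    reference point) across candidate models.

    weights   - posterior model weights (summing to 1)
    values    - each model's point estimate
    within_sd - each model's own standard deviation
    Returns the model-averaged estimate and a total standard deviation
    combining within-model and between-model variance.
    """
    w = np.asarray(weights, dtype=float)
    v = np.asarray(values, dtype=float)
    mean = float(w @ v)
    between_var = float(w @ (v - mean) ** 2)                    # spread among models
    within_var = float(w @ np.asarray(within_sd, dtype=float) ** 2)
    return mean, math.sqrt(within_var + between_var)

# Hypothetical example: three models, three estimates of a reference point.
b_avg, b_sd = model_average([0.2, 0.3, 0.5], [1200.0, 1100.0, 900.0], [50.0, 60.0, 80.0])
```

Note that even when the individual models are precise, disagreement among them inflates the total standard deviation, which is exactly the uncertainty a single-model assessment hides.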
Concrete Open-Wall Systems Wrapped with FRP under Torsional Loads
Mancusi, Geminiano; Feo, Luciano; Berardi, Valentino P.
2012-01-01
The static behavior of reinforced concrete (RC) beams plated with layers of fiber-reinforced composite material (FRP) is widely investigated in the current literature, which addresses both numerical modeling and experiments. Scientific interest in this topic is explained by the increasingly widespread use of composite materials in retrofitting techniques, as well as in the consolidation and upgrading of existing reinforced concrete elements to new service conditions. The effectiveness of these techniques is typically influenced by the debonding of the FRP at the interface with concrete, where the transfer of stresses occurs from one element (RC member) to the other (FRP strengthening). In fact, the activation of the well-known premature failure modes can be regarded as a consequence of high peak values of the interfacial interactions. Until now, typical applications of FRP structural plating have included cases of flexural or shear-flexural strengthening. Within this context, the present study aims to extend the investigation to the case of wall-systems with open cross-section under torsional loads. It includes the results of some numerical analyses carried out by means of a finite element approximation.
Rotordynamic Modelling and Response Characteristics of an Active Magnetic Bearing Rotor System
NASA Technical Reports Server (NTRS)
Free, April M.; Flowers, George T.; Trent, Victor S.
1996-01-01
Auxiliary bearings are a critical feature of any magnetic bearing system. They protect the soft iron core of the magnetic bearing during an overload or failure. An auxiliary bearing typically consists of a rolling element bearing or bushing with a clearance gap between the rotor and the inner race of the support. The dynamics of such systems can be quite complex. It is desired to develop a rotordynamic model which describes the dynamic behavior of a flexible rotor system with magnetic bearings including auxiliary bearings. The model is based upon an experimental test facility. Some simulation studies are presented to illustrate the behavior of the model. In particular, the effects of introducing sideloading from the magnetic bearing when one coil fails is studied. These results are presented and discussed.
NASA Technical Reports Server (NTRS)
Smialek, James L.
2002-01-01
An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplification allowed representation by a deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
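The iterative growth-and-spall summation can be illustrated in a few lines of code. The sketch below is a generic uniform-scale approximation of such a model (not the paper's exact segment-based series): each cycle the retained scale grows parabolically, a fixed area fraction of oxide spalls, and the net specimen weight change is the oxygen gained minus the mass of oxide lost as spall. All parameter values are illustrative.

```python
import math

def cyclic_oxidation(kp, fa, f_o, dt, n_cycles):
    """Simplified cyclic-oxidation weight-change model.

    kp       - parabolic rate constant for oxygen uptake (mg^2 cm^-4 h^-1)
    fa       - area fraction of scale spalled each cycle
    f_o      - mass fraction of oxygen in the oxide (stoichiometry)
    dt       - hot-time duration of one cycle (h)
    Returns the net specimen weight change (mg/cm^2) after each cycle.
    """
    w_r = 0.0       # oxygen tied up in the adherent scale
    spalled = 0.0   # cumulative spalled oxide mass
    history = []
    for _ in range(n_cycles):
        w_r = math.sqrt(w_r ** 2 + kp * dt)  # parabolic growth during the cycle
        loss = fa * w_r                      # oxygen content of the spalled fraction
        spalled += loss / f_o                # spalled oxide carries its metal too
        w_r -= loss
        # net change = total oxygen gained - mass of oxide lost as spall
        history.append(w_r - (1.0 - f_o) * spalled)
    return history
```

With a small spall fraction the curve rises, flattens, and eventually turns negative as metal consumption dominates, which is the familiar cyclic-oxidation weight-change signature.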
Dark Matter "Collider" from Inelastic Boosted Dark Matter.
Kim, Doojin; Park, Jong-Chul; Shin, Seodong
2017-10-20
We propose a novel dark matter (DM) detection strategy for models with a nonminimal dark sector. The main ingredients in the underlying DM scenario are a boosted DM particle and a heavier dark sector state. The relativistic DM impinging on the target material scatters inelastically to the heavier state, which subsequently decays into DM along with lighter states including visible (standard model) particles. The expected signal event is therefore accompanied by a visible signature from the secondary cascade process, together with the recoil of the target particle, distinguishing it from the typical neutrino signal, which lacks the secondary signature. We then discuss various kinematic features, followed by DM detection prospects at large-volume neutrino detectors, within a model framework where a dark gauge boson is the mediator between the standard model particles and DM.
Stephensen, C B; Welter, J; Thaker, S R; Taylor, J; Tartaglia, J; Paoletti, E
1997-01-01
Canine distemper virus (CDV) infection of ferrets causes an acute systemic disease involving multiple organ systems, including the respiratory tract, lymphoid system, and central nervous system (CNS). We have tested candidate CDV vaccines incorporating the fusion (F) and hemagglutinin (HA) proteins in the highly attenuated NYVAC strain of vaccinia virus and in the ALVAC strain of canarypox virus, which does not productively replicate in mammalian hosts. Juvenile ferrets were vaccinated twice with these constructs, or with an attenuated live-virus vaccine, while controls received saline or the NYVAC and ALVAC vectors expressing rabies virus glycoprotein. Control animals did not develop neutralizing antibody and succumbed to distemper after developing fever, weight loss, leukocytopenia, decreased activity, conjunctivitis, an erythematous rash typical of distemper, CNS signs, and viremia in peripheral blood mononuclear cells (as measured by reverse transcription-PCR). All three CDV vaccines elicited neutralizing titers of at least 1:96. All vaccinated ferrets survived, and none developed viremia. Both recombinant vaccines also protected against the development of symptomatic distemper. However, ferrets receiving the live-virus vaccine lost weight, became lymphocytopenic, and developed the erythematous rash typical of CDV. These data show that ferrets are an excellent model for evaluating the ability of CDV vaccines to protect against symptomatic infection. Because the pathogenesis and clinical course of CDV infection of ferrets is quite similar to that of other Morbillivirus infections, including measles, this model will be useful in testing new candidate Morbillivirus vaccines. PMID:8995676
Discrete-time modelling of musical instruments
NASA Astrophysics Data System (ADS)
Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti
2006-01-01
This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
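As a concrete taste of the digital waveguide family, the sketch below implements a Karplus-Strong plucked string, the simplest waveguide-style string model: a delay line whose length sets the pitch, and a two-point averaging filter that models frequency-dependent losses at the reflection. This is a textbook illustration, not one of the article's specific example models.

```python
import random
from collections import deque

def pluck(fs=44100, f0=220.0, dur=1.0, decay=0.996):
    """Karplus-Strong plucked string (the simplest digital waveguide)."""
    n = int(fs / f0)                      # delay-line length sets the pitch
    # Pluck excitation: fill the delay line with a noise burst.
    line = deque(random.uniform(-1.0, 1.0) for _ in range(n))
    out = []
    for _ in range(int(fs * dur)):
        s = line.popleft()
        # Two-point average plus decay: a crude loop filter for string losses.
        line.append(decay * 0.5 * (s + line[0]))
        out.append(s)
    return out
```

Writing `out` to a WAV file at rate `fs` gives a decaying string tone near `f0`; the integer delay length slightly detunes it, which is why full waveguide models add a fractional-delay filter to the loop.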
Demographics of reintroduced populations: estimation, modeling, and decision analysis
Converse, Sarah J.; Moore, Clinton T.; Armstrong, Doug P.
2013-01-01
Reintroduction can be necessary for recovering populations of threatened species. However, the success of reintroduction efforts has been poorer than many biologists and managers would hope. To increase the benefits gained from reintroduction, management decision making should be couched within formal decision-analytic frameworks. Decision analysis is a structured process for informing decision making that recognizes that all decisions have a set of components—objectives, alternative management actions, predictive models, and optimization methods—that can be decomposed, analyzed, and recomposed to facilitate optimal, transparent decisions. Because the outcome of interest in reintroduction efforts is typically population viability or related metrics, models used in decision analysis efforts for reintroductions will need to include population models. In this special section of the Journal of Wildlife Management, we highlight examples of the construction and use of models for informing management decisions in reintroduced populations. In this introductory contribution, we review concepts in decision analysis, population modeling for analysis of decisions in reintroduction settings, and future directions. Increased use of formal decision analysis, including adaptive management, has great potential to inform reintroduction efforts. Adopting these practices will require close collaboration among managers, decision analysts, population modelers, and field biologists.
NASA Technical Reports Server (NTRS)
Cline, M. C.
1981-01-01
A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two dimensional, time dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing length model, a one equation model, or the Jones-Launder two equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet powered afterbodies, airfoils, and free jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
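Of the turbulence closures listed, the algebraic mixing-length model is simple enough to state in two lines: an eddy viscosity nu_t = l^2 |du/dy|, with mixing length l proportional to wall distance. The sketch below is the generic Prandtl form with the usual von Karman constant, offered purely as an illustration; VNAP2's actual implementation differs in detail.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def eddy_viscosity(y, u):
    """Prandtl mixing-length closure: nu_t = (kappa*y)^2 * |du/dy|.

    y - wall-normal coordinates
    u - mean streamwise velocity at those points
    """
    dudy = np.gradient(u, y)              # finite-difference mean shear
    return (KAPPA * y) ** 2 * np.abs(dudy)
```

For a linear profile u = 2y the shear is constant, so the eddy viscosity grows quadratically with wall distance, vanishing at the wall as a mixing-length model should.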
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. H. Titus, S. Avasaralla, A. Brooks, R. Hatcher
2010-09-22
The National Spherical Torus Experiment (NSTX) project is planning upgrades to the toroidal field, plasma current and pulse length. This involves the replacement of the center-stack, including the inner legs of the TF, OH, and inner PF coils. A second neutral beam will also be added. The increased performance of the upgrade requires qualification of the remaining components including the vessel, passive plates, and divertor for higher disruption loads. The hardware needing qualification is more complex than is typically accessible by large-scale electromagnetic (EM) simulations of the plasma disruptions. The usual method is to include simplified representations of components in the large EM models and attempt to extract forces to apply to more detailed models. This paper describes a more efficient approach of combining comprehensive modeling of the plasma and tokamak conducting structures, using the 2D OPERA code, with much more detailed treatment of individual components using ANSYS electromagnetic (EM) and mechanical analysis. This captures local eddy currents and the resulting loads in complex details, and allows efficient nonlinear and dynamic structural analyses.
Computational Modeling of Low-Density Ultracold Plasmas
NASA Astrophysics Data System (ADS)
Witte, Craig
In this dissertation I describe a number of different computational investigations which I have undertaken during my time at Colorado State University. Perhaps the most significant of my accomplishments was the development of a general molecular dynamic model that simulates a wide variety of physical phenomena in ultracold plasmas (UCPs). This model formed the basis of most of the numerical investigations discussed in this thesis. The model utilized the massively parallel architecture of GPUs to achieve significant computing speed increases (up to 2 orders of magnitude) above traditional single core computing. This increased computing power allowed for each particle in an actual UCP experimental system to be explicitly modeled in simulations. By using this model, I was able to undertake a number of theoretical investigations into ultracold plasma systems. Chief among these was our lab's investigation of electron center-of-mass damping, in which the molecular dynamics model was an essential tool in interpreting the results of the experiment. Originally, it was assumed that this damping would solely be a function of electron-ion collisions. However, the model was able to identify an additional collisionless damping mechanism that was determined to be significant in the first iteration of our experiment. To mitigate this collisionless damping, the model was used to find a new parameter range where this mechanism was negligible. In this new parameter range, the model was an integral part in verifying the achievement of a record low measured UCP electron temperature of 1.57 +/- 0.28 K and a record high electron strong coupling parameter, Gamma, of 0.35 +/- 0.08. Additionally, the model, along with experimental measurements, was used to verify the breakdown of the standard weak coupling approximation for Coulomb collisions. The general molecular dynamics model was also used in other contexts.
These included the modeling of both the formation process of ultracold plasmas and the thermalization of the electron component of an ultracold plasma. Our modeling of UCP formation is still in its infancy, and there is still much outstanding work. However, we have already discovered a previously unreported electron heating mechanism that arises from an external electric field being applied during UCP formation. Thermalization modeling showed that the ion density distribution plays a role in the thermalization of electrons in ultracold plasma, a consideration not typically included in plasma modeling. A Gaussian ion density distribution was shown to lead to a slightly faster electron thermalization rate than an equivalent uniform ion density distribution as a result of collisionless effects. Three distinct phases of UCP electron thermalization during formation were identified. Finally, the dissertation will describe additional computational investigations that preceded the general molecular dynamics model. These include simulations of ultracold plasma ion expansion driven by non-neutrality, as well as an investigation into electron evaporation. To test the effects of non-neutrality on ion expansion, a numerical model was developed that used the King model of the electron to describe the electron distribution for an arbitrary charge imbalance. The model found that increased non-neutrality of the plasma led to the rapid expansion of ions on the plasma exterior, which in turn led to a sharp ion cliff-like spatial structure. Additionally, this rapid expansion led to additional cooling of the electron component of the plasma. The evaporation modeling was used to test the underlying assumptions of previously developed analytical expression for charged particle evaporation. The model used Monte Carlo techniques to simulate the collisions and the evaporation process. 
The model found that neither of the underlying assumptions of the charged-particle evaporation expressions held true for typical ultracold plasma parameters, and it provides a route for computations in spite of the breakdown of these two typical assumptions.
NASA Astrophysics Data System (ADS)
Mitra, Aditee; Castellani, Claudia; Gentleman, Wendy C.; Jónasdóttir, Sigrún H.; Flynn, Kevin J.; Bode, Antonio; Halsband, Claudia; Kuhn, Penelope; Licandro, Priscilla; Agersted, Mette D.; Calbet, Albert; Lindeque, Penelope K.; Koppelmann, Rolf; Møller, Eva F.; Gislason, Astthor; Nielsen, Torkel Gissel; St. John, Michael
2014-12-01
Exploring climate and anthropogenic impacts on marine ecosystems requires an understanding of how trophic components interact. However, integrative end-to-end ecosystem studies (experimental and/or modelling) are rare. Experimental investigations often concentrate on a particular group or individual species within a trophic level, while tropho-dynamic field studies typically employ either a bottom-up approach concentrating on the phytoplankton community or a top-down approach concentrating on the fish community. Likewise the emphasis within modelling studies is usually placed upon phytoplankton-dominated biogeochemistry or on aspects of fisheries regulation. In consequence the roles of zooplankton communities (protists and metazoans) linking phytoplankton and fish communities are typically under-represented if not (especially in fisheries models) ignored. Where represented in ecosystem models, zooplankton are usually incorporated in an extremely simplistic fashion, using empirical descriptions merging various interacting physiological functions governing zooplankton growth and development, and thence ignoring physiological feedback mechanisms. Here we demonstrate, within a modelled plankton food-web system, how trophic dynamics are sensitive to small changes in parameter values describing zooplankton vital rates and thus the importance of using appropriate zooplankton descriptors. Through a comprehensive review, we reveal the mismatch between empirical understanding and modelling activities identifying important issues that warrant further experimental and modelling investigation. These include: food selectivity, kinetics of prey consumption and interactions with assimilation and growth, form of voided material, mortality rates at different age-stages relative to prior nutrient history. In particular there is a need for dynamic data series in which predator and prey of known nutrient history are studied interacting under varied pH and temperature regimes.
A neutrinophilic 2HDM as a UV completion for the inverse seesaw mechanism
Bertuzzo, Enrico; Machado, Pedro A. N.; Tabrizi, Zahra; ...
2017-11-06
In Neutrinophilic Two Higgs Doublet Models, Dirac neutrino masses are obtained by forbidding a Majorana mass term for the right-handed neutrinos via a symmetry. We study a variation of such models in which that symmetry is taken to be a local U(1), leading naturally to the typical Lagrangian of the inverse seesaw scenario. Here, the presence of a new gauge boson and of an extended scalar sector results in a rich phenomenology, including modifications to Z, Higgs and kaon decays as well as to electroweak precision parameters, and a pseudoscalar associated to the breaking of lepton number.
Modeling the effect of varying swim speeds on fish passage through velocity barriers
Castro-Santos, T.
2006-01-01
The distance fish can swim through zones of high-velocity flow is an important factor limiting the distribution and conservation of riverine and diadromous fishes. Often, these barriers are characterized by nonuniform flow conditions, and it is likely that fish will swim at varying speeds to traverse them. Existing models used to predict passage success, however, typically include the unrealistic assumption that fish swim at a constant speed regardless of the speed of flow. This paper demonstrates how the maximum distance of ascent through velocity barriers can be estimated from the swim speed-fatigue time relationship, allowing for variation in both swim speed and water velocity.
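The optimization implicit in the abstract can be made explicit for the simplest case of a uniform flow. If fatigue time follows the usual log-linear relationship ln T = a + b*U_s (with b < 0) and ground speed against a flow of speed u_f is U_s - u_f, then the distance of ascent D = (U_s - u_f)*T(U_s) is maximized at U_s = u_f - 1/b. The coefficients in the sketch below are invented for illustration and are not the paper's fitted values.

```python
import math

def max_ascent(u_f, a, b):
    """Maximum ascent distance through a uniform velocity barrier.

    Fatigue time:  T(U) = exp(a + b*U), with b < 0.
    Distance:      D(U) = (U - u_f) * T(U).
    Setting dD/dU = 0 gives the optimal swim speed U* = u_f - 1/b.
    """
    u_opt = u_f - 1.0 / b
    d_max = (u_opt - u_f) * math.exp(a + b * u_opt)
    return u_opt, d_max
```

For example, with u_f = 2 m/s and illustrative coefficients a = 8, b = -1.5, the optimal swim speed is about 2.67 m/s; swimming either faster (burning endurance) or slower (losing ground speed) covers less distance, which is why constant-speed models misestimate passage success.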
Thermal Destruction Of CB Contaminants Bound On Building ...
Symposium Paper. An experimental and theoretical program has been initiated by the U.S. EPA to investigate issues of chemical/biological agent destruction in incineration systems when the agent in question is bound on common porous building interior materials. This program includes 3-dimensional computational fluid dynamics modeling with matrix-bound agent destruction kinetics, bench-scale experiments to determine agent destruction kinetics while bound on various matrices, and pilot-scale experiments to scale up the bench-scale experiments to a more practical scale. Finally, the model is used to predict agent destruction and combustion conditions in two full-scale incineration systems that are typical of modern combustor design.
Holographic Techni-Dilaton, or Conformal Higgs
NASA Astrophysics Data System (ADS)
Haba, Kazumoto; Matsuzaki, Shinya; Yamawaki, Koichi
2011-01-01
We study a holographic model dual to walking/conformal technicolor (W/C TC) deforming a hard-wall type of bottom-up setup by including effects from techni-gluon condensation. We calculate masses of (techni-) ρ meson, a1 meson, and flavor/chiral-singlet scalar meson identified with techni-dilaton (TD)/conformal Higgs boson, as well as the S parameter. It is shown that gluon contributions and large anomalous dimension tend to decrease specifically mass of the TD. In the typical model with S ≃ 0.1, we find mTD ≃ 600 GeV, while mρ, m
Porcine cadaver iris model for iris heating during corneal surgery with a femtosecond laser
NASA Astrophysics Data System (ADS)
Sun, Hui; Fan, Zhongwei; Wang, Jiang; Yan, Ying; Juhasz, Tibor; Kurtz, Ron
2015-03-01
Multiple femtosecond lasers have now been cleared for use in ophthalmic surgery, including for the creation of corneal flaps in LASIK surgery. A preliminary study indicated that during typical surgical use, laser energy may pass beyond the cornea with potential effects on the iris. As a model for laser exposure of the iris during femtosecond corneal surgery, we simulated the temperature rise in porcine cadaver iris during direct illumination by the femtosecond laser. Additionally, ex-vivo iris heating due to femtosecond laser irradiation was measured with an infrared thermal camera (Fluke Corp., Everett, WA) as a validation of the simulation.
NASA Technical Reports Server (NTRS)
Hultberg, R. S.; Chu, J.
1980-01-01
Aerodynamic characteristics obtained in a helical flow environment utilizing a rotary balance located in the Langley spin tunnel are presented in plotted form for a 1/6-scale, single-engine, high-wing general aviation model. The configurations tested included the basic airplane and control deflections, wing leading-edge devices, tail designs, and airplane components. Data are presented without analysis for an angle of attack range of 8 deg to 90 deg and for clockwise and counterclockwise rotations covering a spin coefficient range from 0 to 0.9.
Hysteresis phenomena of the intelligent driver model for traffic flow
NASA Astrophysics Data System (ADS)
Dahui, Wang; Ziqiang, Wei; Ying, Fan
2007-07-01
We present hysteresis phenomena of the intelligent driver model for traffic flow in a circular one-lane roadway. We show that the microscopic structure of traffic flow is dependent on its initial state by plotting the fraction of congested vehicles over the density, which shows a typical hysteresis loop, and by investigating the trajectories of vehicles on the velocity-over-headway plane. We find that the trajectories of vehicles on the velocity-over-headway plane, which usually show a hysteresis loop, include multiple loops. We also point out the relations between these hysteresis loops and the congested jams or high-density clusters in traffic flow.
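For reference, the intelligent driver model that generates these dynamics has a compact closed form. The sketch below uses commonly quoted illustrative parameter values (desired speed, time headway, etc.), which are not necessarily those of the paper.

```python
import math

# Illustrative IDM parameters: desired speed, time headway, max acceleration,
# comfortable deceleration, acceleration exponent, jam distance.
V0, T, A, B, DELTA, S0 = 30.0, 1.5, 1.0, 1.5, 4, 2.0

def idm_accel(v, dv, s):
    """IDM acceleration for own speed v, approach rate dv = v - v_leader,
    and net gap s to the leader."""
    s_star = S0 + v * T + v * dv / (2.0 * math.sqrt(A * B))  # desired gap
    return A * (1.0 - (v / V0) ** DELTA - (s_star / s) ** 2)
```

On an empty road (very large s, v = 0) the model accelerates at A; at the equilibrium gap for a given speed the acceleration vanishes. Integrating this rule for many vehicles on a ring, starting from different initial states, is what produces the density-dependent hysteresis loops described above.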
RF Frequency Oscillations in the Early Stages of Vacuum Arc Collapse
NASA Technical Reports Server (NTRS)
Griffin, Steven T.; Thio, Y. C. Francis
2003-01-01
RF frequency oscillations may be produced in a typical capacitive charging/discharging pulsed power system. These oscillations may be benign, parasitic, destructive, or crucial to energy deposition. In some applications, proper damping of oscillations may be critical to proper plasma formation. Because the energy deposited into the plasma is a function of plasma and circuit conditions, the entire plasma/circuit system needs to be considered as a unit. To accomplish this, the initiation of plasma is modeled as a time-varying, non-linear element in a circuit analysis model. The predicted spectra are compared to empirical power density spectra, including those obtained from vacuum arcs.
On the use of tower-flux measurements to assess the performance of global ecosystem models
NASA Astrophysics Data System (ADS)
El Maayar, M.; Kucharik, C.
2003-04-01
Global ecosystem models are important tools for the study of biospheric processes and their responses to environmental changes. Such models typically translate knowledge, gained from local observations, into estimates of regional or even global outcomes of ecosystem processes. A typical test of ecosystem models consists of comparing their output against tower-flux measurements of land surface-atmosphere exchange of heat and mass. To perform such tests, models are typically run using detailed information on soil properties (texture, carbon content,...) and vegetation structure observed at the experimental site (e.g., vegetation height, vegetation phenology, leaf photosynthetic characteristics,...). In global simulations, however, earth's vegetation is typically represented by a limited number of plant functional types (PFT; group of plant species that have similar physiological and ecological characteristics). For each PFT (e.g., temperate broadleaf trees, boreal conifer evergreen trees,...), which can cover a very large area, a set of typical physiological and physical parameters are assigned. Thus, a legitimate question arises: How does the performance of a global ecosystem model run using detailed site-specific parameters compare with the performance of a less detailed global version where generic parameters are attributed to a group of vegetation species forming a PFT? To answer this question, we used a multiyear dataset, measured at two forest sites with contrasting environments, to compare seasonal and interannual variability of surface-atmosphere exchange of water and carbon predicted by the Integrated BIosphere Simulator-Dynamic Global Vegetation Model. Two types of simulations were, thus, performed: a) Detailed runs: observed vegetation characteristics (leaf area index, vegetation height,...) 
and soil carbon content, in addition to climate and soil type, are specified for model run; and b) Generic runs: when only observed climates and soil types at the measurement sites are used to run the model. The generic runs were performed for the number of years equal to the current age of the forests, initialized with no vegetation and a soil carbon density equal to zero.
Nonlinear plasma wave models in 3D fluid simulations of laser-plasma interaction
NASA Astrophysics Data System (ADS)
Chapman, Thomas; Berger, Richard; Arrighi, Bill; Langer, Steve; Banks, Jeffrey; Brunner, Stephan
2017-10-01
Simulations of laser-plasma interaction (LPI) in inertial confinement fusion (ICF) conditions require multi-mm spatial scales due to the typical laser beam size and durations of order 100 ps in order for numerical laser reflectivities to converge. To be computationally achievable, these scales necessitate a fluid-like treatment of light and plasma waves with a spatial grid size on the order of the light wave length. Plasma waves experience many nonlinear phenomena not naturally described by a fluid treatment, such as frequency shifts induced by trapping, a nonlinear (typically suppressed) Landau damping, and mode couplings leading to instabilities that can cause the plasma wave to decay rapidly. These processes affect the onset and saturation of stimulated Raman and Brillouin scattering, and are of direct interest to the modeling and prediction of deleterious LPI in ICF. It is not currently computationally feasible to simulate these Debye length-scale phenomena in 3D across experimental scales. Analytically-derived and/or numerically benchmarked models of processes occurring at scales finer than the fluid simulation grid offer a path forward. We demonstrate the impact of a range of kinetic processes on plasma reflectivity via models included in the LPI simulation code pF3D. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
HELICITY CONSERVATION IN NONLINEAR MEAN-FIELD SOLAR DYNAMO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pipin, V. V.; Sokoloff, D. D.; Zhang, H.
It is believed that magnetic helicity conservation is an important constraint on large-scale astrophysical dynamos. In this paper, we study a mean-field solar dynamo model that employs two different formulations of the magnetic helicity conservation. In the first approach, the evolution of the averaged small-scale magnetic helicity is largely determined by the local induction effects due to the large-scale magnetic field, turbulent motions, and the turbulent diffusive loss of helicity. In this case, the dynamo model shows that the typical strength of the large-scale magnetic field generated by the dynamo is much smaller than the equipartition value for a magnetic Reynolds number of 10^6. This is the so-called catastrophic quenching (CQ) phenomenon. In the literature, this is considered to be typical for various kinds of solar dynamo models, including the distributed-type and the Babcock-Leighton-type dynamos. The problem can be resolved by the second formulation, which is derived from the integral conservation of the total magnetic helicity. In this case, the dynamo model shows that magnetic helicity propagates with the dynamo wave from the bottom of the convection zone to the surface. This prevents CQ because of the local balance between the large-scale and small-scale magnetic helicities. Thus, the solar dynamo can operate in a wide range of magnetic Reynolds numbers, up to 10^6.
Distilled Water Distribution Systems. Laboratory Design Notes.
ERIC Educational Resources Information Center
Sell, J.C.
Factors concerning water distribution systems, including an evaluation of materials and a recommendation of materials best suited for service in typical facilities are discussed. Several installations are discussed in an effort to bring out typical features in selected applications. The following system types are included--(1) industrial…
Automated Vocal Analysis of Children with Hearing Loss and Their Typical and Atypical Peers
VanDam, Mark; Oller, D. Kimbrough; Ambrose, Sophie E.; Gray, Sharmistha; Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Silbert, Noah H.; Moeller, Mary Pat
2014-01-01
Objectives This study investigated automatic assessment of vocal development in children with hearing loss as compared with children who are typically developing, children with language delays, and children with autism spectrum disorder. Statistical models are examined for their performance in classifying children and in predicting age within the four groups. Design The vocal analysis system analyzed over 1900 whole-day, naturalistic acoustic recordings from 273 toddlers and preschoolers, comprising children who were typically developing, hard of hearing, language delayed, or autistic. Results Samples from children who were hard of hearing patterned more similarly to those of typically developing children than to the language-delayed or autistic samples. The statistical models were able to classify children from the four groups examined and estimate developmental age based on automated vocal analysis. Conclusions This work shows a broad similarity between children with hearing loss and typically developing children, although children with hearing loss show some delay in their production of speech. Automatic acoustic analysis can now be used to quantitatively compare vocal development in children with and without speech-related disorders. The work may serve to better distinguish among various developmental disorders and ultimately contribute to improved intervention. PMID:25587667
Photogrammetry Applied to Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Cattafesta, L. N., III; Radeztsky, R. H.; Burner, A. W.
2000-01-01
In image-based measurements, quantitative image data must be mapped to three-dimensional object space. Analytical photogrammetric methods, which may be used to accomplish this task, are discussed from the viewpoint of experimental fluid dynamicists. The Direct Linear Transformation (DLT) for camera calibration, used in pressure-sensitive paint measurements, is summarized. An optimization method for camera calibration is developed that can be used to determine the camera calibration parameters, including those describing lens distortion, from a single image. Combined with the DLT method, this method allows a rapid and comprehensive in-situ camera calibration and is therefore particularly useful for quantitative flow visualization and other measurements such as model attitude and deformation in production wind tunnels. The paper also includes a brief description of typical photogrammetric applications to temperature- and pressure-sensitive paint measurements and model deformation measurements in wind tunnels.
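The DLT mentioned in the abstract above maps object-space points to image coordinates through a linear projective model whose parameters can be recovered from point correspondences. A minimal sketch of that idea (illustrative only; it omits the lens-distortion terms and the single-image optimization the paper adds, and the function names are ours):

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate a 3x4 camera projection matrix from >= 6 non-coplanar
    point correspondences by solving the homogeneous system A p = 0
    with an SVD (basic DLT, no lens-distortion terms)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The null-space direction (last right-singular vector) gives the solution.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = vt[-1].reshape(3, 4)
    return P / P[2, 3]            # fix the arbitrary overall scale

def project(P, X):
    """Project an object-space point through P to pixel coordinates."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```

With exact synthetic correspondences the recovered matrix reprojects the calibration points to machine precision; real calibrations add noise handling and distortion models on top of this core.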
Anti-arrhythmic strategies for atrial fibrillation
Grandi, Eleonora; Maleckar, Mary M.
2016-01-01
Atrial fibrillation (AF), the most common cardiac arrhythmia, is associated with increased risk of cerebrovascular stroke, and with several other pathologies, including heart failure. Current therapies for AF are targeted at reducing risk of stroke (anticoagulation) and tachycardia-induced cardiomyopathy (rate or rhythm control). Rate control, typically achieved by atrioventricular nodal blocking drugs, is often insufficient to alleviate symptoms. Rhythm control approaches include antiarrhythmic drugs, electrical cardioversion, and ablation strategies. Here, we offer several examples of how computational modeling can provide a quantitative framework for integrating multi-scale data to: (a) gain insight into multi-scale mechanisms of AF; (b) identify and test pharmacological and electrical therapies and interventions; and (c) support clinical decisions. We review how modeling approaches have evolved and contributed to the research pipeline and preclinical development, and discuss future directions and challenges in the field. PMID:27612549
Global constraints on vector-like WIMP effective interactions
Blennow, Mattias; Coloma, Pilar; Fernandez-Martinez, Enrique; ...
2016-04-07
In this work we combine information from relic abundance, direct detection, cosmic microwave background, positron fraction, gamma rays, and colliders to explore the existing constraints on couplings between Dark Matter and Standard Model constituents when no underlying model or correlation is assumed. For definiteness, we include independent vector-like effective interactions for each Standard Model fermion. Our results show that low Dark Matter masses below 20 GeV are disfavoured at the 3σ level with respect to higher masses, due to the tension between the relic abundance requirement and upper constraints on the Dark Matter couplings. Lastly, large couplings are typically only allowed in combinations which avoid effective couplings to the nuclei used in direct detection experiments.
A modified homogeneous relaxation model for CO2 two-phase flow in vapour ejector
NASA Astrophysics Data System (ADS)
Haida, M.; Palacz, M.; Smolka, J.; Nowak, A. J.; Hafner, A.; Banasiak, K.
2016-09-01
In this study, the homogeneous relaxation model (HRM) for CO2 flow in a two-phase ejector was modified in order to increase the accuracy of the numerical simulations. The two-phase flow model was implemented in the effective computational tool called ejectorPL for fully automated and systematic computations of various ejector shapes and operating conditions. The modification of the HRM was performed by changing the relaxation time and the constants included in the relaxation time equation, based on experimental results under operating conditions typical for supermarket refrigeration systems. The modified HRM results were compared to HEM results on the basis of motive nozzle and suction nozzle mass flow rates.
Improving the performances of autofocus based on adaptive retina-like sampling model
NASA Astrophysics Data System (ADS)
Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce
2018-03-01
An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, the full width at half maximum (FWHM), and time consumption. Results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including sum-modified-Laplacian (SML), Laplacian (LAP), Midfrequency-DCT (MDCT) and Absolute Tenengrad (ATEN), are compared through comparative experiments. The smallest FWHM is obtained by the use of LAP, which is more suitable for evaluating accuracy than other autofocus functions. The autofocus function of MDCT is most suitable for evaluating real-time ability.
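As a rough illustration of how a Laplacian-type ('LAP') autofocus function scores image sharpness, the sketch below sums the absolute 4-neighbour Laplacian response over the image. This is a generic sharpness measure of the family the abstract names, not the authors' implementation:

```python
import numpy as np

def laplacian_focus(img):
    """Sum of absolute 4-neighbour Laplacian responses over interior pixels.
    Higher values indicate stronger local intensity variation, i.e. a
    sharper (better focused) image."""
    img = np.asarray(img, dtype=float)
    lap = (img[1:-1, :-2] + img[1:-1, 2:] +      # left + right neighbours
           img[:-2, 1:-1] + img[2:, 1:-1] -      # up + down neighbours
           4.0 * img[1:-1, 1:-1])                # minus 4x the centre pixel
    return float(np.abs(lap).sum())
```

An autofocus loop would evaluate such a function at each lens position and seek the maximum; a narrower peak (smaller FWHM) of the score-versus-position curve localizes best focus more precisely.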
Toughened and corrosion- and wear-resistant composite structures and fabrication methods thereof
Seals, Roland D; Ripley, Edward B; Hallman, Russell L
2014-04-08
Composite structures having a reinforced material interjoined with a substrate and methods of creating a composite material interjoined with a substrate. In some embodiments the composite structure may be a line or a spot or formed by reinforced material interjoined with the substrate. The methods typically include disposing a precursor material comprising titanium diboride and/or titanium monoboride on at least a portion of the substrate and heating the precursor material and the at least a portion of the substrate in the presence of an oxidation preventative until at least a portion of the precursor material forms reinforced material interjoined with the substrate. The precursor material may be disposed on the substrate as a sheet or a tape or a slurry or a paste. Localized surface heating may be used to heat the precursor material. The reinforced material typically comprises a titanium boron compound, such as titanium monoboride, and preferably comprises β-titanium. The substrate is typically titanium-bearing, iron-bearing, or aluminum-bearing. A welding rod is provided as an embodiment. The welding rod includes a metal electrode and a precursor material is disposed adjacent at least a portion of the metal electrode. A material for use in forming a composite structure is provided. The material typically includes a precursor material that includes one or more materials selected from the following group: titanium diboride and titanium monoboride. The material also typically includes a flux.
Space Radiation Analysis for the Mark III Spacesuit
NASA Technical Reports Server (NTRS)
Atwell, Bill; Boeder, Paul; Ross, Amy
2013-01-01
NASA has continued the development of space systems by applying and integrating improved technologies that address safety, lightweight materials, and electronics. One such area is extravehicular activity (EVA) spacesuit development, most recently the Mark III spacesuit. In this paper the Mark III spacesuit is discussed in detail, including the various components that comprise the spacesuit, the materials that make it up and their chemical composition, and the 3-D CAD model of the suit. In addition, the male (CAM) and female (CAF) computerized anatomical models are also discussed in detail. We combined the spacesuit and the human models; that is, we developed a method of incorporating the human models in the Mark III spacesuit and performed a ray-tracing technique to determine the space radiation shielding distributions for all of the critical body organs. These body organ shielding distributions include the BFO (Blood-Forming Organs), skin, eye, lungs, stomach, and colon, to name a few, for both the male and female. Using models of the trapped (Van Allen) proton and electron environments, radiation exposures were computed for a typical low earth orbit (LEO) EVA mission scenario, including the geostationary (GEO) high electron environment. A radiation exposure assessment of these mission scenarios is made to determine whether or not the crew radiation exposure limits are satisfied, and if not, the additional shielding material that would be required to satisfy the crew limits.
Real-Time Simulation of the X-33 Aerospace Engine
NASA Technical Reports Server (NTRS)
Aguilar, Robert
1999-01-01
This paper discusses the development and performance of the X-33 Aerospike Engine Real-Time Model. This model was developed for the purposes of control law development, six degree-of-freedom trajectory analysis, vehicle system integration testing, and hardware-in-the-loop controller verification. The Real-Time Model uses a time-step marching solution of non-linear differential equations representing the physical processes involved in the operation of a liquid propellant rocket engine, albeit in a simplified form. These processes include heat transfer, fluid dynamics, combustion, and turbomachine performance. Two engine models are typically employed in order to accurately model maneuvering and the powerpack-out condition, where the power section of one engine is used to supply propellants to both engines if one engine malfunctions. The X-33 Real-Time Model has been compared to actual hot fire test data and found to be in good agreement.
NASA Technical Reports Server (NTRS)
Flowers, George T.
1989-01-01
Rotor dynamical analyses are typically performed using rigid disk models. Studies of rotor models in which the effects of disk flexibility were included indicate that it may be an important effect for many systems. This issue is addressed with respect to the Space Shuttle Main Engine high pressure turbopumps. Finite element analyses have been performed for a simplified free-free flexible disk rotor model and the modes and frequencies compared to those of a rigid disk model. The simple model was then extended to a more sophisticated HPOTP rotor model and similar results were observed. Equations were developed that are suitable for modifying the current rotordynamical analysis program to account for disk flexibility. Some conclusions are drawn from the results of this work as to the importance of disk flexibility for the HPOTP rotordynamics, and some recommendations are given for follow-up research in this area.
Nucleosynthesis of Iron-Peak Elements in Type-Ia Supernovae
NASA Astrophysics Data System (ADS)
Leung, Shing-Chi; Nomoto, Ken'ichi
The observed features of typical Type Ia supernovae are well-modeled as the explosions of carbon-oxygen white dwarfs both near the Chandrasekhar mass and at sub-Chandrasekhar masses. However, observations in the last decade have shown that Type Ia supernovae exhibit a wide diversity, which implies that models for a wider range of parameters are necessary. Based on the hydrodynamics code we developed, we carry out a parameter study of Chandrasekhar mass models for Type Ia supernovae. We conduct a series of two-dimensional hydrodynamics simulations of the explosion phase using the turbulent flame model with the deflagration-detonation transition (DDT). To reconstruct the nucleosynthesis history, we use a particle tracer scheme. We examine the role of model parameters by examining their influence on the final product of nucleosynthesis. The parameters include the initial density, metallicity, initial flame structure, detonation criteria, and so on. We show that the observed chemical evolution of galaxies can help constrain these model parameters.
What's with all this peer-review stuff anyway?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warner, J. S.
2010-01-01
The Journal of Physical Security was ostensibly started to deal with a perceived lack of peer-reviewed journals related to the field of physical security. In fact, concerns have been expressed that the field of physical security is scarcely a field at all. A typical, well-developed field might include the following: multiple peer-reviewed journals devoted to the subject, rigor and critical thinking, metrics, fundamental principles, models and theories, effective standards and guidelines, R and D conferences, professional societies, certifications, its own academic department (or at least numerous academic experts), widespread granting of degrees in the field from 4-year research universities, mechanisms for easily spotting 'snake oil' products and services, and the practice of professionals organizing to police themselves, provide quality control, and determine best practices. Physical security seems to come up short in a number of these areas. Many of these attributes are difficult to quantify. This paper seeks to focus on one area that is quantifiable: the number of peer-reviewed journals dedicated to the field of physical security. In addition, I want to examine the number of overall periodicals (peer-reviewed and non-peer-reviewed) dedicated to physical security, as well as the number of papers published each year about physical security. These are potentially useful analyses because one can often infer how healthy or active a given field is from its publishing activity. For example, there are 2,754 periodicals dedicated to the (very healthy and active) field of physics. This paper also contrasts trade journals with peer-reviewed journals. Trade journals typically focus on practice-related topics. A paper appropriate for a trade journal is usually based more on practical experience than rigorous studies or research. Models, theories, or rigorous experimental research results will usually not be included.
A trade journal typically targets a specific market in an industry or trade. Such journals are often considered to be news magazines and may contain industry-specific advertisements and/or job ads. A peer-reviewed journal, a.k.a. 'refereed journal', in contrast, contains peer-reviewed papers. A peer-reviewed paper is one that has been vetted by the peer review process. In this process, the paper is typically sent to independent experts for review and consideration. A peer-reviewed paper might cover experimental results, and/or a rigorous study, analyses, research efforts, theory, models, or one of many other scholarly endeavors.
ERIC Educational Resources Information Center
van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels
2012-01-01
This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use…
ERIC Educational Resources Information Center
Kwon, Heekyung
2011-01-01
The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandler, David; Betzler, Ben; Hirtz, Gregory John
2016-09-01
The purpose of this report is to document a high-fidelity VESTA/MCNP High Flux Isotope Reactor (HFIR) core model that features a new, representative experiment loading. This model, which represents the current, high-enriched uranium fuel core, will serve as a reference for low-enriched uranium conversion studies, safety-basis calculations, and other research activities. A new experiment loading model was developed to better represent current, typical experiment loadings, in comparison to the experiment loading included in the model for Cycle 400 (operated in 2004). The new experiment loading model for the flux trap target region includes full length 252Cf production targets, 75Se production capsules, 63Ni production capsules, a 188W production capsule, and various materials irradiation targets. Fully loaded 238Pu production targets are modeled in eleven vertical experiment facilities located in the beryllium reflector. Other changes compared to the Cycle 400 model are the high-fidelity modeling of the fuel element side plates and the material composition of the control elements. Results obtained from the depletion simulations with the new model are presented, with a focus on time-dependent isotopic composition of irradiated fuel and single cycle isotope production metrics.
Urano, K; Tamaoki, N; Nomura, T
2012-01-01
Transgenic animal models have long been used in small numbers for in vivo gene function studies, but more recently, the use of a single transgenic animal model has been approved as a second-species, 6-month alternative (to the routine 2-year, 2-animal model) for generating regulatory application data on new drugs in short-term carcinogenicity studies. This article addresses many of the issues associated with the creation and use of one of these transgenic models, the rasH2 mouse, for regulatory science. The discussion includes strategies for mass producing mice with the same stable phenotype, including constructing the transgene, choosing a founder mouse, and controlling both the transgene and background genes; strategies for developing the model for regulatory science, including measurements of carcinogen susceptibility, stability of a large-scale production system, and monitoring for uniform carcinogenicity responses; and finally, efficient use of the transgenic animal model on study. Approximately 20% of mouse carcinogenicity studies for new drug applications in the United States currently use transgenic models, typically the rasH2 mouse. The rasH2 mouse could contribute to animal welfare by reducing the numbers of animals used as well as reducing the cost of carcinogenicity studies. A better understanding of the advantages and disadvantages of the transgenic rasH2 mouse will result in greater and more efficient use of this animal model in the future.
Soares, Marta O.; Palmer, Stephen; Ades, Anthony E.; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M.
2015-01-01
Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447
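The random-effects summaries named in the abstract above (random-effects mean, predictive distribution) can be sketched with the standard DerSimonian-Laird estimator of between-study variance. This is a common textbook method, not necessarily the estimator used in the paper:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis via the DerSimonian-Laird tau^2 estimator.
    Returns the pooled mean, tau^2, and an approximate 95% predictive
    interval for the effect in a new study/setting."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)          # Cochran's Q heterogeneity statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)        # between-study variance, truncated at 0
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    pred_sd = np.sqrt(tau2 + se_re ** 2)      # spread of effects in a new setting
    return mu_re, tau2, (mu_re - 1.96 * pred_sd, mu_re + 1.96 * pred_sd)
```

For a CEA, the choice between feeding the pooled mean or the (wider) predictive distribution into the decision model is exactly the judgment the paper discusses: it depends on how closely the decision setting matches the included studies.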
Parenting and Childhood Chronicity: making visible the invisible work.
Ray, Lynne D
2002-12-01
The work required to raise a child with a chronic illness or disability is above and beyond that of raising a typical child. This article presents a model, Parenting and Childhood Chronicity (PACC), developed during an interpretive study with 43 parents of 34 children (aged 15 months to 16 years) with various chronic conditions. "Special needs parenting" describes the additional care that a child needs and includes medical care, parenting plus, and working the systems. "Minimizing consequences" reflects the struggle to balance the rest of family life and includes parenting siblings, maintaining relationships, and keeping yourself going. Copyright 2002, Elsevier Science (USA). All rights reserved.
NASA Technical Reports Server (NTRS)
Mccall, D. L.
1984-01-01
The results of a simulation study to define the functional characteristics of an airborne and ground reference GPS receiver for use in a Differential GPS system are documented. The operations of a variety of receiver types (sequential single-channel, continuous multi-channel, etc.) are evaluated for a typical civil helicopter mission scenario. The math model of each receiver type incorporated representative system errors, including intentional degradation. The results include a discussion of the receivers' relative performance, the spatial correlative properties of individual range error sources, and the navigation algorithm used to smooth the position data.
Lee, Jimin; Hustad, Katherine C; Weismer, Gary
2014-10-01
Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Nine acoustic variables reflecting different subsystems, and speech intelligibility, were measured in 22 children with CP. These children included 13 with a clinical diagnosis of dysarthria (speech motor impairment [SMI] group) and 9 judged to be free of dysarthria (no SMI [NSMI] group). Data from children with CP were compared to data from age-matched typically developing children. Multiple acoustic variables reflecting the articulatory subsystem were different in the SMI group, compared to the NSMI and typically developing groups. A significant speech intelligibility prediction model was obtained with all variables entered into the model (adjusted R2 = .801). The articulatory subsystem showed the most substantial independent contribution (58%) to speech intelligibility. Incremental R2 analyses revealed that any single variable explained less than 9% of speech intelligibility variability. Children in the SMI group had articulatory subsystem problems as indexed by acoustic measures. As in the adult literature, the articulatory subsystem makes the primary contribution to speech intelligibility variance in dysarthria, with minimal or no contribution from other systems.
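The incremental R² analysis described in the abstract above, the variance a single predictor uniquely explains, can be sketched as the drop in R² when that predictor is removed from the full regression model. This is a generic illustration of the technique, not the study's code or data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit (intercept column added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def incremental_r2(X_full, y, drop_col):
    """R^2 lost when one predictor column is removed from the full model,
    i.e. that predictor's unique contribution to explained variance."""
    X_reduced = np.delete(X_full, drop_col, axis=1)
    return r_squared(X_full, y) - r_squared(X_reduced, y)
```

When predictors are correlated, as acoustic subsystem measures typically are, each predictor's incremental R² can be small even though the full model explains most of the variance, which matches the pattern the study reports.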
NASA Astrophysics Data System (ADS)
Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata
2016-09-01
Understanding and characterizing degradation mechanisms is critical to developing relevant accelerated tests that ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, a Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data become available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
Caldwell, Ronald; Roberts, Craig S; An, Zhijie; Chen, Chieh-I; Wang, Bruce
2015-07-24
China has experienced several severe outbreaks of influenza over the past century: 1918, 1957, 1968, and 2009. Influenza itself can be deadly; however, the increase in mortality during an influenza outbreak is also attributable to secondary bacterial infections, specifically pneumococcal disease. Given the history of pandemic outbreaks and the associated morbidity and mortality, we investigated the cost-effectiveness of a PCV7 vaccination program in China in the context of typical and pandemic influenza seasons. A decision-analytic model was employed to evaluate the impact of a 7-valent pneumococcal vaccine (PCV7) infant vaccination program on the incidence, mortality, and cost associated with pneumococcal disease during a typical influenza season (15% flu incidence) and an influenza pandemic (30% flu incidence) in China. The model incorporated Chinese data where available and included both direct and indirect (herd) effects on the unvaccinated population, assuming a point in time following the initial introduction of the vaccine where the impact of the indirect effects has reached a steady state, approximately seven years following the implementation of the vaccine program. Pneumococcal disease incidence, mortality, and costs were evaluated over a one-year time horizon. Healthcare costs were calculated using a payer perspective and included vaccination program costs and direct medical expenditures from pneumococcal disease. The model predicted that routine PCV7 vaccination of infants in China would prevent 5,053,453 cases of pneumococcal disease and 76,714 deaths in a single year during a normal influenza season. The estimated incremental cost-effectiveness ratios were ¥12,281 (US$1,900) per life-year saved and ¥13,737 (US$2,125) per quality-adjusted life-year gained. During an influenza pandemic, the model estimated that routine vaccination with PCV7 would prevent 8,469,506 cases of pneumococcal disease and 707,526 deaths, and would be cost-saving.
Routine vaccination with PCV7 in China would be a cost-effective strategy at limiting the negative impact of influenza during a typical influenza season. During an influenza pandemic, the benefit of PCV7 in preventing excess pneumococcal morbidity and mortality renders a PCV7 vaccination program cost-saving.
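The cost-effectiveness arithmetic behind a decision-analytic model of this kind reduces to the incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental effectiveness. The sketch below uses invented placeholder totals, not the study's Chinese inputs; only the formula is standard.

```python
# Hedged sketch of an incremental cost-effectiveness ratio (ICER)
# calculation. All numbers are hypothetical placeholders; the study's
# actual cost and outcome inputs are not reproduced here.

def icer(cost_new, cost_old, effect_new, effect_old):
    """ICER = (incremental cost) / (incremental effectiveness)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical totals for a vaccinated vs. unvaccinated birth cohort:
cost_vaccination = 5.0e9      # yuan, program + medical costs with PCV7
cost_no_program  = 3.5e9      # yuan, medical costs without PCV7
qaly_vaccination = 2.00e6     # quality-adjusted life-years lived
qaly_no_program  = 1.89e6

ratio = icer(cost_vaccination, cost_no_program,
             qaly_vaccination, qaly_no_program)
# A negative incremental cost with a positive incremental effect would
# mean the program is cost-saving, as the model found for the pandemic
# scenario.
```

The result is compared against a willingness-to-pay threshold (often a multiple of per-capita GDP) to judge cost-effectiveness.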
Equivalent plate modeling for conceptual design of aircraft wing structures
NASA Technical Reports Server (NTRS)
Giles, Gary L.
1995-01-01
This paper describes an analysis method that generates conceptual-level design data for aircraft wing structures. A key requirement is that this data must be produced in a timely manner so that it can be used effectively by multidisciplinary synthesis codes for performing systems studies. Such a capability is being developed by enhancing an equivalent plate structural analysis computer code to provide a more comprehensive, robust and user-friendly analysis tool. The paper focuses on recent enhancements to the Equivalent Laminated Plate Solution (ELAPS) analysis code that significantly expand the modeling capability and improve the accuracy of results. Modeling additions include use of out-of-plane plate segments for representing winglets and advanced wing concepts such as C-wings along with a new capability for modeling the internal rib and spar structure. The accuracy of calculated results is improved by including transverse shear effects in the formulation and by using multiple sets of assumed displacement functions in the analysis. Typical results are presented to demonstrate these new features. Example configurations include a C-wing transport aircraft, a representative fighter wing and a blended-wing-body transport. These applications are intended to demonstrate and quantify the benefits of using equivalent plate modeling of wing structures during conceptual design.
Locally adaptive, spatially explicit projection of US population for 2030 and 2050.
McKee, Jacob J; Rose, Amy N; Bright, Edward A; Huynh, Timmy; Bhaduri, Budhendra L
2015-02-03
Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Building on the spatial interpolation technique previously developed for high-resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically informed spatial distribution of projected population of the contiguous United States for 2030 and 2050, depicting one of many possible population futures. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modeled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the US Census's projection methodology, with the US Census's official projection as the benchmark. Applications of our model include incorporating various scenario-driven events to produce a range of spatially explicit population futures for suitability modeling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.
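The core allocation step of such a model, distributing a county-level projected change across grid cells in proportion to a locally derived weight surface, can be sketched as follows. The weights and growth total are toy values; the actual covariates (land cover, slope, distance to cities, population moving average) are only named, not modeled.

```python
# Hedged sketch: dasymetric allocation of a projected county population
# change onto grid cells, proportional to a suitability weight surface.
# Weights are toy numbers standing in for the model's locally adaptive
# combination of land cover, slope, distance-to-city, and population terms.

def allocate_growth(cell_weights, county_growth):
    """Distribute county_growth across cells proportional to weights."""
    total = sum(cell_weights)
    if total == 0:                      # no suitable cells: spread evenly
        return [county_growth / len(cell_weights)] * len(cell_weights)
    return [county_growth * w / total for w in cell_weights]

weights = [0.0, 0.2, 0.5, 0.3]          # e.g. water, rural, suburb, exurb
growth = 10_000                          # projected county change, persons

added = allocate_growth(weights, growth)
```

By construction the allocated cell changes sum back to the county control total, which is how the gridded surface stays consistent with the Census-benchmarked county projections.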
Reliable inference of light curve parameters in the presence of systematics
NASA Astrophysics Data System (ADS)
Gibson, Neale P.
2016-10-01
Time-series photometry and spectroscopy of transiting exoplanets allow us to study their atmospheres. Unfortunately, the required precision to extract atmospheric information surpasses the design specifications of most general purpose instrumentation. This results in instrumental systematics in the light curves that are typically larger than the target precision. Systematics must therefore be modelled, leaving the inference of light-curve parameters conditioned on the subjective choice of systematics models and model-selection criteria. Here, I briefly review the use of systematics models commonly used for transmission and emission spectroscopy, including model selection, marginalisation over models, and stochastic processes. These form a hierarchy of models with increasing degree of objectivity. I argue that marginalisation over many systematics models is a minimal requirement for robust inference. Stochastic models provide even more flexibility and objectivity, and therefore produce the most reliable results. However, no systematics models are perfect, and the best strategy is to compare multiple methods and repeat observations where possible.
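Marginalisation over systematics models can be illustrated with approximate evidence weights. The sketch below uses BIC-based weights, w_i proportional to exp(-ΔBIC_i/2), a common approximation, to average a transit depth over candidate models; the depths and BIC values are invented for illustration.

```python
import math

# Hedged sketch: marginalising a light-curve parameter (e.g. transit
# depth) over several candidate systematics models using BIC-based
# weights, w_i ∝ exp(-ΔBIC_i / 2). Values are illustrative only.

def marginalise(depths, bics):
    bmin = min(bics)
    raw = [math.exp(-(b - bmin) / 2.0) for b in bics]
    z = sum(raw)
    weights = [r / z for r in raw]
    mean = sum(w * d for w, d in zip(weights, depths))
    # spread across models contributes a systematic uncertainty term
    var_between = sum(w * (d - mean) ** 2 for w, d in zip(weights, depths))
    return mean, var_between ** 0.5, weights

depths = [1.02e-2, 1.05e-2, 0.98e-2]   # depth from each systematics model
bics   = [412.0, 414.5, 420.0]          # BIC of each fit (hypothetical)

mean_depth, sys_err, w = marginalise(depths, bics)
```

The between-model spread term is one way the subjectivity of a single model choice shows up as an honest extra error bar.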
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lave, Matthew; Hayes, William; Pohl, Andrew
2015-02-02
We report an evaluation of the accuracy of combinations of models that estimate plane-of-array (POA) irradiance from measured global horizontal irradiance (GHI). This estimation involves two steps: 1) decomposition of GHI into direct and diffuse horizontal components and 2) transposition of direct and diffuse horizontal irradiance (DHI) to POA irradiance. Measured GHI and coincident measured POA irradiance from a variety of climates within the United States were used to evaluate combinations of decomposition and transposition models. A few locations also had DHI measurements, allowing for decoupled analysis of either the decomposition or the transposition models alone. Results suggest that decomposition models had mean bias differences (modeled versus measured) that vary with climate. Transposition model mean bias differences depended more on the model than the location. Lastly, when only GHI measurements were available and combinations of decomposition and transposition models were considered, the smallest mean bias differences were typically found for combinations which included the Hay/Davies transposition model.
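The transposition step that performed best in this evaluation, Hay/Davies, can be sketched from its standard textbook formulation: sky diffuse is split into a circumsolar part (weighted by an anisotropy index DNI/E0 and projected like beam) and an isotropic part. The function below is a simplified sketch assuming the usual definitions and ignoring horizon brightening; it is not the authors' code.

```python
import math

# Hedged sketch of the Hay/Davies transposition model (standard textbook
# form; not the study's implementation). Angles in degrees; W/m^2.

E0 = 1361.0  # nominal extraterrestrial normal irradiance, W/m^2

def hay_davies_poa(dni, dhi, ghi, zenith, aoi, tilt, albedo=0.2):
    cz = math.cos(math.radians(zenith))
    ca = math.cos(math.radians(aoi))
    ct = math.cos(math.radians(tilt))
    rb = max(ca, 0.0) / max(cz, 1e-3)        # beam projection ratio
    ai = dni / E0                             # anisotropy index
    beam = dni * max(ca, 0.0)                 # direct on the array
    sky = dhi * (ai * rb + (1.0 - ai) * (1.0 + ct) / 2.0)
    ground = ghi * albedo * (1.0 - ct) / 2.0  # ground-reflected
    return beam + sky + ground

# Sanity check: a horizontal surface (tilt=0, aoi=zenith) recovers GHI.
dni, dhi, zen = 800.0, 120.0, 30.0
ghi = dni * math.cos(math.radians(zen)) + dhi
poa = hay_davies_poa(dni, dhi, ghi, zenith=zen, aoi=zen, tilt=0.0)
```

In the full evaluation this function would be fed the DNI/DHI estimated by a decomposition model, which is why decomposition bias propagates into POA bias.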
Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun
2017-12-01
The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, time-weighted-average (TWA), Peak, and Typical exposure techniques to quantify physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, or a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28) with 15.5% of disagreement between low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61-93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. 
However, TWA, Typical, and Peak job exposure techniques all have limitations. Part II of this article examines whether the observed differences between these models and techniques produce different exposure-response relationships for predicting prevalence of carpal tunnel syndrome.
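The agreement statistics quoted above (percent agreement and kappa) follow from a simple confusion-matrix calculation. The sketch below computes Cohen's kappa for a hypothetical 2x2 cross-classification of task exposures; the study's actual 3x3 low/medium/high tables are not reproduced.

```python
# Hedged sketch: Cohen's kappa for agreement between two exposure
# classifications. The confusion matrix holds toy counts, not study data.

def cohens_kappa(matrix):
    n = sum(sum(row) for row in matrix)
    po = sum(matrix[i][i] for i in range(len(matrix))) / n   # observed
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(matrix[i][j] for i in range(len(matrix)))
               for j in range(len(matrix))]
    pe = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)  # chance
    return (po - pe) / (1.0 - pe)

# Rows: SI low/high; columns: TLV for HAL low/high (hypothetical counts).
m = [[20, 5],
     [10, 15]]
kappa = cohens_kappa(m)
print(f"kappa = {kappa:.2f}")  # → kappa = 0.40
```

Kappa discounts the agreement expected by chance, which is why two methods can agree on roughly half of tasks yet still yield only "fair" kappa, as reported here.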
NASA Technical Reports Server (NTRS)
Carpenter, Paul; Curreri, Peter A. (Technical Monitor)
2002-01-01
This course will cover practical applications of the energy-dispersive spectrometer (EDS) to x-ray microanalysis. Topics covered will include detector technology, advances in pulse processing, resolution and performance monitoring, detector modeling, peak deconvolution and fitting, qualitative and quantitative analysis, compositional mapping, and standards. An emphasis will be placed on use of the EDS for quantitative analysis, with discussion of typical problems encountered in the analysis of a wide range of materials and sample geometries.
Establishing Good Practices for Exposure–Response Analysis of Clinical Endpoints in Drug Development
Overgaard, RV; Ingwersen, SH; Tornøe, CW
2015-01-01
This tutorial aims at promoting good practices for exposure–response (E-R) analyses of clinical endpoints in drug development. The focus is on practical aspects of E-R analyses to assist modeling scientists with a process of performing such analyses in a consistent manner across individuals and projects and tailored to typical clinical drug development decisions. This includes general considerations for planning, conducting, and visualizing E-R analyses, and how these are linked to key questions. PMID:26535157
The economics of satellite retrieval
NASA Technical Reports Server (NTRS)
Price, Kent M.; Greenberg, Joel S.
1988-01-01
The economics of space operations with and without the Space Station have been studied in terms of the financial performance of a typical communications-satellite business venture. A stochastic Monte-Carlo communications-satellite business model is employed which includes factors such as satellite configuration, random and wearout failures, reliability of launch and space operations, stand-down time resulting from failures, and insurance by operation. Financial performance impacts have been evaluated in terms of the magnitude of investment, net present value, and return on investment.
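The kind of stochastic business model described, with random failures, launch reliability, and discounted cash flows, can be sketched as a small Monte Carlo simulation. All probabilities, costs, and revenues below are invented placeholders; only the structure (sample outcomes, discount cash flows, average the NPV) reflects the approach.

```python
import random

# Hedged sketch of a Monte Carlo NPV model for a satellite venture.
# All parameters are hypothetical placeholders, not the study's inputs.

def simulate_npv(p_launch_ok=0.92, p_survive_year=0.97,
                 n_trials=20_000, seed=1):
    rng = random.Random(seed)
    launch_cost, sat_cost = 150.0, 100.0    # $M at year 0 (hypothetical)
    revenue, life, rate = 60.0, 10, 0.10    # $M/yr, years, discount rate
    npvs = []
    for _ in range(n_trials):
        npv = -(launch_cost + sat_cost)
        if rng.random() < p_launch_ok:      # launch succeeds?
            for year in range(1, life + 1):
                if rng.random() > p_survive_year:
                    break                   # random/wearout failure
                npv += revenue / (1.0 + rate) ** year
        npvs.append(npv)
    return sum(npvs) / len(npvs)

mean_npv = simulate_npv()
```

A retrieval-and-repair option would enter such a model as an alternative branch after a failure, trading a retrieval cost against restored revenue, which is how its economic value can be isolated.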
Use of Reference Frames for Interplanetary Navigation at JPL
NASA Technical Reports Server (NTRS)
Heflin, Michael; Jacobs, Chris; Sovers, Ojars; Moore, Angelyn; Owen, Sue
2010-01-01
Navigation of interplanetary spacecraft is typically based on range, Doppler, and differential interferometric measurements made by ground-based telescopes. Acquisition and interpretation of these observations requires accurate knowledge of the terrestrial reference frame and its orientation with respect to the celestial frame. Work is underway at JPL to reprocess historical VLBI and GPS data to improve realizations of the terrestrial and celestial frames. Improvements include minimal constraint alignment, improved tropospheric modeling, better orbit determination, and corrections for antenna phase center patterns.
Metric-driven harm: an exploration of unintended consequences of performance measurement.
Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck
2013-11-01
Performance measurement is an increasingly common element of the US health care system. Although performance metrics typically serve as proxies for high-quality outcomes, there has been little systematic investigation of their potential negative unintended consequences, including metric-driven harm. This case study details an incident of post-surgical metric-driven harm and offers Smith's 1995 work and a patient-centered, context-sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed.
1995-12-01
34 Environmental Science and Technology, 26:1404-1410 (July 1992). 4. Atlas, Ronald M. and Richard Bartha. Microbial Ecology, Fundamentals and Applica...the impact of physical factors on microbial activity. They cite research by Atlas and Bartha observing that low temperatures inhibit microbial activity...mixture. Atlas and Bartha (4:393-394) explain that a typical petroleum mixture includes aliphatics, alicyclics, aromatics and other organics. The
Prediction of overall and blade-element performance for axial-flow pump configurations
NASA Technical Reports Server (NTRS)
Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.
1973-01-01
A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.
NASA Astrophysics Data System (ADS)
Spitoni, E.; Vincenzo, F.; Matteucci, F.
2017-03-01
Context. Analytical models of chemical evolution, including inflow and outflow of gas, are important tools for studying how the metal content in galaxies evolves as a function of time. Aims: We present new analytical solutions for the evolution of the gas mass, total mass, and metallicity of a galactic system when a decaying exponential infall rate of gas and galactic winds are assumed. We apply our model to characterize a sample of local star-forming and passive galaxies from the Sloan Digital Sky Survey data, with the aim of reproducing their observed mass-metallicity relation. Methods: We derived how the two populations of star-forming and passive galaxies differ in their particular distribution of ages, formation timescales, infall masses, and mass loading factors. Results: We find that the local passive galaxies are, on average, older and assembled on shorter typical timescales than the local star-forming galaxies; on the other hand, the star-forming galaxies with higher masses generally show older ages and longer typical formation timescales compared with star-forming galaxies with lower masses. The local star-forming galaxies experience stronger galactic winds than the passive galaxy population. Exploring the effect of assuming different initial mass functions in our model, we show that to reproduce the observed mass-metallicity relation, stronger winds are required if the initial mass function is top-heavy. Finally, our analytical models predict the sample of local galaxies to lie on a tight surface in the 3D space defined by stellar metallicity, star formation rate, and stellar mass, in agreement with the well-known fundamental relation obtained by adopting gas-phase metallicity. Conclusions: By using a new analytical model of chemical evolution, we characterize an ensemble of SDSS galaxies in terms of their infall timescales, infall masses, and mass loading factors.
Local passive galaxies are, on average, older and assembled on shorter typical timescales than the local star-forming galaxies. Moreover, the local star-forming galaxies show stronger galactic winds than the passive galaxy population. Finally, we find that the fundamental relation between metallicity, mass, and star formation rate for these local galaxies is still valid when adopting the average galaxy stellar metallicity.
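The ingredients of such a model, exponentially decaying infall, star formation, and a wind proportional to the star formation rate, can be sketched as a simple numerical integration. This is a generic one-zone sketch with invented parameter values and a linear star-formation law, not the authors' analytical solutions.

```python
import math

# Hedged sketch of a one-zone chemical-evolution model with exponentially
# decaying primordial gas infall and a galactic wind. Euler integration of
#   dMg/dt = A e^{-t/tau} - (1 + eta - R) psi,   psi = nu * Mg
#   dMZ/dt = y_Z (1 - R) psi - Z (1 + eta - R) psi
# Parameters (tau, eta, nu, y_Z, R) are illustrative, not fitted values.

def evolve(tau=3.0, eta=1.0, nu=0.5, y_z=0.02, R=0.3,
           A=10.0, t_end=12.0, dt=1e-3):
    mg, mz = 1e-6, 0.0                   # gas mass, metal mass (arb. units)
    t = 0.0
    while t < t_end:
        psi = nu * mg                    # star formation rate (linear law)
        z = mz / mg if mg > 0 else 0.0   # current metallicity
        mg += (A * math.exp(-t / tau) - (1 + eta - R) * psi) * dt
        mz += (y_z * (1 - R) * psi - z * (1 + eta - R) * psi) * dt
        t += dt
    return mg, mz / mg                   # final gas mass and metallicity

gas, Z = evolve()
```

In this framework a larger mass loading factor eta drains enriched gas faster, lowering the final metallicity at fixed mass, which is the lever the authors use to fit the mass-metallicity relation.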
Recent solar extreme ultraviolet irradiance observations and modeling: A review
NASA Technical Reports Server (NTRS)
Tobiska, W. Kent
1993-01-01
For more than 90 years, solar extreme ultraviolet (EUV) irradiance modeling has progressed from empirical blackbody radiation formulations, through fudge factors, to typically measured irradiances and reference spectra as well as time-dependent empirical models representing continua and line emissions. A summary of recent EUV measurements by five rockets and three satellites during the 1980s is presented along with the major modeling efforts. The most significant reference spectra are reviewed and three independently derived empirical models are described. These include Hinteregger's 1981 SERF1, Nusinov's 1984 two-component, and Tobiska's 1990/1991/SERF2/EUV91 flux models. They each provide daily full-disk broad spectrum flux values from 2 to 105 nm at 1 AU. All the models depend to one degree or another on the long time series of the Atmosphere Explorer E (AE-E) EUV database. Each model uses ground- and/or space-based proxies to create emissions from solar atmospheric regions. Future challenges in EUV modeling are summarized including the basic requirements of models, the task of incorporating new observations and theory into the models, the task of comparing models with solar-terrestrial data sets, and long-term goals and modeling objectives. By the late 1990s, empirical models will potentially be improved through the use of proposed solar EUV irradiance measurements and images at selected wavelengths that will greatly enhance modeling and predictive capabilities.
NASA Astrophysics Data System (ADS)
Amezquita-Brooks, Luis; Liceaga-Castro, Eduardo; Gonzalez-Sanchez, Mario; Garcia-Salazar, Octavio; Martinez-Vazquez, Daniel
2017-11-01
Applications based on quad-rotor vehicles (QRV) are becoming increasingly widespread. Many of these applications require accurate mathematical representations for control design, simulation and estimation. However, there is no consensus on a standardized model for these purposes. In this article a review of the most common elements included in QRV models reported in the literature is presented. This survey shows that some elements are recurrent for typical non-aerobatic QRV applications; in particular, for control design and high-performance simulation. By synthesising the common features of the reviewed models, a standard generic model (SGM) is proposed. The SGM is cast as a typical state-space model without memory-less transformations, a structure which is useful for simulation and controller design. The survey also shows that many QRV applications use simplified representations, which may be considered simplifications of the SGM here proposed. In order to assess the effectiveness of the simplified models, a comprehensive comparison based on digital simulations is presented. With this comparison, it is possible to determine the accuracy of each model under particular operating ranges. Such information is useful for the selection of a model according to a particular application. In addition to the models found in the literature, in this article a novel simplified model is derived. The main characteristics of this model are that its inner dynamics are linear, it has low complexity and it has a high level of accuracy in all the studied operating ranges, a characteristic found only in more complex representations. To complement the article the main elements of the SGM are evaluated with the aid of experimental data and the computational complexity of all surveyed models is briefly analysed. Finally, the article presents a discussion on how the structural characteristics of the models are useful to suggest particular QRV control structures.
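A minimal instance of such a state-space representation is the linearised hover model, in which small attitude angles couple to horizontal acceleration and collective thrust couples to vertical acceleration. The sketch below is a generic textbook linearisation with made-up mass and inertia values, not the SGM of the article.

```python
# Hedged sketch: linearised quad-rotor hover dynamics in state-space form
# x' = A x + B u, with state [z, vz, theta, q] (altitude, climb rate,
# pitch, pitch rate) and inputs [delta_thrust, pitch_torque].
# Mass and inertia are invented values; small-angle hover assumptions.

m, Iy = 1.2, 0.015               # kg, kg m^2 (hypothetical craft)

A = [[0, 1, 0, 0],               # z'     = vz
     [0, 0, 0, 0],               # vz'    = dT/m    (enters via B)
     [0, 0, 0, 1],               # theta' = q
     [0, 0, 0, 0]]               # q'     = tau/Iy  (enters via B)

B = [[0,       0],
     [1.0 / m, 0],
     [0,       0],
     [0, 1.0 / Iy]]

def step(x, u, dt=0.01):
    """One forward-Euler step of x' = A x + B u."""
    dx = [sum(A[i][j] * x[j] for j in range(4))
          + sum(B[i][k] * u[k] for k in range(2)) for i in range(4)]
    return [xi + dt * dxi for xi, dxi in zip(x, dx)]

x = [0.0, 0.0, 0.0, 0.0]
for _ in range(100):             # 1 s of constant extra thrust
    x = step(x, [0.6, 0.0])      # 0.6 N above hover, no pitch torque
```

This "no memory-less transformations" structure is what makes such a model directly usable by standard linear control design tools.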
Dynamical sequestration of the Moon-forming impactor in co-orbital resonance with Earth
NASA Astrophysics Data System (ADS)
Kortenkamp, Stephen J.; Hartmann, William K.
2016-09-01
Recent concerns about the giant impact hypothesis for the origin of the Moon, and an associated "isotope crisis" may be assuaged if the impactor was a local object that formed near Earth. We investigated a scenario that may meet this criterion, with protoplanets assumed to originate in 1:1 co-orbital resonance with Earth. Using N-body numerical simulations we explored the dynamical consequences of placing Mars-mass companions in various co-orbital configurations with a proto-Earth of 0.9 Earth-masses (M⊕). We modeled 162 different configurations, some with just the four terrestrial planets and others that included the four giant planets. In both the 4- and 8-planet models we found that a single Mars-mass companion typically remained a stable co-orbital of Earth for the entire 250 million year (Myr) duration of our simulations (59 of 68 unique simulations). In an effort to destabilize such a system we carried out an additional 94 simulations that included a second Mars-mass co-orbital companion. Even with two Mars-mass companions sharing Earth's orbit about two-thirds of these models (66) also remained stable for the entire 250 Myr duration of the simulations. Of the 28 two-companion models that eventually became unstable, 24 impacts were observed between Earth and an escaping co-orbital companion. The average delay we observed for an impact of a Mars-mass companion with Earth was 102 Myr, and the longest delay was 221 Myr. In 40% of the 8-planet models that became unstable (10 out of 25) Earth collided with the nearly equal mass Venus to form a super-Earth (loosely defined here as mass ≥1.7 M⊕). These impacts were typically the final giant impact in the system and often occurred after Earth and/or Venus had accreted one or more of the other large objects. Several of the stable configurations involved unusual 3-planet hierarchical co-orbital systems.
Numerical modeling of a vortex stabilized arcjet
NASA Astrophysics Data System (ADS)
Pawlas, Gary Edward
Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Satellite station-keeping is an example of a maneuvering application requiring the low thrust, high specific impulse of an arcjet. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets, including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging-diverging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown a swirl or circumferential velocity component stabilizes a constricted arc. The equations that govern the flow through a constricted arcjet thruster are described. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single-fluid plasma temperature model. An order of magnitude analysis reveals the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is used in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split and Gauss-Seidel line relaxation is used to accelerate convergence.
Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Comparisons with experimental data and previous numerical results were in excellent agreement. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures.
ADHD and Depression Symptoms in Parent Couples Predict Response to Child ADHD and ODD Behavior.
Wymbs, Brian T; Dawson, Anne E; Egan, Theresa E; Sacchetti, Gina M; Tams, Sean T; Wymbs, Frances A
2017-04-01
Parents of children with attention-deficit hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD) often have elevated ADHD and depressive symptoms, both of which increase the risk of ineffective parenting and interparental discord. However, little is known about whether child ADHD/ODD behavior and parent ADHD or depressive symptoms uniquely or synergistically predict the quality of parenting and interparental communication during triadic (mother-father-child) interactions. Ninety parent couples, including 51 who have children diagnosed with ADHD, were randomly assigned to interact with a 9-12 year-old confederate child (84 % male) exhibiting either ADHD/ODD-like behavior or typical behavior. Parents reported their own ADHD and depressive symptoms, and parents and observers rated the quality of parenting and interparental communication during the interaction. Actor-partner interdependence modeling indicated that child ADHD/ODD behavior predicted less positive and more negative parenting and communication, independent of adult ADHD and depressive symptoms. Parent couples including two parents with elevated ADHD symptoms communicated more positively while managing children exhibiting ADHD/ODD behavior than couples managing children behaving typically or couples with only one parent with elevated ADHD symptoms. Couples including one parent with, and one parent without, elevated ADHD or depressive symptoms parented less positively and more negatively, and communicated more negatively, when managing children exhibiting ADHD/ODD behavior than when managing children behaving typically. Taken together, depending on the similarity of ADHD and depressive symptom levels in parent couples, adults managing children exhibiting ADHD/ODD behavior may parent or communicate positively or negatively. Findings highlight the need to consider the psychopathology of both parents when treating children with ADHD in two-parent homes.
NASA Technical Reports Server (NTRS)
Madaras, Eric I.; Brush, Edwin F., III; Bridal, S. L.; Holland, Mark R.; Miller, James G.
1992-01-01
This paper focuses on the nature of a typical composite surface and its effects on scattering. Utilizing epoxy typical of that in composites and standard composite fabrication methods, a sample with release cloth impressions on its surface is produced. A simple model for the scattering from the surface impressions of this sample is constructed and then polar backscatter measurements are made on the sample and compared with the model predictions.
Clouds and ocean-atmosphere interactions. Final report, September 15, 1992--September 14, 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Randall, D.A.; Jensen, T.G.
1995-10-01
Predictions of global change based on climate models are influencing both national and international policies on energy and the environment. Existing climate models show some skill in simulating the present climate, but suffer from many widely acknowledged deficiencies. Among the most serious problems is the need to apply "flux corrections" to prevent the models from drifting away from the observed climate in control runs that do not include external perturbing influences such as increased carbon dioxide (CO2) concentrations. The flux corrections required to prevent climate drift are typically comparable in magnitude to the observed fluxes themselves. Although there can be many contributing reasons for the climate drift problem, clouds and their effects on the surface energy budget are among the prime suspects. The authors have conducted a research program designed to investigate global air-sea interaction as it relates to the global warming problem, with special emphasis on the role of clouds. Their research includes model development efforts; application of models to simulation of present and future climates, with comparison to observations wherever possible; and vigorous participation in ongoing efforts to intercompare the present generation of atmospheric general circulation models.
Brignoli, Riccardo; Brown, J Steven; Skye, H; Domanski, Piotr A
2017-08-01
Preliminary refrigerant screenings typically rely on using cycle simulation models involving thermodynamic properties alone. This approach has two shortcomings. First, it neglects transport properties, whose influence on system performance is particularly strong through their impact on the performance of the heat exchangers. Second, the refrigerant temperatures in the evaporator and condenser are specified as input, while real-life equipment operates at imposed heat sink and heat source temperatures; the temperatures in the evaporator and condenser are established based on overall heat transfer resistances of these heat exchangers and the balance of the system. The paper discusses a simulation methodology and model that addresses the above shortcomings. This model simulates the thermodynamic cycle operating at specified heat sink and heat source temperature profiles, and includes the ability to account for the effects of thermophysical properties and refrigerant mass flux on refrigerant heat transfer and pressure drop in the air-to-refrigerant evaporator and condenser. Additionally, the model can optimize the refrigerant mass flux in the heat exchangers to maximize the Coefficient of Performance. The new model is validated with experimental data and its predictions are contrasted to those of a model based on thermodynamic properties alone.
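The point about imposed source/sink temperatures can be illustrated with a toy balance: the evaporating and condensing temperatures float below the source and above the sink by roughly Q/UA, and the cycle COP is then taken as a fixed fraction of the Carnot COP between those internal temperatures. Everything here (the UA values, the 0.6 efficiency factor, the 1.3 heat-rejection factor) is an illustrative assumption, not the paper's property-based model.

```python
# Hedged sketch: how finite heat-exchanger conductance (UA) pulls cycle
# COP below what the external source/sink temperatures alone suggest.
# A fixed fraction of Carnot COP stands in for the full cycle model.

def cooling_cop(t_source, t_sink, q_evap, ua_evap, ua_cond, eff=0.6):
    """Temperatures in kelvin, q in kW, UA in kW/K (all toy values)."""
    t_evap = t_source - q_evap / ua_evap       # refrigerant below source
    q_cond_guess = q_evap * 1.3                # rough 1 + 1/COP factor
    t_cond = t_sink + q_cond_guess / ua_cond   # refrigerant above sink
    carnot = t_evap / (t_cond - t_evap)
    return eff * carnot

# Same external temperatures, better heat exchangers -> higher COP:
cop_small_hx = cooling_cop(280.0, 308.0, q_evap=10.0,
                           ua_evap=1.0, ua_cond=1.5)
cop_big_hx   = cooling_cop(280.0, 308.0, q_evap=10.0,
                           ua_evap=3.0, ua_cond=4.5)
```

This is why a refrigerant's transport properties matter in screening: they change the effective UA, and therefore the temperature lift the cycle actually operates across.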
NASA Astrophysics Data System (ADS)
Luscz, E.; Kendall, A. D.; Martin, S. L.; Hyndman, D. W.
2011-12-01
Watershed nutrient loading models are important tools used to address issues including eutrophication, harmful algal blooms, and decreases in aquatic species diversity. Such approaches have been developed to assess the level and source of nutrient loading across a wide range of scales, yet there is typically a tradeoff between the scale of the model and the level of detail regarding the individual sources of nutrients. To avoid this tradeoff, we developed a detailed source nutrient loading model for every watershed in Michigan's lower peninsula. Sources considered include atmospheric deposition, septic tanks, waste water treatment plants, combined sewer overflows, animal waste from confined animal feeding operations and pastured animals, as well as fertilizer from agricultural, residential, and commercial sources and industrial effluents. Each source is related to readily-available GIS inputs that may vary through time. This loading model was used to assess the importance of sources and landscape factors in nutrient loading rates to watersheds, and how these have changed in recent decades. The results showed the value of detailed source inputs, revealing regional trends while still providing insight to the existence of variability at smaller scales.
14 CFR Appendix C to Part 1215 - Typical User Activity Timeline
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Typical User Activity Timeline C Appendix C... RELAY SATELLITE SYSTEM (TDRSS) Pt. 1215, App. C Appendix C to Part 1215—Typical User Activity Timeline... mission model. 3 years before launch (Ref. § 1215.109(c). Submit general user requirements to permit...
Robotic Billiards: Understanding Humans in Order to Counter Them.
Nierhoff, Thomas; Leibrandt, Konrad; Lorenz, Tamara; Hirche, Sandra
2016-08-01
Ongoing technological advances in the areas of computation, sensing, and mechatronics enable robotic-based systems to interact with humans in the real world. To succeed against a human in a competitive scenario, a robot must anticipate the human behavior and include it in its own planning framework. Then it can predict the next human move and counter it accordingly, thus not only achieving overall better performance but also systematically exploiting the opponent's weak spots. Pool is used as a representative scenario to derive a model-based planning and control framework where not only the physics of the environment but also a model of the opponent is considered. By representing the game of pool as a Markov decision process and incorporating a model of the human decision-making based on studies, an optimized policy is derived. This enables the robot to include the opponent's typical game style into its tactical considerations when planning a stroke. The results are validated in simulations and real-life experiments with an anthropomorphic robot playing pool against a human.
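The decision layer described, a Markov decision process whose policy is optimised against a model of the opponent, can be sketched with textbook value iteration on a toy two-state game. The states, actions, and the opponent-derived transition probabilities below are invented; only the algorithm is standard.

```python
# Hedged sketch: value iteration for a toy pool-like MDP. State 0 = robot
# to shoot, state 1 = opponent to shoot. A "risky" stroke trades success
# odds against leaving the (modelled) opponent a good position.
# All probabilities and rewards are invented for illustration.

GAMMA = 0.9
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    0: {"risky": [(0.5, 0, 2.0), (0.5, 1, -1.0)],
        "safe":  [(0.9, 0, 1.0), (0.1, 1, -1.0)]},
    1: {"wait":  [(0.3, 0, 0.0), (0.7, 1, -0.5)]},   # opponent model
}

def value_iteration(trans, n_iter=500):
    v = {s: 0.0 for s in trans}
    for _ in range(n_iter):
        v = {s: max(sum(p * (r + GAMMA * v[s2]) for p, s2, r in outs)
                    for outs in acts.values())
             for s, acts in trans.items()}
    policy = {s: max(acts, key=lambda a: sum(p * (r + GAMMA * v[s2])
                                             for p, s2, r in acts[a]))
              for s, acts in trans.items()}
    return v, policy

values, policy = value_iteration(transitions)
```

Changing the opponent-model probabilities in state 1 shifts which stroke is optimal in state 0, which is exactly how a learned model of the human enters the robot's tactical planning.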
NASA Astrophysics Data System (ADS)
Early, A. B.; Chen, G.; Beach, A. L., III; Northup, E. A.
2016-12-01
NASA has conducted airborne tropospheric chemistry studies for over three decades. These field campaigns have generated a great wealth of observations, including a wide range of trace gas and aerosol properties. The Atmospheric Science Data Center (ASDC) at NASA Langley Research Center in Hampton, Virginia originally developed the Toolsets for Airborne Data (TAD) web application in September 2013 to meet the user community's needs for manipulating aircraft data for scientific research on climate change and air quality relevant issues. The analysis of airborne data typically requires data subsetting, which can be challenging and resource intensive for end users. In an effort to streamline this process, the TAD toolset enhancements will include new data subsetting features and updates to the current database model. These will include two subsetters: temporal and spatial, and vertical profile. The temporal and spatial subsetter will allow users to focus on data from a specific location, a specific time period, or both. The vertical profile subsetter will retrieve data collected during an individual aircraft ascent or descent spiral. This effort will allow for the automation of the typically labor-intensive manual data subsetting process, which will provide users with data tailored to their specific research interests. The development of these enhancements will be discussed in this presentation.
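The two subsetters might be sketched as follows; the record fields (`time`, `lat`, `lon`, `alt`) and the climb-rate threshold are assumptions for illustration, not the TAD schema.

```python
# Illustrative sketches of a temporal/spatial filter and a vertical-profile
# (ascent/descent spiral) extractor for aircraft data records.

def subset_temporal_spatial(records, t0, t1, lat_box, lon_box):
    """Keep records inside a time window and a lat/lon bounding box."""
    return [r for r in records
            if t0 <= r["time"] <= t1
            and lat_box[0] <= r["lat"] <= lat_box[1]
            and lon_box[0] <= r["lon"] <= lon_box[1]]

def subset_profile(records, min_rate=2.0):
    """Keep records where climb/descent rate exceeds min_rate (m/s),
    i.e. a profile spiral rather than level flight."""
    out = []
    for prev, cur in zip(records, records[1:]):
        dt = cur["time"] - prev["time"]
        if dt > 0 and abs(cur["alt"] - prev["alt"]) / dt >= min_rate:
            out.append(cur)
    return out

records = [
    {"time": 0, "lat": 36.0, "lon": -76.0, "alt": 500.0},
    {"time": 60, "lat": 36.1, "lon": -76.1, "alt": 800.0},
    {"time": 120, "lat": 36.2, "lon": -76.2, "alt": 1400.0},
    {"time": 180, "lat": 40.0, "lon": -80.0, "alt": 1400.0},
]
in_box = subset_temporal_spatial(records, 0, 120, (35.0, 37.0), (-77.0, -75.0))
climbing = subset_profile(records)
```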
An improved active contour model for glacial lake extraction
NASA Astrophysics Data System (ADS)
Zhao, H.; Chen, F.; Zhang, M.
2017-12-01
Active contour models are widely used in visual tracking and image segmentation. Driven by an objective function, the initial curve defined in an active contour model evolves to a stable condition, the desired result in a given image. As a typical region-based active contour model, the C-V model detects weak boundaries well and is robust to noise, which shows great potential for glacial lake extraction. Glacial lakes are sensitive indicators of global climate change, so accurately delineating glacial lake boundaries is essential to evaluating the hydrologic and living environment. However, the current methods for glacial lake extraction, mainly water-index methods and recognition-classification methods, are difficult to apply directly to large-scale glacial lake extraction because of the diversity of glacial lakes and the many confounding factors in the imagery, such as image noise, shadows, snow, and ice. Given the abovementioned advantages of the C-V model and the difficulties of glacial lake extraction, we introduce the signed pressure force function to improve the C-V model and adapt it to glacial lake extraction. To inspect the extraction results, three typical glacial lake development sites were selected, in the Altai Mountains, the Central Himalayas, and south-eastern Tibet; Landsat-8 OLI imagery served as the experimental data source, with Google Earth imagery as reference data for verifying the results. The experiments suggest that the improved active contour model we propose can effectively discriminate glacial lakes from complex backgrounds with a high Kappa coefficient (0.895), especially for small glacial lakes that constitute weak information in the image. Our findings provide a new approach to improved accuracy when small glacial lakes make up a large proportion of the scene, and the possibility of automated glacial lake mapping over large areas.
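A signed pressure force term of the kind used to modify the C-V model can be sketched on a toy 1-D "image": the inside/outside mean intensities c1 and c2 follow the usual C-V definitions, and the force changes sign at their midpoint, pushing the contour inward or outward. The data and normalization here are illustrative, not the paper's implementation.

```python
# Sketch of a signed pressure force (SPF): positive where intensity is above
# the midpoint of the inside/outside means, negative below it, scaled to [-1, 1].

def spf(image, inside):
    c1 = sum(image[i] for i in inside) / len(inside)             # mean inside
    outside = [i for i in range(len(image)) if i not in inside]
    c2 = sum(image[i] for i in outside) / len(outside)           # mean outside
    mid = (c1 + c2) / 2.0
    scale = max(abs(v - mid) for v in image)
    return [(v - mid) / scale for v in image]

image = [0.9, 0.8, 0.85, 0.1, 0.2, 0.15]   # bright "lake" pixels, then background
forces = spf(image, inside={0, 1, 2})
```

Bright lake pixels receive a positive force and dark background pixels a negative one, which is what drives the evolving curve toward the lake boundary.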
Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes
NASA Technical Reports Server (NTRS)
Nicholls, Stephen D.; Mohr, Karen I.
2015-01-01
The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. In addition, South American meteorology and climate are made further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale models or atmosphere-ocean coupled global climate models. The former offers full physics and high spatial resolution, but it is computationally expensive and typically lacks an interactive ocean, whereas the latter offers high computational efficiency and ocean-atmosphere coupling, but it lacks adequate spatial and temporal resolution to resolve the complex orography and explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes, where rainfall rates are light (0.5-5 mm hr-1), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully-coupled mesoscale atmosphere-ocean modeling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb), as validated against both satellite and model analysis data, when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).
Comparison of the UAF Ionosphere Model with Incoherent-Scatter Radar Data
NASA Astrophysics Data System (ADS)
McAllister, J.; Maurits, S.; Kulchitsky, A.; Watkins, B.
2004-12-01
The UAF Eulerian Parallel Polar Ionosphere Model (UAF EPPIM) is a first-principles three-dimensional time-dependent representation of the northern polar ionosphere (>50 degrees north latitude). The model routinely generates short-term (~2 hours) ionospheric forecasts in real-time. It may also be run in post-processing/batch mode for specific time periods, including long-term (multi-year) simulations. The model code has been extensively validated (~100k comparisons/model year) against ionosonde foF2 data during quiet and moderate solar activity in 2002-2004 with reasonable fidelity (typical relative RMS 10-20% for summer daytime, 30-50% for winter nighttime). However, ionosonde data are frequently not available during geomagnetic disturbances. The objective of the work reported here is to compare model outputs with available incoherent-scatter radar data during the storm period of October-November 2003. Model accuracy is examined for this period and compared to model performance during geomagnetically quiet and moderate circumstances. Possible improvements are suggested which are likely to boost model fidelity during storm conditions.
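The relative RMS statistic quoted above can be computed as the RMS model-observation error normalized by the mean observation. The sample foF2 values below are illustrative, not the study's data.

```python
import math

# Relative RMS error (%) of modelled values against observations,
# as used to summarize model-vs-ionosonde foF2 agreement.

def relative_rms(model, obs):
    n = len(obs)
    rms = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    return 100.0 * rms / (sum(obs) / n)

err = relative_rms([11.0, 9.0, 10.0, 10.0], [10.0, 10.0, 10.0, 10.0])
```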
Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.
Pearl, Lisa S; Sprouse, Jon
2015-06-01
Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.
Wang, D-D; Lu, J-M; Li, Q; Li, Z-P
2018-05-15
Different population pharmacokinetics (PPK) models of tacrolimus have been established in various populations. However, a tacrolimus PPK model in paediatric systemic lupus erythematosus (PSLE) is still undefined. This study aimed to establish a tacrolimus PPK model in Chinese patients with PSLE. A total of nineteen Chinese patients with PSLE from a real-world study were characterized with nonlinear mixed-effects modelling (NONMEM). The impact of demographic features, biological characteristics, and concomitant medications was evaluated. Model validation was assessed by bootstrap and prediction-corrected visual predictive check (VPC). A one-compartment model with first-order absorption and elimination was determined to be the most suitable model in PSLE. The typical values of apparent oral clearance (CL/F) and apparent volume of distribution (V/F) in the final model were 2.05 L/h and 309 L, respectively. Methylprednisolone and simvastatin were included as significant covariates. The first validated tacrolimus PPK model in patients with PSLE is presented. © 2018 John Wiley & Sons Ltd.
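The structure of the final model can be written down directly: a one-compartment model with first-order absorption and elimination has the standard closed-form concentration profile. CL/F (2.05 L/h) and V/F (309 L) are taken from the abstract; the dose and absorption rate constant ka below are purely illustrative assumptions.

```python
import math

# One-compartment oral model with first-order absorption and elimination:
# C(t) = D*ka / (V/F * (ka - ke)) * (exp(-ke*t) - exp(-ka*t)), ke = (CL/F)/(V/F).
# CL/F and V/F from the abstract; dose and ka are illustrative only.

def concentration(t, dose, ka, cl_f, v_f):
    ke = cl_f / v_f                      # elimination rate constant (1/h)
    return (dose * ka) / (v_f * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

c_12h = concentration(12.0, dose=5.0, ka=4.48, cl_f=2.05, v_f=309.0)
```

With the reported large V/F, elimination is slow (ke is small), so simulated concentrations decay gently after the absorption phase.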
A Method for Generating Reduced Order Linear Models of Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1997-01-01
For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other is based on design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real-time. Models based on control design typically run near real-time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models that are small enough for control analysis and design.
Using video modeling to teach reciprocal pretend play to children with autism.
MacDonald, Rebecca; Sacramone, Shelly; Mansfield, Renee; Wiltz, Kristine; Ahearn, William H
2009-01-01
The purpose of the present study was to use video modeling to teach children with autism to engage in reciprocal pretend play with typically developing peers. Scripted play scenarios involving various verbalizations and play actions with adults as models were videotaped. Two children with autism were each paired with a typically developing child, and a multiple-probe design across three play sets was used to evaluate the effects of the video modeling procedure. Results indicated that both children with autism and the typically developing peers acquired the sequences of scripted verbalizations and play actions quickly and maintained this performance during follow-up probes. In addition, probes indicated an increase in the mean number of unscripted verbalizations as well as reciprocal verbal interactions and cooperative play. These findings are discussed as they relate to the development of reciprocal pretend-play repertoires in young children with autism.
NASA Astrophysics Data System (ADS)
Du, Changwen; Zhou, Jianmin; Liu, Jianfeng
2017-02-01
With increased demand for Cordyceps sinensis, rapid methods are needed to meet the identification challenges raised in quality control. In this study, Cordyceps sinensis from four typical natural habitats in China was characterized by depth-profiling Fourier transform infrared photoacoustic spectroscopy. Results demonstrated that Cordyceps sinensis samples showed a typical photoacoustic spectral appearance, but heterogeneity was detected within each sample; because of this heterogeneity, Cordyceps sinensis was represented by spectra of four groups (head, body, tail, and leaf) acquired at a moving mirror velocity of 0.30 cm s-1. The spectra of the four groups were used as input to a probabilistic neural network (PNN) to identify the source of Cordyceps sinensis, and all samples were correctly identified by the PNN model. Therefore, depth-profiling Fourier transform infrared photoacoustic spectroscopy provides a novel and unique technique to identify Cordyceps sinensis, showing great potential for quality control of Cordyceps sinensis.
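A PNN classifier of the kind used here can be sketched as a Parzen-window density estimate per class: each habitat's density is a sum of Gaussian kernels centred on its training spectra, and the winning class has the highest density at the query point. The two-feature "spectra", habitat labels, and kernel width are toy assumptions, not the study's data.

```python
import math

# Toy probabilistic neural network (PNN): Parzen-window class densities
# with Gaussian kernels; features, labels, and sigma are placeholders.

def pnn_classify(x, train, sigma=0.5):
    """train: {label: [feature tuples]}; return label with highest density."""
    def density(samples):
        return sum(
            math.exp(-sum((xi - si) ** 2 for xi, si in zip(x, s))
                     / (2 * sigma ** 2))
            for s in samples
        ) / len(samples)
    return max(train, key=lambda label: density(train[label]))

train = {
    "habitat_A": [(0.1, 0.2), (0.15, 0.25)],
    "habitat_B": [(0.8, 0.9), (0.85, 0.8)],
}
label = pnn_classify((0.12, 0.22), train)
```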
Neural correlates of the implicit association test: evidence for semantic and emotional processing.
Williams, John K; Themanson, Jason R
2011-09-01
The implicit association test (IAT) has been widely used in social cognitive research over the past decade. Controversies have arisen over what cognitive processes are being tapped into using this task. While most models use behavioral (RT) results to support their claims, little research has examined neurocognitive correlates of these behavioral measures. The present study measured event-related brain potentials (ERPs) of participants while completing a gay-straight IAT in order to further understand the processes involved in a typical group bias IAT. Results indicated significantly smaller N400 amplitudes and significantly larger LPP amplitudes for compatible trials than for incompatible trials, suggesting that both the semantic and emotional congruence of stimuli paired together in an IAT trial contribute to the typical RT differences found, while no differences were present for earlier ERP components including the N2. These findings are discussed with respect to early and late processing in group bias IATs.
On the mechanics of growing thin biological membranes
NASA Astrophysics Data System (ADS)
Rausch, Manuel K.; Kuhl, Ellen
2014-02-01
Despite their seemingly delicate appearance, thin biological membranes fulfill various crucial roles in the human body and can sustain substantial mechanical loads. Unlike engineering structures, biological membranes are able to grow and adapt to changes in their mechanical environment. Finite element modeling of biological growth holds the potential to better understand the interplay of membrane form and function and to reliably predict the effects of disease or medical intervention. However, standard continuum elements typically fail to represent thin biological membranes efficiently, accurately, and robustly. Moreover, continuum models are typically cumbersome to generate from surface-based medical imaging data. Here we propose a computational model for finite membrane growth using a classical midsurface representation compatible with standard shell elements. By assuming elastic incompressibility and membrane-only growth, the model a priori satisfies the zero-normal stress condition. To demonstrate its modular nature, we implement the membrane growth model into the general-purpose non-linear finite element package Abaqus/Standard using the concept of user subroutines. To probe efficiency and robustness, we simulate selected benchmark examples of growing biological membranes under different loading conditions. To demonstrate the clinical potential, we simulate the functional adaptation of a heart valve leaflet in ischemic cardiomyopathy. We believe that our novel approach will be widely applicable to simulate the adaptive chronic growth of thin biological structures including skin membranes, mucous membranes, fetal membranes, tympanic membranes, corneoscleral membranes, and heart valve membranes. Ultimately, our model can be used to identify diseased states, predict disease evolution, and guide the design of interventional or pharmaceutic therapies to arrest or revert disease progression.
Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Youngsoo; Carlberg, Kevin Thomas
Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
ERIC Educational Resources Information Center
Recker, Margaret M.; Pirolli, Peter
Students learning to program recursive LISP functions in a typical school-like lesson on recursion were observed. The typical lesson contains text and examples and involves solving a series of programming problems. The focus of this study is on students' learning strategies in new domains. In this light, a Soar computational model of…
ERIC Educational Resources Information Center
Weber, Joe
2004-01-01
The development of new transport systems has been an important and highly visible component of economic development and spatial reorganization in the past two centuries. The Ideal-Typical Sequence of network development has been a widely used model of transport development. This paper shows that this model has been used in several different ways,…
Idealness and similarity in goal-derived categories: a computational examination.
Voorspoels, Wouter; Storms, Gert; Vanpaemel, Wolf
2013-02-01
The finding that the typicality gradient in goal-derived categories is mainly driven by ideals rather than by exemplar similarity has stood uncontested for nearly three decades. Due to the rather rigid earlier implementations of similarity, a key question has remained--that is, whether a more flexible approach to similarity would alter the conclusions. In the present study, we evaluated whether a similarity-based approach that allows for dimensional weighting could account for findings in goal-derived categories. To this end, we compared a computational model of exemplar similarity (the generalized context model; Nosofsky, Journal of Experimental Psychology. General 115:39-57, 1986) and a computational model of ideal representation (the ideal-dimension model; Voorspoels, Vanpaemel, & Storms, Psychonomic Bulletin & Review 18:1006-114, 2011) in their accounts of exemplar typicality in ten goal-derived categories. In terms of both goodness-of-fit and generalizability, we found strong evidence for an ideal approach in nearly all categories. We conclude that focusing on a limited set of features is necessary but not sufficient to account for the observed typicality gradient. A second aspect of ideal representations--that is, that extreme rather than common, central-tendency values drive typicality--seems to be crucial.
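The exemplar-similarity account being tested, the generalized context model, computes an item's typicality as its summed similarity to stored exemplars under attention weights w and sensitivity c. The exemplars and parameters below are toy values, not the study's fitted ones.

```python
import math

# Summed-similarity typicality in the spirit of the Generalized Context
# Model (Nosofsky, 1986): attention-weighted city-block distance passed
# through an exponential decay. Exemplars and parameters are toy values.

def gcm_typicality(item, exemplars, w=(0.5, 0.5), c=1.0):
    def dist(a, b):
        return sum(wk * abs(ak - bk) for wk, ak, bk in zip(w, a, b))
    return sum(math.exp(-c * dist(item, e)) for e in exemplars)

exemplars = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
central = gcm_typicality((0.3, 0.3), exemplars)
outlier = gcm_typicality((3.0, 3.0), exemplars)
```

Dimensional weighting enters through w: shifting attention to one feature stretches distances along it, which is exactly the flexibility whose adequacy the study evaluates against ideal-based representations.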
A comparison of FE beam and continuum elements for typical nitinol stent geometries
NASA Astrophysics Data System (ADS)
Ballew, Wesley; Seelecke, Stefan
2009-03-01
With interest in improved efficiency and a more complete description of the SMA material, this paper compares finite element (FE) simulations of typical stent geometries using two different constitutive models and two different element types. Typically, continuum elements are used for the simulation of stents; for example, the commercial FE software ANSYS offers a continuum element based on Auricchio's SMA model. Almost every stent geometry, however, is made up of long and slender components and can be modeled more efficiently, in the computational sense, with beam elements. Using the ANSYS user programmable material feature, we implement the free-energy-based SMA model developed by Mueller and Seelecke into the ANSYS beam element 188. Convergence behavior for both beam and continuum formulations is studied in terms of element and layer number, respectively. This is systematically illustrated first for the case of a straight cantilever beam under end loading, and subsequently for a section of a z-bend wire, a typical stent sub-geometry. It is shown that the computation times for the beam element are reduced to only one third of those of the continuum element, while both formulations display a comparable force/displacement response.
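The straight cantilever benchmark has a closed-form check that FE discretizations are typically compared against: tip deflection delta = P*L^3 / (3*E*I) for an end load P on a linear-elastic beam. The numbers below are arbitrary illustrative values, not nitinol properties.

```python
# Analytical tip deflection of an end-loaded cantilever, the classic
# convergence reference for beam and continuum FE meshes.
# P: end load, L: length, E: Young's modulus, I: second moment of area.

def cantilever_tip_deflection(P, L, E, I):
    return P * L ** 3 / (3 * E * I)

delta = cantilever_tip_deflection(P=1.0, L=2.0, E=3.0, I=4.0)
```

An FE beam model is considered converged when its computed tip deflection approaches this value as elements are added; continuum meshes are judged the same way as layers are added through the thickness.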
ATTDES: An Expert System for Satellite Attitude Determination and Control. 2
NASA Technical Reports Server (NTRS)
Mackison, Donald L.; Gifford, Kevin
1996-01-01
The design, analysis, and flight operations of satellite attitude determination and attitude control systems require extensive mathematical formulations, optimization studies, and computer simulation. This is best done by an analyst with extensive education and experience. The development of programs such as ATTDES permits the use of advanced techniques by those with less experience. Typical tasks include mission analysis to select stabilization and damping schemes, attitude determination sensors and algorithms, and control system designs to meet program requirements. ATTDES is a system that includes all of these activities, including high fidelity orbit environment models that can be used for preliminary analysis, parameter selection, stabilization schemes, the development of estimators, covariance analyses, and optimization, and can support ongoing orbit activities. The modification of existing simulations to model new configurations for these purposes can be an expensive, time-consuming activity that becomes a pacing item in the development and operation of such new systems. The use of an integrated tool such as ATTDES significantly reduces the effort and time required for these tasks.
Hudson, Kerry D; Farran, Emily K
2013-09-01
Drawings by individuals with Williams syndrome (WS) typically lack cohesion. The popular hypothesis is that this is a result of excessive focus on local-level detail at the expense of global configuration. In this study, we explored a novel hypothesis that inadequate attention might underpin drawing in WS. WS and typically developing (TD) non-verbal ability matched groups copied and traced a house figure comprised of geometric shapes. The house was presented on a computer screen for 5-s periods and participants pressed a key to re-view the model. Frequency of key-presses indexed the looks to the model. The order in which elements were replicated was recorded to assess hierarchisation of elements. If a lack of attention to the model explained poor drawing performance, we expected participants with WS to look less frequently to the model than TD children when copying. If a local-processing preference underpins drawing in WS, more local than global elements would be produced. Results supported the first, but not the second hypothesis. The WS group looked to the model infrequently, but global, not local, parts were drawn first, scaffolding local-level details. Both groups adopted a similar order of drawing and tracing of parts, suggesting typical, although delayed, strategy-use in the WS group. Additionally, both groups drew larger elements of the model before smaller elements, suggesting a size-bias when drawing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Comparison of heaving buoy and oscillating flap wave energy converters
NASA Astrophysics Data System (ADS)
Abu Bakar, Mohd Aftar; Green, David A.; Metcalfe, Andrew V.; Najafian, G.
2013-04-01
Waves offer an attractive source of renewable energy, with relatively low environmental impact, for communities reasonably close to the sea. Two types of simple wave energy converters (WEC), the heaving buoy WEC and the oscillating flap WEC, are studied. Both WECs are considered simple energy converters because they can be modelled, to a first approximation, as single degree of freedom linear dynamic systems. In this study, we estimate the response of both WECs to typical wave inputs: wave height for the buoy and corresponding wave surge for the flap, using spectral methods. A nonlinear model of the oscillating flap WEC that includes the drag force, modelled by the Morison equation, is also considered. The response to a surge input is estimated by discrete time simulation (DTS), using central difference approximations to derivatives. This is compared with the response of the linear model obtained by DTS and also validated using the spectral method. Bendat's nonlinear system identification (BNLSI) technique was used to analyze the nonlinear dynamic system, since spectral analysis is only suitable for linear dynamic systems. The effects of including the nonlinear term are quantified.
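The Morison drag-plus-inertia force mentioned above can be sketched directly: a drag term proportional to u|u| (the nonlinearity the study quantifies) plus an inertia term proportional to fluid acceleration. The density, coefficients, and geometry below are placeholder values, not those of the studied flap.

```python
# Morison-equation force: F = 0.5*rho*Cd*A*u*|u| + rho*Cm*V*du_dt.
# rho: water density, Cd/Cm: drag/inertia coefficients, area/vol: geometry.
# All numeric defaults are illustrative placeholders.

def morison_force(u, du_dt, rho=1025.0, cd=1.2, cm=2.0, area=1.0, vol=0.5):
    drag = 0.5 * rho * cd * area * u * abs(u)   # nonlinear in velocity u
    inertia = rho * cm * vol * du_dt            # linear in acceleration
    return drag + inertia

f = morison_force(u=1.0, du_dt=0.2)
```

The u|u| term is what makes the flap model nonlinear and motivates the switch from spectral analysis to discrete time simulation and BNLSI.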
The mouse and ferret models for studying the novel avian-origin human influenza A (H7N9) virus.
Xu, Lili; Bao, Linlin; Deng, Wei; Zhu, Hua; Chen, Ting; Lv, Qi; Li, Fengdi; Yuan, Jing; Xiang, Zhiguang; Gao, Kai; Xu, Yanfeng; Huang, Lan; Li, Yanhong; Liu, Jiangning; Yao, Yanfeng; Yu, Pin; Yong, Weidong; Wei, Qiang; Zhang, Lianfeng; Qin, Chuan
2013-08-08
The current study was conducted to establish animal models (including mouse and ferret) for the novel avian-origin H7N9 influenza virus. A/Anhui/1/2013 (H7N9) virus was administered by intranasal instillation to groups of mice and ferrets, and animals developed typical clinical signs including body weight loss (mice and ferrets), ruffled fur (mice), sneezing (ferrets), and death (mice). Peak virus shedding from the respiratory tract was observed on 2 days post inoculation (d.p.i.) for mice and 3-5 d.p.i. for ferrets. Virus could also be detected in brain, liver, spleen, kidney, and intestine from inoculated mice, and in heart, liver, and olfactory bulb from inoculated ferrets. The inoculation of H7N9 could elicit seroconversion titers up to 1280 in ferrets and 160 in mice. Leukopenia, with significantly reduced lymphocytes but increased neutrophils, was also observed in the mouse and ferret models. The mouse and ferret models enable detailed studies of the pathogenesis of this illness and lay the foundation for drug or vaccine evaluation.
Zhuang, Chengxu; Wang, Yulong; Yamins, Daniel; Hu, Xiaolin
2017-01-01
Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models including the AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and property of training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas.
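One common way to quantify the "sparse firing" correlation reported here is a sparseness index in the spirit of Vinje and Gallant (2000), which is 0 for a uniform response vector and 1 when a single unit carries all the activity. This particular measure is our assumption for illustration; the abstract does not name the index it uses.

```python
# Population/lifetime sparseness: S = (1 - (mean r)^2 / mean(r^2)) / (1 - 1/n).
# S = 0 when all responses are equal; S = 1 when only one unit responds.

def sparseness(responses):
    n = len(responses)
    mean_sq = (sum(responses) / n) ** 2
    sq_mean = sum(r * r for r in responses) / n
    return (1.0 - mean_sq / sq_mean) / (1.0 - 1.0 / n)

layer_low = sparseness([0.9, 1.0, 0.8, 1.1])    # dense, V1-like activity
layer_high = sparseness([2.1, 0.0, 0.1, 0.0])   # sparse, higher-layer activity
```

Comparing the index across layers is the kind of analysis that would reveal the higher-layer sparsity the study correlates with the naturalistic-texture signature.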
New Tools Being Developed for Engine- Airframe Blade-Out Structural Simulations
NASA Technical Reports Server (NTRS)
Lawrence, Charles
2003-01-01
One of the primary concerns of aircraft structure designers is the accurate simulation of the blade-out event. This is required for the aircraft to pass Federal Aviation Administration (FAA) certification and to ensure that the aircraft is safe for operation. Typically, the most severe blade-out occurs when a first-stage fan blade in a high-bypass gas turbine engine is released. Structural loading results from both the impact of the blade onto the containment ring and the subsequent instantaneous unbalance of the rotating components. Reliable simulations of blade-out are required to ensure structural integrity during flight as well as to guarantee successful blade-out certification testing. The loads generated by these analyses are critical to the design teams for several components of the airplane structures including the engine, nacelle, strut, and wing, as well as the aircraft fuselage. Currently, a collection of simulation tools is used for aircraft structural design. Detailed high-fidelity simulation tools are used to capture the structural loads resulting from blade loss, and then these loads are used as input into an overall system model that includes complete structural models of both the engines and the airframe. The detailed simulation (shown in the figure) includes the time-dependent trajectory of the lost blade and its interactions with the containment structure, and the system simulation includes the lost blade loadings and the interactions between the rotating turbomachinery and the remaining aircraft structural components. General-purpose finite element structural analysis codes are typically used, and special provisions are made to include transient effects from the blade loss and rotational effects resulting from the engine's turbomachinery. To develop and validate these new tools with test data, the NASA Glenn Research Center has teamed with GE Aircraft Engines, Pratt & Whitney, Boeing Commercial Aircraft, Rolls-Royce, and MSC.Software.
Bifurcation and stability in a model of moist convection in a shearing environment
NASA Technical Reports Server (NTRS)
Shirer, H. N.
1980-01-01
The truncated spectral system (model I) of shallow moist two-dimensional convection discussed by Shirer and Dutton (1979) is expanded to eleven coefficients (model II) in order to include a basic wind. Cloud streets, the atmospheric analog of the solutions to model II, are typically observed in an environment containing a shearing basic motion field. Analysis of the branching behavior of solutions to model II shows that, if the basic wind direction varies with height, very complex temporal behavior is possible as the modified Rayleigh number HR is increased sufficiently. The first convective solution is periodic, corresponding to a cloud band that propagates downwind; but secondary branching to a two-dimensional torus can occur for larger values of HR. Formulas for band orientation are derived whose predictions generally agree with the results of previous studies.
Lung cancer in never smokers Epidemiology and risk prediction models
McCarthy, William J.; Meza, Rafael; Jeon, Jihyoun; Moolgavkar, Suresh
2012-01-01
In this chapter we review the epidemiology of lung cancer incidence and mortality among never smokers/nonsmokers and describe the never smoker lung cancer risk models used by CISNET modelers. Our review focuses on those influences likely to have measurable population impact on never smoker risk, such as secondhand smoke, even though the individual-level impact may be small. Occupational exposures may also contribute importantly to the population attributable risk of lung cancer. We examine the following risk factors in this chapter: age, environmental tobacco smoke, cooking fumes, ionizing radiation including radon gas, inherited genetic susceptibility, selected occupational exposures, preexisting lung disease, and oncogenic viruses. We also compare the prevalence of never smokers among the three CISNET smoking scenarios and present the corresponding lung cancer mortality estimates among never smokers as predicted by a typical CISNET model. PMID:22882894
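The notion of a population attributable risk mentioned above can be made concrete with Levin's attributable-fraction formula, a standard epidemiological identity (this is not a CISNET model, and the prevalence and relative-risk values below are hypothetical):

```python
def attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """Levin's population attributable fraction for a single risk factor:
    PAF = p(RR - 1) / (1 + p(RR - 1)), where p is exposure prevalence."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical inputs: 25% of never smokers exposed to secondhand smoke,
# relative risk of 1.25 for lung cancer among the exposed.
paf = attributable_fraction(0.25, 1.25)
```

Even with a small individual-level relative risk, a widely prevalent exposure can yield a non-trivial attributable fraction, which is the sense in which secondhand smoke has "measurable population impact".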
A gentle introduction to quantile regression for ecologists
Cade, B.S.; Noon, B.R.
2003-01-01
Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model that provides a more complete view of possible causal relationships between variables in ecological processes. Typically, not all of the factors that affect ecological processes are measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
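The estimator behind quantile regression minimizes an asymmetric "check" (pinball) loss; in the intercept-only case the minimizer is simply the sample quantile. A minimal sketch with invented data (real applications solve the linear-programming formulation, e.g. via the R quantreg package):

```python
def pinball_loss(tau, residuals):
    """Asymmetric check loss: tau * r for r >= 0, (tau - 1) * r for r < 0."""
    return sum(tau * r if r >= 0 else (tau - 1.0) * r for r in residuals)

def fit_constant_quantile(tau, y, grid):
    """Grid-search the constant b minimizing the check loss over y - b;
    the minimizer is the tau-th sample quantile of y."""
    return min(grid, key=lambda b: pinball_loss(tau, [yi - b for yi in y]))

y = [1.0, 2.0, 3.0, 10.0, 50.0]       # skewed, as ecological data often are
median = fit_constant_quantile(0.5, y, y)   # search over observed values
upper = fit_constant_quantile(0.9, y, y)
```

The 0.5 and 0.9 estimates differ sharply for this skewed sample, illustrating why upper quantiles can carry predictive signal that the mean does not.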
Application of Wind Tunnel Free-Flight Technique for Wake Vortex Encounters
NASA Technical Reports Server (NTRS)
Brandon, Jay M.; Jordan, Frank L., Jr.; Stuever, Robert A.; Buttrill, Catherine W.
1997-01-01
A wind tunnel investigation was conducted in the Langley 30- by 60-Foot Tunnel to assess the free-flight test technique as a tool in research on wake vortex encounters. A typical 17.5-percent scale business-class jet airplane model was flown behind a stationary wing mounted in the forward portion of the wind tunnel test section. The span ratio (ratio of model span to generating wingspan) was 0.75. The wing angle of attack could be adjusted to produce a vortex of desired strength. The test airplane model was successfully flown in the vortex and through the vortex for a range of vortex strengths. Data obtained included the model airplane body axis accelerations, angular rates, attitudes, and control positions as a function of vortex strength and relative position. Pilot comments and video records were also recorded during the vortex encounters.
NASA Astrophysics Data System (ADS)
Wang, Jun-Wei; Zhou, Tian-Shou
2009-12-01
In this paper, we develop a new mathematical model for the mammalian circadian clock, which incorporates both transcriptional/translational feedback loops (TTFLs) and a cAMP-mediated feedback loop. The model shows that TTFLs and cAMP signalling cooperatively drive the circadian rhythms. It reproduces typical experimental observations with qualitative similarities, e.g. circadian oscillations in constant darkness and entrainment to light-dark cycles. In addition, it can explain the phenotypes of cAMP-mutant and Rev-erbα-/- mutant mice, and helps us make an experimentally testable prediction: oscillations may be rescued when arrhythmic mice with constitutively low concentrations of cAMP are crossed with Rev-erbα-/- mutant mice. The model enhances our understanding of the mammalian circadian clockwork from the viewpoint of the entire cell.
In-depth analysis and modelling of self-heating effects in nanometric DGMOSFETs
NASA Astrophysics Data System (ADS)
Roldán, J. B.; González, B.; Iñiguez, B.; Roldán, A. M.; Lázaro, A.; Cerdeira, A.
2013-01-01
Self-heating effects (SHEs) in nanometric symmetrical double-gate MOSFETs (DGMOSFETs) have been analysed. An equivalent thermal circuit for the transistors has been developed to characterise thermal effects, in which the temperature and thickness dependence of the thermal conductivity of the silicon and oxide layers within the devices has been included. The equivalent thermal circuit is consistent with simulations using a commercial technology computer-aided design (TCAD) tool (Sentaurus by Synopsys). In addition, a model for DGMOSFETs has been developed in which SHEs are considered in detail, taking into account the temperature dependence of the low-field mobility, saturation velocity, and inversion charge. The model correctly reproduces Sentaurus simulation data for the typical bias range used in integrated circuits. Lattice temperatures predicted by simulation are coherently reproduced by the model for varying silicon layer geometry.
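A minimal sketch of such an equivalent thermal circuit, assuming a simple series stack of silicon and oxide thermal resistances and a bulk-like power-law conductivity for silicon (the coefficients and geometry below are illustrative, not the thin-film values used in the paper):

```python
def silicon_conductivity(T):
    """Rough bulk trend k ~ k300 * (300/T)^1.3 in W/(m*K); illustrative only,
    as thin-film conductivity is thickness-dependent and much lower."""
    return 148.0 * (300.0 / T) ** 1.3

def device_temperature(power, area, t_si, t_ox, k_ox=1.4, T_amb=300.0):
    """Fixed-point solve of T = T_amb + P * (R_si(T) + R_ox) for a series
    thermal circuit, since R_si itself depends on temperature."""
    T = T_amb
    for _ in range(50):
        r_si = t_si / (silicon_conductivity(T) * area)  # K/W
        r_ox = t_ox / (k_ox * area)                     # K/W
        T = T_amb + power * (r_si + r_ox)
    return T
```

The fixed-point loop captures the essential feedback: dissipated power raises the lattice temperature, which lowers the silicon conductivity and raises the thermal resistance further.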
NASA Astrophysics Data System (ADS)
Kanjilal, Baishali; Iram, Samreen; Das, Atreyee; Chakrabarti, Haimanti
2018-05-01
This work reports a novel two-dimensional approach to the theoretical computation of the glass transition temperature of simple hypothetical icosahedrally packed structures, based on thin-film metallic glasses, using liquid-state theories of transport properties. The model starts from the Navier-Stokes equation and evaluates the statistical average velocity of each species of atom under the condition of ensemble equality, in order to compute diffusion lengths and diffusion coefficients as functions of temperature. An additional correction accounts for the limited states arising from the tethering of one nodule vis-à-vis the others. The motion of the molecules uses our Twin Cell Model, a typical model for describing chain motions. A temperature-viscosity correction by Cohen and Grest is included through the temperature dependence of the relaxation times for glass formers.
Comments on 'Frontogenesis in a moist semigeostrophic model'
NASA Technical Reports Server (NTRS)
Keyser, D.; Anthes, R. A.
1986-01-01
The development of narrow updrafts or jetlike features in the vertical motion field (VMF) over the leading edge of a surface frontal zone is examined on the basis of model simulations, summarizing and clarifying the results presented by Keyser and Anthes (1982) and responding to critical remarks by Mak and Bannon (1984). Typical velocity and potential-temperature cross sections are shown, and it is concluded that the inclusion of generally parameterized planetary-boundary-layer (PBL) physics in the model has a significant effect on the VMF, suggesting that frictional processes alone (without latent heating) can explain the formation of jetlike frontal updrafts. In a reply by Mak and Bannon it is argued that the increased strength of the VMF in models including PBL physics is not significant, whereas other models show that the VMF can be significantly strengthened and narrowed by condensational heating alone.
Some guidance on preparing validation plans for the DART Full System Models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy
2009-03-01
Planning is an important part of computational model verification and validation (V&V) and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.
Modelling multimodal expression of emotion in a virtual agent.
Pelachaud, Catherine
2009-12-12
Over the past few years we have been developing an expressive embodied conversational agent system. In particular, we have developed a model of multimodal behaviours that includes dynamism and complex facial expressions. The first feature refers to the qualitative execution of behaviours. Our model is based on perceptual studies and encompasses several parameters that modulate multimodal behaviours. The second feature, the model of complex expressions, follows a componential approach where a new expression is obtained by combining facial areas of other expressions. Lately we have been working on adding temporal dynamism to expressions. So far they have been designed statically, typically at their apex. Only full-blown expressions could be modelled. To overcome this limitation, we have defined a representation scheme that describes the temporal evolution of the expression of an emotion. It is no longer represented by a static definition but by a temporally ordered sequence of multimodal signals.
Modeling anomalous radial transport in kinetic transport codes
NASA Astrophysics Data System (ADS)
Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.
2009-11-01
Anomalous transport is typically the dominant component of radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model, with velocity-dependent diffusion and convection terms, can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle/energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength ExB turbulence.
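The diffusive-convective form such a model takes in fluid codes can be sketched as a conservative 1-D finite-volume step, flux Γ = -D dn/dx + V n, with zero-flux boundaries (the grid and coefficients are invented for illustration; the kinetic model in the abstract makes D and V velocity-dependent):

```python
def transport_step(n, D, V, dx, dt):
    """One conservative explicit step of dn/dt = -d/dx(-D dn/dx + V n).
    Fluxes live on cell faces; faces 0 and len(n) are zero-flux walls."""
    flux = [0.0] * (len(n) + 1)
    for i in range(1, len(n)):
        grad = (n[i] - n[i - 1]) / dx          # face-centered gradient
        n_face = 0.5 * (n[i] + n[i - 1])       # face-centered density
        flux[i] = -D * grad + V * n_face
    return [ni - dt * (flux[i + 1] - flux[i]) / dx for i, ni in enumerate(n)]

# Spread an initial pulse; total particle content should be conserved.
n = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(100):
    n = transport_step(n, D=0.1, V=0.05, dx=1.0, dt=0.5)
```

Because the update is written in terms of face fluxes, the interior terms telescope and particles are lost only through the (here closed) boundaries, which is the property a transport model must keep when coupled to a conserving collision operator.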
Applications of MIDAS regression in analysing trends in water quality
NASA Astrophysics Data System (ADS)
Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.
2014-04-01
We discuss novel statistical methods in analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas the rain data is sampled daily. The advantage of using MIDAS regression is in the flexible and parsimonious modelling of the influence of the rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.
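The core MIDAS device, a parsimonious weighting of the high-frequency regressor, can be sketched with exponential Almon lag weights, a standard MIDAS choice (the parameter values and the 14-day window below are illustrative, not fitted to the Shoalhaven data):

```python
import math

def exp_almon_weights(theta1, theta2, n_lags):
    """Normalized exponential Almon lag weights: two parameters describe
    an entire lag profile, which is what keeps MIDAS parsimonious."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(n_lags)]
    total = sum(raw)
    return [w / total for w in raw]

def midas_aggregate(daily, weights):
    """Weighted sum of the most recent len(weights) daily values
    (lag 0 = most recent observation)."""
    recent = daily[-len(weights):][::-1]
    return sum(w * x for w, x in zip(weights, recent))

# Collapse 14 daily rainfall lags into one fortnightly predictor.
w = exp_almon_weights(-0.1, -0.01, 14)
```

A fortnightly water-quality observation can then be regressed on `midas_aggregate(rainfall, w)`, with theta1 and theta2 estimated jointly with the regression coefficients.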
Wildfire risk assessment in a typical Mediterranean wildland-urban interface of Greece.
Mitsopoulos, Ioannis; Mallinis, Giorgos; Arianoutsou, Margarita
2015-04-01
The purpose of this study was to assess spatial wildfire risk in a typical Mediterranean wildland-urban interface (WUI) in Greece and the potential effect of three different burning condition scenarios on the following four major wildfire risk components: burn probability, conditional flame length, fire size, and source-sink ratio. We applied the Minimum Travel Time fire simulation algorithm using the FlamMap and ArcFuels tools to characterize the potential response of the wildfire risk to a range of different burning scenarios. We created site-specific fuel models of the study area by measuring the field fuel parameters in representative natural fuel complexes, and we determined the spatial extent of the different fuel types and residential structures in the study area using photointerpretation procedures of large scale natural color orthophotographs. The results included simulated spatially explicit fire risk components along with wildfire risk exposure analysis and the expected net value change. Statistical significance differences in simulation outputs between the scenarios were obtained using Tukey's significance test. The results of this study provide valuable information for decision support systems for short-term predictions of wildfire risk potential and inform wildland fire management of typical WUI areas in Greece.
Crowell, Sheila E; Baucom, Brian R; Yaptangco, Mona; Bride, Daniel; Hsiao, Ray; McCauley, Elizabeth; Beauchaine, Theodore P
2014-04-01
Many depressed adolescents experience difficulty in regulating their emotions. These emotion regulation difficulties appear to emerge in part from socialization processes within families and then generalize to other contexts. However, emotion dysregulation is typically assessed within the individual, rather than in the social relationships that shape and maintain dysregulation. In this study, we evaluated concordance of physiological and observational measures of emotion dysregulation during interpersonal conflict, using a multilevel actor-partner interdependence model (APIM). Participants were 75 mother-daughter dyads, including 50 depressed adolescents with or without a history of self-injury, and 25 typically developing controls. Behavior dysregulation was operationalized as observed aversiveness during a conflict discussion, and physiological dysregulation was indexed by respiratory sinus arrhythmia (RSA). Results revealed different patterns of concordance for control versus depressed participants. Controls evidenced a concordant partner (between-person) effect, and showed increased physiological regulation during minutes when their partner was more aversive. In contrast, clinical dyad members displayed a concordant actor (within-person) effect, becoming simultaneously physiologically and behaviorally dysregulated. Results inform current understanding of emotion dysregulation across multiple levels of analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
Suzuki, Etsuji; Yamamoto, Eiji; Takao, Soshi; Kawachi, Ichiro; Subramanian, S. V.
2012-01-01
Background Multilevel analyses are ideally suited to assess the effects of ecological (higher level) and individual (lower level) exposure variables simultaneously. In applying such analyses to measures of ecologies in epidemiological studies, individual variables are usually aggregated into the higher level unit. Typically, the aggregated measure includes responses of every individual belonging to that group (i.e. it constitutes a self-included measure). More recently, researchers have developed an aggregate measure which excludes the response of the individual to whom the aggregate measure is linked (i.e. a self-excluded measure). In this study, we clarify the substantive and technical properties of these two measures when they are used as exposures in multilevel models. Methods Although the differences between the two aggregated measures are mathematically subtle, distinguishing between them is important in terms of the specific scientific questions to be addressed. We then show how these measures can be used in two distinct types of multilevel models—self-included model and self-excluded model—and interpret the parameters in each model by imposing hypothetical interventions. The concept is tested on empirical data of workplace social capital and employees' systolic blood pressure. Results Researchers assume group-level interventions when using a self-included model, and individual-level interventions when using a self-excluded model. Analytical re-parameterizations of these two models highlight their differences in parameter interpretation. Cluster-mean centered self-included models enable researchers to decompose the collective effect into its within- and between-group components. The benefit of cluster-mean centering procedure is further discussed in terms of hypothetical interventions. 
Conclusions When investigating the potential roles of aggregated variables, researchers should carefully explore which type of model—self-included or self-excluded—is suitable for a given situation, particularly when group sizes are relatively small. PMID:23251609
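The mathematically subtle difference between the two aggregated exposures can be sketched directly (a minimal illustration of the measures themselves, not the authors' multilevel estimator):

```python
def aggregate_measures(group_responses):
    """Return the self-included group mean (identical for every member)
    and the self-excluded, leave-one-out means (one per member)."""
    total, n = sum(group_responses), len(group_responses)
    included = total / n
    excluded = [(total - x) / (n - 1) for x in group_responses]
    return included, excluded

# A three-person workplace with social-capital responses 3, 4, 5.
inc, exc = aggregate_measures([3.0, 4.0, 5.0])
```

For this three-member group the self-included mean is the same for everyone, while the self-excluded means vary across members; as group size grows the two converge, which is why the paper's conclusion stresses the distinction for relatively small groups.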
Mooring; Benjamin; Harte; Herzog
2000-07-01
Tick removal grooming may be centrally regulated by an internal timing mechanism operating to remove ticks before they attach and engorge (programmed grooming model) and/or evoked by cutaneous stimulation from tick bites (stimulus-driven model). The programmed grooming model predicts that organismic and environmental factors that impact the cost-benefit ratio of grooming (e.g. body size and habitat) will influence the rate of tick removal grooming. The body size principle predicts that smaller-sized animals, because of their greater surface-to-mass ratio, should engage in more frequent tick removal grooming than larger-bodied animals in order to compensate for higher costs of tick infestation. The body size principle may be tested intraspecifically between young and adult animals, or interspecifically among species of contrasting body sizes. To rigorously test the interspecific body size prediction, we observed the programmed grooming (oral and scratch grooming) of 25 species (or subspecies) of bovids at a tick-free zoological park in which stimulus-driven grooming was ruled out. Multiple correlation analysis revealed highly significant negative correlations between species-typical mass and mean species grooming rates when habitat was controlled for in the model. Species-typical habitat type (classified along a gradient from most open to most closed) was positively correlated with mean oral grooming rate, indicating that species tended to groom at a higher rate in woodland and forest habitats (where typical tick density would be high) compared with more open environments. Species mass accounted for up to two-thirds of the variation in grooming rate across species, whereas habitat accounted for ca. 20% of variation in oral grooming. Similar results were obtained when the analysis was expanded to include 36 species/subspecies of six different families. 
The body size principle can therefore account for a large proportion of species-typical differences in programmed grooming rate among ungulates. However, to understand the tick defence adaptations of very large mammals that rarely or never engage in oral or scratch grooming (e.g. elephants, giraffes, rhinoceros), alternative tick defence strategies must be considered, such as thick skin, wallowing, rubbing and tolerance of oxpeckers and other tick-eating birds. Copyright 2000 The Association for the Study of Animal Behaviour.
A Rat Model of Ventricular Fibrillation and Resuscitation by Conventional Closed-chest Technique
Lamoureux, Lorissa; Radhakrishnan, Jeejabai; Gazmuri, Raúl J.
2015-01-01
A rat model of electrically-induced ventricular fibrillation followed by cardiac resuscitation using a closed chest technique that incorporates the basic components of cardiopulmonary resuscitation in humans is herein described. The model was developed in 1988 and has been used in approximately 70 peer-reviewed publications examining a myriad of resuscitation aspects including its physiology and pathophysiology, determinants of resuscitability, pharmacologic interventions, and even the effects of cell therapies. The model featured in this presentation includes: (1) vascular catheterization to measure aortic and right atrial pressures, to measure cardiac output by thermodilution, and to electrically induce ventricular fibrillation; and (2) tracheal intubation for positive pressure ventilation with oxygen enriched gas and assessment of the end-tidal CO2. A typical sequence of intervention entails: (1) electrical induction of ventricular fibrillation, (2) chest compression using a mechanical piston device concomitantly with positive pressure ventilation delivering oxygen-enriched gas, (3) electrical shocks to terminate ventricular fibrillation and reestablish cardiac activity, (4) assessment of post-resuscitation hemodynamic and metabolic function, and (5) assessment of survival and recovery of organ function. A robust inventory of measurements is available that includes – but is not limited to – hemodynamic, metabolic, and tissue measurements. The model has been highly effective in developing new resuscitation concepts and examining novel therapeutic interventions before their testing in larger and translationally more relevant animal models of cardiac arrest and resuscitation. PMID:25938619
Jaramillo, Hector E; Gómez, Lessby; García, Jose J
2015-01-01
With the aim to study disc degeneration and the risk of injury during occupational activities, a new finite element (FE) model of the L4-L5-S1 segment of the human spine was developed based on the anthropometry of a typical Colombian worker. Beginning with medical images, the programs CATIA and SOLIDWORKS were used to generate and assemble the vertebrae and create the soft structures of the segment. The software ABAQUS was used to run the analyses, which included a detailed model calibration using the experimental step-wise reduction data for the L4-L5 component, while the L5-S1 segment was calibrated in the intact condition. The range of motion curves, the intradiscal pressure and the lateral bulging under pure moments were considered for the calibration. As opposed to other FE models that include the L5-S1 disc, the model developed in this study considered the regional variations and anisotropy of the annulus as well as a realistic description of the nucleus geometry, which allowed an improved representation of experimental data during the validation process. Hence, the model can be used to analyze the stress and strain distributions in the L4-L5 and L5-S1 discs of workers performing activities such as lifting and carrying tasks.
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Lijima, Byron A.
2013-01-01
ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide quantitative assessment of the accuracy of assimilative modeling with the interested observation system. Other observation systems besides those based on GNSS are also possible to analyze. The system is composed of a suite of software that combines the GAIM, including a 4D first-principles ionospheric model and data assimilation modules, an Internal Reference Ionosphere (IRI) model that has been developed by international ionospheric research communities, observation simulator, visualization software, and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++) that includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
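The generating equation of the dual change score model can be sketched as a deterministic mean trajectory, with change at each occasion equal to a constant-change (slope) component plus a proportional-change component (parameter values are invented, and the latent-variable and error structure is omitted):

```python
def dual_change_trajectory(y0, slope, beta, n_occasions):
    """Mean trajectory implied by a dual change score model:
    delta_y[t] = slope + beta * y[t-1], so y[t] = y[t-1] + slope + beta * y[t-1].
    Holding slope and beta constant over time is the invariance constraint
    whose misspecification the study examines."""
    y = [y0]
    for _ in range(n_occasions - 1):
        y.append(y[-1] + slope + beta * y[-1])
    return y

traj = dual_change_trajectory(y0=10.0, slope=2.0, beta=-0.1, n_occasions=5)
```

With a negative autoproportion coefficient the trajectory decelerates toward the equilibrium -slope/beta (20.0 here); if the true slope or beta drifted over occasions, forcing this single pair of values would distort exactly the parameters the simulation found to be biased.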
Effects of spatial and temporal resolution on simulated feedbacks from polygonal tundra.
NASA Astrophysics Data System (ADS)
Coon, E.; Atchley, A. L.; Painter, S. L.; Karra, S.; Moulton, J. D.; Wilson, C. J.; Liljedahl, A.
2014-12-01
Earth system land models typically resolve permafrost regions at spatial resolutions grossly larger than the scales of topographic variation. This observation leads to two critical questions: How much error is introduced by this lack of resolution, and what is the effect of this approximation on other coupled components of the Earth system, notably the energy balance and carbon cycle? Here we use the Arctic Terrestrial Simulator (ATS) to run micro-topography resolving simulations of polygonal ground, driven by meteorological data from Barrow, AK, to address these questions. ATS couples surface and subsurface processes, including thermal hydrology, surface energy balance, and a snow model. Comparisons are made between one-dimensional "column model" simulations (similar to, for instance, CLM or other land models typically used in Earth System models) and higher-dimensional simulations which resolve micro-topography, allowing for distributed surface runoff, horizontal flow in the subsurface, and uneven snow distribution. Additionally, we drive models with meteorological data averaged over different time scales from daily to weekly moving windows. In each case, we compare fluxes important to the surface energy balance including albedo, latent and sensible heat fluxes, and land-to-atmosphere long-wave radiation. Results indicate that spatial topography variation and temporal variability are important in several ways. Snow distribution greatly affects the surface energy balance, fundamentally changing the partitioning of incoming solar radiation between the subsurface and the atmosphere. This has significant effects on soil moisture and temperature, with implications for vegetation and decomposition. Resolving temporal variability is especially important in spring, when early warm days can alter the onset of snowmelt by days to weeks. We show that high-resolution simulations are valuable in evaluating current land models, especially in areas of polygonal ground. 
This work was supported by LANL Laboratory Directed Research and Development Project LDRD201200068DR and by the Next-Generation Ecosystem Experiments (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the DOE Office of Science. LA-UR-14-26227.
Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A
2017-01-01
Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms, including both odd and even orders, for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26-cm diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th order, including even- and odd-order terms, or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated.
The residual root-mean-square error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients. PMID:28033119
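The coefficient-fitting step at the core of the calibration above can be posed as a least-squares problem: find the distortion-model coefficients that minimize the mean-squared error between true and corrected fiducial positions. The sketch below is a simplified stand-in, assuming a 1D polynomial basis and synthetic fiducial data in place of the authors' full 3D spherical-harmonic model and iterative procedure; all names and values are illustrative.

```python
import numpy as np

def basis(x, order):
    """Nonlinear polynomial basis terms x^2 .. x^order (even and odd),
    standing in for the spherical-harmonic expansion of the GNL field."""
    return np.stack([x**n for n in range(2, order + 1)], axis=1)

# Synthetic fiducial positions within a 26-cm diameter spherical volume (meters)
rng = np.random.default_rng(1)
true_pos = rng.uniform(-0.13, 0.13, 200)

# Hidden "true" nonlinearity with both even- and odd-order terms (x^2..x^5)
true_coeffs = np.array([0.05, 0.4, -0.02, 1.5])
measured = true_pos + basis(true_pos, 5) @ true_coeffs

# Least-squares estimate of the coefficients from the fiducial displacements.
# (The actual calibration iterates, because the distortion field must be
# evaluated at distorted coordinates; one linear solve suffices here.)
A = basis(true_pos, 5)
coeffs, *_ = np.linalg.lstsq(A, measured - true_pos, rcond=None)

# Apply the correction and report residual root-mean-square error
corrected = measured - basis(true_pos, 5) @ coeffs
rmse = np.sqrt(np.mean((corrected - true_pos) ** 2))
```

Dropping the even-power columns from `basis` would leave the even-order distortion uncorrected, mirroring the paper's finding that odd-only models are insufficient for the asymmetric gradient.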
Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B; D'Astous, Jacques L
2013-01-01
Several multisegment foot models have been proposed, and some have been used to study foot pathologies. These models have been tested and validated on typically developing populations; however, application of such models to feet with significant deformities presents an additional set of challenges. For the first time, in this study, a multisegment foot model is tested for repeatability in a population of children with symptomatic abnormal feet. The results from this population are compared to the same metrics collected from an age-matched (8-14 years) typically developing population. The modified Shriners Hospitals for Children, Greenville (mSHCG) foot model was applied to ten typically developing children and eleven children with planovalgus feet by two clinicians. Five subjects in each group were retested by both clinicians after 4-6 weeks. Both intra-clinician and inter-clinician repeatability were evaluated using static and dynamic measures. A plaster mold method was used to quantify variability arising from marker placement error. Dynamic variability was measured by examining trial differences from the same subjects when multiple clinicians carried out the data collection multiple times. For hindfoot and forefoot angles, static and dynamic variability in both groups was found to be less than 4° and 6°, respectively. The mSHCG model strategy of minimal reliance on anatomical markers for dynamic measures, and the inherent flexibility enabled by separate anatomical and technical coordinate systems, resulted in a model equally repeatable in typically developing and planovalgus populations. Copyright © 2012 Elsevier B.V. All rights reserved.
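The intra- versus inter-clinician repeatability comparison above can be summarized with simple variability statistics. The following sketch uses synthetic hindfoot-angle values (degrees) and a hypothetical two-clinician, two-visit design for illustration; it is not the study's actual analysis or data.

```python
import numpy as np

# Synthetic static hindfoot angles (degrees), for illustration only.
# Rows: subjects; columns: [clinician A visit 1, A visit 2, B visit 1, B visit 2]
hindfoot_angle = np.array([
    [4.1, 5.0, 6.2, 5.5],
    [2.3, 3.1, 2.8, 4.0],
    [7.9, 7.2, 8.8, 8.1],
    [5.5, 6.0, 4.9, 5.8],
    [3.4, 2.9, 3.8, 3.1],
])

# Intra-clinician repeatability: spread between repeat visits by the same rater
intra = np.std(hindfoot_angle[:, [0, 1]], axis=1, ddof=1).mean()

# Inter-clinician repeatability: spread between the two raters' per-subject means
rater_means = np.stack(
    [hindfoot_angle[:, :2].mean(axis=1), hindfoot_angle[:, 2:].mean(axis=1)],
    axis=1,
)
inter = np.std(rater_means, axis=1, ddof=1).mean()
```

Reporting both components separately, as the study does, distinguishes marker-placement error between raters from session-to-session variability within a rater.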
NASA Astrophysics Data System (ADS)
Pithan, Felix; Shepherd, Theodore G.; Zappa, Giuseppe; Sandu, Irina
2016-07-01
State-of-the-art climate models generally struggle to represent important features of the large-scale circulation. Common model deficiencies include an equatorward bias in the location of the midlatitude westerlies and an overly zonal orientation of the North Atlantic storm track. Orography is known to strongly affect the atmospheric circulation and is notoriously difficult to represent in coarse-resolution climate models. Yet how the representation of orography affects circulation biases in current climate models is not understood. Here we show that the effects of switching off the parameterization of drag from low-level orographic blocking in one climate model resemble the biases of the Coupled Model Intercomparison Project Phase 5 ensemble: an overly zonal wintertime North Atlantic storm track and fewer European blocking events, an equatorward shift of the Southern Hemisphere jet, and an increase in the Southern Annular Mode time scale. This suggests that typical circulation biases in coarse-resolution climate models may be alleviated by improved parameterizations of low-level drag.