Science.gov

Sample records for additional simplifying assumptions

  1. The simplified reference tissue model: model assumption violations and their impact on binding potential.

    PubMed

    Salinas, Cristian A; Searle, Graham E; Gunn, Roger N

    2015-02-01

    Reference tissue models have gained significant traction over the last two decades as the methods of choice for the quantification of brain positron emission tomography data because they balance quantitative accuracy with less invasive procedures. The principal advantage is the elimination of the need to perform arterial cannulation of the subject to measure blood and metabolite concentrations for input function generation. In particular, the simplified reference tissue model (SRTM) has been widely adopted as it uses a simplified model configuration with only three parameters that typically produces good fits to the kinetic data and a stable parameter estimation process. However, the model's simplicity and its ability to generate good fits to the data, even when the model assumptions are not met, can lead to misplaced confidence in binding potential (BPND) estimates. Computer simulations were used to study the bias introduced in BPND estimates as a consequence of violating each of the four core SRTM model assumptions. Violation of each model assumption led to bias in BPND (both over- and underestimation). Careful assessment of the bias in SRTM BPND should be performed for new tracers and applications so that an appropriate decision about its applicability can be made. PMID:25425078
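    The three-parameter SRTM referred to here is conventionally written as C_T(t) = R1·C_R(t) + (k2 − R1·k2/(1+BPND))·[C_R(t) convolved with exp(−k2·t/(1+BPND))]. A minimal numerical sketch of that operational equation follows; the time grid, reference curve, and parameter values are illustrative assumptions, not data from the paper.

```python
import numpy as np

# SRTM operational equation (standard form):
#   C_T(t) = R1*C_R(t) + (k2 - R1*k2a) * [C_R(t) (*) exp(-k2a*t)],
# with k2a = k2/(1+BPND). All numbers below are illustrative only.

def srtm_target_tac(t, c_ref, r1, k2, bpnd):
    """Predict a target-region time-activity curve from a reference TAC."""
    dt = t[1] - t[0]                      # assumes a uniform time grid
    k2a = k2 / (1.0 + bpnd)               # apparent efflux rate constant
    kernel = np.exp(-k2a * t)
    conv = np.convolve(c_ref, kernel)[: len(t)] * dt  # discrete convolution
    return r1 * c_ref + (k2 - r1 * k2a) * conv

# Toy reference-region curve: rapid uptake followed by washout (minutes)
t = np.arange(0.0, 90.0, 0.1)
c_ref = np.exp(-0.1 * t) - np.exp(-1.0 * t)

# Hypothetical parameter values for illustration
c_t = srtm_target_tac(t, c_ref, r1=1.0, k2=0.15, bpnd=1.5)
```

With BPND > 0 the target curve retains activity longer than the reference curve, which is the behavior the model's binding-potential parameter encodes.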


  3. SU-E-T-293: Simplifying Assumption for Determining Sc and Sp

    SciTech Connect

    King, R; Cheung, A; Anderson, R; Thompson, G; Fletcher, M

    2014-06-01

    Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc). Scp(mlc=jaw) is measured in phantom and Sc(jaw) is measured in air, with Sp=Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist, and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions. If the output functions Sc(jaw) and Sp(mlc) were equal to each other, and thus each equal to Scp(mlc=jaw)^0.5, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate the assumption that Sc=Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw)^0.5. Square-field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29. Both Sc and Sp functions were then evaluated as to their ability to predict these measurements. Results: Both methods produced qualitatively similar results, with <4% error for all cases and >3% error in 1 case. The conventional method produced 2 cases with >2% error, while the square-root method produced only 1 such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied, the simplifying assumption that Sc = Sp is justified.
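    The two decompositions compared in this abstract are straightforward to sketch. The Scp and Sc values below are invented placeholders for illustration, not measured data from the study.

```python
import numpy as np

# Two ways to split the total scatter factor Scp(mlc=jaw) into Sc and Sp.
# All numeric values are hypothetical placeholders.

field = np.array([2, 5, 10, 20, 29])            # square field side (cm)
scp = np.array([0.90, 0.95, 1.00, 1.04, 1.06])  # measured in phantom, mlc = jaw
sc = np.array([0.94, 0.97, 1.00, 1.02, 1.03])   # conventional: measured in air

# Conventional decomposition: Sp = Scp / Sc
sp_conventional = scp / sc

# Simplifying assumption evaluated in the study: Sc = Sp = Scp^0.5,
# which removes the need for in-air measurements entirely.
sc_sqrt = np.sqrt(scp)

# Either pair reproduces Scp exactly for matched fields (mlc = jaw);
# the study's question is how well each pair predicts mismatched fields.
assert np.allclose(sc * sp_conventional, scp)
assert np.allclose(sc_sqrt * sc_sqrt, scp)
```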

  4. English as an Additional Language: Assumptions and Challenges

    ERIC Educational Resources Information Center

    Mistry, Malini; Sood, Krishan

    2010-01-01

    The number of pupils who have English as an Additional Language (EAL) in our English schools is increasing with the influx of migrants from Europe. This paper investigates how schools are addressing the needs of these children. Using surveys and interviews with teachers and paraprofessionals (teaching assistants and bilingual assistants),…

  5. Feasibility of a simplified fuel additive evaluation protocol

    SciTech Connect

    Lister, S.J.; Hunzinger, R.D.; Taghizadeh, A.

    1998-12-31

    This report describes the work carried out during the four stages of the first phase of a project to determine the feasibility of replacing the Association of American Railroads Recommended Practice (ARRP) 503 protocol for testing diesel fuel oil additives with a new procedure that uses the single-cylinder research engine SCRE-251 as the laboratory test engine and tests for both engine performance and emissions compliance. The report begins with a review of the literature on fuel additive testing, then reviews the new US Environmental Protection Agency regulations regarding locomotive diesel emissions. This is followed by a review of the ARRP 503 protocol and the proposed new procedure, a comparison of the ARRP 503 test engines and the SCRE-251, and a study of the SCRE-251's ability to represent a multi-cylinder medium-speed diesel engine. Appendices include fuel additive manufacturers' information sheets.

  6. Thermoregulatory response to an organophosphate and carbamate insecticide mixture: testing the assumption of dose-additivity.

    PubMed

    Gordon, Christopher J; Herr, David W; Gennings, Chris; Graff, Jaimie E; McMurray, Matthew; Stork, LeAnna; Coffey, Todd; Hamm, Adam; Mack, Cina M

    2006-01-01

    Most toxicity data are based on studies using single compounds. This study assessed whether there is an interaction between mixtures of the anticholinesterase insecticides chlorpyrifos (CHP) and carbaryl (CAR), using hypothermia and cholinesterase (ChE) inhibition as toxicological endpoints. Core temperature (T(c)) was continuously monitored by radiotelemetry in adult Long-Evans rats administered CHP at doses ranging from 0 to 50 mg/kg and CAR doses of 0-150 mg/kg. The temperature index (TI), an integration of the change in T(c) over a 12-h period, was quantified. Effects of mixtures of CHP and CAR in 2:1 and 1:1 ratios on the TI were examined and the data analyzed using a statistical model designed to assess significant departures from additivity for chemical mixtures. CHP and CAR elicited a marked hypothermia and dose-related decrease in the TI. The TI response to a 2:1 ratio of CHP:CAR was significantly less than that predicted by additivity. The TI response to a 1:1 ratio of CHP and CAR was not significantly different from the predicted additivity. Plasma and brain ChE activity were measured 4 h after dosing with CHP, CAR, and mixtures in separate groups of rats. There was a dose-additive interaction for the inhibition of brain ChE for the 2:1 ratio, but an antagonistic effect for the 1:1 ratio. The 2:1 and 1:1 mixtures had an antagonistic interaction on plasma ChE. Overall, the departures from additivity for the physiological (i.e., temperature) and biochemical (i.e., ChE inhibition) endpoints for the 2:1 and 1:1 mixture studies did not coincide as expected. An interaction between CHP and CAR appears to depend on the ratio of compounds in the mixture as well as the biological endpoint. PMID:16182429
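    Dose additivity, the null hypothesis being tested here, is commonly formalized via relative potency (Loewe additivity): a mixture behaves like an equivalent dose of one chemical alone. The sketch below uses hypothetical Hill dose-response parameters, not the paper's fitted statistical model.

```python
# Loewe dose-additivity sketch. Under dose addition, a mixture (d1, d2)
# acts like dose d1 + rho*d2 of chemical 1, where rho is the relative
# potency of chemical 2. All parameter values here are hypothetical.

def hill(d, emax, ed50, n):
    """Simple Hill dose-response curve."""
    return emax * d**n / (ed50**n + d**n)

def predicted_additive_effect(d1, d2, ed50_1, ed50_2, emax=1.0, n=2.0):
    rho = ed50_1 / ed50_2          # relative potency of chemical 2
    return hill(d1 + rho * d2, emax, ed50_1, n)

# Hypothetical ED50s: chemical 1 = 25 mg/kg, chemical 2 = 75 mg/kg.
# The mixture below is dose-equivalent to 25 mg/kg of chemical 1,
# so the additive prediction is the half-maximal effect (~0.5).
pred = predicted_additive_effect(d1=12.5, d2=37.5, ed50_1=25.0, ed50_2=75.0)
```

A measured mixture response significantly below `pred` would indicate antagonism, and one above it synergy, which is the kind of departure-from-additivity comparison the abstract describes.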

  7. The effects of material property assumptions on predicted meltpool shape for laser powder bed fusion based additive manufacturing

    NASA Astrophysics Data System (ADS)

    Teng, Chong; Ashby, Kathryn; Phan, Nam; Pal, Deepankar; Stucker, Brent

    2016-08-01

    The objective of this study was to provide guidance on material specifications for powders used in laser powder bed fusion based additive manufacturing (AM) processes. The methodology was to investigate how different material property assumptions in a simulation affect meltpool prediction and, by corollary, how different material properties affect meltpool formation in AM processes. The sensitivity of meltpool variations to each material property can be used as a guide to help drive future research and to help prioritize material specifications in requirements documents. By identifying which material properties have the greatest effect on outcomes, metrology can be tailored to focus on those properties which matter most, thus reducing costs by eliminating unnecessary testing and property characterizations. Furthermore, this sensitivity study provides insight into which properties require more accurate measurements, thus motivating development of new metrology methods to measure those properties accurately.

  8. False assumptions.

    PubMed

    Swaminathan, M

    1997-01-01

    Indian women do not have to be told the benefits of breast feeding or "rescued from the clutches of wicked multinational companies" by international agencies. There is no proof that breast feeding has declined in India; in fact, a 1987 survey revealed that 98% of Indian women breast feed. Efforts to promote breast feeding among the middle classes rely on such initiatives as the "baby friendly" hospital where breast feeding is promoted immediately after birth. This ignores the 76% of Indian women who give birth at home. Blaming this unproved decline in breast feeding on multinational companies distracts attention from more far-reaching and intractable effects of social change. While the Infant Milk Substitutes Act is helpful, it also deflects attention from more pressing issues. Another false assumption is that Indian women are abandoning breast feeding to comply with the demands of employment, but research indicates that most women give up employment for breast feeding, despite the economic cost to their families. Women also seek work in the informal sector to secure the flexibility to meet their child care responsibilities. Instead of being concerned about "teaching" women what they already know about the benefits of breast feeding, efforts should be made to remove the constraints women face as a result of their multiple roles and to empower them with the support of families, governmental policies and legislation, employers, health professionals, and the media. PMID:12321627

  9. Sensitivity Analysis Without Assumptions

    PubMed Central

    VanderWeele, Tyler J.

    2016-01-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder. PMID:26841057
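    The bounding factor described in this abstract is simple to compute. The formula below follows the published result, BF = RR_EU·RR_UD/(RR_EU + RR_UD − 1), where RR_EU is the exposure-confounder and RR_UD the confounder-outcome relative risk; the numeric inputs are illustrative only.

```python
import math

# Bounding factor for unmeasured confounding (VanderWeele's result):
# confounding of the stated strengths can shift an observed risk ratio
# by at most BF = rr_eu * rr_ud / (rr_eu + rr_ud - 1).

def bounding_factor(rr_eu, rr_ud):
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def adjusted_rr(observed_rr, rr_eu, rr_ud):
    """Smallest true risk ratio consistent with the observed estimate."""
    return observed_rr / bounding_factor(rr_eu, rr_ud)

def e_value(observed_rr):
    """Minimum strength both confounding associations must jointly reach
    (on the risk-ratio scale) to fully explain away observed_rr."""
    return observed_rr + math.sqrt(observed_rr * (observed_rr - 1.0))

# Illustrative inputs: both confounding relative risks equal to 2
print(bounding_factor(2.0, 2.0))   # ≈ 1.333
print(adjusted_rr(1.8, 2.0, 2.0))  # ≈ 1.35
print(e_value(1.8))                # ≈ 3.0
```

Only two sensitivity parameters are needed, matching the abstract's claim; the last function shows the "high threshold that the maximum of these relative risks must satisfy".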

  10. Rearchitecting IT: Simplify. Simplify

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2006-01-01

    Simplifying and securing an IT infrastructure is not easy. It frequently requires rethinking years of hardware and software investments, and a gradual migration to modern systems. Even so, writes the author, universities can take six practical steps to success: (1) Audit software infrastructure; (2) Evaluate current applications; (3) Centralize…

  11. Simplifying Blepharoplasty.

    PubMed

    Zoumalan, Christopher I; Roostaeian, Jason

    2016-01-01

    Blepharoplasty remains one of the most common aesthetic procedures performed today. Its popularity stems partly from the ability to consistently make significant improvements in facial aesthetics with a relatively short operation that carries an acceptable risk profile. In this article, the authors attempt to simplify the approach to both upper and lower lid blepharoplasty and provide an algorithm based on the individual findings for any given patient. The recent trend with both upper and lower lid blepharoplasty has been toward greater volume preservation and at times volume augmentation. A simplified approach to upper lid blepharoplasty focuses on removal of excess skin and judicious removal of periorbital fat. Avoidance of a hollow upper sulcus has been emphasized and the addition of volume with either fat grafting or fillers can be considered. Lower lid blepharoplasty can use a transcutaneous or a transconjunctival approach to address herniated fat pads while blending the lid-cheek junction through release of the orbitomalar ligament and volume augmentation with fat (by repositioning and/or grafting) or injectable fillers. Complications with upper lid blepharoplasty are typically minimal, particularly with conservative skin removal and volume preservation techniques. Lower lid blepharoplasty, conversely, can lead to more serious complications, including lid malposition, and therefore should be approached with great caution. Nevertheless, through an algorithmic approach that meets the needs of each individual patient, the approach to blepharoplasty may be simplified with consistent and predictable results. PMID:26710052

  12. Simplified Vicarious Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Stanley, Thomas; Ryan, Robert; Holekamp, Kara; Pagnutti, Mary

    2010-01-01

    ground target areas having different reflectance values. The target areas can be natural or artificial and must be large enough to minimize adjacent-pixel contamination effects. The radiative coupling between the atmosphere and the terrain needs to be approximately the same for the two targets. This condition can be met for relatively uniform backgrounds when the distance between the targets is within a few hundred meters. For each target area, the radiance leaving the ground in the direction of the satellite is measured with a radiometrically calibrated spectroradiometer. Using the radiance measurements from the two targets, atmospheric adjacency and atmospheric scattering effects can be subtracted, thereby eliminating many assumptions about the atmosphere and the radiative interaction between the atmosphere and the terrain. In addition, the radiometrically calibrated spectroradiometer can be used with a known reflectance target to estimate atmospheric transmission and diffuse-to-global ratios without the need for ancillary sun photometers. Several comparisons between the simplified method and traditional techniques were found to agree within a few percent. Hence, the simplified method reduces the overall complexity of performing vicarious calibrations and can serve as a method for validating traditional radiative transfer models.
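    The two-target differencing idea can be sketched with a toy linear at-sensor radiance model, L = A·rho + L_path, in which both targets share the same atmospheric terms. The model form and every number below are hypothetical, chosen only to show how the shared path-radiance term cancels.

```python
# Toy two-target sketch: with a shared atmosphere, differencing the two
# radiance measurements cancels the path-radiance term, so no separate
# atmospheric characterization is needed. All numbers are hypothetical.

rho1, rho2 = 0.45, 0.05            # reflectances of the two ground targets
A_true, L_path_true = 120.0, 15.0  # combined gain term and path radiance

# Forward-simulate the two measurements (same atmosphere for both targets)
L1 = A_true * rho1 + L_path_true
L2 = A_true * rho2 + L_path_true

# Difference the targets: L1 - L2 = A*(rho1 - rho2), path radiance drops out
A = (L1 - L2) / (rho1 - rho2)
L_path = L1 - A * rho1
```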

  13. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  14. The assumptions of computing

    SciTech Connect

    Huggins, J.K.

    1994-12-31

    The use of computers, like any technological activity, is not content-neutral. Users of computers constantly interact with assumptions regarding worthwhile activity which are embedded in any computing system. Directly questioning these assumptions in the context of computing allows us to develop an understanding of responsible computing.

  15. Adult Learning Assumptions

    ERIC Educational Resources Information Center

    Baskas, Richard S.

    2011-01-01

    The purpose of this study is to examine Knowles' theory of andragogy and his six assumptions of how adults learn while providing evidence to support two of his assumptions based on the theory of andragogy. As no single theory explains how adults learn, it can best be assumed that adults learn through the accumulation of formal and informal…

  16. Mathematical models of Ebola-Consequences of underlying assumptions.

    PubMed

    Feng, Zhilan; Zheng, Yiqiang; Hernandez-Ceron, Nancy; Zhao, Henry; Glasser, John W; Hill, Andrew N

    2016-07-01

    Mathematical models have been used to study Ebola disease transmission dynamics and control for the recent epidemics in West Africa. Many of the models used in these studies are based on the model of Legrand et al. (2007), and most failed to accurately project the outbreak's course (Butler, 2014). Although there could be many reasons for this, including incomplete and unreliable data on Ebola epidemiology and lack of empirical data on how disease-control measures quantitatively affect Ebola transmission, we examine the underlying assumptions of the Legrand model, and provide alternate formulations that are simpler and provide additional information regarding the epidemiology of Ebola during an outbreak. We developed three models with different assumptions about disease stage durations, one of which simplifies to the Legrand model while the others have more realistic distributions. Control and basic reproduction numbers for all three models are derived and shown to provide threshold conditions for outbreak control and prevention. PMID:27130854

  17. Testing Our Fundamental Assumptions

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-06-01

    fundamental assumptions. A recent focus set in the Astrophysical Journal Letters, titled Focus on Exploring Fundamental Physics with Extragalactic Transients, consists of multiple published studies doing just that. Testing General Relativity: Several of the articles focus on the 4th point above. By assuming that the delay in photon arrival times is only due to the gravitational potential of the Milky Way, these studies set constraints on the deviation of our galaxy's gravitational potential from what GR would predict. The study by He Gao et al. uses the different photon arrival times from gamma-ray bursts to set constraints at eV-GeV energies, and the study by Jun-Jie Wei et al. complements this by setting constraints at keV-TeV energies using photons from high-energy blazar emission. Photons or neutrinos from different extragalactic transients each set different upper limits on delta gamma, the post-Newtonian parameter, vs. particle energy or frequency. This is a test of Einstein's equivalence principle: if the principle is correct, delta gamma would be exactly zero, meaning that photons of different energies move at the same velocity through a vacuum. [Tingay & Kaplan 2016] S.J. Tingay & D.L. Kaplan make the case that measuring the time delay of photons from fast radio bursts (FRBs; transient radio pulses that last only a few milliseconds) will provide even tighter constraints if we are able to accurately determine distances to these FRBs. And Adi Musser argues that the large-scale structure of the universe plays an even greater role than the Milky Way's gravitational potential, allowing for even stricter testing of Einstein's equivalence principle. The ever-narrower constraints from these studies all support GR as a correct set of rules through which to interpret our universe. Other Tests of Fundamental Physics: In addition to the above tests, Xue-Feng Wu et al. show that FRBs can be used to provide severe constraints on the rest mass of the photon, and S. Croft et al. even touches on what we


  19. Teaching Practices: Reexamining Assumptions.

    ERIC Educational Resources Information Center

    Spodek, Bernard, Ed.

    This publication contains eight papers, selected from papers presented at the Bicentennial Conference on Early Childhood Education, that discuss different aspects of teaching practices. The first two chapters reexamine basic assumptions underlying the organization of curriculum experiences for young children. Chapter 3 discusses the need to…

  20. Neuron Model with Simplified Memristive Ionic Channels

    NASA Astrophysics Data System (ADS)

    Hegab, Almoatazbellah M.; Salem, Noha M.; Radwan, Ahmed G.; Chua, Leon

    2015-06-01

    A simplified neuron model is introduced to mimic the action potential generated by the famous Hodgkin-Huxley equations by using the genetic optimization algorithm. Comparison with different neuron models is investigated, and it is confirmed that the sodium and potassium channels in our simplified neuron model are made out of memristors. In addition, the channel equations in the simplified model may be adjusted to introduce a simplified memristor model that is in accordance with the theoretical conditions of the memristive systems.

  1. A simplified determination of total concentrations of Ca, Fe, Mg and Mn in addition to their bioaccessible fraction in popular instant coffee brews.

    PubMed

    Stelmach, Ewelina; Szymczycha-Madeja, Anna; Pohl, Pawel

    2016-04-15

    A direct analysis of instant coffee brews with HR-CS-FAAS spectrometry to determine the total Ca, Fe, Mg and Mn content has been developed and validated. The proposed method is simple and fast and delivers good analytical performance: its accuracy is within -3% to 3%, its precision 2-3%, and its detection limits 0.03, 0.04, 0.004 and 0.01 mg l(-1) for Ca, Fe, Mg and Mn, respectively. In addition, Ca, Fe, Mg and Mn bioaccessibility in instant coffee brews was measured by means of in vitro gastrointestinal digestion with the use of simulated gastric and intestinal juice solutions. Absorption of metals in intestinal villi was simulated by means of ultrafiltration over a semi-permeable membrane with a molecular weight cut-off of 5 kDa. Ca, Fe, Mg and Mn concentrations in permeates of instant coffee gastrointestinal incubates were measured with HR-CS-FAAS spectrometry. PMID:26616965

  2. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    NASA Astrophysics Data System (ADS)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is need for a concurrent openness and transparency in communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes, reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publically available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of

  3. Bounds on the microanalyzer array assumption

    NASA Astrophysics Data System (ADS)

    Vaughn, Israel J.; Alenin, Andrey S.; Tyo, J. Scott

    2016-05-01

    Micropolarizer arrays are occasionally used in partial Stokes, full Stokes, and Mueller matrix polarimeters. When treating modulated polarimeters as linear systems, specific assumptions are made about the Dirac delta functional forms generated in the channel space by micropolarizer arrays. These assumptions are 1) infinitely fine sampling both spatially and temporally and 2) infinite array sizes. When these assumptions are lifted and the physical channel shapes are computed, channel shapes become dependent on both the physical pixel area and shape, as well as the array size. We show that under certain circumstances the Dirac delta function approximation is not valid, and give some bounding terms to compute when the approximation is valid, i.e., which array and pixel sizes must be used for the Dirac delta function approximation to hold. Additionally, we show how the physical channel shape changes as a function of array and pixel size, for a conventional 0°, 45°, -45°, 90° superpixel micropolarizer array configuration.

  4. Sampling Assumptions in Inductive Generalization

    ERIC Educational Resources Information Center

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated. Previous…

  5. Stealth Supersymmetry simplified

    NASA Astrophysics Data System (ADS)

    Fan, JiJi; Krall, Rebecca; Pinner, David; Reece, Matthew; Ruderman, Joshua T.

    2016-07-01

    In Stealth Supersymmetry, bounds on superpartners from direct searches can be notably weaker than in standard supersymmetric scenarios, due to suppressed missing energy. We present a set of simplified models of Stealth Supersymmetry that motivate 13 TeV LHC searches. We focus on simplified models within the Natural Supersymmetry framework, in which the gluino, stop, and Higgsino are assumed to be lighter than other superpartners. Our simplified models exhibit novel decay patterns that differ significantly from topologies of the Minimal Supersymmetric Standard Model, with and without R-parity. We determine limits on stops and gluinos from searches at the 8 TeV LHC. Existing searches constitute a powerful probe of Stealth Supersymmetry gluinos with certain topologies. However, we identify simplified models where the gluino can be considerably lighter than 1 TeV. Stops are significantly less constrained in Stealth Supersymmetry than the MSSM, and we have identified novel stop decay topologies that are completely unconstrained by existing LHC searches.

  6. Learning Assumptions for Compositional Verification

    NASA Technical Reports Server (NTRS)

    Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.

  7. Simplified analysis of a generalized bias test for fabrics with two families of inextensible fibres

    NASA Astrophysics Data System (ADS)

    Cuomo, M.; dell'Isola, F.; Greco, L.

    2016-06-01

    Two tests for woven fabrics with orthogonal fibres are examined using simplified kinematic assumptions. The aim is to analyse how different constitutive assumptions may affect the response of the specimen. The fibres are considered inextensible, and the kinematics of 2D continua with inextensible chords due to Rivlin is adopted. In addition to two forms of strain energy depending on the shear deformation, two forms of energy depending on the gradient of shear are also examined. It is shown that this energy can account for the bending of the fibres. In addition to the standard bias extension (BE) test, a modified test has been examined, in which the head of the specimen is rotated rather than translated. In this case more bending occurs, so that the results of the simulations carried out with the different energy models differ more than what was found for the BE test.

  8. Can Computers Simplify Admissions?

    ERIC Educational Resources Information Center

    Bruker, Robert M.

    1978-01-01

    Based on experience with a simplified admissions concept, Southern Illinois University is satisfied that the admissions process has been made easier for prospective students, high school counselors, and admissions staff. The computer does not make decisions regarding admission of a student, but it reduces workloads for everyone concerned. (Author)

  9. Faulty assumptions for repository requirements

    SciTech Connect

    Sutcliffe, W G

    1999-06-03

    Long term performance requirements for a geologic repository for spent nuclear fuel and high-level waste are based on assumptions concerning water use and subsequent deaths from cancer due to ingesting water contaminated with radioisotopes ten thousand years in the future. This paper argues that the assumptions underlying these requirements are faulty for a number of reasons. First, in light of the inevitable technological progress, including efficient desalination of water, over the next ten thousand years, it is inconceivable that a future society would drill for water near a repository. Second, even today we would not use water without testing its purity. Third, today many types of cancer are curable, and with the rapid progress in medical technology in general, and the prevention and treatment of cancer in particular, it is improbable that cancer caused by ingesting contaminated water will be a significant killer in the far future. This paper reviews the performance requirements for geological repositories and comments on the difficulties in proving compliance in the face of inherent uncertainties. The already tiny long-term risk posed by a geologic repository is presented and contrasted with contemporary everyday risks. A number of examples of technological progress, including cancer treatments, are advanced. The real and significant costs resulting from the overly conservative requirements are then assessed. Examples are given of how money (and political capital) could be put to much better use to save lives today and in the future. It is concluded that although a repository represents essentially no long-term risk, monitored retrievable dry storage (above or below ground) is the current best alternative for spent fuel and high-level nuclear waste.

  10. Assumptions of the QALY procedure.

    PubMed

    Carr-Hill, R A

    1989-01-01

    The Quality Adjusted Life Year (QALY) has been proposed as a useful index for those managing the provision of health care because it enables the decision-maker to compare the 'value' of different health care programmes and in a way which, potentially at least, reflects social preferences about the appropriate pattern of provision. The index depends on a combination of a measure of morbidity and the risk of mortality. Methodological debate has tended to concentrate on the technicalities of producing a scale of health; and philosophical argument has concentrated on the ethics of interpersonal comparison. There is little recognition of the fragility of the theoretical assumptions underpinning the proposed combination of morbidity and risk of mortality. The context in which the proposed indices are being developed is examined in Section 2. Whilst most working in the field of health measurement eschew over-simplification, it is clear that the application of micro-economics to management is greatly facilitated if a single index can be agreed. The various approaches to combining morbidity and mortality are described in Section 3. The crucial assumptions concern the measurement and valuation of morbidity; the procedures used for scaling morbidity with mortality; and the role of risk. The nature of the valuations involved is examined in Section 4. It seems unlikely that they could ever be widely acceptable; the combination with death and perfect health poses particular problems; and aggregation across individuals compounds the problem. There are also several technical difficulties of scaling and of allowing for risk which have been discussed elsewhere and so are only considered briefly in Section 5 of this paper.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2762872
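    The combination of morbidity and mortality that the article scrutinizes is, in its simplest form, a weighted sum. The sketch below illustrates that basic arithmetic only; the numbers and weights are purely illustrative, and it is precisely the assumptions behind such quality weights that the article calls into question.

```python
def qalys(intervals):
    """QALY computation in its simplest form: sum of (years * quality
    weight), where weight 1.0 denotes perfect health and 0.0 death.
    Illustrative only; real weights come from contested valuation
    procedures (time trade-off, standard gamble, etc.)."""
    return sum(years * weight for years, weight in intervals)

# Example: 5 years at quality weight 0.8, then 2 years at 0.5
total = qalys([(5, 0.8), (2, 0.5)])
```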

  11. Simplified Parallel Domain Traversal

    SciTech Connect

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO2 and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.
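    The MapReduce-style pattern the abstract invokes can be sketched in a few lines. This is an assumed single-process illustration of the map/shuffle/reduce shape only, not DStep itself (which runs asynchronously over MPI on HPC platforms); the block names and data are hypothetical.

```python
from collections import defaultdict

def mapreduce_traversal(blocks, map_fn, reduce_fn):
    """Minimal MapReduce-flavoured domain traversal (illustrative sketch).
    Each domain block is mapped to (key, value) pairs, the pairs are
    shuffled (grouped) by key, then each group is reduced."""
    shuffled = defaultdict(list)
    for block in blocks:                  # "map" over local domain blocks
        for key, value in map_fn(block):
            shuffled[key].append(value)   # "shuffle": group values by key
    return {k: reduce_fn(vs) for k, vs in shuffled.items()}  # "reduce"

# Hypothetical example: per-variable mean across distributed blocks
blocks = [{"co2": [1.0, 2.0]}, {"co2": [3.0], "temp": [290.0]}]
means = mapreduce_traversal(
    blocks,
    map_fn=lambda b: [(k, sum(v) / len(v)) for k, v in b.items()],
    reduce_fn=lambda vs: sum(vs) / len(vs),
)
```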

  12. Investigations in a Simplified Bracketed Grid Approach to Metrical Structure

    ERIC Educational Resources Information Center

    Liu, Patrick Pei

    2010-01-01

    In this dissertation, I examine the fundamental mechanisms and assumptions of the Simplified Bracketed Grid Theory (Idsardi 1992) in two ways: first, by comparing it with Parametric Metrical Theory (Hayes 1995), and second, by implementing it in the analysis of several case studies in stress assignment and syllabification. Throughout these…

  13. 75 FR 81459 - Simplified Proceedings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... Simplified Proceedings in certain civil penalty proceedings. 75 FR 28223. The Commission explained that since... simplify the procedures for handling certain civil penalty proceedings. DATES: The final rule takes effect... to deal with that burgeoning caseload, the Commission is considering methods to simplify...

  14. Microbial life detection with minimal assumptions

    NASA Astrophysics Data System (ADS)

    Kounaves, Samuel P.; Noll, Rebecca A.; Buehler, Martin G.; Hecht, Michael H.; Lankford, Kurt; West, Steven J.

    2002-02-01

    To produce definitive and unambiguous results, any life detection experiment must make minimal assumptions about the nature of extraterrestrial life. The only criterion that fits this definition is the ability to reproduce and in the process create a disequilibrium in the chemical and redox environment. The Life Detection Array (LIDA), an instrument proposed for the 2007 NASA Mars Scout Mission, and in the future for the Jovian moons, enables such an experiment. LIDA responds to minute biogenic chemical and physical changes in two identical 'growth' chambers. The sensitivity is provided by two differentially monitored electrochemical sensor arrays. Growth in one of the chambers alters the chemistry and ionic properties and results in a signal. This life detection system makes minimal assumptions: that after addition of water the microorganism replicates and in the process produces small changes in its immediate surroundings by consuming, metabolizing, and excreting a number of molecules and/or ionic species. The experiment begins by placing a homogenized split-sample of soil or water into each chamber, adding water if soil, sterilizing via high temperature, and equilibrating. In the absence of any microorganism in either chamber, no signal will be detected. The inoculation of one chamber with even a few microorganisms that reproduce will create a sufficient disequilibrium in the system (compared to the control) to be detectable. Replication of the experiment and positive results would lead to a definitive conclusion of biologically induced changes. The split sample and the nanogram inoculation eliminates chemistry as a causal agent.

  15. Scenarios Based on Shared Socioeconomic Pathway Assumptions

    NASA Astrophysics Data System (ADS)

    Edmonds, J.

    2013-12-01

    scenario with at least 8.5 Wm-2. To address this problem each SSP scenario can be treated as a reference scenario, to which emissions mitigation policies can be applied to create a set of RCP replications. These RCP replications have the underlying SSP socio-economic assumptions in addition to policy assumptions and radiative forcing levels consistent with the CMIP5 products. We report quantitative results of initial experiments from the five participating groups.

  16. A simplified model for glass formation

    NASA Technical Reports Server (NTRS)

    Uhlmann, D. R.; Onorato, P. I. K.; Scherer, G. W.

    1979-01-01

    A simplified model of glass formation based on the formal theory of transformation kinetics is presented, which describes the critical cooling rates implied by the occurrence of glassy or partly crystalline bodies. In addition, an approach based on the nose of the time-temperature-transformation (TTT) curve as an extremum in temperature and time has provided a relatively simple relation between the activation energy for viscous flow in the undercooled region and the temperature of the nose of the TTT curve. Using this relation together with the simplified model, it now seems possible to predict cooling rates using only the liquidus temperature, glass transition temperature, and heat of fusion.
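    The nose-method relation summarized above admits a one-line estimate: the critical cooling rate to bypass crystallization is approximated by the undercooling at the nose of the TTT curve divided by the time of the nose. The sketch below uses this standard approximation with purely illustrative numbers, not values from the paper.

```python
def critical_cooling_rate(t_liquidus_K, t_nose_K, time_nose_s):
    """Nose-method estimate of the critical cooling rate for glass
    formation: (dT/dt)_c ~ (T_liquidus - T_nose) / t_nose, where the
    nose of the time-temperature-transformation (TTT) curve sits at
    temperature T_nose, reached after time t_nose. Illustrative sketch."""
    return (t_liquidus_K - t_nose_K) / time_nose_s

# Hypothetical melt: liquidus 1500 K, TTT nose at 1100 K after 2 s
rate = critical_cooling_rate(1500.0, 1100.0, 2.0)  # K/s
```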

  17. A discussion of assumptions and solution approaches of infiltration into a cracked soil

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A model for predicting rain infiltration into a swelling/shrinking/cracking soil was proposed (Römkens, M.J.M., and S. N. Prasad., 2006, Agricultural Water Management. 86:196-205). Several simplifying assumptions were made. The model consists of a two-component process of Darcian matrix flow and Hor...

  18. Assumptions to the Annual Energy Outlook

    EIA Publications

    2015-01-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.

  19. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... modification of the economic assumptions concerning salary and wage growth to take into account the combined... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405...

  20. The Assumptive Worlds of Fledgling Administrators.

    ERIC Educational Resources Information Center

    Marshall, Catherine; Mitchell, Barbara A.

    1991-01-01

    Studies school-site administrators' understanding about ways of gaining/maintaining power, control, and predictability. Multisite study data concerning assistant principals identify rules of the game for four micropolitical (site-level assumptive world) domains. Assumptive worlds create avoidance of value conflicts and risky change, group-think…

  1. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Methodological assumptions. 436.14 Section 436.14 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall discount to present values the...

  2. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Methodological assumptions. 436.14 Section 436.14 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall discount to present values the...

  3. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic...

  4. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic...

  5. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic...

  6. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic...

  7. Teaching Critical Thinking by Examining Assumptions

    ERIC Educational Resources Information Center

    Yanchar, Stephen C.; Slife, Brent D.

    2004-01-01

    We describe how instructors can integrate the critical thinking skill of examining theoretical assumptions (e.g., determinism and materialism) and implications into psychology courses. In this instructional approach, students formulate questions that help them identify assumptions and implications, use those questions to identify and examine the…

  8. Assessment of calibration assumptions under strong climate changes

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-02-01

    Climate model calibration relies on different working hypotheses. The simplest bias correction or delta change methods assume the invariance of bias under climate change. Recent works have questioned this hypothesis and proposed linear bias changes with respect to the forcing. However, when the system experiences larger forcings, these schemes could fail. Calibration assumptions are tested within a simplified framework in the context of an intermediate complexity model for which the reference (or "reality") differs from the model by a single parametric model error and climate change is emulated by largely different CO2 forcings. It appears that calibration does not add value, since the variation of bias under climate change is nonmonotonic for almost all variables and large compared to the climate change and the bias, except for the global temperature and sea ice area. For precipitation, calibration provides added value both globally and regionally. The calibration methods used fail to correct climate variability.
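    The two simplest schemes the abstract names, delta change and mean bias correction, both rest on the bias-invariance assumption under test. A minimal sketch of each, assuming simple mean shifts and illustrative numbers (real schemes often also adjust variance or quantiles):

```python
import statistics

def delta_change(obs_present, model_present, model_future):
    """Delta-change sketch: add the model-projected mean change to the
    observations. Implicitly assumes the model bias is invariant under
    climate change, the hypothesis questioned above."""
    delta = statistics.mean(model_future) - statistics.mean(model_present)
    return [x + delta for x in obs_present]

def bias_correction(model_future, obs_present, model_present):
    """Mean bias correction sketch: shift the future model output by the
    present-day model-minus-observation bias. Same invariance assumption."""
    bias = statistics.mean(model_present) - statistics.mean(obs_present)
    return [x - bias for x in model_future]
```

For a pure mean shift the two schemes agree; they diverge once higher moments of the distribution change, which is one way the invariance assumption can fail under strong forcing.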

  9. Further evidence for the EPNT assumption

    NASA Technical Reports Server (NTRS)

    Greenberger, Daniel M.; Bernstein, Herbert J.; Horne, Michael; Zeilinger, Anton

    1994-01-01

    We recently proved a theorem extending the Greenberger-Horne-Zeilinger (GHZ) Theorem from multi-particle systems to two-particle systems. This proof depended upon an auxiliary assumption, the EPNT assumption (Emptiness of Paths Not Taken). According to this assumption, if there exists an Einstein-Rosen-Podolsky (EPR) element of reality that determines that a path is empty, then there can be no entity associated with the wave that travels this path (pilot-waves, empty waves, etc.) and reports information to the amplitude, when the paths recombine. We produce some further evidence in support of this assumption, which is certainly true in quantum theory. The alternative is that such a pilot-wave theory would have to violate EPR locality.

  10. Critical Thinking: Distinguishing between Inferences and Assumptions.

    ERIC Educational Resources Information Center

    Elder, Linda; Paul, Richard

    2002-01-01

    Outlines the differences between inferences and assumptions in critical thinking processes. Explains that as students develop critical intuitions, they increasingly notice how their point of view shapes their experiences. (AUTH/NB)

  11. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...

  12. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...

  13. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...

  14. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...

  15. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...

  16. Revisiting the Simplified Bernoulli Equation

    PubMed Central

    Heys, Jeffrey J; Holyoak, Nicole; Calleja, Anna M; Belohlavek, Marek; Chaliki, Hari P

    2010-01-01

    Background: The assessment of the severity of aortic valve stenosis is done by either invasive catheterization or non-invasive Doppler Echocardiography in conjunction with the simplified Bernoulli equation. The catheter measurement is generally considered more accurate, but the procedure is also more likely to have dangerous complications. Objective: The focus here is on examining computational fluid dynamics as an alternative method for analyzing the echo data and determining whether it can provide results similar to the catheter measurement. Methods: An in vitro heart model with a rigid orifice is used as a first step in comparing echocardiographic data, which uses the simplified Bernoulli equation, catheterization, and echocardiographic data, which uses computational fluid dynamics (i.e., the Navier-Stokes equations). Results: For a 0.93 cm2 orifice, the maximum pressure gradient predicted by either the simplified Bernoulli equation or computational fluid dynamics was not significantly different from the experimental catheter measurement (p > 0.01). For a smaller 0.52 cm2 orifice, there was a small but significant difference (p < 0.01) between the simplified Bernoulli equation and the computational fluid dynamics simulation, with the computational fluid dynamics simulation giving better agreement with experimental data for some turbulence models. Conclusion: For this simplified, in vitro system, the use of computational fluid dynamics provides an improvement over the simplified Bernoulli equation with the biggest improvement being seen at higher valvular stenosis levels. PMID:21625471
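    The simplified Bernoulli equation used with the Doppler data above is the standard clinical rule that the peak pressure gradient (in mmHg) is approximately four times the square of the peak jet velocity (in m/s), obtained by keeping only the convective term. A minimal sketch:

```python
def simplified_bernoulli(v_jet_m_per_s):
    """Simplified Bernoulli equation of Doppler echocardiography:
    peak pressure gradient (mmHg) ~= 4 * v^2, with v the peak jet
    velocity in m/s. Viscous and unsteady (acceleration) terms are
    neglected, which is the simplification examined in the study above."""
    return 4.0 * v_jet_m_per_s ** 2

# e.g. a 4 m/s stenotic jet implies a peak gradient of about 64 mmHg
gradient_mmHg = simplified_bernoulli(4.0)
```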

  17. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.

    2015-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.

  18. Assessing Statistical Model Assumptions under Climate Change

    NASA Astrophysics Data System (ADS)

    Varotsos, Konstantinos V.; Giannakopoulos, Christos; Tombrou, Maria

    2016-04-01

    The majority of studies assess climate change impacts on air quality using chemical transport models coupled to climate ones in an off-line mode, for various horizontal resolutions and different present and future time slices. A complementary approach is based on present-day empirical relations between air pollutants and various meteorological variables, which are then extrapolated to the future. However, the extrapolation relies on various assumptions, such as that these relationships will retain their main characteristics in the future. In this study we focus on the ozone-temperature relationship. It is well known that among a number of meteorological variables, temperature is found to exhibit the highest correlation with ozone concentrations. This has led, in the past years, to the development and application of statistical models with which the potential impact of increasing future temperatures on various ozone statistical targets was examined. To examine whether the ozone-temperature relationship retains its main characteristics under warmer temperatures we analyze the relationship during the heatwave events of 2003 and 2006 in Europe. More specifically, we use available gridded daily maximum temperatures (E-OBS) and hourly ozone observations from different non-urban stations (EMEP) within the areas that were impacted by the two heatwave events. In addition, we compare the temperature distributions of the two events with temperatures from two different future time periods, 2021-2050 and 2071-2100, from a number of regional climate models developed under the framework of the Cordex initiative (http://www.cordex.org) with a horizontal resolution of 12 x 12 km, based on different IPCC RCPs emissions scenarios. A statistical analysis is performed on the ozone-temperature relationship for each station and for the two aforementioned years, which are then compared against the ozone-temperature relationships obtained from the rest of the available data series.
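    The ozone-temperature relationship examined above is commonly summarized by a least-squares slope (ppb of ozone per degree of warming). The sketch below computes that slope in pure Python; the data are illustrative, not the study's observations.

```python
def ozone_temp_slope(temps_C, ozone_ppb):
    """Least-squares slope of ozone against daily maximum temperature,
    the kind of empirical statistic whose stationarity under warming is
    questioned above. Returns ppb per degree C. Illustrative sketch."""
    n = len(temps_C)
    mean_t = sum(temps_C) / n
    mean_o = sum(ozone_ppb) / n
    cov = sum((t - mean_t) * (o - mean_o)
              for t, o in zip(temps_C, ozone_ppb))
    var = sum((t - mean_t) ** 2 for t in temps_C)
    return cov / var

# Hypothetical station data: ozone rising 2 ppb per degree
slope = ozone_temp_slope([20.0, 25.0, 30.0], [40.0, 50.0, 60.0])
```

Comparing such slopes fitted on ordinary summers against heatwave summers (2003, 2006) is one direct way to test whether the relationship "retains its main characteristics" at the warmer temperatures the regional climate models project.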

  19. Publish unexpected results that conflict with assumptions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Some widely held scientific assumptions have been discredited, whereas others are just inappropriate for many applications. Sometimes, a widely-held analysis procedure takes on a life of its own, forgetting the original purpose of the analysis. The peer-reviewed system makes it difficult to get a pa...

  20. Parenting the Musically Gifted: Assumptions and Issues.

    ERIC Educational Resources Information Center

    Flohr, John W.

    1987-01-01

    Commonly held assumptions about musical giftedness in children are disputed. Several issues are examined, including how musical giftedness is defined, the availability of community resources, parental encouragement versus pressure, and potential emotional and behavioral problems. Some suggestions useful for the parents of musically gifted children…

  1. Artificial Intelligence: Underlying Assumptions and Basic Objectives.

    ERIC Educational Resources Information Center

    Cercone, Nick; McCalla, Gordon

    1984-01-01

    Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…

  2. Assumptions of Multiple Regression: Correcting Two Misconceptions

    ERIC Educational Resources Information Center

    Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason

    2013-01-01

    In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…

  3. Classroom Instruction: Background, Assumptions, and Challenges

    ERIC Educational Resources Information Center

    Wolery, Mark; Hemmeter, Mary Louise

    2011-01-01

    In this article, the authors focus on issues of instruction in classrooms. Initially, a brief definitional and historic section is presented. This is followed by a discussion of four assumptions about the current state of affairs: (a) evidence-based practices should be identified and used, (b) children's phase of performance should dictate…

  4. 24 CFR 58.4 - Assumption authority.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., decision-making, and action that would otherwise apply to HUD under NEPA and other provisions of law that... environmental review, decision-making and action for programs authorized by the Native American Housing... separate decision regarding assumption of responsibilities for each of these Acts and communicate...

  5. Causal Mediation Analysis: Warning! Assumptions Ahead

    ERIC Educational Resources Information Center

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  6. Extracurricular Business Planning Competitions: Challenging the Assumptions

    ERIC Educational Resources Information Center

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  7. Culturally Biased Assumptions in Counseling Psychology

    ERIC Educational Resources Information Center

    Pedersen, Paul B.

    2003-01-01

    Eight clusters of culturally biased assumptions are identified for further discussion from Leong and Ponterotto's (2003) article. The presence of cultural bias demonstrates that cultural bias is so robust and pervasive that it permeates the profession of counseling psychology, even including those articles that effectively attack cultural bias…

  8. 29 CFR 4044.53 - Mortality assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for one person is in pay status on the valuation date, and if the payment of a death benefit after the... (c) of this section to represent the mortality of the death beneficiary. (c) Healthy lives. If the... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death...

  9. 24 CFR 58.4 - Assumption authority.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., decision-making, and action that would otherwise apply to HUD under NEPA and other provisions of law that... environmental review, decision-making and action for programs authorized by the Native American Housing... separate decision regarding assumption of responsibilities for each of these Acts and communicate...

  10. Mexican-American Cultural Assumptions and Implications.

    ERIC Educational Resources Information Center

    Carranza, E. Lou

    The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as being…

  11. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall... the Life Cycle Costing Manual for the Federal Energy Management Program (NIST 85-3273) and determined... of the fiscal year in the Annual Supplement to the Life Cycle Costing Manual for the Federal...

  12. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall... the Life Cycle Costing Manual for the Federal Energy Management Program (NIST 85-3273) and determined... of the fiscal year in the Annual Supplement to the Life Cycle Costing Manual for the Federal...

  13. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall... the Life Cycle Costing Manual for the Federal Energy Management Program (NIST 85-3273) and determined... of the fiscal year in the Annual Supplement to the Life Cycle Costing Manual for the Federal...

  14. Assumptive Worldviews and Problematic Reactions to Bereavement

    ERIC Educational Resources Information Center

    Currier, Joseph M.; Holland, Jason M.; Neimeyer, Robert A.

    2009-01-01

    Forty-two individuals who had lost an immediate family member in the prior 2 years and 42 nonbereaved matched controls completed the World Assumptions Scale (Janoff-Bulman, 1989) and the Symptom Checklist-10-Revised (Rosen et al., 2000). Results showed that bereaved individuals were significantly more distressed than nonbereaved matched controls,…

  15. Critically Challenging Some Assumptions in HRD

    ERIC Educational Resources Information Center

    O'Donnell, David; McGuire, David; Cross, Christine

    2006-01-01

    This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…

  16. Deep Borehole Field Test Requirements and Controlled Assumptions.

    SciTech Connect

    Hardin, Ernest

    2015-07-01

This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.

  17. Simplified High-Power Inverter

    NASA Technical Reports Server (NTRS)

    Edwards, D. B.; Rippel, W. E.

    1984-01-01

    Solid-state inverter simplified by use of single gate-turnoff device (GTO) to commutate multiple silicon controlled rectifiers (SCR's). By eliminating conventional commutation circuitry, GTO reduces cost, size and weight. GTO commutation applicable to inverters of greater than 1-kilowatt capacity. Applications include emergency power, load leveling, drives for traction and stationary polyphase motors, and photovoltaic-power conditioning.

  18. 75 FR 28223 - Simplified Proceedings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-20

    ... From the Federal Register Online via the Government Publishing Office ] FEDERAL MINE SAFETY AND HEALTH REVIEW COMMISSION 29 CFR Part 2700 Simplified Proceedings AGENCY: Federal Mine Safety and Health Review Commission. ACTION: Notice of proposed rulemaking. SUMMARY: The Federal Mine Safety and...

  19. Simplifying the Water Poverty Index

    ERIC Educational Resources Information Center

    Cho, Danny I.; Ogwang, Tomson; Opio, Christopher

    2010-01-01

    In this paper, principal components methodology is used to derive simplified and cost effective indexes of water poverty. Using a well known data set for 147 countries from which an earlier five-component water poverty index comprising of "Resources," "Access," "Capacity," "Use" and "Environment" was constructed, we find that a simplified…

  20. A note on the assumption of quasiequilibrium in semiconductor junction devices

    NASA Technical Reports Server (NTRS)

    Von Roos, O.

    1977-01-01

    It is shown that the quasi-equilibrium theory for p-n junctions, as originally proposed by Shockley (1949), does not apply under conditions involving an application of comparatively low external voltages. A numerical example indicates that the quasi-equilibrium assumption must be discarded as soon as the voltage is increased beyond a certain critical value, although the system may still be in a low-level injection regime. It is currently not known which set of simplifying assumptions may replace the quasi-equilibrium assumptions. Possible analytic simplification relations applicable to moderate or high injection levels can, perhaps, be based on an approach considered by Mari (1968) and Choo (1971, 1972).

  1. Caring for Caregivers: Challenging the Assumptions.

    PubMed

    Williams, A Paul; Peckham, Allie; Kuluski, Kerry; Lum, Janet; Warrick, Natalie; Spalding, Karen; Tam, Tommy; Bruce-Barrett, Cindy; Grasic, Marta; Im, Jennifer

    2015-01-01

    Informal and mostly unpaid caregivers - spouses, family, friends and neighbours - play a crucial role in supporting the health, well-being, functional independence and quality of life of growing numbers of persons of all ages who cannot manage on their own. Yet, informal caregiving is in decline; falling rates of engagement in caregiving are compounded by a shrinking caregiver pool. How should policymakers respond? In this paper, we draw on a growing international literature, along with findings from community-based studies conducted by our team across Ontario, to highlight six common assumptions about informal caregivers and what can be done to support them. These include the assumption that caregivers will be there to take on an increasing responsibility; that caregiving is only about an aging population; that money alone can do the job; that policymakers can simply wait and see; that front-line care professionals should be left to fill the policy void; and that caregivers should be addressed apart from cared-for persons and formal care systems. While each assumption has a different focus, all challenge policymakers to view caregivers as key players in massive social and political change, and to respond accordingly. PMID:26626112

  2. A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States

    NASA Technical Reports Server (NTRS)

    Ryff, Luiz Carlos

    1996-01-01

A feasible experiment is discussed which allows us to prove a Bell's theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without the introduction of additional assumptions related to hidden variable states. Only assumptions based on direct experimental observation are needed.

  3. Simplified environmental study on innovative bridge structure.

    PubMed

    Bouhaya, Lina; Le Roy, Robert; Feraille-Fresnet, Adélaïde

    2009-03-15

The aim of this paper is to present a simplified life cycle assessment of an innovative bridge structure, made of wood and ultra high performance concrete, which combines mechanical performance with minimum environmental impact. The environmental analysis was conducted from cradle to grave using the Life Cycle Assessment method. It was restricted to energy release and greenhouse gas emissions. Assumptions are detailed for each step of the analysis. For the wood end-of-life, three scenarios were proposed: dumping, burning, and recycling. Results show that most of the energy is needed in the production phase, which represents 73.4% of the total amount. Analysis shows that renewable energy accounts for about 70% of the production energy. Wood, through its biomass CO2, contributes positively to the environmental impact. It was concluded that no scenario can be the winner on both impacts. Indeed, end-of-life wood recycling gives the best impact on CO2 release, whereas burning wood, despite its remarkable energy impact, is the worst. According to the emphasis given to each impact, designers will be able to choose one or the other. PMID:19368215
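The two percentages above combine into an overall renewable share, which can be checked with a one-line calculation (the 73.4% and "about 70%" figures are taken from the abstract; the product is only approximate since the 70% is rounded):

```python
# Share of total cradle-to-grave energy that is renewable, assuming the
# ~70% renewable fraction applies to the production phase (73.4% of total).
production_share = 0.734          # production phase, fraction of total energy
renewable_of_production = 0.70    # "about 70%" per the abstract

renewable_of_total = production_share * renewable_of_production
print(f"{renewable_of_total:.1%}")  # -> 51.4%, roughly half of the total
```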

  4. A Simplified Adiabatic Compression Apparatus

    NASA Astrophysics Data System (ADS)

    Moloney, Michael J.; McGarvey, Albert P.

    2007-10-01

    Mottmann described an excellent way to measure the ratio of specific heats for air (γ = Cp/Cv) by suddenly compressing a plastic 2-liter bottle. His arrangement can be simplified so that no valves are involved and only a single connection needs to be made. This is done by adapting the plastic cap of a 2-liter plastic bottle so it connects directly to a Vernier Software Gas Pressure Sensor2 and the LabPro3 interface.
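The data reduction behind this kind of sudden-compression measurement can be sketched as follows. This is one common analysis, which we believe underlies the Mottmann-style experiment, but the exact procedure in the cited papers may differ; the function name and the numerical pressures are illustrative, not measured data:

```python
def gamma_from_pressures(p0, p_peak, p_plateau):
    """Estimate gamma = Cp/Cv from a sudden (adiabatic) compression.

    p0        ambient pressure before the squeeze
    p_peak    pressure immediately after the rapid compression (adiabatic)
    p_plateau pressure after the gas re-equilibrates thermally at the
              same compressed volume (isothermal endpoint)

    For a small volume change dV, dP_adiabatic = -gamma * P * dV / V and
    dP_isothermal = -P * dV / V, so the ratio of the two pressure jumps
    is gamma.
    """
    return (p_peak - p0) / (p_plateau - p0)

# Illustrative numbers in kPa (hypothetical, not from the paper):
print(round(gamma_from_pressures(101.3, 108.3, 106.3), 3))  # -> 1.4
```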

  5. Simplifying plasma chemistry via ILDM

    NASA Astrophysics Data System (ADS)

    Rehman, T.; Kemaneci, E.; Graef, W.; van Dijk, J.

    2016-02-01

    A plasma fluid model containing a large number of chemical species and reactions yields a high computational load. One of the methods to overcome this difficulty is to apply Chemical Reduction Techniques as used in combustion engineering. The chemical reduction technique that we study here is ILDM (Intrinsic Lower Dimensional Manifold). The ILDM method is used to simplify an argon plasma model and then a comparison is made with a CRM (Collisional Radiative Model).
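The timescale separation that ILDM exploits can be illustrated with a toy linear kinetics system: eigendecomposition of the source-term Jacobian splits fast modes (which relax onto the manifold) from slow ones (which are retained). The rate constants below are hypothetical and have nothing to do with the paper's argon chemistry; this is only a sketch of the underlying idea:

```python
import numpy as np

# Toy linear kinetics dc/dt = J @ c with one fast and one slow mode.
# Rate constants are made up for illustration only.
J = np.array([[-1000.0,    1.0],
              [    1.0,   -1.0]])

eigvals, eigvecs = np.linalg.eig(J)
order = np.argsort(np.abs(eigvals))    # slow mode first
tau = 1.0 / np.abs(eigvals[order])     # characteristic timescales

print("timescales:", tau)
# The fast mode relaxes ~1000x quicker than the slow one; ILDM projects
# the dynamics onto the slow eigenvector(s), reducing the number of
# species/variables that must be tracked in the fluid model.
slow_direction = eigvecs[:, order[0]]
```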

  6. Project M: An Assessment of Mission Assumptions

    NASA Technical Reports Server (NTRS)

    Edwards, Alycia

    2010-01-01

Project M is a mission Johnson Space Center is working on to send an autonomous humanoid robot (also known as Robonaut 2) to the moon in 1000 days. The robot will travel in a lander, fueled by liquid oxygen and liquid methane, and land on the moon, avoiding any hazardous obstacles. It will perform tasks like maintenance, construction, and simple student experiments. This mission is also being used as inspiration for new advancements in technology. I am considering three of the design assumptions that contribute to determining the mission feasibility: maturity of robotic technology, launch vehicle determination, and the LOX/methane-fueled spacecraft.

  7. Gas/Aerosol partitioning: a simplified method for global modeling

    NASA Astrophysics Data System (ADS)

    Metzger, S. M.

    2000-09-01

The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol-associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures, partly composed of acids (e.g. H2SO4, HNO3), their salts (e.g. (NH4)2SO4, NH4NO3, respectively), and water. Because these acids and salts are highly hygroscopic, the water associated with aerosols in humid environments often exceeds the total dry aerosol mass. Both the total dry aerosol mass and the aerosol-associated water are important for the role of atmospheric aerosols in climate change simulations. Still, multicomponent aerosols are not yet routinely calculated within global atmospheric chemistry or climate models. The reason is that these particles, especially volatile aerosol compounds, require a complex and computationally expensive thermodynamical treatment. For instance, the aerosol-associated water depends on the composition of the aerosol, which is determined by the gas/liquid/solid partitioning, in turn strongly dependent on temperature, relative humidity, and the presence of pre-existing aerosol particles. Based on thermodynamical relations, such a simplified method has been derived. This method is based on the assumptions generally made in the modeling of multicomponent aerosols, but uses an alternative approach for the calculation of the aerosol activity and activity coefficients. This alternative approach relates activity coefficients to the ambient relative humidity, according to the vapor pressure reduction and the generalization of Raoult's law. This relationship, or simplification, is a consequence of the assumption that the aerosol composition and the aerosol-associated water are in thermodynamic equilibrium with the ambient relative humidity, which determines the solute activity and, hence, the activity coefficients of a multicomponent aerosol mixture
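In the ideal-solution limit, the equilibrium assumption described above reduces to a very simple relation: water activity equals the fractional relative humidity, so Raoult's law fixes the aerosol water per mole of solute. This is only a minimal sketch of that limiting case, not the thesis's actual activity-coefficient parametrization:

```python
def aerosol_water_moles(n_solute, rh):
    """Ideal-solution (Raoult) estimate of equilibrium aerosol water.

    At equilibrium the water activity equals the fractional relative
    humidity: a_w = x_w = n_w / (n_w + n_solute) = rh,
    which rearranges to n_w = n_solute * rh / (1 - rh).
    Real multicomponent aerosols require activity coefficients; this is
    the ideal limiting case only.
    """
    if not 0.0 <= rh < 1.0:
        raise ValueError("rh must be a fraction in [0, 1)")
    return n_solute * rh / (1.0 - rh)

print(aerosol_water_moles(1.0, 0.5))           # -> 1.0 (equal moles of water)
print(round(aerosol_water_moles(1.0, 0.9), 6)) # -> 9.0 (water dominates)
```

The second line illustrates the abstract's point that at high humidity the aerosol-associated water can far exceed the dry solute amount.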

  8. Alternative monotonicity assumptions for improving bounds on natural direct effects.

    PubMed

    Chiba, Yasutaka; Taguri, Masataka

    2013-01-01

    Estimating the direct effect of a treatment on an outcome is often the focus of epidemiological and clinical research, when the treatment has more than one specified pathway to the defined outcome. Even if the total effect is unconfounded, the direct effect is not identified when unmeasured variables affect the intermediate and outcome variables. Therefore, bounds on direct effects have been presented via linear programming under two common definitions of direct effects: controlled and natural. Here, we propose bounds on natural direct effects without using linear programming, because such bounds on controlled direct effects have already been proposed. To derive narrow bounds, we introduce two monotonicity assumptions that are weaker than those in previous studies and another monotonicity assumption. Furthermore, we do not assume that an outcome variable is binary, whereas previous studies have made that assumption. An additional advantage of our bounds is that the bounding formulas are extremely simple. The proposed bounds are illustrated using a randomized trial for coronary heart disease. PMID:23893690

  9. Simplified tools for evaluating domestic ventilation systems

    SciTech Connect

    Maansson, L.G.; Orme, M.

    1999-07-01

Within an International Energy Agency (IEA) project, Annex 27, experts from 8 countries (Canada, France, Italy, Japan, The Netherlands, Sweden, UK and USA) have developed simplified tools for evaluating domestic ventilation systems during the heating season. Tools for building and user aspects, thermal comfort, noise, energy, life cycle cost, reliability and indoor air quality (IAQ) have been devised. The results can be used both for dwellings at the design stage and after construction. The tools lead to immediate answers and indications about the consequences of different choices that may arise during discussion with clients. This paper presents an introduction to these tools. Example applications of the simplified indoor air quality and energy tools are also provided. The IAQ tool accounts for constant emission sources, CO2, cooking products, tobacco smoke, condensation risks, humidity levels (i.e., for judging the risk for mould and house dust mites), and pressure difference (for identifying the risk for radon or landfill spillage entering the dwelling or problems with indoor combustion appliances). An elaborated set of design parameters was worked out that resulted in about 17,000 combinations. By using multivariate analysis it was possible to reduce this to 174 combinations for IAQ. In addition, a sensitivity analysis was made using 990 combinations. The results from all the runs were used to develop a simplified tool, as well as quantifying equations relying on the design parameters. A computerized energy tool has also been developed within this project, which takes into account air tightness, climate, window airing pattern, outdoor air flow rate and heat exchange efficiency.

  10. Simplified Model for iLIDS IDD

    NASA Technical Reports Server (NTRS)

    Lewis, James L.

    2010-01-01

The NASA Docking System (NDS) Project has provided simplified volumetric models for use by potential host vehicles to assess vehicle integration. It should be noted that the JSC-65795 NDS Interface Definition Document (IDD) takes precedence over this simplified model. The simplified model serves as a graphical representation only. It is therefore important to state that dimensions and tolerances are to be taken from the IDD document and supersede any measurements derived from the provided simplified model geometry.

  11. Hidden assumptions and the placebo effect.

    PubMed

    Campbell, Anthony

    2009-06-01

    Whether, or how far, acupuncture effects can be explained as due to the placebo response is clearly an important issue, but there is an underlying philosophical assumption implicit in much of the debate, which is often ignored. Much of the argument is cast in terms which suggest that there is an immaterial mind hovering above the brain and giving rise to spurious effects. This model derives from Cartesian dualism which would probably be rejected by nearly all those involved, but it is characteristic of "folk psychology" and seems to have an unconscious influence on much of the terminology that is used. The majority of philosophers today reject dualism and this is also the dominant trend in science. Placebo effects, on this view, must be brain effects. It is important for modern acupuncture practitioners to keep this in mind when reading research on the placebo question. PMID:19502463

  12. Simplifying microbial electrosynthesis reactor design

    PubMed Central

    Giddings, Cloelle G. S.; Nevin, Kelly P.; Woodward, Trevor; Lovley, Derek R.; Butler, Caitlyn S.

    2015-01-01

Microbial electrosynthesis, an artificial form of photosynthesis, can efficiently convert carbon dioxide into organic commodities; however, this process has only previously been demonstrated in reactors that have features likely to be a barrier to scale-up. Therefore, the possibility of simplifying reactor design by both eliminating potentiostatic control of the cathode and removing the membrane separating the anode and cathode was investigated with biofilms of Sporomusa ovata. S. ovata reduces carbon dioxide to acetate and acts as the microbial catalyst with plain graphite stick cathodes as the electron donor. In traditional ‘H-cell’ reactors, where the anode and cathode chambers were separated with a proton-selective membrane, the rates and coulombic efficiencies of microbial electrosynthesis remained high when electron delivery at the cathode was powered with a direct current power source rather than with a potentiostat-poised cathode utilized in previous studies. A membrane-less reactor with a direct-current power source, with the cathode and anode positioned to avoid oxygen exposure at the cathode, retained high rates of acetate production as well as high coulombic and energetic efficiencies. The finding that microbial electrosynthesis is feasible without a membrane separating the anode from the cathode, coupled with a direct current power source supplying the energy for electron delivery, is expected to greatly simplify future reactor design and lower construction costs. PMID:26029199

  13. Simplifying microbial electrosynthesis reactor design.

    PubMed

    Giddings, Cloelle G S; Nevin, Kelly P; Woodward, Trevor; Lovley, Derek R; Butler, Caitlyn S

    2015-01-01

Microbial electrosynthesis, an artificial form of photosynthesis, can efficiently convert carbon dioxide into organic commodities; however, this process has only previously been demonstrated in reactors that have features likely to be a barrier to scale-up. Therefore, the possibility of simplifying reactor design by both eliminating potentiostatic control of the cathode and removing the membrane separating the anode and cathode was investigated with biofilms of Sporomusa ovata. S. ovata reduces carbon dioxide to acetate and acts as the microbial catalyst with plain graphite stick cathodes as the electron donor. In traditional 'H-cell' reactors, where the anode and cathode chambers were separated with a proton-selective membrane, the rates and coulombic efficiencies of microbial electrosynthesis remained high when electron delivery at the cathode was powered with a direct current power source rather than with a potentiostat-poised cathode utilized in previous studies. A membrane-less reactor with a direct-current power source, with the cathode and anode positioned to avoid oxygen exposure at the cathode, retained high rates of acetate production as well as high coulombic and energetic efficiencies. The finding that microbial electrosynthesis is feasible without a membrane separating the anode from the cathode, coupled with a direct current power source supplying the energy for electron delivery, is expected to greatly simplify future reactor design and lower construction costs. PMID:26029199

  14. Flat sheet metal girders with very thin metal web. Part I : general theories and assumptions

    NASA Technical Reports Server (NTRS)

    Wagner, Herbert

    1931-01-01

    The object of this report was to develop the structural method of sheet metal girders and should for that reason be considered solely from this standpoint. The ensuing methods were based on the assumption of the infinitely low stiffness in bending of the metal web. This simplifies the basis of calculations to such an extent that many questions of great practical importance can be examined which otherwise cannot be included in any analysis of the bending stiffness of the buckled plate. This report refers to such points as the safety in buckling of uprights to the effect of bending flexibility of spars, to spars not set parallel, etc.

  15. Simplified compact containment BWR plant

    SciTech Connect

    Heki, H.; Nakamaru, M.; Tsutagawa, M.; Hiraiwa, K.; Arai, K.; Hida, T.

    2004-07-01

The reactor concept considered in this paper has a small power output, a compact containment and a simplified BWR configuration with comprehensive safety features. The Compact Containment Boiling Water Reactor (CCR), which is being developed with matured BWR technologies together with innovative systems/components, is expected to prove attractive in the world energy markets due to its flexibility in regard to both energy demands and site conditions, its high potential for reducing investment risk and its safety features facilitating public acceptance. The flexibility is achieved by CCR's small power output of the 300 MWe class and its capability for a long operating cycle (refueling intervals). CCR is expected to be attractive from the viewpoint of investment due to its simplification/innovation in design, such as natural circulation core cooling with a bottom-located short core, internal upper-entry control rod drives (CRDs) with ring-type dryers, and a simplified ECCS system with a high-pressure containment concept. The natural circulation core eliminates recirculation pumps and the maintenance of such pumps. The internal upper-entry CRDs reduce the height of the reactor pressure vessel (RPV) and consequently reduce the height of the primary containment vessel (PCV). The safety features mainly consist of a large water inventory above the core without large penetrations below the top of the core, a passive cooling system using an isolation condenser (IC), a passive autocatalytic recombiner, and in-vessel retention (IVR) capability. The large inventory increases the system response time in the case of design-basis accidents, including loss of coolant accidents. The IC suppresses PCV pressure by steam condensation without any AC power. The recombiner decreases hydrogen concentration in the PCV in the case of a severe accident. IVR could be attained by cooling the molten core inside the RPV should the core be damaged by loss of core coolability. The feasibility of the CCR safety system has been confirmed by LOCA

  17. Model investigation overthrows assumptions of watershed research

    NASA Astrophysics Data System (ADS)

    Schultz, Colin

    2012-04-01

    A 2009 study revealed serious flaws in a standard technique used by hydrological researchers to understand how changes in watershed land use affect stream flow behaviors, such as peak flows. The study caused academics and government agencies alike to rethink decades of watershed research and prompted Kuraś et al. to reinvestigate a number of long-standing assumptions in watershed research using a complex and well-validated computer model that accounts for a range of internal watershed dynamics and hydrologic processes. For the test site at 241 Creek in British Columbia, Canada, the authors found not only that deforestation increased the severity of floods but also that it had a scaling influence on both the magnitudes and frequencies of the floods. The model showed that the larger the flood, the more its magnitude was amplified by deforestation, with 10-to 100-year-return-period floods increasing in size by 9%-25%. Following a simulated removal of half of the watershed's trees, the authors found that 10-year-return-period floods occurred twice as often, while 100-year-return-period events became 5-6.7 times more frequent. This proportional relationship between the increase in flood magnitudes and frequencies following deforestation and the size of the flood runs counter to the prevailing wisdom in hydrological science.
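The return-period figures quoted above follow standard flood-frequency arithmetic: a T-year event has annual exceedance probability 1/T, so making events k times more frequent divides the return period by k. A quick sketch, using the abstract's own numbers:

```python
def annual_exceedance_prob(return_period_years):
    """A T-year flood has annual exceedance probability p = 1/T."""
    return 1.0 / return_period_years

def new_return_period(old_T, frequency_factor):
    """If events become k times more frequent, p -> k*p, so T -> T/k."""
    return old_T / frequency_factor

print(annual_exceedance_prob(100))     # -> 0.01 (1% chance per year)
# After the simulated 50% tree removal, 100-year floods became
# 5-6.7x more frequent and 10-year floods twice as frequent:
print(new_return_period(100, 5.0))     # -> 20.0 years
print(round(new_return_period(100, 6.7), 1))  # -> 14.9 years
print(new_return_period(10, 2.0))      # -> 5.0 years
```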

  18. Culturally grounded review of research assumptions.

    PubMed

    Hufford, D J

    1996-07-01

    In this article 11 assumptions underlying many discussions of alternative medicine are discussed and critiqued: that (1) cultural factors merely constitute noise in research data that can be removed by proper design; (2) the only proper goal of alternative medicine research is the incorporation of effective practices into medicine; (3) physicians are the primary consumers of good alternative medicine research; (4) control of pathology is the sole measure of the effectiveness of alternative medicine; (5) effects on pathology can be fully separated from effects on perception or quality of life; (6) effects on individual health should be the sole focus of alternative medical research; (7) medicine is aware of all sicknesses appropriate for alternative medicine research; (8) subjective data are less valuable than objective data; (9) the best leads for research come from recognizable systems with advocates; (10) more "modern-looking," highly articulated forms are necessarily better research "bets"; and (11) all good candidates for alternative medicine research are recognized as health practices by those who use them. PMID:8795922

  19. Simplified Radioimmunoassay for Diagnostic Serology

    PubMed Central

    Hutchinson, Harriet D.; Ziegler, Donald W.

    1972-01-01

    A simplified, indirect radioimmunoassay is described for Escherichia coli, vaccinia virus, and herpesvirus. The antigens were affixed to glass cover slips; thus both the primary and secondary reactions take place on the cover slips, and the unbound antiserum is easily separated from the bound antiserum by rinsing. Rabbit or human immune sera were reacted with the antigens, and the primary immune complex was quantitated by a secondary reaction with 125I-indicator globulin (anti-rabbit or anti-human). A direct relationship between the antiserum concentration and the 125I absorption was established. Variations in titers were detectable, and the titers were comparable to complement fixation titers. Homologous and heterologous reactions were distinguishable. The method affords an objective, quantitative, and qualitative evaluation of antibody, and results are reproducible. PMID:4344958

  20. Simplified SIMPs and the LHC

    NASA Astrophysics Data System (ADS)

    Daci, N.; De Bruyn, I.; Lowette, S.; Tytgat, M. H. G.; Zaldivar, B.

    2015-11-01

    The existence of Dark Matter (DM) in the form of Strongly Interacting Massive Particles (SIMPs) may be motivated by astrophysical observations that challenge the classical Cold DM scenario. Other observations greatly constrain, but do not completely exclude, the SIMP alternative. The signature of SIMPs at the LHC may consist of neutral, hadron-like, trackless jets produced in pairs. We show that the absence of charged content can provide a very efficient tool to suppress dijet backgrounds at the LHC, thus enhancing the sensitivity to a potential SIMP signal. We illustrate this using a simplified SIMP model and present a detailed feasibility study based on simulations, including a dedicated detector response parametrization. We evaluate the expected sensitivity to various signal scenarios and tentatively consider the exclusion limits on the SIMP elastic cross section with nucleons.

  1. Simplified Analysis Model for Predicting Pyroshock Responses on Composite Panel

    NASA Astrophysics Data System (ADS)

    Iwasa, Takashi; Shi, Qinzhong

A simplified analysis model based on frequency response analysis and wave propagation analysis was established for predicting the Shock Response Spectrum (SRS) on a composite panel subjected to pyroshock loadings. The complex composite panel was modeled as an isotropic single-layer panel defined by the NASA Lewis Method. Through an impact excitation test on a composite panel with no equipment mounted, it was shown that the simplified analysis model could estimate the SRS, as well as the acceleration peak values, in both the near and far field in an accurate way. In addition, through simulation of actual pyroshock tests on an actual satellite system, the simplified analysis model was proved to be applicable in predicting the actual pyroshock responses, while bringing forth several technical issues in estimating pyroshock test specifications in early design stages.
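For readers unfamiliar with the quantity being predicted: an SRS sweeps a bank of damped single-degree-of-freedom oscillators over natural frequencies and records each one's peak response to the measured base acceleration. The sketch below is the generic textbook computation, not the paper's simplified model; the half-sine pulse and frequencies are illustrative:

```python
import numpy as np

def srs_maximax(accel, dt, freqs_hz, zeta=0.05):
    """Maximax acceleration Shock Response Spectrum (simple sketch).

    Integrates the base-excited SDOF equation
        x'' + 2*zeta*w*x' + w^2 * x = -a(t)
    with semi-implicit Euler and returns, per natural frequency, the peak
    absolute acceleration |w^2 * x + 2*zeta*w * x'| of the oscillator mass.
    dt must be much smaller than 1/(2*pi*max(freqs_hz)) for accuracy.
    """
    w = 2.0 * np.pi * np.asarray(freqs_hz, dtype=float)
    x = np.zeros_like(w)
    v = np.zeros_like(w)
    peak = np.zeros_like(w)
    for a in accel:
        v += dt * (-a - 2.0 * zeta * w * v - w * w * x)
        x += dt * v
        peak = np.maximum(peak, np.abs(w * w * x + 2.0 * zeta * w * v))
    return peak

# Half-sine base pulse: unit amplitude, 10 ms duration, then zero padding.
dt = 5e-6
tau = 0.010
t = np.arange(0.0, 0.1, dt)
pulse = np.where(t < tau, np.sin(np.pi * t / tau), 0.0)

spectrum = srs_maximax(pulse, dt, [20.0, 80.0, 320.0, 2000.0])
# Near fn ~ 1/(2*tau) the SRS amplifies the input peak by roughly 1.6-1.8x;
# at high natural frequency it approaches the input peak itself.
```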

  2. Assumptions and ambiguities in nonplanar acoustic soliton theory

    SciTech Connect

    Verheest, Frank; Hellberg, Manfred A.

    2014-02-15

    There have been many recent theoretical investigations of the nonlinear evolution of electrostatic modes with cylindrical or spherical symmetry. Through a reductive perturbation analysis based on a quasiplanar stretching, a modified form of the Korteweg-de Vries or related equation is derived, containing an additional term which is linear in the electrostatic potential and singular at time t = 0. Unfortunately, these analyses contain several restrictive assumptions and ambiguities which are normally neither properly explained nor discussed, and severely limit the applicability of the technique. Most glaring are the use of plane-wave stretchings, the assumption that shape-preserving cylindrical modes can exist and that, although time is homogeneous, the origin of time (which can be chosen arbitrarily) needs to be avoided. Hence, only in the domain where the nonlinear modes are quasiplanar, far from the axis of cylindrical or from the origin of spherical symmetry can acceptable but unexciting results be obtained. Nonplanar nonlinear modes are clearly an interesting topic of research, as some of these phenomena have been observed in experiments. However, it is argued that a proper study of such modes needs numerical simulations rather than ill-suited analytical approximations.
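The "additional term which is linear in the electrostatic potential and singular at time t = 0" commonly takes the following form in the nonplanar Korteweg-de Vries literature (a standard form quoted here for orientation; the generic coefficients A, B and the stretched variables are assumptions, and the paper's own notation may differ):

```latex
\frac{\partial \phi}{\partial \tau}
  + A\,\phi\,\frac{\partial \phi}{\partial \xi}
  + B\,\frac{\partial^{3} \phi}{\partial \xi^{3}}
  + \frac{m}{2\tau}\,\phi = 0,
\qquad
m = \begin{cases} 1 & \text{(cylindrical)} \\ 2 & \text{(spherical)} \end{cases}
```

The $(m/2\tau)\phi$ term is the geometric correction that is singular at $\tau = 0$, which is why the quasiplanar analyses discussed above must avoid the origin of time.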

  3. Testing the assumptions of linear prediction analysis in normal vowels

    NASA Astrophysics Data System (ADS)

    Little, M. A.

    This paper develops an improved surrogate data test to show experimental evidence, for all the simple vowels of US English and for both male and female speakers, that Gaussian linear prediction analysis, a ubiquitous technique in current speech technologies, cannot extract all the dynamical structure of real speech time series. The test provides robust evidence undermining the validity of these linear techniques, supporting the assumptions of dynamical nonlinearity and/or non-Gaussianity common to more recent, complex efforts at dynamical modelling of speech time series. However, an additional finding is that the classical assumptions cannot be ruled out entirely, and plausible evidence is given to explain the success of the linear Gaussian theory as a weak approximation to the true, nonlinear/non-Gaussian dynamics. This supports the use of appropriate hybrid linear/nonlinear/non-Gaussian modelling. With a calibrated calculation of the test statistic and a particular choice of experimental protocol, some of the known systematic problems of the method of surrogate data testing are circumvented, so that the results support the conclusions to a high level of significance.
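The Gaussian linear prediction analysis under test here is, at its core, an autoregressive fit. A minimal sketch via the classical Levinson-Durbin recursion (a generic implementation, not the paper's surrogate-data machinery):

```python
def lpc(x, order):
    """Linear prediction coefficients via the Levinson-Durbin recursion.

    Fits the Gaussian linear (autoregressive) model
        x[n] ~ a[0]*x[n-1] + ... + a[order-1]*x[n-order]
    from the biased autocorrelation; returns (a, residual_energy).
    """
    n = len(x)
    r = [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(order + 1)]
    a = [0.0] * order
    err = r[0]
    for m in range(order):
        # reflection coefficient for model order m+1
        k_m = (r[m + 1] - sum(a[j] * r[m - j] for j in range(m))) / err
        new_a = a[:]
        new_a[m] = k_m
        for j in range(m):
            new_a[j] = a[j] - k_m * a[m - 1 - j]
        a = new_a
        err *= 1.0 - k_m * k_m
    return a, err
```

A large residual energy relative to the signal energy is one symptom that the linear Gaussian model is missing structure in the time series.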

  4. Explaining the Pleistocene megafaunal extinctions: Models, chronologies, and assumptions

    PubMed Central

    Brook, Barry W.; Bowman, David M. J. S.

    2002-01-01

    Understanding of the Pleistocene megafaunal extinctions has been advanced recently by the application of simulation models and new developments in geochronological dating. Together these have been used to posit a rapid demise of megafauna due to over-hunting by invading humans. However, we demonstrate that the results of these extinction models are highly sensitive to implicit assumptions concerning the degree of prey naivety to human hunters. In addition, we show that in Greater Australia, where the extinctions occurred well before the end of the last Ice Age (unlike the North American situation), estimates of the duration of coexistence between humans and megafauna remain imprecise. Contrary to recent claims, the existing data do not prove the “blitzkrieg” model of overkill. PMID:12417761

  5. Finite Element Modeling of a Cylindrical Contact Using Hertzian Assumptions

    NASA Technical Reports Server (NTRS)

    Knudsen, Erik

    2003-01-01

    The turbine blades in the high-pressure fuel turbopump/alternate turbopump (HPFTP/AT) are subjected to hot gases rapidly flowing around them. This flow excites vibrations in the blades. Naturally, one has to worry about resonance, so a damping device was added to dissipate some energy from the system. The foundation is now laid for a very complex problem. The damper is in contact with the blade, so there are now contact stresses (both normal and tangential) to contend with. Since these stresses can be very high, it is not all that difficult to yield the material. Friction is another non-linearity, and the blade is made of a Nickel-based single-crystal superalloy that is orthotropic. A few approaches exist to solve such a problem, and computer models, using contact elements, have been built with friction, plasticity, etc. These models are quite cumbersome and require many hours to solve just one load case and material orientation. A simpler approach is required. Ideally, the model should be simplified so the analysis can be conducted faster. When working with contact problems, determining the contact patch and the stresses in the material are the main concerns. Closed-form solutions, developed by Hertz, for non-conforming bodies made of isotropic materials are readily available. More involved solutions for 3-D cases using different materials are also available. The question is this: can Hertzian solutions be applied, or superimposed, to more complicated problems, like those involving anisotropic materials? That is the point of the investigation here. If these results agree with the more complicated computer models, then the analytical solutions can be used in lieu of the numerical solutions that take a very long time to process. As time goes on, the analytical solution will eventually have to include things like friction and plasticity. The models in this report use no contact elements and are essentially an applied load problem using Hertzian assumptions.
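The Hertzian line-contact solution referred to above has a simple closed form for an isotropic cylinder pressed against a flat. The sketch below assumes frictionless normal loading in plane strain; the steel-on-steel inputs in the test are illustrative, not values from the report:

```python
import math

def hertz_cylinder_on_flat(load_per_length, radius, E1, nu1, E2, nu2):
    """Hertzian line contact of an isotropic cylinder on a flat.

    load_per_length: normal force per unit length of cylinder (N/m)
    radius: cylinder radius (m); the flat counterface has infinite radius.
    Returns (contact half-width b in m, peak contact pressure p0 in Pa).
    """
    # plane-strain contact modulus: 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2
    e_star = 1.0 / ((1.0 - nu1 ** 2) / E1 + (1.0 - nu2 ** 2) / E2)
    b = math.sqrt(4.0 * load_per_length * radius / (math.pi * e_star))
    p0 = 2.0 * load_per_length / (math.pi * b)
    return b, p0
```

The semi-elliptical pressure distribution integrates back to the applied load, which provides a quick consistency check on any implementation.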

  6. 48 CFR 453.213 - Simplified Acquisition and other simplified purchase procedures (AD-838).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... other simplified purchase procedures (AD-838). 453.213 Section 453.213 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE CLAUSES AND FORMS FORMS Prescription of Forms 453.213 Simplified Acquisition and other simplified purchase procedures (AD-838). Form AD-838, Purchase Order, is prescribed...

  7. Additional comments on the assumption of homogenous survival rates in modern bird banding estimation models

    USGS Publications Warehouse

    Nichols, J.D.; Stokes, S.L.; Hines, J.E.; Conroy, M.J.

    1982-01-01

    We examined the problem of heterogeneous survival and recovery rates in bird banding estimation models. We suggest that positively correlated subgroup survival and recovery probabilities may result from winter banding operations and that this situation will produce positively biased survival rate estimates. The magnitude of the survival estimate bias depends on the proportion of the population in each subgroup. Power of the suggested goodness-of-fit test to reject the inappropriate model for heterogeneous data sets was low for all situations examined and was poorest for positively related subgroup survival and recovery rates. Despite the magnitude of some of the biases reported and the relative inability to detect heterogeneity, we suggest that levels of heterogeneity normally encountered in real data sets will produce relatively small biases of average survival rates.

  8. Lens window simplifies TDL housing

    NASA Technical Reports Server (NTRS)

    Robinson, D. M.; Rowland, C. W.

    1979-01-01

    Lens window seal in tunable-diode-laser housing replaces plane-parallel window. Lens seals housing and acts as optical-output coupler, thus eliminating need for additional reimaging or collimating optics.

  9. Impact of actuarial assumptions on pension costs: A simulation analysis

    NASA Astrophysics Data System (ADS)

    Yusof, Shaira; Ibrahim, Rose Irnawaty

    2013-04-01

    This study investigates the sensitivity of pension costs to changes in the underlying assumptions of a hypothetical pension plan, in order to gain a perspective on the relative importance of the various actuarial assumptions via a simulation analysis. Two actuarial assumptions are considered: mortality rates and interest rates. To calculate pension costs, the Accrued Benefit Cost Method is used with both the constant amount (CA) modification and the constant percentage of salary (CS) modification. The mortality assumptions, and the implied mortality experience of the plan, can have a significant impact on pension costs. The interest rate assumption, in contrast, is inversely related to pension costs. Results of the study have important implications for analysts of pension costs.
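The inverse relation between the interest rate and pension cost can be illustrated with a toy discounted-annuity sketch (not the study's Accrued Benefit Cost Method); the flat mortality table used in the example is hypothetical:

```python
def accrued_benefit_pv(benefit, age, retire_age, end_age, interest, q):
    """Present value at `age` of an accrued annual benefit paid from
    `retire_age` to `end_age`, discounted for interest and survival.

    q: function mapping an age to its one-year mortality rate.
    """
    pv = 0.0
    surv = 1.0                    # probability of surviving to age `a`
    for a in range(age, end_age):
        if a >= retire_age:
            pv += benefit * surv / (1.0 + interest) ** (a - age)
        surv *= 1.0 - q(a)
    return pv
```

Raising the assumed interest rate shrinks every discount factor, so the cost falls; raising assumed mortality shrinks the survival weights with the same effect.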

  10. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    NASA Astrophysics Data System (ADS)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

    Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the simplifying assumptions commonly adopted in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant outlet pressure), the effects of which are poorly understood. In this study, the sensitivity of FSI simulations of patient-specific IAs is comprehensively investigated using a multi-stage approach with varying levels of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets; we then remove these simplifications stepwise until we reach the most comprehensive FSI simulation. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of IA FSI simulations to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).

  11. Effects of various assumptions on the calculated liquid fraction in isentropic saturated equilibrium expansions

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1980-01-01

    The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures, where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, conditions representative of transonic cryogenic wind tunnels. For the highest-pressure case, the entire set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high-pressure case.

  12. Simplified Models for Dark Matter Model Building

    NASA Astrophysics Data System (ADS)

    DiFranzo, Anthony Paul

    The largest mass component of the universe is a longstanding mystery to the physics community. As a glaring source of new physics beyond the Standard Model, dark matter has motivated a large effort to uncover its quantum nature. Many probes have been developed to search for this elusive matter, cultivating a rich environment for a phenomenologist. In addition to the primary probes of colliders, direct detection, and indirect detection, each with its own complexities, there is a plethora of prospects for illuminating our unanswered questions. In this work, phenomenological techniques for studying dark matter and other possible hints of new physics are discussed. The work focuses primarily on the use of Simplified Models, which are intended to be a compromise between generality and validity of the theoretical description. They are often used to parameterize a particular search, develop a well-defined sense of complementarity between searches, or motivate new search strategies. Explicit examples of such models and how they may be used are the highlight of each chapter.

  13. Simplified method for nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1983-01-01

    A simplified inelastic analysis computer program was developed for predicting the stress-strain history of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a simulated plasticity hardening model. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, and different materials and plasticity models. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  14. Experimental demonstration of simplified quantum process tomography.

    PubMed

    Wu, Z; Li, S; Zheng, W; Peng, X; Feng, M

    2013-01-14

    The essential tool for characterizing the dynamics of an open quantum system is quantum process tomography (QPT). Although standard QPT methods are hard to scale, a simplified QPT approach is available if we have the prior knowledge that the system Hamiltonian commutes with the system-environment interaction Hamiltonian. Using a nuclear magnetic resonance (NMR) quantum simulator, we experimentally simulate dephasing channels to demonstrate the simplified QPT, with the standard QPT method as a comparison. The experimental results agree well with our predictions, confirming the validity and better efficiency of the simplified QPT. PMID:23320694
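The dephasing channels simulated in the paper can be illustrated generically (this is the textbook qubit dephasing map, not the authors' NMR implementation): off-diagonal density-matrix elements decay while populations are preserved, so the map is trace-preserving.

```python
import math

def dephase(rho, gamma_t):
    """Qubit dephasing channel: off-diagonal elements of the density
    matrix decay by exp(-gamma_t); the populations are untouched, so
    the map is trace-preserving.
    """
    d = math.exp(-gamma_t)
    return [[rho[0][0],     rho[0][1] * d],
            [rho[1][0] * d, rho[1][1]]]
```

Because the dephasing Hamiltonian commutes with the system Hamiltonian, a single decay parameter per off-diagonal element suffices, which is exactly what makes the simplified tomography tractable.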

  15. A simplified Reynolds stress model for unsteady turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Fan, Sixin; Lakshminarayana, Budugur

    1993-01-01

    A simplified Reynolds stress model has been developed for the prediction of unsteady turbulent boundary layers. By assuming that the net transport of Reynolds stresses is locally proportional to the net transport of the turbulent kinetic energy, the time dependent full Reynolds stress model is reduced to a set of ordinary differential equations. These equations contain only time derivatives and can be readily integrated in a time dependent boundary layer or Navier-Stokes code. The turbulent kinetic energy and dissipation rate needed for the model are obtained by solving the k-epsilon equations. This simplified Reynolds stress turbulence model (SRSM) does not use the eddy viscosity assumption, which may not be valid for unsteady turbulent flows. The anisotropy of both the steady and the unsteady turbulent normal stresses can be captured by the SRSM model. Through proper damping of the shear stresses, the present model can be used in the near wall region of turbulent boundary layers. This model has been validated against data for steady and unsteady turbulent boundary layers, including periodic turbulent boundary layers subjected to a mean adverse pressure gradient. For the cases tested, the predicted unsteady velocity and turbulent stress components agree well with the experimental data. Comparison between the predictions from the SRSM model and a k-epsilon model is also presented.

  16. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis

    PubMed Central

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-01-01

    We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis. PMID:26401064

  17. 3.6 Simplified methods for design

    SciTech Connect

    Nickell, R.E.; Yahr, G.T.

    1981-01-01

    Simplified design analysis methods for elevated temperature construction are classified and reviewed. Because the major impetus for developing elevated temperature design methodology during the past ten years has been the LMFBR program, considerable emphasis is placed upon results from this source. The operating characteristics of the LMFBR are such that cycles of severe transient thermal stresses can be interspersed with normal elevated temperature operational periods of significant duration, leading to a combination of plastic and creep deformation. The various simplified methods are organized into two general categories, depending upon whether it is the material, or constitutive, model that is reduced, or the geometric modeling that is simplified. Because the elastic representation of material behavior is so prevalent, an entire section is devoted to elastic analysis methods. Finally, the validation of the simplified procedures is discussed.

  18. Simplified Rotation In Acoustic Levitation

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.

    1989-01-01

    New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.

  19. Assumptions of African-American Students about International Education Exchange.

    ERIC Educational Resources Information Center

    Fels, Michael D.

    This study attempted to identify and compare some of the assumptions concerning international education exchange of first, the international education exchange community, and, second, the African-American student community. The study reviewed materials from published institutional literature for the assumptions held by the international education…

  20. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...

  1. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...

  2. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...

  3. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...

  4. 46 CFR 174.070 - General damage stability assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false General damage stability assumptions. 174.070 Section 174.070 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY... Units § 174.070 General damage stability assumptions. For the purpose of determining compliance...

  5. 46 CFR 174.070 - General damage stability assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false General damage stability assumptions. 174.070 Section 174.070 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY... Units § 174.070 General damage stability assumptions. For the purpose of determining compliance...

  6. 14 CFR 29.473 - Ground loading conditions and assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 29.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...

  7. 14 CFR 29.473 - Ground loading conditions and assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 29.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...

  8. 14 CFR 29.473 - Ground loading conditions and assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 29.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...

  9. 14 CFR 27.473 - Ground loading conditions and assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 27.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...

  10. 14 CFR 27.473 - Ground loading conditions and assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 27.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...

  11. 14 CFR 27.473 - Ground loading conditions and assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 27.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...

  12. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...

  13. Where Are We Going? Planning Assumptions for Community Colleges.

    ERIC Educational Resources Information Center

    Maas, Rao, Taylor and Associates, Riverside, CA.

    Designed to provide community college planners with a series of reference assumptions to consider in the planning process, this document sets forth assumptions related to finance (i.e., operational funds, capital funds, alternate funding sources, and campus financial operations); California state priorities; occupational trends; population (i.e.,…

  14. 29 CFR Appendix C to Part 4044 - Loading Assumptions

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Loading Assumptions C Appendix C to Part 4044 Labor... ASSETS IN SINGLE-EMPLOYER PLANS Pt. 4044, App. C Appendix C to Part 4044—Loading Assumptions If the total value of the plan's benefit liabilities (as defined in 29 U.S.C. § 1301(a)(16)), exclusive of...

  15. 10 CFR 71.83 - Assumptions as to unknown properties.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Assumptions as to unknown properties. 71.83 Section 71.83... Operating Controls and Procedures § 71.83 Assumptions as to unknown properties. When the isotopic abundance... fissile material in any package is not known, the licensee shall package the fissile material as if...

  16. 7 CFR 3575.88 - Transfers and assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Transfers and assumptions. 3575.88 Section 3575.88 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, DEPARTMENT OF AGRICULTURE GENERAL Community Programs Guaranteed Loans § 3575.88 Transfers and assumptions. (a) General....

  17. Simplified Models for LHC New Physics Searches

    SciTech Connect

    Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R.Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven,; Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; /more authors..

    2012-06-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ~50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  18. Simplified models for LHC new physics searches

    NASA Astrophysics Data System (ADS)

    Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Sekhar Chivukula, R.; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven (Editor); Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; Freitas, Ayres; Gainer, James S.; Gershtein, Yuri; Gray, Richard; Gregoire, Thomas; Gripaios, Ben; Gunion, Jack; Han, Tao; Haas, Andy; Hansson, Per; Hewett, JoAnne; Hits, Dmitry; Hubisz, Jay; Izaguirre, Eder; Kaplan, Jared; Katz, Emanuel; Kilic, Can; Kim, Hyung-Do; Kitano, Ryuichiro; Koay, Sue Ann; Ko, Pyungwon; Krohn, David; Kuflik, Eric; Lewis, Ian; Lisanti, Mariangela (Editor); Liu, Tao; Liu, Zhen; Lu, Ran; Luty, Markus; Meade, Patrick; Morrissey, David; Mrenna, Stephen; Nojiri, Mihoko; Okui, Takemichi; Padhi, Sanjay; Papucci, Michele; Park, Michael; Park, Myeonghun; Perelstein, Maxim; Peskin, Michael; Phalen, Daniel; Rehermann, Keith; Rentala, Vikram; Roy, Tuhin; Ruderman, Joshua T.; Sanz, Veronica; Schmaltz, Martin; Schnetzer, Stephen; Schuster, Philip (Editor); Schwaller, Pedro; Schwartz, Matthew D.; Schwartzman, Ariel; Shao, Jing; Shelton, Jessie; Shih, David; Shu, Jing; Silverstein, Daniel; Simmons, Elizabeth; Somalwar, Sunil; Spannowsky, Michael; Spethmann, Christian; Strassler, Matthew; Su, Shufang; Tait, Tim (Editor); Thomas, Brooks; Thomas, Scott; Toro, Natalia (Editor); Volansky, Tomer; Wacker, Jay (Editor); Waltenberger, Wolfgang; Yavin, Itay; Yu, Felix; Zhao, Yue; Zurek, Kathryn; LHC New Physics Working Group

    2012-10-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the Large Hadron Collider (LHC) and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the ‘Topologies for Early LHC Searches’ workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ~50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  19. Simplified Solutions for Activity Deposited on Moving Filter Media.

    PubMed

    Smith, David L; Chabot, George E

    2016-10-01

    Simplified numerical solutions for particulate activity viewed on moving filter continuous air monitors are developed. The monitor configurations include both rectangular window (RW) and circular window (CW) types. The solutions are demonstrated first for a set of basic airborne radioactivity cases, for a series of concentration pulses, and for indicating the effects of step changes in reactor coolant system (RCS) leakage for a pressurized water reactor. The method is also compared to cases from the prior art. These simplified solutions have additional benefits: They are easily adaptable to multiple radionuclides, they will accommodate collection and detection efficiencies that vary in known ways across the collection area, and they also ease the solution programming. PMID:27575345
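The moving-filter bookkeeping behind such solutions can be sketched for the rectangular-window (RW) geometry. The uniform deposition across the window and the one-cell-per-step advance below are discretization assumptions of this sketch, not the paper's formulation:

```python
import math

def rw_monitor(conc, dt, window_cells, lam=0.0):
    """Activity seen by a rectangular-window moving-filter monitor.

    The filter tape is discretized into cells, with one cell passing out
    of the detector window per time step. Each step deposits conc*dt
    uniformly over the cells in the window, decays all activity with
    decay constant lam, then advances the tape by one cell.
    """
    tape = [0.0] * window_cells           # cells currently in the window
    f = math.exp(-lam * dt)
    seen = []
    for c in conc:
        tape = [a * f + c * dt / window_cells for a in tape]
        seen.append(sum(tape))            # detected in-window activity
        tape = tape[1:] + [0.0]           # oldest cell leaves, fresh cell enters
    return seen
```

For a constant concentration and no decay, the detected activity ramps up and then plateaus once every cell in the window has resided there for its full transit time, mirroring the equilibrium behavior of a real moving-filter monitor.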

  20. On local total strain redistribution using a simplified cyclic inelastic analysis based on an elastic solution

    NASA Technical Reports Server (NTRS)

    Hwang, S. Y.; Kaufman, A.

    1985-01-01

    Strain redistribution corrections were developed for a simplified inelastic analysis procedure to economically calculate material cyclic response at the critical location of a structure for life prediction purposes. The method was based on the assumption that the plastic region in the structure is local and the total strain history required for input can be defined from elastic finite element analyses. Cyclic stress-strain behavior was represented by a bilinear kinematic hardening model. The simplified procedure has been found to predict stress-strain response with reasonable accuracy for thermally cycled problems but needs improvement for mechanically load cycled problems. This study derived and incorporated Neuber type corrections in the simplified procedure to account for local total strain redistribution under cyclic mechanical loading. The corrected simplified method was exercised on a mechanically load cycled benchmark notched plate problem. Excellent agreement was found between the predicted material response and nonlinear finite element solutions for the problem. The simplified analysis computer program used 0.3 percent of the CPU time required for a nonlinear finite element analysis.
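
    The Neuber-type correction described above equates the product of local stress and strain with that of the elastic solution. As a minimal sketch (assuming a bilinear kinematic-hardening curve with illustrative modulus values, not the paper's), the corrected local response can be solved by bisection:

```python
def bilinear_strain(sigma, E, sigma_y, E_t):
    """Total strain for a bilinear stress-strain curve with tangent modulus E_t."""
    if sigma <= sigma_y:
        return sigma / E
    return sigma_y / E + (sigma - sigma_y) / E_t

def neuber_correct(sigma_elastic, E, sigma_y, E_t):
    """Solve sigma * eps(sigma) = sigma_e**2 / E for the local stress by bisection."""
    target = sigma_elastic ** 2 / E      # Neuber constant from the elastic solution
    lo, hi = 0.0, sigma_elastic          # local stress cannot exceed the elastic value
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * bilinear_strain(mid, E, sigma_y, E_t) < target:
            lo = mid
        else:
            hi = mid
    sigma = 0.5 * (lo + hi)
    return sigma, bilinear_strain(sigma, E, sigma_y, E_t)

# Elastic analysis predicts 600 MPa at the notch; the material yields at 400 MPa.
stress, strain = neuber_correct(600.0, E=200e3, sigma_y=400.0, E_t=20e3)
```

    The corrected local stress falls between yield and the elastic prediction while the stress-strain product of the elastic solution is preserved.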

  1. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    SciTech Connect

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  2. The Immoral Assumption Effect: Moralization Drives Negative Trait Attributions.

    PubMed

    Meindl, Peter; Johnson, Kate M; Graham, Jesse

    2016-04-01

    Jumping to negative conclusions about other people's traits is judged as morally bad by many people. Despite this, across six experiments (total N = 2,151), we find that multiple types of moral evaluations--even evaluations related to open-mindedness, tolerance, and compassion--play a causal role in these potentially pernicious trait assumptions. Our results also indicate that moralization affects negative--but not positive--trait assumptions, and that the effect of morality on negative assumptions cannot be explained merely by people's general (nonmoral) preferences or other factors that distinguish moral and nonmoral traits, such as controllability or desirability. Together, these results suggest that one of the more destructive human tendencies--making negative assumptions about others--can be caused by the better angels of our nature. PMID:26984017

  3. 7 CFR 1980.476 - Transfer and assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., need not be consulted on a transfer and assumption case unless there is a change in loan terms. (p) If... on Line 24 as Net Collateral (Recovery). Approved protective advances and accrued interest...

  4. 7 CFR 1980.476 - Transfer and assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., need not be consulted on a transfer and assumption case unless there is a change in loan terms. (p) If... on Line 24 as Net Collateral (Recovery). Approved protective advances and accrued interest...

  5. Supporting calculations and assumptions for use in WESF safety analysis

    SciTech Connect

    Hey, B.E.

    1997-03-07

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  6. Development of long operating cycle simplified BWR

    SciTech Connect

    Heki, H.; Nakamaru, M.; Maruya, T.; Hiraiwa, K.; Arai, K.; Narabayash, T.; Aritomi, M.

    2002-07-01

    This paper describes an innovative plant concept for a long operating cycle simplified BWR (LSBWR). The plant concept addresses 1) a long operating cycle (3 to 15 years), 2) simplified systems and building, and 3) factory fabrication in modules. The long operating cycle core design is based on medium-enriched U-235 with burnable poison. Simplified systems and building are realized by using natural circulation with a bottom-located core, an internal CRD, a PCV with a passive system, and an integrated reactor and turbine building. The LSBWR concept achieves a high degree of safety through IVR (In-Vessel Retention) capability, a large water inventory above the core region, and no PCV vent to the environment, owing to the PCCS (Passive Containment Cooling System) and internal vent tank. The integrated building concept could realize a highly modular arrangement in a hull structure (ship frame structure), ease of seismic isolation, and high applicability of standardization and factory fabrication. (authors)

  7. Simplified models for exotic BSM searches

    NASA Astrophysics Data System (ADS)

    Heisig, Jan; Lessa, Andre; Quertenmont, Loic

    2015-12-01

    Simplified models are a successful way of interpreting current LHC searches for models beyond the standard model (BSM). So far simplified models have focused on topologies featuring a missing transverse energy (MET) signature. However, in some BSM theories other, more exotic, signatures occur. If a charged particle becomes long-lived on collider time scales — as is the case in parts of the SUSY parameter space — it leads to a very distinct signature. We present an extension of the computer package SModelS which includes simplified models for heavy stable charged particles (HSCP). As a physical application we investigate the CMSSM stau co-annihilation strip containing long-lived staus, which presents a potential solution to the Lithium problem. Applying both MET and HSCP constraints we show that, for low values of tan β, this entire region of parameter space either violates Dark Matter constraints or is excluded by LHC searches.

  8. Hypersonic Vehicle Propulsion System Simplified Model Development

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.; Raitano, Paul; Le, Dzu K.; Ouzts, Peter

    2007-01-01

    This document addresses the modeling task plan for the hypersonic GN&C GRC team members. The overall propulsion system modeling task plan is a multi-step process, and the task plan identified in this document addresses the first steps (short-term modeling goals). The procedures and tools produced from this effort will be useful for creating simplified dynamic models applicable to a hypersonic vehicle propulsion system. The document continues with the GRC short-term modeling goal. Next, a general description of the desired simplified model is presented along with simulations that are available to varying degrees. The simulations may be available in electronic form (FORTRAN, CFD, MatLab,...) or in paper form in published documents. Finally, roadmaps outlining possible avenues toward realizing the simplified model are presented.

  9. Citizen preparedness for disasters: are current assumptions valid?

    PubMed

    Uscher-Pines, Lori; Chandra, Anita; Acosta, Joie; Kellermann, Arthur

    2012-06-01

    US government programs and communications regarding citizen preparedness for disasters rest on several untested, and therefore unverified, assumptions. We explore the assumptions related to citizen preparedness promotion and argue that in spite of extensive messaging about the importance of citizen preparedness and countless household surveys purporting to track the preparedness activities of individuals and households, the role individual Americans are being asked to play is largely based on conventional wisdom. Recommendations for conceptualizing and measuring citizen preparedness are discussed. PMID:22700027

  10. Heavy Flavor Simplified Models at the LHC

    SciTech Connect

    Essig, Rouven; Izaguirre, Eder; Kaplan, Jared; Wacker, Jay G.; /SLAC

    2012-04-03

    We consider a comprehensive set of simplified models that contribute to final states with top and bottom quarks at the LHC. These simplified models are used to create minimal search strategies that ensure optimal coverage of new heavy flavor physics involving the pair production of color octets and triplets. We provide a set of benchmarks that are representative of model space, which can be used by experimentalists to perform their own optimization of search strategies. For data sets larger than 1 fb^-1, same-sign dilepton and 3b search regions become very powerful. Expected sensitivities from existing and optimized searches are given.

  11. Simplified cyclic structural analyses of SSME turbine blades

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Manderscheid, J. M.

    1986-01-01

    Anisotropic high-temperature alloys are used to meet the safety and durability requirements of turbine blades for high-pressure turbopumps in reusable space propulsion systems. The applicability to anisotropic components of a simplified inelastic structural analysis procedure developed at the NASA Lewis Research Center is assessed. The procedure uses as input the history of the total strain at the critical crack initiation location computed from elastic finite-element analyses. Cyclic heat transfer and structural analyses are performed for the first stage high-pressure fuel turbopump blade of the space shuttle main engine. The blade alloy is directionally solidified MAR-M 246 (nickel base). The analyses are based on a typical test stand engine cycle. Stress-strain histories for the airfoil critical location are computed using both the MARC nonlinear finite-element computer code and the simplified procedure. Additional cases are analyzed in which the material yield strength is arbitrarily reduced to increase the plastic strains and, therefore, the severity of the problem. Good agreement is shown between the predicted stress-strain solutions from the two methods. The simplified analysis uses about 0.02 percent (5 percent with the required elastic finite-element analyses) of the CPU time used by the nonlinear finite element analysis.

  12. A simplified technique of performing splenorenal shunt (Omar's technique).

    PubMed

    Shah, Omar Javed; Robbani, Irfan

    2005-01-01

    The splenorenal shunt procedure introduced by Robert Linton in 1947 is still used today in those regions of the world where portal hypertension is a common problem. However, because most surgeons find Linton's shunt procedure technically difficult, we felt that a simpler technique was needed. We present the surgical details and results of 20 splenorenal anastomosis procedures performed within a period of 30 months. Half of the patients (Group I) underwent Linton's conventional technique of splenorenal shunt; the other half (Group II) underwent a newly devised, simplified shunt technique. This new technique involves dissection of the fusion fascia of Toldt. The outcome of the 2 techniques was identical with respect to the reduction of preshunt portal pressure. However, our simplified technique was advantageous in that it significantly reduced the duration of surgery (P <0.001) and the amount of intraoperative blood loss (P <0.003). No patient died after either operation. Although Linton's splenorenal shunt is difficult and technically demanding, it is still routinely performed. The new technique described here, in addition to being simpler, helps achieve good vascular control, permits easier dissection of the splenic vein, enables an ideal anastomosis, decreases intraoperative blood loss, and reduces the duration of surgery. Therefore, we recommend the routine use of this simplified technique (Omar's technique) for the surgical treatment of portal hypertension. PMID:16429901

  13. Food additives

    MedlinePlus

    Food additives are substances that become part of a food product when they are added during the processing or making of that food. "Direct" food additives are often added during processing to: Add nutrients ...

  14. Gaining Algorithmic Insight through Simplifying Constraints.

    ERIC Educational Resources Information Center

    Ginat, David

    2002-01-01

    Discusses algorithmic problem solving in computer science education, particularly algorithmic insight, and focuses on the relevance and effectiveness of the heuristic simplifying constraints which involves simplification of a given problem to a problem in which constraints are imposed on the input data. Presents three examples involving…

  15. Simplified modeling for infiltration and radon entry

    SciTech Connect

    Sherman, M.H.

    1992-08-01

    Air leakage in the envelopes of residential buildings is the primary mechanism for providing ventilation to those buildings. For radon, the same mechanisms that drive the ventilation drive the radon entry. This paper attempts to provide a simplified physical model that can be used to understand the interactions between the building leakage distribution, the forces that drive infiltration and ventilation, and indoor radon concentrations. Combining ventilation and entry modeling allows an estimate of radon concentration and exposure to be made and demonstrates how changes in the envelope or ventilation system would affect it. This paper develops simplified modeling approaches for estimating both the ventilation rate and the radon entry rate based on the air tightness of the envelope and the driving forces. These approaches use conventional leakage values (i.e., effective leakage area) to quantify the air tightness and include natural and mechanical driving forces. This paper introduces a simplified parameter, the Radon Leakage Area, that quantifies the resistance to radon entry. To be practical for dwellings, modeling of occupant exposures to indoor pollutants must be simple to use and must not require unreasonable input data. This paper presents the derivation of the simplified physical model and applies that model to representative situations to explore the tendencies to be expected under different circumstances.
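
    The effective leakage area (ELA) quantity mentioned above is conventionally tied to airflow through the orifice-flow relation. A hedged sketch with illustrative numbers, not values from the paper:

```python
import math

def infiltration_flow(ela_m2, delta_p_pa, rho=1.2):
    """Volumetric airflow (m^3/s) through an effective leakage area under a
    driving pressure difference: Q = ELA * sqrt(2 * dP / rho)."""
    return ela_m2 * math.sqrt(2.0 * delta_p_pa / rho)

def air_change_rate(flow_m3s, volume_m3):
    """Air changes per hour for a given infiltration flow and building volume."""
    return flow_m3s * 3600.0 / volume_m3

# A 0.05 m^2 ELA driven at 4 Pa in a 300 m^3 dwelling:
q = infiltration_flow(0.05, 4.0)    # about 0.129 m^3/s
ach = air_change_rate(q, 300.0)     # about 1.55 air changes per hour
```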

  16. Simplified Tutorial Programming for Interactive CAI.

    ERIC Educational Resources Information Center

    Jelden, D. L.

    A validated instructional model generated on a large mainframe computer by the military was modified to a microcomputer format for use in programming tutorial computer assisted instruction (CAI) materials, and a simplified, compatible system of generating programs was identified--CP/M and MP/M from Digital Research Corporation. In order to…

  17. Simplified Fabrication of Helical Copper Antennas

    NASA Technical Reports Server (NTRS)

    Petro, Andrew

    2006-01-01

    A simplified technique has been devised for fabricating helical antennas for use in experiments on radio-frequency generation and acceleration of plasmas. These antennas are typically made of copper (for electrical conductivity) and must have a specific helical shape and precise diameter.

  18. Simplifying Data. USMES Beginning "How To" Set.

    ERIC Educational Resources Information Center

    Agro, Sally; And Others

    In this set of three booklets on simplifying data, primary grade students learn how to round off data and to find the median and average from sets of data. The major emphasis in all Unified Sciences and Mathematics for Elementary Schools (USMES) units is on open-ended, long-range investigations of real problems. In most instances students learn…

  19. Simplifying Data. USMES Intermediate "How To" Set.

    ERIC Educational Resources Information Center

    Agro, Sally; And Others

    In this set of six booklets on simplifying data, intermediate grade students learn how to tell what data show, find the median/mean/mode from sets of data, find different kinds of ranges, and use key numbers to compare two sets of data. The major emphasis in all Unified Sciences and Mathematics for Elementary Schools (USMES) units is on…
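
    The summary statistics these booklets teach (mean, median, mode, range) have direct counterparts in Python's standard library; a small illustrative example:

```python
import statistics

data = [3, 7, 7, 2, 9, 4, 7, 5]

mean = statistics.mean(data)          # 44 / 8 = 5.5
median = statistics.median(data)      # average of the two middle values: 6.0
mode = statistics.mode(data)          # most frequent value: 7
data_range = max(data) - min(data)    # 9 - 2 = 7
```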

  20. Simplified Recipes for Day Care Centers.

    ERIC Educational Resources Information Center

    Asmussen, Patricia D.

    The spiral-bound collection of 156 simplified recipes is designed to help those who prepare food for groups of children at day care centers. The recipes provide for 25 child-size servings to meet the nutritional needs and appetites of children from 2 to 6 years of age. The first section gives general information on ladle and scoop sizes, weights…

  1. Simplified procedures for designing composite bolted joints

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1988-01-01

    Simplified procedures are described to design and analyze single and multi-bolt composite joints. Numerical examples illustrate the use of these methods. Factors affecting composite bolted joints are summarized. References are cited where more detailed discussion is presented on specific aspects of composite bolted joints. Design variables associated with these joints are summarized in the appendix.

  2. Simplified Aid For Crew Rescue (SAFR)

    NASA Technical Reports Server (NTRS)

    Fisher, H. Thomas

    1990-01-01

    Viewgraphs and discussion of a Crew Emergency Rescue System (CERS) are presented. Topics covered include: functional description; operational description; interfaces with other subsystems/elements; simplified aid for crew rescue (SACR) characteristics; potential resource requirements; logistics, repair, and resupply; potential performance improvements; and automation impact.

  3. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  4. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.

  5. Simplifying CEA through Excel, VBA, and Subeq

    NASA Technical Reports Server (NTRS)

    Foster, Ryan

    2004-01-01

    Many people use compound equilibrium programs for very different reasons, varying from refrigerators to light bulbs to rockets. A commonly used equilibrium program is CEA. CEA can take various inputs such as pressure, temperature, and volume along with numerous reactants and run them through equilibrium equations to obtain valuable output information, including the products formed and their relative amounts. A little over a year ago, Bonnie McBride created the program subeq with the goal of simplifying the calling of CEA. Subeq was also designed to be called by other programs, including Excel, through the use of Visual Basic for Applications (VBA). The largest advantage of using Excel is that it allows the user to input the information in a colorful and user-friendly environment while allowing VBA to run subeq, which is in the form of a FORTRAN DLL (Dynamic Link Library). Calling subeq in this form makes it much faster than if it were converted to VBA. Since subeq requires such large lists of reactant and product names, which cannot all be passed in as an array, subeq had to be changed to accept very long strings of reactants and products. To pass this string and adjust the transfer of input and output parameters, the subeq DLL had to be changed. One program that does this is Compaq Visual FORTRAN, which allows DLLs to be edited, debugged, and compiled. Compaq Visual FORTRAN uses FORTRAN 90/95, which has additional features beyond those of FORTRAN 77. My goals this summer include finishing up the Excel spreadsheet of subeq, which I started last summer, and putting it on the Internet so that others can use it without having to download my spreadsheet. To finish up the spreadsheet I will need to work on debugging current options and problems. I will also work on making it as robust as possible, so that any errors that arise will be clearly communicated to the user. New features will be added and old ones changed as I receive comments from people using the spreadsheet.

  6. The steady-state assumption in oscillating and growing systems.

    PubMed

    Reimers, Alexandra-M; Reimers, Arne C

    2016-10-01

    The steady-state assumption, which states that the production and consumption of metabolites inside the cell are balanced, is one of the key aspects that makes an efficient analysis of genome-scale metabolic networks possible. It can be motivated from two different perspectives. In the time-scales perspective, we use the fact that metabolism is much faster than other cellular processes such as gene expression. Hence, the steady-state assumption is derived as a quasi-steady-state approximation of the metabolism that adapts to the changing cellular conditions. In this article we focus on the second perspective, which states that in the long run no metabolite can accumulate or deplete. In contrast to the first perspective, it is not immediately clear how this perspective can be captured mathematically and what assumptions are required to obtain the steady-state condition. By presenting a mathematical framework based on the second perspective, we demonstrate that the assumption of steady-state also applies to oscillating and growing systems without requiring quasi-steady-state at any time point. However, we also show that the average concentrations may not be compatible with the average fluxes. In summary, we establish a mathematical foundation for the steady-state assumption for long time periods that justifies its successful use in many applications. Furthermore, this mathematical foundation also pinpoints unintuitive effects in the integration of metabolite concentrations using nonlinear constraints into steady-state models for long time periods. PMID:27363728
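
    The steady-state condition described here is conventionally written as S v = 0, where S is the stoichiometric matrix and v the flux vector. A minimal sketch on a hypothetical three-reaction chain (the network is invented for illustration):

```python
import numpy as np

# Stoichiometric matrix S (rows: internal metabolites A, B; columns: reactions)
#   v1: -> A,   v2: A -> B,   v3: B ->
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

def is_steady_state(S, v, tol=1e-9):
    """True if the flux vector v neither accumulates nor depletes any metabolite."""
    return bool(np.all(np.abs(S @ v) < tol))

v_balanced = np.array([2.0, 2.0, 2.0])    # production of each metabolite equals consumption
v_unbalanced = np.array([2.0, 1.0, 1.0])  # metabolite A accumulates over time
```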

  7. Why is it Doing That? - Assumptions about the FMS

    NASA Technical Reports Server (NTRS)

    Feary, Michael; Immanuel, Barshi; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    In the glass cockpit, it's not uncommon to hear exclamations such as "why is it doing that?". Sometimes pilots ask "what were they thinking when they set it this way?" or "why doesn't it tell me what it's going to do next?". Pilots may hold a conceptual model of the automation that is the result of fleet lore, which may or may not be consistent with what the engineers had in mind. But what did the engineers have in mind? In this study, we present some of the underlying assumptions surrounding the glass cockpit. Engineers and designers make assumptions about the nature of the flight task; at the other end, instructor and line pilots make assumptions about how the automation works and how it was intended to be used. These underlying assumptions are seldom recognized or acknowledged. This study is an attempt to explicitly articulate such assumptions to better inform design and training developments. This work is part of a larger project to support training strategies for automation.

  8. Development of a simplified biofilm model

    NASA Astrophysics Data System (ADS)

    Sarkar, Sushovan; Mazumder, Debabrata

    2015-11-01

    A simplified approach for analyzing the biofilm process and deriving an easy-to-use model is presented. This simplified biofilm model formulates correlations between the substrate concentration in the influent/effluent and at the biofilm-liquid interface, along with the substrate flux and biofilm thickness. The model considers external mass transport according to Fick's law and steady-state substrate and biomass balances for the attached-growth microorganisms. For substrate utilization, Monod growth kinetics is followed, incorporating the relevant boundary conditions at the liquid-biofilm interface and at the attachment surface. The numerical solution of the equations is accomplished using the Runge-Kutta method, and accordingly an integrated computer program was developed. The model has been successfully applied in a distinct set of trials with a varying range of representative input variables. The model performance was compared with available existing methods, and it was found to be an easy, accurate method that can be used for the process design of biofilm reactors.
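
    Under the stated structure (Fick's-law transport, Monod kinetics, Runge-Kutta integration), the substrate balance inside the biofilm takes the form D S'' = q(S). A hedged sketch, with illustrative parameter values rather than the authors' program, integrates outward from a zero-flux attachment surface with classical RK4:

```python
def monod_uptake(s, q_max, Ks, X):
    """Monod substrate-utilization rate."""
    return q_max * X * s / (Ks + s)

def biofilm_profile(S_base, D, q_max, Ks, X, L, n=1000):
    """Integrate D * S'' = q(S) across a biofilm of thickness L, starting from
    a zero-flux condition at the attachment surface (z = 0), using RK4 on the
    first-order system y = (S, S')."""
    h = L / n
    S, dS = S_base, 0.0

    def f(s, ds):
        return ds, monod_uptake(max(s, 0.0), q_max, Ks, X) / D

    for _ in range(n):
        k1 = f(S, dS)
        k2 = f(S + h / 2 * k1[0], dS + h / 2 * k1[1])
        k3 = f(S + h / 2 * k2[0], dS + h / 2 * k2[1])
        k4 = f(S + h * k3[0], dS + h * k3[1])
        S += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dS += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return S, dS  # substrate and gradient at the biofilm-liquid interface

S_surface, grad = biofilm_profile(S_base=1.0, D=1e-4, q_max=2.0, Ks=5.0, X=10.0, L=0.01)
```

    Because uptake consumes substrate throughout the film, the concentration at the liquid interface exceeds the base value and the positive gradient drives substrate flux into the biofilm.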

  9. Simplified models of mixed dark matter

    SciTech Connect

    Cheung, Clifford; Sanford, David E-mail: dsanford@caltech.edu

    2014-02-01

    We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify "blind spots" at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin-independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors.

  10. Two simplified procedures for predicting cyclic material response from a strain history

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Moreno, V.

    1985-01-01

    Simplified inelastic analysis procedures were developed at NASA Lewis and Pratt & Whitney Aircraft for predicting the stress-strain response at the critical location of a thermomechanically cycled structure. These procedures are intended primarily for use as economical structural analysis tools in the early design stages of aircraft engine hot section components where nonlinear finite-element analyses would be prohibitively expensive. Both simplified methods use as input the total strain history calculated from a linear elastic analysis. The elastic results are modified to approximate the characteristics of the inelastic cycle by incremental solution techniques. A von Mises yield criterion is used to determine the onset of active plasticity. The fundamental assumption of these methods is that the inelastic strain is local and constrained from redistribution by the surrounding elastic material.
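
    The von Mises yield criterion used here to detect the onset of active plasticity is easy to state directly; the stress values below are illustrative:

```python
import math

def von_mises(sx, sy, sz, txy=0.0, tyz=0.0, tzx=0.0):
    """Effective (von Mises) stress from the six stress components."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

def is_yielding(stress_vm, sigma_y):
    """Active plasticity begins when the effective stress reaches yield."""
    return stress_vm >= sigma_y

# Uniaxial tension: the effective stress equals the applied stress.
vm = von_mises(350.0, 0.0, 0.0)   # 350.0
```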

  11. A simplified solar cell array modelling program

    NASA Technical Reports Server (NTRS)

    Hughes, R. D.

    1982-01-01

    As part of the energy conversion/self sufficiency efforts of DSN engineering, it was necessary to have a simplified computer model of a solar photovoltaic (PV) system. This article describes the analysis and simplifications employed in the development of a PV cell array computer model. The analysis of the incident solar radiation, steady state cell temperature and the current-voltage characteristics of a cell array are discussed. A sample cell array was modelled and the results are presented.
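
    Current-voltage characteristics of a PV cell are commonly represented by a single-diode model; the sketch below assumes that form with illustrative parameters (the article's actual cell model may differ):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
Q_E = 1.602177e-19   # elementary charge, C

def cell_current(v, i_ph, i_0=1e-9, n=1.3, t_cell=298.15):
    """Single-diode PV cell model: I = I_ph - I_0 * (exp(V / (n * Vt)) - 1),
    where Vt = k_B * T / q is the thermal voltage (~25.7 mV at 25 C)."""
    vt = K_B * t_cell / Q_E
    return i_ph - i_0 * (math.exp(v / (n * vt)) - 1.0)

# At short circuit the cell delivers the full photocurrent; the current
# drops toward zero as the voltage approaches open circuit.
i_sc = cell_current(0.0, i_ph=3.0)   # 3.0 A
```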

  12. Simplified robot arm dynamics for control

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.; Paul, R. P.

    1981-01-01

    A brief summary and evaluation are presented on the use of symbolic state-equation techniques to represent robot arm dynamics with sufficient accuracy for controlling arm motion. The use of homogeneous transformations and the Lagrangian formulation of mechanics offers a convenient framework for the derivation, analysis, and simplification of complex robot dynamics equations. It is pointed out that simplified state equations can represent robot arm dynamics with good accuracy.

  13. Simplified dichromated gelatin hologram recording process

    NASA Technical Reports Server (NTRS)

    Georgekutty, Tharayil G.; Liu, Hua-Kuang

    1987-01-01

    A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and it requires a period of processing time comparable with that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.

  14. Simplified Linear Multivariable Control Of Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    Simplified method developed to design control system that makes joints of robot follow reference trajectories. Generic design includes independent multivariable feedforward and feedback controllers. Feedforward controller based on inverse of linearized model of dynamics of robot and implements control law that contains only proportional and first and second derivatives of reference trajectories with respect to time. Feedback controller, which implements control law of proportional, first-derivative, and integral terms, makes tracking errors converge toward zero as time passes.
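
    The control law described above combines a feedforward term, built from the reference trajectory and its first two time derivatives, with PID feedback on the tracking error. A minimal sketch with placeholder coefficients (the actual gains would come from the linearized robot model):

```python
def feedforward(qd, qd_dot, qd_ddot, a0, a1, a2):
    """Feedforward command proportional to the reference trajectory and its
    first and second time derivatives."""
    return a0 * qd + a1 * qd_dot + a2 * qd_ddot

def feedback(e, e_dot, e_int, kp, kd, ki):
    """PID feedback on the tracking error drives it toward zero."""
    return kp * e + kd * e_dot + ki * e_int

# With zero tracking error the command reduces to pure feedforward.
u = feedforward(1.0, 0.5, 0.0, a0=2.0, a1=0.4, a2=1.5) \
    + feedback(0.0, 0.0, 0.0, kp=10.0, kd=2.0, ki=1.0)
```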

  15. Transformation in Reverse: Naive Assumptions of an Urban Educator

    ERIC Educational Resources Information Center

    Hagiwara, Sumi; Wray, Susan

    2009-01-01

    The complexity of urban contexts is often subsumed into generalizations and deficit assumptions of urban communities and its members by those unfamiliar with urban culture. This is especially true for teachers seeking work in urban schools. This article addresses the complex interpretations of urban through the lens of a White male graduate…

  16. 7 CFR 4287.134 - Transfer and assumption.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 15 2012-01-01 2012-01-01 false Transfer and assumption. 4287.134 Section 4287.134 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE SERVICING Servicing Business and Industry...

  17. 7 CFR 4287.134 - Transfer and assumption.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 15 2013-01-01 2013-01-01 false Transfer and assumption. 4287.134 Section 4287.134 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE SERVICING Servicing Business and Industry...

  18. 7 CFR 4287.134 - Transfer and assumption.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 15 2014-01-01 2014-01-01 false Transfer and assumption. 4287.134 Section 4287.134 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE SERVICING Servicing Business and Industry...

  19. 7 CFR 4287.134 - Transfer and assumption.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 15 2011-01-01 2011-01-01 false Transfer and assumption. 4287.134 Section 4287.134 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE SERVICING Servicing Business and Industry...

  20. Sensitivity Analysis for Hierarchical Models Employing "t" Level-1 Assumptions.

    ERIC Educational Resources Information Center

    Seltzer, Michael; Novak, John; Choi, Kilchan; Lim, Nelson

    2002-01-01

    Examines the ways in which level-1 outliers can impact the estimation of fixed effects and random effects in hierarchical models (HMs). Also outlines and illustrates the use of Markov Chain Monte Carlo algorithms for conducting sensitivity analyses under "t" level-1 assumptions, including algorithms for settings in which the degrees of freedom at…

  1. 46 CFR 172.087 - Cargo loading assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 7 2014-10-01 2014-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...

  2. 46 CFR 172.087 - Cargo loading assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 7 2012-10-01 2012-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...

  3. 46 CFR 172.087 - Cargo loading assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...

  4. 46 CFR 172.087 - Cargo loading assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...

  5. 46 CFR 172.087 - Cargo loading assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...

  6. On the "Independence of Trials-Assumption" in Geometric Distribution

    ERIC Educational Resources Information Center

    Al-Saleh, Mohammad Fraiwan

    2008-01-01

    In this note, it is shown through an example that the assumption of the independence of Bernoulli trials in the geometric experiment may unexpectedly not be satisfied. The example can serve as a suitable and useful classroom activity for students in introductory probability courses.
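Under the independence assumption the note scrutinizes, the number of Bernoulli(p) trials up to and including the first success follows the geometric law P(X = k) = (1 − p)^(k−1)·p. A quick numerical check of what that assumption buys (p = 0.3 chosen arbitrarily):

```python
# Geometric pmf implied by independent Bernoulli(p) trials.
p = 0.3
pmf = lambda k: (1 - p) ** (k - 1) * p

# The probabilities sum to 1 and the mean equals 1/p (up to a negligible tail).
total = sum(pmf(k) for k in range(1, 200))
mean = sum(k * pmf(k) for k in range(1, 200))
print(round(total, 6), round(mean, 4))   # ≈ 1.0 and ≈ 1/p = 3.3333
```

When the trials are not independent, as in the note's example, neither identity need hold.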

  7. Assumptions Underlying the Identification of Gifted and Talented Students

    ERIC Educational Resources Information Center

    Brown, Scott W.; Renzuli, Joseph S.; Gubbins, E. Jean; Siegle, Del; Zhang, Wanli; Chen, Ching-Hui

    2005-01-01

    This study examined a national sample of classroom teachers, teachers of the gifted, administrators, and consultants from rural, suburban, and urban areas regarding their assumptions about the gifted identification process. Respondents indicated the degree to which they agreed or disagreed with 20 items that reflected guidelines for a…

  8. The quantum formulation derived from assumptions of epistemic processes

    NASA Astrophysics Data System (ADS)

    Helland, Inge S.

    2015-04-01

    Motivated by Quantum Bayesianism I give background for a general epistemic approach to quantum mechanics, where complementarity and symmetry are the only essential features. A general definition of a symmetric epistemic setting is introduced, and for this setting the basic Hilbert space formalism is arrived at under certain technical assumptions. Other aspects of ordinary quantum mechanics will be developed from the same basis elsewhere.

  9. Educational Expansion in Ghana: Economic Assumptions and Expectations

    ERIC Educational Resources Information Center

    Rolleston, Caine; Oketch, Moses

    2008-01-01

    The neo-classical "human capital theory" continues to be invoked as part of the rationale for educational expansion in the developing world. While the theory provides a route from educational inputs to economic outputs in terms of increased incomes and standards of living, the route is contingent and relies upon a number of key assumptions. This…

  10. 7 CFR 1980.476 - Transfer and assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 14 2011-01-01 2011-01-01 false Transfer and assumptions. 1980.476 Section 1980.476 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS-COOPERATIVE SERVICE, RURAL UTILITIES SERVICE, AND FARM SERVICE AGENCY, DEPARTMENT OF AGRICULTURE (CONTINUED) PROGRAM REGULATIONS...

  11. 7 CFR 1980.366 - Transfer and assumption.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 14 2013-01-01 2013-01-01 false Transfer and assumption. 1980.366 Section 1980.366 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS-COOPERATIVE SERVICE, RURAL UTILITIES SERVICE, AND FARM SERVICE AGENCY, DEPARTMENT OF AGRICULTURE (CONTINUED) PROGRAM REGULATIONS...

  12. 7 CFR 1980.366 - Transfer and assumption.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 14 2012-01-01 2012-01-01 false Transfer and assumption. 1980.366 Section 1980.366 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS-COOPERATIVE SERVICE, RURAL UTILITIES SERVICE, AND FARM SERVICE AGENCY, DEPARTMENT OF AGRICULTURE (CONTINUED) PROGRAM REGULATIONS...

  13. 14 CFR 27.473 - Ground loading conditions and assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Ground loading conditions and assumptions. 27.473 Section 27.473 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground...

  14. 14 CFR 27.473 - Ground loading conditions and assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Ground loading conditions and assumptions. 27.473 Section 27.473 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground...

  15. 40 CFR 264.150 - State assumption of responsibility.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 26 2011-07-01 2011-07-01 false State assumption of responsibility. 264.150 Section 264.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE TREATMENT, STORAGE, AND...

  16. 40 CFR 264.150 - State assumption of responsibility.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility. 264.150 Section 264.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE TREATMENT, STORAGE, AND...

  17. Errors in surface irrigation evaluation from incorrect model assumptions

    Technology Transfer Automated Retrieval System (TEKTRAN)

Some two-dozen methods have been proposed in the literature for estimating an infiltration function from field measurements. These methods vary in their data requirements and analytical rigor; however, most assume some functional form of the infiltration equations. The assumptions regarding the influ...

  18. Viruses, Murphy's Law, and the Dangers of Assumptions....

    ERIC Educational Resources Information Center

    Lester, Dan

    1999-01-01

    An experienced library technology manager relates what happened in the wake of a serious library computer virus attack, which he accidentally unleashed. The narrative describes the combination of coincidences, mistakes, assumptions, and delays that caused the incident, and outlines the 10 key lessons learned. (AEF)

  19. Philosophical Assumptions and Contemporary Research Perspectives: A Course Supplement.

    ERIC Educational Resources Information Center

    Fowler, Gene D.

    To supplement course materials for classes in communication theory and research methods, this paper compares philosophical assumptions underlying three approaches to communication research: scientific, which stresses quantitative methods of analysis; humanistic, which encompasses many conflicting techniques but has as a common element--the…

  20. The Assumptive World of Three State Policy Researchers.

    ERIC Educational Resources Information Center

    Sroufe, Gerald E.

    1985-01-01

A critique of a research study regarding policy formation at the state level is presented, focusing on the "assumptive world" of the researchers. While the researchers have created a new vista for study in this area, there is a great need for improved methodology. (CB)

  1. Ten Frequent Assumptions of Cultural Bias in Counseling.

    ERIC Educational Resources Information Center

    Pedersen, Paul

    1987-01-01

    Identifies 10 of the most frequently encountered examples of cultural bias that consistently emerge in the literature about multicultural counseling and development. Assumptions are described in the areas of normal behavior, individualism, limits of academic disciplines, dependence on abstract words, independence, client support systems, linear…

  2. Spatial Angular Compounding for Elastography without the Incompressibility Assumption

    PubMed Central

    Rao, Min; Varghese, Tomy

    2007-01-01

Spatial-angular compounding is a new technique that enables the reduction of noise artifacts in ultrasound elastography. Previous results using spatial angular compounding, however, were based on the use of the tissue incompressibility assumption. Compounded elastograms were obtained from a spatially-weighted average of local strain estimated from radiofrequency echo signals acquired at different insonification angles. In this paper, we present a new method for reducing the noise artifacts in the axial strain elastogram utilizing a least-squares approach on the angular displacement estimates that does not use the incompressibility assumption. This method produces axial strain elastograms with higher image quality, compared to noncompounded axial strain elastograms, and is referred to as the least-squares angular-compounding approach for elastography. To distinguish between these two angular compounding methods, the spatial-angular compounding with angular weighting based on the tissue incompressibility assumption is referred to as weighted compounding. In this paper, we compare the performance of the two angular-compounding techniques for elastography using beam steering on a linear-array transducer. Quantitative experimental results demonstrate that least-squares compounding provides comparable but smaller improvements in both the elastographic signal-to-noise ratio and the contrast-to-noise ratio, as compared to the weighted-compounding method. Ultrasound simulation results suggest that the least-squares compounding method performs better and provides accurate and robust results when compared to the weighted compounding method, in the case where the incompressibility assumption does not hold. PMID:16761786

  3. Evaluation of assumptions in soil moisture triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...
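The covariance-based form of the triple collocation estimator can be sketched on synthetic data. All numbers below are invented for illustration, and the sketch assumes exactly the conditions the abstract names: errors that are mutually independent and independent of the true signal.

```python
import random
random.seed(0)

# Three products observing the same truth with independent zero-mean errors.
N = 200_000
truth = [random.gauss(0.0, 1.0) for _ in range(N)]
sx, sy, sz = 0.3, 0.5, 0.4                  # true error standard deviations
x = [t + random.gauss(0, sx) for t in truth]
y = [t + random.gauss(0, sy) for t in truth]
z = [t + random.gauss(0, sz) for t in truth]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

# Classic TC estimator for the error variance of x: Cxx - Cxy*Cxz/Cyz.
ex2 = cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z)
print(ex2)   # ≈ sx**2 = 0.09
```

If the independence assumptions are violated (e.g., correlated errors between two products), the cross-covariance terms no longer cancel and the estimate is biased, which is the kind of sensitivity the abstract evaluates.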

  4. Making Predictions about Chemical Reactivity: Assumptions and Heuristics

    ERIC Educational Resources Information Center

    Maeyer, Jenine; Talanquer, Vicente

    2013-01-01

    Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students'…

  5. Challenging Our Assumptions: Helping a Baby Adjust to Center Care.

    ERIC Educational Resources Information Center

    Elliot, Enid

    2003-01-01

    Contends that assumptions concerning infants' adjustment to child center care need to be tempered with attention to observation, thought, and commitment to each individual baby. Describes the Options Daycare program for pregnant teens and young mothers. Presents a case study illustrating the need for openness in strategy and planning for…

  6. Woman's Moral Development in Search of Philosophical Assumptions.

    ERIC Educational Resources Information Center

    Sichel, Betty A.

    1985-01-01

    Examined is Carol Gilligan's thesis that men and women use different moral languages to resolve moral dilemmas, i.e., women speak a language of caring and responsibility, and men speak a language of rights and justice. Her thesis is not grounded with adequate philosophical assumptions. (Author/RM)

  7. Assessing and Developing the Concept of Assumptions in Science Teachers.

    ERIC Educational Resources Information Center

    Yip, Din Yan

    2001-01-01

    Describes a method using small group and whole class discussions with guiding questions to enable teachers to construct successfully the concept of assumptions and develop a better appreciation of the nature and limitations of the process of scientific inquiry. (Author/SAH)

  8. 40 CFR 261.150 - State assumption of responsibility.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility. 261.150 Section 261.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Financial Requirements for Management...

  9. Assessment of Complex Performances: Limitations of Key Measurement Assumptions.

    ERIC Educational Resources Information Center

    Delandshere, Ginette; Petrosky, Anthony R.

    1998-01-01

    Examines measurement concepts and assumptions traditionally used in educational assessment, using the Early Adolescence/English Language Arts assessment developed for the National Board for Professional Teaching Standards as a context. The use of numerical ratings in complex performance assessment is questioned. (SLD)

  10. Male and Female Assumptions About Colleagues' Views of Their Competence.

    ERIC Educational Resources Information Center

    Heilman, Madeline E.; Kram, Kathy E.

    1983-01-01

    Compared the assumptions of 100 male and female employees about colleagues' views of their performance on a joint task. Results indicated women anticipated more blame for a joint failure, less credit for a joint success, and a work image of lesser effectiveness, regardless of the co-worker's sex. (JAC)

  11. Quantum cryptography in real-life applications: Assumptions and security

    NASA Astrophysics Data System (ADS)

    Zhao, Yi

Quantum cryptography, or quantum key distribution (QKD), provides a means of unconditionally secure communication. The security is in principle based on the fundamental laws of physics. Security proofs show that if quantum cryptography is appropriately implemented, even the most powerful eavesdropper cannot decrypt the message from a cipher. The implementations of quantum cryptosystems in real life may not fully comply with the assumptions made in the security proofs. Such a discrepancy between the experiment and the theory can be fatal to the security of a QKD system. In this thesis we address a number of these discrepancies. A perfect single-photon source is often assumed in many security proofs. However, a weak coherent source is widely used in a real-life QKD implementation. Decoy state protocols have been proposed as a novel approach to dramatically improve the performance of a weak coherent source based QKD implementation without jeopardizing its security. Here, we present the first experimental demonstrations of decoy state protocols. Our experimental scheme was later adopted by most decoy state QKD implementations. In the security proof of decoy state protocols as well as many other QKD protocols, it is widely assumed that a sender generates a phase-randomized coherent state. Few implementations, however, have enforced this assumption. We close this gap in two steps: First, we implement and verify the phase randomization experimentally; second, we prove the security of a QKD implementation without the coherent state assumption. In many security proofs of QKD, it is assumed that all the detectors on the receiver's side have identical detection efficiencies. We show experimentally that this assumption may be violated in a commercial QKD implementation due to an eavesdropper's malicious manipulation. Moreover, we show that the eavesdropper can learn part of the final key shared by the legitimate users as a consequence of this violation of the assumptions.

  12. Tests of the critical assumptions of the dilution method for estimating bacterivory by microeucaryotes.

    PubMed

    Tremaine, S C; Mills, A L

    1987-12-01

    The critical assumptions of the dilution method for estimating grazing rates of microzooplankton were tested by using a community from the sediment-water interface of Lake Anna, Va. Determination of the appropriate computational model was achieved by regression analysis; the exponential model was appropriate for bacterial growth at Lake Anna. The assumption that the change in grazing pressure is linearly proportional to the dilution factor was tested by analysis of variance with a lack-of-fit test. There was a significant (P < 0.0001) linear (P > 0.05) relationship between the dilution factor and time-dependent change in ln bacterial abundance. The assumption that bacterial growth is not altered by possible substrate enrichment in the dilution treatment was tested by amending diluted water with various amounts of dissolved organic carbon (either yeast extract or extracted carbon from lake sediments). Additions of carbon did not significantly alter bacterial growth rates during the incubation period (24 h). On the basis of these results, the assumptions of the dilution method proved to be valid for the system examined. PMID:16347507
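The regression at the heart of the dilution method can be sketched on synthetic data (the rates and dilution levels below are invented for illustration). Under the exponential growth model, the apparent net growth rate is linear in the dilution factor: the intercept gives the intrinsic growth rate and the negative slope gives the grazing rate, which is exactly the linearity assumption the lack-of-fit test checks.

```python
# Synthetic dilution experiment: apparent net growth rate k(D) = mu - g*D,
# where D is the fraction of undiluted (grazer-containing) water.
mu, g = 1.2, 0.8                      # assumed true growth and grazing rates (1/day)
D = [0.2, 0.4, 0.6, 0.8, 1.0]
k = [mu - g * d for d in D]           # ln(N_t/N_0)/t under the exponential model

# Ordinary least-squares fit of k against D recovers mu (intercept) and g (-slope).
n = len(D)
mD, mk = sum(D) / n, sum(k) / n
slope = sum((d - mD) * (v - mk) for d, v in zip(D, k)) / sum((d - mD) ** 2 for d in D)
intercept = mk - slope * mD
print(intercept, -slope)              # ≈ (1.2, 0.8)
```

With real data the residuals around this line carry the evidence for or against the linearity assumption; the noise-free sketch recovers the rates exactly.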

  13. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    NASA Astrophysics Data System (ADS)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, a model is needed that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.

  14. Provably-secure (Chinese government) SM2 and simplified SM2 key exchange protocols.

    PubMed

    Yang, Ang; Nam, Junghyun; Kim, Moonseong; Choo, Kim-Kwang Raymond

    2014-01-01

    We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863
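The ECDLP assumption underlying the proof can be illustrated with a toy elliptic-curve key agreement. This is a sketch only, not the SM2 protocol: the curve (a standard textbook example over F17, generator (5, 1) of order 19) is far too small for security, but it shows the structure whose hardness the proof relies on.

```python
# Toy elliptic-curve Diffie-Hellman on y^2 = x^3 + 2x + 2 (mod 17).
# Illustrative only -- a real deployment uses a ~256-bit curve.
P, A = 17, 2            # field prime and curve coefficient a
O = None                # point at infinity

def add(p, q):
    if p is O: return q
    if q is O: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                   # p + (-p) = O
    if p == q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, p):          # double-and-add scalar multiplication
    r = O
    while k:
        if k & 1: r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

G = (5, 1)              # generator of order 19
a, b = 7, 11            # the two parties' secret scalars
shared1 = mul(a, mul(b, G))   # party 1 computes a * (bG)
shared2 = mul(b, mul(a, G))   # party 2 computes b * (aG)
print(shared1 == shared2, mul(19, G) is O)   # True True
```

Recovering a from aG is the ECDLP; on this toy curve it is trivial by brute force, which is precisely why real protocols use curves with group orders around 2^256.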

  15. Simplified Analysis of Pulse Detonation Rocket Engine Blowdown Gasdynamics and Performance

    NASA Technical Reports Server (NTRS)

    Morris, C. I.; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

Pulse detonation rocket engines (PDREs) offer potential performance improvements over conventional designs, but represent a challenging modeling task. A simplified model for an idealized, straight-tube, single-shot PDRE blowdown process and thrust determination is described and implemented. In order to form an assessment of the accuracy of the model, the flowfield time history is compared to experimental data from Stanford University. Parametric studies of the effect of mixture stoichiometry, initial fill temperature, and blowdown pressure ratio on the performance of a PDRE are performed using the model. PDRE performance is also compared with a conventional steady-state rocket engine over a range of pressure ratios using similar gasdynamic assumptions.

  16. Provably-Secure (Chinese Government) SM2 and Simplified SM2 Key Exchange Protocols

    PubMed Central

    Nam, Junghyun; Kim, Moonseong

    2014-01-01

    We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863

  17. Food additives.

    PubMed

    Berglund, F

    1978-01-01

The use of additives to food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives; Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulsifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutrients, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey when used as artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing or storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI

  18. Experience with simplified inelastic analysis of piping designed for elevated temperature service

    SciTech Connect

    Severud, L.K.

    1980-03-01

    Screening rules and preliminary design of FFTF piping were developed in 1974 based on expected behavior and engineering judgment, approximate calculations, and a few detailed inelastic analyses of pipelines. This paper provides findings from six additional detailed inelastic analyses with correlations to the simplified analysis screening rules. In addition, simplified analysis methods for treating weldment local stresses and strains as well as fabrication induced flaws are described. Based on the FFTF experience, recommendations for future Code and technology work to reduce design analysis costs are identified.

  19. Simplified quaternary signed-digit arithmetic and its optical implementation

    NASA Astrophysics Data System (ADS)

    Li, Guoqiang; Liu, Liren; Cheng, Huiquan; Jing, Hongmei

    1997-02-01

A simplified two-step quaternary signed-digit addition algorithm is presented. In contrast to the previously reported techniques using a large number of six-variable or four-variable minterms, the proposed algorithm requires only 10 minterms in the first step and 6 minterms in the second step. Furthermore, our scheme uses only two variables for each minterm. Therefore, the information to be stored is greatly reduced and the system complexity is decreased. With a shared-content-addressable memory (SCAM), it needs to store only one set of minterms independent of the operand length, and consequently, the system size does not grow as the number of operand digits increases. For optical implementation, an incoherent correlator based SCAM processor unit can be used to perform the two-step addition. The unit is very simple, easy to align and implement, and insensitive to the environment. An experimental result is given.
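The carry-free two-step principle can be sketched for radix-4 signed digits in {−3, …, 3}. This is a generic QSD addition rule chosen for illustration, not the paper's 10-plus-6 minterm encoding: step 1 bounds each interim digit to |w| ≤ 2 while emitting a transfer digit in {−1, 0, 1}, so step 2 can combine them with no further carries.

```python
# Carry-free two-step addition of radix-4 signed-digit numbers
# (digit set {-3..3}, least-significant digit first).
def qsd_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    w, c = [], [0]                 # interim digits and transfer digits
    for ai, bi in zip(a, b):       # step 1: s in [-6, 6] -> |w| <= 2, c in {-1,0,1}
        s = ai + bi
        if s >= 3:
            c.append(1);  w.append(s - 4)
        elif s <= -3:
            c.append(-1); w.append(s + 4)
        else:
            c.append(0);  w.append(s)
    # step 2: w_i + c_i stays within {-3..3}, so no new carry can arise
    return [wi + ci for wi, ci in zip(w + [0], c)]

def value(d):                      # helper: evaluate a digit list in radix 4
    return sum(di * 4**i for i, di in enumerate(d))

x, y = [3, -2, 1], [2, 3, -1]      # 11 and -2 in decimal
print(value(qsd_add(x, y)))        # 9
```

Because each output digit depends only on two adjacent digit positions, every digit of the sum can be produced in parallel, which is what makes the scheme attractive for an optical content-addressable-memory realization.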

  20. Exact Solution of the Gyration Radius of an Individual's Trajectory for a Simplified Human Regular Mobility Model

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong

    2011-12-01

We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to the workplace, going to do leisure activities and returning home. Under the assumptions that the individual travels at a constant speed and spends at least a minimum time at home and at work, we prove that the daily moving area of an individual is an ellipse, and finally obtain an exact solution of the gyration radius. The analytical solution captures the empirical observation well.
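The quantity being solved for can be computed directly from any sampled trajectory: the radius of gyration is the root-mean-square distance of the visited points from their center of mass. As a sketch (the ellipse and sampling below are illustrative, not the paper's derivation), points sampled uniformly in parametric angle on an ellipse with semi-axes a and b give r_g = sqrt((a² + b²)/2).

```python
import math

# Radius of gyration: RMS distance of trajectory points from their center of mass.
def gyration_radius(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return math.sqrt(sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in points) / n)

# Points at uniform parametric angles on an ellipse with semi-axes a and b.
a, b, n = 3.0, 1.0, 10_000
pts = [(a * math.cos(2 * math.pi * k / n), b * math.sin(2 * math.pi * k / n))
       for k in range(n)]
print(gyration_radius(pts))   # ≈ sqrt((a**2 + b**2) / 2) ≈ 2.236
```

The closed form follows because the mean of cos² (and sin²) over a full period is 1/2, so r_g² = (a² + b²)/2 for this sampling.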

  1. Pullback attractors of the two-dimensional non-autonomous simplified Ericksen-Leslie system for nematic liquid crystal flows

    NASA Astrophysics Data System (ADS)

    You, Bo; Li, Fang

    2016-08-01

    This paper is concerned with the long-time behaviour of the two-dimensional non-autonomous simplified Ericksen-Leslie system for nematic liquid crystal flows introduced in Lin and Liu (Commun Pure Appl Math, 48:501-537, 1995) with a non-autonomous forcing bulk term and order parameter field boundary conditions. In this paper, we prove the existence of pullback attractors and estimate the upper bound of its fractal dimension under some suitable assumptions.

  2. A Simplified Model of Choice Behavior under Uncertainty

    PubMed Central

    Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu

    2016-01-01

The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study modifies the Ahn et al. (2008) PU model into a simplified model and uses the IGT performance of 145 subjects as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the parameters α, λ, and A form a hierarchy in how strongly they influence the goodness-of-fit of the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay loss-shift rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715
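The role of α can be seen directly from a prospect-type utility function. The functional form below (power utility for gains, loss aversion λ for losses) is a common PU parameterization used here as an assumed sketch, not a reproduction of the Ahn et al. (2008) model: as α → 0, every gain is valued near 1 and every loss near −λ, so outcome magnitudes stop mattering, consistent with a gain-stay/loss-shift reading.

```python
# Prospect-type utility: gain sensitivity alpha, loss-aversion weight lam.
def utility(x, alpha, lam):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# As alpha shrinks, magnitudes collapse toward sign information only.
for alpha in (1.0, 0.5, 0.01):
    print(alpha, utility(100, alpha, 2.25), utility(-100, alpha, 2.25))
```

At α = 0.01 the valuations of a 100-point gain and a 1-point gain are nearly identical, which is why λ and A lose leverage on the fit when α is close to zero.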

  3. A Simplified Model of Choice Behavior under Uncertainty.

    PubMed

    Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu

    2016-01-01

    The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study aims to modify the Ahn et al. (2008) PU model into a simplified model and uses the IGT performance of 145 subjects as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the influence of the parameters α, λ, and A has a hierarchical power structure in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay loss-shift rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715
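As a concrete illustration of the parameter hierarchy discussed above, a PVL-style prospect utility curve and delta-rule expectancy update can be sketched as follows; the function forms and names are the common formulation from this literature and should be read as illustrative assumptions, not the authors' exact fitted model:

```python
def prospect_utility(x, alpha, lam):
    """Prospect utility of a net payoff x.

    Gains are raised to the power alpha; losses are additionally scaled
    by the loss-aversion parameter lam.  As alpha approaches zero the
    function loses sensitivity to payoff magnitude (it approaches a sign
    function), which is one way to see why lam and the learning rate A
    can become ineffective in that regime.
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

def delta_update(expectancy, utility, A):
    """Delta-rule update of a deck's expectancy with learning rate A."""
    return expectancy + A * (utility - expectancy)
```

For example, with alpha = 0.01 the utilities of a 100-point and a 50-point gain differ by less than 1%, so the fit is driven almost entirely by the sign of the outcome (gain-stay loss-shift).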

  4. Mechanism of thermonuclear burning propagation in a helium layer on a neutron star surface: A simplified adiabatic model

    NASA Astrophysics Data System (ADS)

    Simonenko, V. A.; Gryaznykh, D. A.; Litvinenko, I. A.; Lykov, V. A.; Shushlebin, A. N.

    2012-04-01

    Some thermonuclear X-ray bursters exhibit a high-frequency (about 300 Hz or more) brightness modulation at the rising phase of some bursts. These oscillations are explained by inhomogeneous heating of the surface layer on a rapidly rotating neutron star due to the finite propagation speed of thermonuclear burning. We suggest and substantiate a mechanism of this propagation that is consistent with experimental data. Initially, thermonuclear ignition occurs in a small region of the neutron star surface layer. The burning products rapidly rise and spread in the upper atmospheric layers due to turbulent convection. The accumulation of additional matter leads to matter compression and ignition at the bottom of the layer. This determines the propagation of the burning front. To substantiate this mechanism, we use the simplifying assumptions about a helium composition of the neutron star atmosphere and its initial adiabatic structure with a density of 1.75 × 10^8 g cm^-3 at the bottom. 2D numerical simulations have been performed using a modified particle method in the adiabatic approximation.

  5. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
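The homogeneous-mixing ODE structure described above, with susceptible, infectious, and empty-space fractions summing to one, can be sketched with a forward-Euler step; the parameter names and the specific rate terms below are illustrative assumptions, not the authors' exact equations:

```python
def step(s, i, beta, gamma, birth, death, dt=0.01):
    """One Euler step of a hypothetical SIS-with-vacancy system.

    s, i: fractions of susceptible and infectious individuals;
    e = 1 - s - i is the empty-space fraction.  beta (transmission),
    gamma (recovery without immunity), birth and death rates are
    placeholder names, not the paper's notation.
    """
    e = 1.0 - s - i
    ds = birth * e * s - beta * s * i + gamma * i - death * s
    di = beta * s * i - gamma * i - death * i
    return s + dt * ds, i + dt * di
```

Because recovery returns individuals directly to the susceptible class (no immunity), the infection can persist at an endemic level for suitable parameter values.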

  6. Systematic Model Building Based on Quark-Lepton Complementarity Assumptions

    NASA Astrophysics Data System (ADS)

    Winter, Walter

    2008-02-01

    In this talk, we present a procedure to systematically generate a large number of valid mass matrix textures from very generic assumptions. Compared to plain anarchy arguments, we postulate some structure for the theory, such as a possible connection between quarks and leptons, and a mechanism to generate flavor structure. We illustrate how this parameter space can be used to test the exclusion power of future experiments, and we point out that one can systematically generate embeddings in Z_N product flavor symmetry groups.

  7. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    PubMed

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes their futility. In this paper I will argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is no possibility, by definition, for a content-full moral discourse among moral strangers. It means that there is circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow for reaching a moral agreement, and concluding that content-full morality is impossible among moral strangers. I argue that assuming traditions as solid and immutable structures that insulate people across their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. As the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of other ways of moral knowledge, besides the foundationalist one. Then, I examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how the assumptions have shaped Engelhardt's critique of the alternatives for the possibility of content-full secular bioethics. PMID:26715286

  8. Simplified Explosive Joining of Tubes to Fittings

    NASA Technical Reports Server (NTRS)

    Bement, L. J.; Bailey, J. W.; Perry, R.; Finch, M. S.

    1987-01-01

    Technique simplifies tube-to-fitting joining, as compared to fusion welding, and provides improvement on standard procedures used to join tubes explosively to tube fittings. Special tool inserted into tube to be joined. Tool allows strip of ribbon explosive to be placed right at joint. Ribbon explosive and mild detonating fuse allows use of smaller charge. Assembled tool storable, and process amenable to automation. Assembly of components, insertion of tool into weld site, and joining operation mechanized without human contact. Used to assemble components in nuclear reactors or in other environments hostile to humans.

  9. Simplified stock markets described by number operators

    NASA Astrophysics Data System (ADS)

    Bagarello, F.

    2009-06-01

    In this paper we continue our systematic analysis of the operatorial approach previously proposed in an economical context and we discuss a mixed toy model of a simplified stock market, i.e. a model in which the price of the shares is given as an input. We deduce the time evolution of the portfolio of the various traders of the market, as well as of other observable quantities. As in a previous paper, we solve the equations of motion by means of a fixed point like approximation.

  10. Simplified dynamic buckling assessment of steel containments

    SciTech Connect

    Farrar, C.R.; Duffey, T.A.; Renick, D.H.

    1993-02-01

    A simplified, three-degree-of-freedom analytical procedure for performing a response spectrum buckling analysis of a thin containment shell is developed. Two numerical examples with R/t values which bound many existing steel containments are used to illustrate the procedure. The role of damping on incipient buckling acceleration level is evaluated for a regulatory seismic spectrum using the two numerical examples. The zero-period acceleration level that causes incipient buckling in either of the two containments increases 31% when damping is increased from 1% to 4% of critical. Comparisons with finite element results on incipient buckling levels are favorable.

  11. Chronic Meningitis: Simplifying a Diagnostic Challenge.

    PubMed

    Baldwin, Kelly; Whiting, Chris

    2016-03-01

    Chronic meningitis can be a diagnostic dilemma for even the most experienced clinician. Many times, the differential diagnosis is broad and encompasses autoimmune, neoplastic, and infectious etiologies. This review will focus on a general approach to chronic meningitis to simplify the diagnostic challenges many clinicians face. The article will also review the most common etiologies of chronic meningitis in some detail including clinical presentation, diagnostic testing, treatment, and outcomes. By using a case-based approach, we will focus on the key elements of clinical presentation and laboratory analysis that will yield the most rapid and accurate diagnosis in these complicated cases. PMID:26888190

  12. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.

  13. Phosphazene additives

    SciTech Connect

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin of a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  14. Development of Generation System of Simplified Digital Maps

    NASA Astrophysics Data System (ADS)

    Uchimura, Keiichi; Kawano, Masato; Tokitsu, Hiroki; Hu, Zhencheng

    In recent years, digital maps have been used in a variety of scenarios, including car navigation systems and map information services over the Internet. These digital maps are formed by multiple layers of maps of different scales; the map data most suitable for the specific situation are used. Currently, the production of map data of different scales is done by hand due to constraints related to processing time and accuracy. We conducted research concerning technologies for automatic generation of simplified map data from detailed map data. In the present paper, the authors propose the following: (1) a method to transform data related to streets, rivers, etc. containing widths into line data, (2) a method to eliminate the component points of the data, and (3) a method to eliminate data that lie below a certain threshold. In addition, in order to evaluate the proposed method, a user survey was conducted; in this survey we compared maps generated using the proposed method with the commercially available maps. From the viewpoint of the amount of data reduction and processing time, and on the basis of the results of the survey, we confirmed the effectiveness of the automatic generation of simplified maps using the proposed methods.
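Component-point elimination of the kind described in item (2) is commonly implemented with the classic Ramer-Douglas-Peucker algorithm; the sketch below shows that approach as a plausible stand-in, since the paper's exact elimination method is not given:

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    denom = math.hypot(dx, dy)
    if denom == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * (x - x1) - dx * (y - y1)) / denom

def simplify(points, tol):
    """Ramer-Douglas-Peucker line simplification: keep the point farthest
    from the chord if it exceeds tol, and recurse on both halves."""
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], tol)
    return left[:-1] + simplify(points[idx:], tol)
```

Raising the tolerance removes more component points, which is the trade-off between data reduction and map fidelity evaluated by the user survey.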

  15. An experimental approach to a simplified model of human birth.

    PubMed

    Lehn, Andrea M; Baumer, Alexa; Leftwich, Megan C

    2016-07-26

    This study presents a simplified experimental model of labor for the study of fetal lie and amniotic fluid properties. It mimics a ventouse (vacuum extraction) delivery to study the effect of amniotic fluid properties on force transfer to a passive fetus. The simplified vacuum delivery consists of a solid ovate spheroid being pulled from a passive, flexible spherical elastic shell filled with fluid. We compare the force necessary to remove the ovate fetus in fluids of varying properties. Additionally, the fetal lie (angular deviation from maternal/fetal spinal alignment) is changed by 5° intervals and the pullout force is measured. In both the concentric ovate experiments, the force to remove the fetus changes with the properties of the fluid occupying the space between the fetus and the uterus. Increasing the fluid viscosity by 35% decreases the maximum fetal removal force by up to 52.5%. Furthermore, while the force is dominated by the elastic force of the latex uterus, the properties of the amniotic fluid can significantly decrease the total removal force. This study demonstrates that the fluid components of a birth model can significantly alter the forces associated with fetus removal. This suggests that complete studies of human parturition should be designed to include both the material and fluid systems. PMID:26684434

  16. Simplified signal processing for impedance spectroscopy with spectrally sparse sequences

    NASA Astrophysics Data System (ADS)

    Annus, P.; Land, R.; Reidla, M.; Ojarand, J.; Mughal, Y.; Min, M.

    2013-04-01

    The classical method for measurement of electrical bio-impedance involves excitation with a sinusoidal waveform. Sinusoidal excitation at fixed frequency points enables a wide variety of signal processing options, the most general of them being the Fourier transform. Multiplication with two quadrature waveforms at the desired frequency can be easily accomplished both in the analogue and in the digital domain; even the simplest quadrature square waves can be considered, which reduces the signal processing task in the analogue domain to synchronous switching followed by a low-pass filter, and in the digital domain requires only additions. So-called spectrally sparse excitation sequences (SSS), which have recently been introduced into the bio-impedance measurement domain, are a very reasonable choice when simultaneous multifrequency excitation is required. They have many good properties, such as ease of generation and a good crest factor compared to similar multisinusoids. Typically, the usage of the discrete or fast Fourier transform in the signal processing step has been considered so far. Usage of simplified methods would nevertheless reduce the computational burden and enable simpler, less costly, and less energy-hungry signal processing platforms. The accuracy of the measurement with SSS excitation when using different waveforms for quadrature demodulation will be compared in order to evaluate the feasibility of the simplified signal processing. A sigma-delta modulated sinusoid (binary signal) is considered to be a good alternative for synchronous demodulation.
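The square-wave quadrature demodulation described above, where multiplication reduces to sign flips (additions and subtractions) followed by averaging as the low-pass step, can be sketched as follows; the function and its scaling are an illustrative reconstruction, not the authors' implementation:

```python
import math

def square_demodulate(signal, fs, f):
    """Estimate the amplitude of a tone at frequency f by 'multiplying'
    the signal with quadrature square waves (+1/-1), so that the mixer
    needs only additions and subtractions, then averaging (low-pass).
    fs is the sampling rate in Hz."""
    n = len(signal)
    i_acc = q_acc = 0.0
    for k, x in enumerate(signal):
        phase = 2 * math.pi * f * k / fs
        i_acc += x * (1 if math.cos(phase) >= 0 else -1)
        q_acc += x * (1 if math.sin(phase) >= 0 else -1)
    i_mean, q_mean = i_acc / n, q_acc / n
    # Mixing a unit tone with a +/-1 square wave leaves 2/pi at DC,
    # so scale the recovered magnitude by pi/2.
    return (math.pi / 2) * math.hypot(i_mean, q_mean)
```

The square wave's odd harmonics average out over full periods for a single-tone input, which is why this crude demodulator still recovers the amplitude accurately; with SSS excitation the harmonics can alias onto other excitation lines, which is the accuracy question the abstract raises.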

  17. Tax Subsidies for Employer-Sponsored Health Insurance: Updated Microsimulation Estimates and Sensitivity to Alternative Incidence Assumptions

    PubMed Central

    Miller, G Edward; Selden, Thomas M

    2013-01-01

    Objective To estimate 2012 tax expenditures for employer-sponsored insurance (ESI) in the United States and to explore the sensitivity of estimates to assumptions regarding the incidence of employer premium contributions. Data Sources Nationally representative Medical Expenditure Panel Survey data from the 2005–2007 Household Component (MEPS-HC) and the 2009–2010 Insurance Component (MEPS IC). Study Design We use MEPS HC workers to construct synthetic workforces for MEPS IC establishments, applying the workers' marginal tax rates to the establishments' insurance premiums to compute the tax subsidy, in aggregate and by establishment characteristics. Simulation enables us to examine the sensitivity of ESI tax subsidy estimates to a range of scenarios for the within-firm incidence of employer premium contributions when workers have heterogeneous health risks and make heterogeneous plan choices. Principal Findings We simulate the total ESI tax subsidy for all active, civilian U.S. workers to be $257.4 billion in 2012. In the private sector, the subsidy disproportionately flows to workers in large establishments and establishments with predominantly high wage or full-time workforces. The estimates are remarkably robust to alternative incidence assumptions. Conclusions The aggregate value of the ESI tax subsidy and its distribution across firms can be reliably estimated using simplified incidence assumptions. PMID:23398400
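The core computation in the study design above, applying each worker's marginal tax rate to the employer's premium contribution and aggregating, reduces to a weighted sum; the sketch below uses made-up numbers, not MEPS data:

```python
def esi_tax_subsidy(workers):
    """Aggregate ESI tax expenditure: each worker's employer premium
    contribution escapes taxation at that worker's marginal rate.

    workers: iterable of (employer_premium, marginal_tax_rate) pairs.
    Values are illustrative placeholders, not MEPS-HC/MEPS-IC data.
    """
    return sum(premium * rate for premium, rate in workers)
```

Under different incidence assumptions the premiums would be attributed to workers differently (e.g. equal per capita versus proportional to expected health cost), but the aggregation step itself is unchanged, which is consistent with the finding that the total subsidy is robust to those assumptions.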

  18. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling

  19. A CRITICAL EXAMINATION OF THE FUNDAMENTAL ASSUMPTIONS OF SOLAR FLARE AND CORONAL MASS EJECTION MODELS

    SciTech Connect

    Spicer, D. S.; Bingham, R.; Harrison, R.

    2013-05-01

    The fundamental assumptions of conventional solar flare and coronal mass ejection (CME) theory are re-examined. In particular, the common theoretical assumption that magnetic energy that drives flares and CMEs can be stored in situ in the corona with sufficient energy density is found wanting. In addition, the observational constraint that flares and CMEs produce non-thermal electrons with fluxes of order 10^34-10^36 electrons s^-1, with energies of order 10-20 keV, must also be explained. This constraint, when imposed on the "standard model" for flares and CMEs, is found to miss the mark by many orders of magnitude. We suggest, in conclusion, there are really only two possible ways to explain the requirements of observations and theory: flares and CMEs are caused by mass-loaded prominences or driven directly by emerging magnetized flux.

  20. Simplified SBLOCA Analysis of AP1000

    SciTech Connect

    Brown, William L.

    2004-07-01

    The AP1000 is a 1000 MWe advanced nuclear power plant design that uses passive safety features such as a multi-stage, automatic depressurization system (ADS) and gravity-driven, safety injection from core make-up tanks (CMTs) and an in-containment refueling water storage tank (IRWST) to mitigate SBLOCA events. The period of most safety significance for AP1000 during a SBLOCA event is typically associated with the actuation of the fourth stage of the ADS and subsequent transition from CMT to IRWST safety injection. As this period of a SBLOCA is generally of a quasi-steady nature, the integral performance of the AP1000 can be understood and evaluated with a simplified model of the reactor vessel, ADS, and safety injection from the CMTs and IRWST. The simplified model of the AP1000 consists of a series of steady state simulations that uses drift flux in the core region and homogeneous treatment of the core exit region including the ADS flow paths to generate a family of core flow demand curves as a function of system pressure (i.e. mass flow required to satisfy core cooling). These core flow demand curves are plotted against passive safety system supply curves from the CMTs and IRWST to demonstrate the adequacy of the integral performance of the AP1000 during the most important phase of a SBLOCA. (author)

  1. Paleostress inversion: A multi-parametric geomechanical evaluation of the Wallace-Bott assumptions

    NASA Astrophysics Data System (ADS)

    Lejri, Mostfa; Maerten, Frantz; Maerten, Laurent; Soliva, Roger

    2015-08-01

    Wallace (1951) and Bott (1959) were the first to introduce the idea that the slip on each fault surface has the same direction and sense as the maximum shear stress resolved on that surface. However, this simplified hypothesis is questionable since fault mechanical interactions may induce slip reorientations. Earlier numerical geomechanical models confirmed that the slickenlines (slip vectors) are not necessarily parallel to the maximum resolved shear stress but are consistent with local stress perturbations. This leads us to ask to what extent the Wallace and Bott simplifications are reliable as a basic hypothesis for stress inversion from fault slip data. Here, a geomechanical multi-parametric study using a 3D boundary element method, covering (i) fault geometries such as intersected faults or corrugated fault surfaces, (ii) the full range of Andersonian states of stress, (iii) fault friction, (iv) fault fluid pressure, (v) half-space effects, and (vi) rock properties, is performed in order to understand the effect of each parameter on the misfit angle between geomechanical slip vectors and the resolved shear stresses. It is shown that significant misfit angles can be found under specific configurations, invalidating the Wallace and Bott assumptions, even though fault friction tends to minimize the misfit. We therefore conclude that in such cases, stress inversions based on fault slip data should be interpreted with care.
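The Wallace-Bott prediction, that slip parallels the maximum resolved shear stress on the fault plane, and the misfit angle used to test it can be sketched as follows; the vector conventions are generic, not the paper's boundary element formulation:

```python
import math

def resolved_shear_direction(sigma, n):
    """Unit direction of the shear traction on a plane with unit normal n
    under the (3x3, nested-list) stress tensor sigma."""
    t = [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]  # traction
    tn = sum(t[i] * n[i] for i in range(3))                            # normal component
    ts = [t[i] - tn * n[i] for i in range(3)]                          # shear component
    mag = math.sqrt(sum(c * c for c in ts))
    return [c / mag for c in ts]

def misfit_angle(slip, shear_dir):
    """Angle in degrees between an observed slip vector and the maximum
    resolved shear stress direction (the Wallace-Bott prediction)."""
    dot = sum(a * b for a, b in zip(slip, shear_dir))
    norm = math.sqrt(sum(a * a for a in slip))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

In the study, the slip vectors come from geomechanical simulations that include fault interaction; a large misfit angle then flags a configuration where the Wallace-Bott simplification breaks down.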

  2. Evaluating risk factor assumptions: a simulation-based approach

    PubMed Central

    2011-01-01

    Background Microsimulation models are an important tool for estimating the comparative effectiveness of interventions through prediction of individual-level disease outcomes for a hypothetical population. To estimate the effectiveness of interventions targeted toward high risk groups, the mechanism by which risk factors influence the natural history of disease must be specified. We propose a method for evaluating these risk factor assumptions as part of model-building. Methods We used simulation studies to examine the impact of risk factor assumptions on the relative rate (RR) of colorectal cancer (CRC) incidence and mortality for a cohort with a risk factor compared to a cohort without the risk factor using an extension of the CRC-SPIN model for colorectal cancer. We also compared the impact of changing age at initiation of screening colonoscopy for different risk mechanisms. Results Across CRC-specific risk factor mechanisms, the RR of CRC incidence and mortality decreased (towards one) with increasing age. The rate of change in RRs across age groups depended on both the risk factor mechanism and the strength of the risk factor effect. Increased non-CRC mortality attenuated the effect of CRC-specific risk factors on the RR of CRC when both were present. For each risk factor mechanism, earlier initiation of screening resulted in more life years gained, though the magnitude of life years gained varied across risk mechanisms. Conclusions Simulation studies can provide insight into both the effect of risk factor assumptions on model predictions and the type of data needed to calibrate risk factor models. PMID:21899767

  3. A Comparison of the Free Ride and CISK Assumptions.

    NASA Astrophysics Data System (ADS)

    Strunge Pedersen, Torben

    1991-08-01

    In a recent paper Fraedrich and McBride have studied the relation between the 'free ride' and CISK (conditional instability of the second kind) assumptions in a well-known two-layer model. Here the comparison is extended to a more general case. For this purpose the free ride and CISK assumptions are compared in linearized models with special emphasis on the small-scale limit. To this end a general solution of the linearized CISK problem is presented. The free ride can be interpreted both as a local and an integral constraint. It is shown within the context of analytic models that the CISK assumption satisfies the integrated free ride in the small-scale limit. However, interpreting the free ride as an integral constraint yields a solution that differs qualitatively from the CISK solution even though both satisfy the required balance. On the other hand, if the free ride is applied locally, the special constraint is obtained, which states that the nondimensional function must be unity at the top of the Ekman layer, and in this case the free ride and CISK solutions become identical in the small-scale limit. From this, it is concluded that the free ride is not identical to CISK, but rather it constitutes a special subset of the CISK solutions. Further, the general CISK solution, which differs from that of the free ride, actually satisfies the local free ride balance except at the lowest levels of the atmosphere. This breakdown of the balance appears to be in accordance with results based on observations.

  4. User assumptions about information retrieval systems: Ethical concerns

    SciTech Connect

    Froehlich, T.J.

    1994-12-31

    Information professionals, whether designers, intermediaries, database producers, or vendors, bear some responsibility for the information that they make available to users of information systems. The users of such systems may tend to make many assumptions about the information that a system provides, such as believing: that the data are comprehensive, current, and accurate; that the information resources or databases have the same degree of quality and consistency of indexing; that the abstracts, if they exist, correctly and adequately reflect the content of the article; that there is consistency in forms of author names or journal titles or indexing within and across databases; that there is standardization in and across databases; that once errors are detected, they are corrected; that appropriate choices of databases or information resources are a relatively easy matter; etc. The truth is that few of these assumptions are valid in commercial or corporate or organizational databases. However, given these beliefs and assumptions by many users, often promoted by information providers, information professionals should, if possible, intervene to warn users about the limitations and constraints of the databases they are using. With the growth of the Internet and end-user products (e.g., CD-ROMs), such interventions have significantly declined. In such cases, information should be provided on start-up or through interface screens, indicating to users the constraints and orientation of the system they are using. The principle of "caveat emptor" is naive and socially irresponsible: information professionals and systems have an obligation to provide some framework or context for the information that users are accessing.

  5. Systematic Model Building Based on Quark-Lepton Complementarity Assumptions

    SciTech Connect

    Winter, Walter

    2008-02-21

    In this talk, we present a procedure to systematically generate a large number of valid mass matrix textures from very generic assumptions. Compared to plain anarchy arguments, we postulate some structure for the theory, such as a possible connection between quarks and leptons, and a mechanism to generate flavor structure. We illustrate how this parameter space can be used to test the exclusion power of future experiments, and we point out that one can systematically generate embeddings in Z_N product flavor symmetry groups.

  6. Sensitivity of fine sediment source apportionment to mixing model assumptions

    NASA Astrophysics Data System (ADS)

    Cooper, Richard; Krueger, Tobias; Hiscock, Kevin; Rawlins, Barry

    2015-04-01

Mixing models have become increasingly common tools for quantifying fine sediment redistribution in river catchments. The associated uncertainties may be modelled coherently and flexibly within a Bayesian statistical framework (Cooper et al., 2015). However, there is more than one way to represent these uncertainties because the modeller has considerable leeway in making error assumptions and model structural choices. In this presentation, we demonstrate how different mixing model setups can impact upon fine sediment source apportionment estimates via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges and subsurface material) under base flow conditions between August 2012 and August 2013 (Cooper et al., 2014). Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ~76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing prior parameter distributions, inclusion of covariance terms, incorporation of time-variant distributions and methods of proportion characterisation. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup and between a Bayesian and a popular Least Squares optimisation approach. Our OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon fine sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model setup prior to conducting fine sediment source apportionment investigations.
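The Least Squares optimisation approach mentioned above can be sketched in miniature: a mixture's tracer signature is matched by non-negative source proportions summing to one. The grid-search solver, two-tracer setup, and all numbers below are illustrative assumptions, not the River Blackwater data or the authors' Bayesian model.

```python
def apportion(mix, sources, step=0.01):
    """Grid-search least-squares apportionment of a sediment mixture among
    three sources; proportions are non-negative and sum to one."""
    best_p, best_err = None, float("inf")
    n = int(round(1.0 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            p = (i * step, j * step, 1.0 - (i + j) * step)
            # squared misfit between observed and predicted tracer values
            err = sum((m - sum(pk * src[t] for pk, src in zip(p, sources))) ** 2
                      for t, m in enumerate(mix))
            if err < best_err:
                best_p, best_err = p, err
    return best_p

# three sources characterised by two geochemical tracers (illustrative values)
sources = [[10.0, 1.0], [1.0, 10.0], [5.0, 5.0]]
mix = [4.8, 5.7]  # generated from true proportions (0.2, 0.3, 0.5)
proportions = apportion(mix, sources)
```

A Bayesian treatment would replace the point estimate with a posterior over the proportions, which is where the error assumptions and structural choices examined in the study enter.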

  7. Diversion assumptions for high-powered research reactors

    SciTech Connect

    Binford, F.T.

    1984-01-01

    This study deals with diversion assumptions for high-powered research reactors -- specifically, MTR fuel; pool- or tank-type research reactors with light-water moderator; and water, beryllium, or graphite reflectors, and which have a power level of 25 MW(t) or more. The objective is to provide assistance to the IAEA in documentation of criteria and inspection observables related to undeclared plutonium production in the reactors described above, including: criteria for undeclared plutonium production, necessary design information for implementation of these criteria, verification guidelines including neutron physics and heat transfer, and safeguards measures to facilitate the detection of undeclared plutonium production at large research reactors.

  8. Analysis of the proof test with power law assumptions

    NASA Astrophysics Data System (ADS)

    Hanson, Thomas A.

    1994-03-01

Proof testing optical fiber is required to assure a minimum strength over all lengths of fiber. This is done as the fiber is wound onto a spool: a tensile stress is applied over each length of fiber as it passes through a stress region. The failure of weak flaws assures a minimum strength for the lengths that survive the test. Flaw growth is assumed to follow the power law. Distributions of initial flaw size are assumed to be of the Weibull type. Experimental data are presented to validate these assumptions.
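The screening effect of the proof test can be illustrated with a small simulation under the Weibull assumption. The scale, shape, and proof-stress values below are illustrative, and power-law flaw growth during unloading (which can slightly weaken survivors) is ignored in this sketch.

```python
import math
import random

def weibull_strengths(n, scale, shape, seed=1):
    """Draw initial fibre strengths from a Weibull distribution via
    inverse-transform sampling: S = scale * (-ln(1 - U))**(1/shape)."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]

def proof_test(strengths, proof_stress):
    """Lengths whose strength exceeds the proof stress survive the test."""
    return [s for s in strengths if s > proof_stress]

strengths = weibull_strengths(10_000, scale=5.0, shape=3.0)
survivors = proof_test(strengths, proof_stress=3.5)
# every surviving length is guaranteed at least the proof stress
```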

  9. Local strain redistribution corrections for a simplified inelastic analysis procedure based on an elastic finite-element analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Hwang, S. Y.

    1985-01-01

Strain redistribution corrections were developed for a simplified inelastic analysis procedure to economically calculate material cyclic response at the critical location of a structure for life prediction purposes. The method was based on the assumption that the plastic region in the structure is local and the total strain history required for input can be defined from elastic finite-element analyses. Cyclic stress-strain behavior was represented by a bilinear kinematic hardening model. The simplified procedure predicts stress-strain response with reasonable accuracy for thermally cycled problems but needs improvement for mechanically load-cycled problems. Neuber-type corrections were derived and incorporated in the simplified procedure to account for local total strain redistribution under cyclic mechanical loading. The corrected simplified method was used on a mechanically load-cycled benchmark notched-plate problem. The predicted material response agrees well with the nonlinear finite-element solutions for the problem. The simplified analysis required only 0.3% of the central processing unit time needed for a nonlinear finite-element analysis.
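A Neuber-type correction of this general kind can be sketched as follows: given the stress from an elastic finite-element analysis, the corrected elastic-plastic state lies where the Neuber hyperbola (constant stress-strain product) intersects a bilinear stress-strain curve. The material values and the bisection solver are illustrative assumptions, not the paper's specific procedure.

```python
def neuber_correction(s_elastic, E, sigma_y, E_t):
    """Solve sigma * eps = s_elastic**2 / E on a bilinear curve
    (elastic modulus E, yield stress sigma_y, tangent modulus E_t)."""
    target = s_elastic ** 2 / E  # elastic stress-strain product (Neuber constant)

    def strain(sigma):
        if sigma <= sigma_y:
            return sigma / E
        # bilinear hardening: elastic part plus plastic part
        return sigma / E + (sigma - sigma_y) * (1.0 / E_t - 1.0 / E)

    lo, hi = 0.0, s_elastic  # corrected stress never exceeds the elastic value
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * strain(mid) < target:
            lo = mid
        else:
            hi = mid
    return hi, strain(hi)

# elastic FE result of 400 MPa at a notch, steel-like properties (illustrative)
sigma, eps = neuber_correction(s_elastic=400.0, E=200e3, sigma_y=250.0, E_t=20e3)
# corrected local stress falls below the elastic value, strain rises above it
```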

  10. Detailed and simplified nonequilibrium helium ionization in the solar atmosphere

    SciTech Connect

    Golding, Thomas Peter; Carlsson, Mats; Leenaarts, Jorrit E-mail: mats.carlsson@astro.uio.no

    2014-03-20

Helium ionization plays an important role in the energy balance of the upper chromosphere and transition region. Helium spectral lines are also often used as diagnostics of these regions. We carry out one-dimensional radiation-hydrodynamics simulations of the solar atmosphere and find that the helium ionization is set mostly by photoionization and direct collisional ionization, counteracted by radiative recombination cascades. By introducing an additional recombination rate mimicking the recombination cascades, we construct a simplified three-level helium model atom consisting of only the ground states. This model atom is suitable for modeling nonequilibrium helium ionization in three-dimensional numerical models. We perform a brief investigation of the formation of the He I 10830 and He II 304 spectral lines. Both lines show nonequilibrium features that are not recovered with statistical equilibrium models, and caution should therefore be exercised when such models are used as a basis for interpreting observations.
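The ionization balance described above can be caricatured with a single ionized-fraction rate equation. The rate coefficients below are purely illustrative placeholders, and R_eff stands in for radiative recombination plus the additional cascade-mimicking rate of the simplified model atom.

```python
def evolve_ion_fraction(x0, P, C, R_eff, dt, steps):
    """Integrate dx/dt = (1 - x) * (P + C) - x * R_eff with forward Euler.
    x: ionized fraction; P: photoionization rate; C: collisional
    ionization rate; R_eff: effective recombination rate (with cascades)."""
    x = x0
    for _ in range(steps):
        x += dt * ((1.0 - x) * (P + C) - x * R_eff)
    return x

# equilibrium ionized fraction is (P + C) / (P + C + R_eff)
x_eq = (1.0 + 1.0) / (1.0 + 1.0 + 2.0)  # = 0.5 for these toy rates
x_end = evolve_ion_fraction(0.0, 1.0, 1.0, 2.0, dt=0.01, steps=2000)
```

Nonequilibrium features arise precisely when the atmosphere changes faster than the relaxation time 1/(P + C + R_eff), so x lags behind x_eq.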

  11. Simplified Analysis Methods for Primary Load Designs at Elevated Temperatures

    SciTech Connect

    Carter, Peter; Jetter, Robert I; Sham, Sam

    2011-01-01

    The use of simplified (reference stress) analysis methods is discussed and illustrated for primary load high temperature design. Elastic methods are the basis of the ASME Section III, Subsection NH primary load design procedure. There are practical drawbacks with this approach, particularly for complex geometries and temperature gradients. The paper describes an approach which addresses these difficulties through the use of temperature-dependent elastic-perfectly plastic analysis. Correction factors are defined to address difficulties traditionally associated with discontinuity stresses, inelastic strain concentrations and multiaxiality. A procedure is identified to provide insight into how this approach could be implemented but clearly there is additional work to be done to define and clarify the procedural steps to bring it to the point where it could be adapted into code language.

  12. Simplified three-phase transformer model for electromagnetic transient studies

    SciTech Connect

    Chimklai, S.; Marti, J.R.

    1995-07-01

    This paper presents a simplified high-frequency model for three-phase, two- and three-winding transformers. The model is based on the classical 60-Hz equivalent circuit, extended to high frequencies by the addition of the winding capacitances and the synthesis of the frequency-dependent short-circuit branch by an RLC equivalent network. By retaining the T-form of the classical model, it is possible to separate the frequency-dependent series branch from the constant-valued shunt capacitances. Since the short-circuit branch can be synthesized by a minimum-phase-shift rational approximation, the mathematical complications of fitting mutual impedance or admittance functions are avoided and the model is guaranteed to be numerically absolutely stable. Experimental tests were performed on actual power transformers to determine the parameters of the model. EMTP simulation results are also presented.

  13. Stability analysis and numerical simulation of simplified solid rocket motors

    NASA Astrophysics Data System (ADS)

    Boyer, G.; Casalis, G.; Estivalèzes, J.-L.

    2013-08-01

This paper investigates the Parietal Vortex Shedding (PVS) instability that significantly influences the pressure oscillations of long and segmented solid rocket motors. The eigenmodes resulting from the stability analysis of a simplified configuration, namely, a cylindrical duct with sidewall injection, are presented. They are computed taking into account the presence of a wall injection defect, which is shown to induce hydrodynamic instabilities at discrete frequencies. These instabilities exhibit eigenfunctions in good agreement with the measured PVS vortical structures. They are successfully compared in terms of temporal evolution and frequencies to the unsteady hydrodynamic fluctuations computed by numerical simulations. In addition, this study has shown that the hydrodynamic instabilities associated with the PVS are the driving force of the flow dynamics, since they are responsible for the emergence of pressure waves propagating at the same frequency.

  14. Simplified motional heating rate measurements of trapped ions

    SciTech Connect

    Epstein, R. J.; Seidelin, S.; Leibfried, D.; Wesenberg, J. H.; Bollinger, J. J.; Amini, J. M.; Blakestad, R. B.; Britton, J.; Home, J. P.; Itano, W. M.; Jost, J. D.; Knill, E.; Langer, C.; Ozeri, R.; Shiga, N.; Wineland, D. J.

    2007-09-15

We have measured motional heating rates of trapped atomic ions, a factor that can influence multi-ion quantum logic gate fidelities. Two simplified techniques were developed for this purpose: one relies on Raman sideband detection implemented with a single laser source, while the second is even simpler and is based on time-resolved fluorescence detection during Doppler recooling. We applied these methods to determine heating rates in a microfabricated surface-electrode trap made of gold on fused quartz, which traps ions 40 μm above its surface. Heating rates obtained from the two techniques were found to be in reasonable agreement. In addition, the trap gives rise to a heating rate of 300 ± 30 s⁻¹ for a motional frequency of 5.25 MHz, substantially below the trend observed in other traps.

  15. Detailed and Simplified Nonequilibrium Helium Ionization in the Solar Atmosphere

    NASA Astrophysics Data System (ADS)

    Golding, Thomas Peter; Carlsson, Mats; Leenaarts, Jorrit

    2014-03-01

Helium ionization plays an important role in the energy balance of the upper chromosphere and transition region. Helium spectral lines are also often used as diagnostics of these regions. We carry out one-dimensional radiation-hydrodynamics simulations of the solar atmosphere and find that the helium ionization is set mostly by photoionization and direct collisional ionization, counteracted by radiative recombination cascades. By introducing an additional recombination rate mimicking the recombination cascades, we construct a simplified three-level helium model atom consisting of only the ground states. This model atom is suitable for modeling nonequilibrium helium ionization in three-dimensional numerical models. We perform a brief investigation of the formation of the He I 10830 and He II 304 spectral lines. Both lines show nonequilibrium features that are not recovered with statistical equilibrium models, and caution should therefore be exercised when such models are used as a basis for interpreting observations.

  16. A simplified view of blazars: the neutrino background

    NASA Astrophysics Data System (ADS)

    Padovani, P.; Petropoulou, M.; Giommi, P.; Resconi, E.

    2015-09-01

Blazars have been suggested as possible neutrino sources long before the recent IceCube discovery of high-energy neutrinos. We re-examine this possibility within a new framework built upon the blazar simplified view and a self-consistent modelling of neutrino emission from individual sources. The former is a recently proposed paradigm that explains the diverse statistical properties of blazars adopting minimal assumptions on blazars' physical and geometrical properties. This view, tested through detailed Monte Carlo simulations, reproduces the main features of radio, X-ray, and γ-ray blazar surveys and also the extragalactic γ-ray background at energies ≳ 10 GeV. Here, we add a hadronic component for neutrino production and estimate the neutrino emission from BL Lacertae objects as a class, `calibrated' by fitting the spectral energy distributions of a preselected sample of such objects and their (putative) neutrino spectra. Unlike all previous papers on this topic, the neutrino background is then derived by summing up at a given energy the fluxes of each BL Lac in the simulation, all characterized by their own redshift, synchrotron peak energy, γ-ray flux, etc. Our main result is that BL Lacs as a class can explain the neutrino background seen by IceCube above ˜0.5 PeV while they only contribute ˜10 per cent at lower energies, leaving room for some other population(s) or physical mechanism. However, one cannot also exclude the possibility that individual BL Lacs still make a contribution at the ≈20 per cent level to the IceCube low-energy events. Our scenario makes specific predictions, which are testable in the next few years.
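The source-by-source summation step can be sketched as follows. The power-law spectral shape, the catalogue entries, and all numbers are illustrative assumptions, not the paper's actual Monte Carlo simulation or SED fits.

```python
def neutrino_background(sources, energy):
    """Sum individual source fluxes at one energy. Each source carries its
    own normalisation, pivot energy, and spectral index (toy power laws)."""
    return sum(s["norm"] * (energy / s["pivot"]) ** (-s["index"])
               for s in sources)

# a few toy BL Lac entries from a hypothetical simulated catalogue
catalogue = [
    {"norm": 1.0e-9, "pivot": 1.0e6, "index": 2.2},
    {"norm": 5.0e-10, "pivot": 1.0e6, "index": 2.0},
]
flux = neutrino_background(catalogue, energy=5.0e5)  # energy in GeV, say
```

Summing sources individually, rather than assuming a single universal spectrum, is what lets each object contribute with its own redshift, peak energy, and flux.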

  17. Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies

    SciTech Connect

    Stoll, Brady; Brinkman, Gregory; Townsend, Aaron; Bloom, Aaron

    2016-01-01

Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour ahead commitment step is included before the dispatch step and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and saw a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0

  18. Thermodynamic behaviour of simplified geothermal reservoirs

    SciTech Connect

    Hiriart, G.; Sanchez, E.

    1985-01-22

Starting from the basic laws of conservation of mass and energy, the differential equations that represent the thermodynamic behavior of a simplified geothermal reservoir are derived. Its application is limited to a reservoir of high permeability, as usually occurs in the central zone of a geothermal field. A very practical method to solve the equations numerically is presented, based on the direct use of the steam tables. The method, based on one general equation, is extended and illustrated with a numerical example for the case of segregated mass extraction, variable influx and heat exchange between rock and fluid. As explained, the method can be easily coupled to several influx models already developed elsewhere. The proposed model can become an important tool for solving practical problems where, as in Los Azufres, Mexico, the geothermal field can be divided into an inner part where flashing occurs and an exterior field where water storage plays the main role.

  19. Aeroacoustic Analysis of a Simplified Landing Gear

    NASA Technical Reports Server (NTRS)

Lockard, David P.; Khorrami, Mehdi R.; Li, Fei

    2004-01-01

    A hybrid approach is used to investigate the noise generated by a simplified landing gear without small scale parts such as hydraulic lines and fasteners. The Ffowcs Williams and Hawkings equation is used to predict the noise at far-field observer locations from flow data provided by an unsteady computational fluid dynamics calculation. A simulation with 13 million grid points has been completed, and comparisons are made between calculations with different turbulence models. Results indicate that the turbulence model has a profound effect on the levels and character of the unsteadiness. Flow data on solid surfaces and a set of permeable surfaces surrounding the gear have been collected. Noise predictions using the porous surfaces appear to be contaminated by errors caused by large wake fluctuations passing through the surfaces. However, comparisons between predictions using the solid surfaces with the near-field CFD solution are in good agreement giving confidence in the far-field results.

  20. Simplified Model of Nonlinear Landau Damping

    SciTech Connect

    N. A. Yampolsky and N. J. Fisch

    2009-07-16

The nonlinear interaction of a plasma wave with resonant electrons results in a plateau in the electron distribution function close to the phase velocity of the plasma wave. As a result, Landau damping of the plasma wave vanishes and the resonant frequency of the plasma wave downshifts. However, this simple picture is invalid when the external driving force changes the plasma wave fast enough that the plateau cannot be fully developed. A new model to describe amplification of the plasma wave, including the saturation of Landau damping and the nonlinear frequency shift, is proposed. The proposed model takes into account the change of the plasma wave amplitude and describes saturation of the Landau damping rate in terms of a single fluid equation, which simplifies the description of the inherently kinetic nature of Landau damping. The proposed fluid model, incorporating these simplifications, is verified numerically using a kinetic Vlasov code.

  1. Simplifying the circuit of Josephson parametric converters

    NASA Astrophysics Data System (ADS)

    Abdo, Baleegh; Brink, Markus; Chavez-Garcia, Jose; Keefe, George

Josephson parametric converters (JPCs) are quantum-limited three-wave mixing devices that can play various important roles in quantum information processing in the microwave domain, including amplification of quantum signals, transduction of quantum information, remote entanglement of qubits, nonreciprocal amplification, and circulation of signals. However, the input-output and biasing circuit of a state-of-the-art JPC consists of bulky components, i.e. two commercial off-chip broadband 180-degree hybrids, four phase-matched short coax cables, and one superconducting magnetic coil. Such bulky hardware significantly hinders the integration of JPCs in scalable quantum computing architectures. In my talk, I will present ideas on how to simplify the JPC circuit and show preliminary experimental results.

  2. Combustion Safety Simplified Test Protocol Field Study

    SciTech Connect

    Brand, L.; Cautley, D.; Bohac, D.; Francisco, P.; Shen, L.; Gloss, S.

    2015-11-01

Combustion safety is an important step in the process of upgrading homes for energy efficiency. There are several approaches used by field practitioners, but researchers have indicated that the test procedures in use are complex to implement and provide too many false positives. Field failures often mean that the house is not upgraded until after remediation, or not at all if not included in the program. In this report the PARR and NorthernSTAR DOE Building America Teams provide a simplified test procedure that is easier to implement and should produce fewer false positives. A survey of state weatherization agencies on combustion safety issues, details of a field data collection instrumentation package, a summary of data collected over seven months, data analysis, and results are included. The project team collected field data on 11 houses in 2015.

  3. Simplified fundamental force and mass measurements

    NASA Astrophysics Data System (ADS)

    Robinson, I. A.

    2016-08-01

The watt balance relates force or mass to the Planck constant h, the metre and the second. It enables the forthcoming redefinition of the unit of mass within the SI by measuring the Planck constant in terms of mass, length and time with an uncertainty of better than 2 parts in 10⁸. To achieve this, existing watt balances require complex and time-consuming alignment adjustments, limiting their use to a few national metrology laboratories. This paper describes a simplified construction and operating principle for a watt balance which eliminates the need for the majority of these adjustments and is readily scalable using either electromagnetic or electrostatic actuators. It is hoped that this will encourage the more widespread use of the technique for a wide range of measurements of force or mass, for example thrust measurements for space applications, which would require only measurements of electrical quantities and velocity or displacement.
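The principle underlying any watt balance can be stated in two lines: in velocity (calibration) mode the induced voltage is U = B l v, and in force (weighing) mode m g = B l I; eliminating the geometry factor B l gives the virtual-power relation U I = m g v. A minimal sketch with illustrative numbers:

```python
def watt_balance_mass(U, I, g, v):
    """Mass from the virtual-power relation U * I = m * g * v, which
    eliminates the coil geometry factor B*l between the two modes."""
    return (U * I) / (g * v)

# illustrative values: 1 V induced at 1 mm/s, 9.81 mA weighing current
m = watt_balance_mass(U=1.0, I=9.81e-3, g=9.81, v=1.0e-3)  # m ≈ 1.0 kg
```

Only electrical quantities, local gravity, and a velocity appear, which is why the abstract notes that thrust or mass measurements reduce to electrical and velocity/displacement measurements.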

  4. Structure and strategy in encoding simplified graphs

    NASA Technical Reports Server (NTRS)

    Schiano, Diane J.; Tversky, Barbara

    1992-01-01

Tversky and Schiano (1989) found a systematic bias toward the 45-deg line in memory for the slopes of identical lines when embedded in graphs, but not in maps, suggesting the use of a cognitive reference frame specifically for encoding meaningful graphs. The present experiments explore this issue further using the linear configurations alone as stimuli. Experiments 1 and 2 demonstrate that perception and immediate memory for the slope of a test line within orthogonal 'axes' are predictable from purely structural considerations. In Experiments 3 and 4, subjects were instructed to use a diagonal-reference strategy in viewing the stimuli, which were described as 'graphs' only in Experiment 3. Results for both studies showed the diagonal bias previously found only for graphs. This pattern provides converging evidence for the diagonal as a cognitive reference frame in encoding linear graphs, and demonstrates that even in highly simplified displays, strategic factors can produce encoding biases not predictable from stimulus structure alone.

  5. Entropy reduction via simplified image contourization

    NASA Technical Reports Server (NTRS)

    Turner, Martin J.

    1993-01-01

    The process of contourization is presented which converts a raster image into a set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimizes noticeable artifacts in the simplified image.
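The entropy-lowering effect of merging plateaux can be demonstrated directly. The quantisation rule below is a crude stand-in for the paper's contour-tree node merging, and the toy pixel values are illustrative.

```python
import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy (bits per symbol) of a pixel-value sequence."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def merge_plateaux(pixels, tol):
    """Merge near-equal grey levels into common plateaux by quantisation."""
    return [round(p / tol) * tol for p in pixels]

image = [10, 11, 12, 50, 51, 52, 90, 91, 92]  # toy 1-D "image"
merged = merge_plateaux(image, tol=5)
# merged has 3 distinct levels instead of 9, so it codes more cheaply
```

Lower entropy means the contour coder needs fewer bits per symbol, which is exactly the compression-ratio gain the abstract describes; the perceptual rules decide which merges go unnoticed.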

  6. Space station ECLSS simplified integrated test

    NASA Technical Reports Server (NTRS)

    Schunk, Richard G.; Bagdigian, Robert M.; Carrasquillo, Robyn L.; Ogle, Kathyrn Y.; Wieland, Paul O.

    1989-01-01

The Space Station Simplified Integrated Test (SIT) is discussed. The first in a series of three integrated Environmental Control and Life Support (ECLS) system tests, the primary objectives of the SIT were to verify proper operation of ECLS subsystems functioning in an integrated fashion, as well as to gather preliminary performance data for the partial ECLS system used in the test. A description of the SIT configuration, a summary of events, a discussion of anomalies that occurred during the test, and detailed results and analysis from individual measurements and water and gas samples taken during the test are included. The preprototype ECLS hardware used in the test is described, providing an overall process description and theory of operation for each hardware item.

  7. Simplifying cardiovascular magnetic resonance pulse sequence terminology.

    PubMed

    Friedrich, Matthias G; Bucciarelli-Ducci, Chiara; White, James A; Plein, Sven; Moon, James C; Almeida, Ana G; Kramer, Christopher M; Neubauer, Stefan; Pennell, Dudley J; Petersen, Steffen E; Kwong, Raymond Y; Ferrari, Victor A; Schulz-Menger, Jeanette; Sakuma, Hajime; Schelbert, Erik B; Larose, Éric; Eitel, Ingo; Carbone, Iacopo; Taylor, Andrew J; Young, Alistair; de Roos, Albert; Nagel, Eike

    2014-01-01

    We propose a set of simplified terms to describe applied Cardiovascular Magnetic Resonance (CMR) pulse sequence techniques in clinical reports, scientific articles and societal guidelines or recommendations. Rather than using various technical details in clinical reports, the description of the technical approach should be based on the purpose of the pulse sequence. In scientific papers or other technical work, this should be followed by a more detailed description of the pulse sequence and settings. The use of a unified set of widely understood terms would facilitate the communication between referring physicians and CMR readers by increasing the clarity of CMR reports and thus improve overall patient care. Applied in research articles, its use would facilitate non-expert readers' understanding of the methodology used and its clinical meaning. PMID:25551695

  8. Field test of a biological assumption of instream flow models

    SciTech Connect

    Cada, G.F.; Sale, M.J.; Cushman, R.M.; Loar, J.M.

    1983-01-01

    Hydraulic-rating methods are an attractive means of deriving instream flow recommendations at many small hydropower sites because they represent a compromise between relatively inexpensive, low-resolution, discharge methods and the costly, complex, habitat evaluation models. Like the other methods, however, they rely on certain biological assumptions about the relationship between aquatic biota and streamflow characteristics. One such assumption is that benthic production available as food for fishes is proportional to stream bottom area. Wetted perimeter is an easily measured physical parameter which represents bottom area and that is a function of discharge. Therefore, wetted perimeter should reflect the benthic food resource available to support stream fishes under varying flows. As part of a larger effort to compare a number of existing instream flow assessment methods in southern Appalachian trout streams, we examined the validity of the benthos/wetted perimeter relationship at four field sites. Benthos samples were taken at permanent riffle transects over a variety of discharges and were used to relate observed benthos densities to the fluctuations in wetted perimeter and streamflow in these systems. For most of the sites and taxa examined, benthic densities did not show a consistent relationship with discharge/wetted perimeter dynamics. Our analysis indicates that simple physical habitat descriptors obtained from hydraulic-rating models do not provide sufficient information on the response of benthic organisms to decreased discharges. Consequently, these methods may not be sufficient to protect aquatic resources in water-use conflicts.

  9. The extended evolutionary synthesis: its structure, assumptions and predictions

    PubMed Central

    Laland, Kevin N.; Uller, Tobias; Feldman, Marcus W.; Sterelny, Kim; Müller, Gerd B.; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-01-01

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the ‘extended evolutionary synthesis' (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism–environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. PMID:26246559

  10. The contour method cutting assumption: error minimization and correction

    SciTech Connect

    Prime, Michael B; Kastengren, Alan L

    2010-01-01

The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to impose a known profile of residual stresses.

  11. Storage coefficient revisited: is purely vertical strain a good assumption?

    PubMed

    Burbey, T J

    2001-01-01

    The storage coefficient that is used ubiquitously today was first defined by the analytical work of Theis and Jacob over a half-century ago. Inherent within this definition is the restriction of purely vertical compression of the aquifer during a reduction in pressure. The assumption is revisited and quantitatively evaluated by comparing numerical results using both one- and three-dimensional strain models in the presence of three-dimensional flow. Results indicate that (1) calculated hydraulic head values are nearly identical for both models; (2) the release of water from storage in terms of volume strain is nearly identical for both models and that the location of maximum production moves outward from the well as a function of time; (3) the vertical strain components are markedly different with at least 50% of the total volume of water pumped originating from horizontal strain (and increasing to as much as 70%); and (4) for the one-dimensional strain model to yield the necessary quantity of water to the pumped well, the resulting vertical compaction (land subsidence) is as much as four times greater and vertical strain is as much as 60% greater than the three-dimensional strain model. Results indicate that small changes in porosity resulting from horizontal strain can yield extremely large quantities of water to the pumping well. This study suggests that the assumption of purely vertical strain used in the definition of the storage coefficient is not valid. PMID:11341012

  12. Stable isotopes and elasmobranchs: tissue types, methods, applications and assumptions.

    PubMed

    Hussey, N E; MacNeil, M A; Olin, J A; McMeans, B C; Kinney, M J; Chapman, D D; Fisk, A T

    2012-04-01

    Stable-isotope analysis (SIA) can act as a powerful ecological tracer with which to examine diet, trophic position and movement, as well as more complex questions pertaining to community dynamics and feeding strategies or behaviour among aquatic organisms. With major advances in the understanding of the methodological approaches and assumptions of SIA through dedicated experimental work in the broader literature coupled with the inherent difficulty of studying typically large, highly mobile marine predators, SIA is increasingly being used to investigate the ecology of elasmobranchs (sharks, skates and rays). Here, the current state of SIA in elasmobranchs is reviewed, focusing on available tissues for analysis, methodological issues relating to the effects of lipid extraction and urea, the experimental dynamics of isotopic incorporation, diet-tissue discrimination factors, estimating trophic position, diet and mixing models and individual specialization and niche-width analyses. These areas are discussed in terms of assumptions made when applying SIA to the study of elasmobranch ecology and the requirement that investigators standardize analytical approaches. Recommendations are made for future SIA experimental work that would improve understanding of stable-isotope dynamics and advance their application in the study of sharks, skates and rays. PMID:22497393

  13. Do No Evil: Unnoticed Assumptions in Accounts of Conscience Protection.

    PubMed

    Pilkington, Bryan C

    2016-03-01

    In this paper, I argue that distinctions between traditional and contemporary accounts of conscience protections, such as the account offered by Aulisio and Arora, fail. These accounts fail because they require an impoverished conception of our moral lives. This failure is due to unnoticed assumptions about the distinction between the traditional and contemporary articulations of conscience protection. My argument proceeds as follows: First, I highlight crucial assumptions in Aulisio and Arora's argument. Next, I argue that respecting maximal play in values, though a fine goal in our liberal democratic society, raises a key issue in exactly the situations that matter in these cases. Finally, I argue that too much weight is given to a too narrow conception of values. There are differences between appeals to conscience that are appropriately categorized as traditional or contemporary, and a way to make sense of conscience in the contemporary medical landscape is needed. However, the normative implications drawn by Aulisio and Arora do not follow from this distinction without much further argument. I conclude that their paper is a helpful illustration of the complexity of this issue and of a common view about conscience, but insofar as their view fails to account for the richness of our moral life, they fail to resolve the issue at hand. PMID:25771783

  14. Literal grid map models for animal navigation: Assumptions and predictions.

    PubMed

    Turner, Rebecca M; Walker, Michael M; Postlethwaite, Claire M

    2016-09-01

    Many animals can navigate from unfamiliar locations to a familiar target location with no outward route information or direct sensory contact with the target or any familiar landmarks. Several models have been proposed to explain this phenomenon, one possibility being a literal interpretation of a grid map. In this paper we systematically compare four such models, which we label: Correct Bicoordinate navigation, both Target and Release site based, Approximate Bicoordinate navigation, and Directional navigation. Predictions of spatial patterns of initial orientation errors and efficiencies depend on a combination of assumptions about the navigation mechanism and the geometry of the environmental coordinate fields used as model inputs. When coordinate axes are orthogonal at the target, the predictions from the Correct Bicoordinate (Target based) model and Approximate Bicoordinate model are identical. However, if the coordinate axes are non-orthogonal, different regional patterns of initial orientation errors and efficiencies can be expected from these two models. Field anomalies produce high magnitudes of orientation errors close to the target, while region-wide nonlinearity leads to orientation errors increasing with distance from the target. In general, initial orientation error patterns are more useful for distinguishing between different assumption combinations than efficiencies. We discuss how consideration of model predictions may be helpful in the design of experiments. PMID:27266672

  15. Do we need various assumptions to get a good FCN?

    NASA Astrophysics Data System (ADS)

    Huang, C.; Zhang, M.

    2015-08-01

    Free core nutation (FCN) is a rotational mode of the Earth with a fluid core. All traditional theoretical methods produce an FCN period near 460 sidereal days with the PREM Earth model, while precise observations (VLBI + SG tides) indicate approximately 430 days. To close this large gap, astronomers and geophysicists have proposed various assumptions, e.g., increasing the core-mantle-boundary (CMB) flattening by about 5%, a strong coupling between nutation and the geomagnetic field near the CMB, viscous coupling, or topographic coupling across the CMB. Do we really need these unproven assumptions, or do the traditional theoretical methods themselves cause the problem? Earth models (e.g., PREM) provide accurate and robust profiles of physical parameters, such as density and the Lame parameters, but their radial derivatives, which all traditional methods also use to calculate normal modes (e.g., FCN), nutation and tides of the non-rigid Earth, are not as trustworthy as the parameters themselves. Moreover, the truncation of the expansion series of the displacement vector and stress tensor in traditional methods is also questionable. A new stratified spectral method is proposed and applied to the computation of normal modes to avoid these problems. Our primary result for the FCN period is 435 ± 3 sidereal days.

  16. The extended evolutionary synthesis: its structure, assumptions and predictions.

    PubMed

    Laland, Kevin N; Uller, Tobias; Feldman, Marcus W; Sterelny, Kim; Müller, Gerd B; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-08-22

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the 'extended evolutionary synthesis' (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism-environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. PMID:26246559

  17. Miniaturized photoelectric angular sensor with simplified design

    NASA Astrophysics Data System (ADS)

    Dumbravescu, Niculae; Schiaua, Silviu

    1999-09-01

    In building the movable elements of robots, peripheral devices and measuring apparatus, increasing the resolution of angular sensor systems based on incremental rotary encoders is essential, together with decreasing their complexity, dimensions and weight. Especially when the angular sensor is integrated in a measuring system belonging to a programmed light airplane for surveillance, the key issue is to reduce both dimensions and weight. This can be done using a simplified design consisting of the following solutions: replacement of the fragile Cr-on-glass substrate, 1.5 mm thick (normally used for the fabrication of incremental disks), with a light Cr-on-polycarbonate substrate only 0.15 mm thick; omission of the collimating optics (based on microlenses, used in the IR emitter-photocell receiver assembly), made possible by the good coupling efficiency obtained when these elements are brought as close as 0.45 mm; shrinkage of the disk diameter to only 14 mm; and the use of surface-mount devices and the related surface-mount technology, further reducing dimensions and weight. The maximum number of slits obtainable on a 14 mm dividing disk in the Cr-on-polycarbonate version is approximately 1000, so the 360 slits required here pose no problem. The requested angular resolution (only 0.5 degrees for the light airplane) does not require the classical 4x digital multiplication; a 2x multiplication suffices, resulting in simplified electronics. The proposed design yielded an original arrangement for a small, lightweight, heavy-duty angular sensor system based on an incremental transducer, useful not only in avionics but also in robotics and other special applications. Moreover, extending the number of fixed gratings (masks) allows many primary signals to be derived, so that the resolution can be further increased to as fine as 6 angular minutes from the initial 360 slits.
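    The resolution arithmetic in this abstract (360 slits, 2x rather than 4x multiplication, 0.5-degree target) can be sketched as follows; the function name is illustrative, not from the paper.

```python
# Incremental encoder resolution: counts per revolution equal the number
# of slits times the electronic multiplication factor, and the angular
# resolution in degrees is 360 divided by that count.
def encoder_resolution_deg(slits: int, multiplication: int) -> float:
    return 360.0 / (slits * multiplication)

# 360 slits with 2x multiplication meet the 0.5-degree requirement,
# so the classical 4x multiplication (0.25 degrees) is unnecessary.
print(encoder_resolution_deg(360, 2))  # 0.5
print(encoder_resolution_deg(360, 4))  # 0.25

# A resolution of 6 angular minutes (0.1 degree) needs 3600 counts per
# revolution, obtainable by deriving more primary signals from the
# additional fixed gratings rather than by adding slits.
print(360.0 / 0.1)  # 3600.0
```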

  18. Augmenting simplified habit reversal in the treatment of oral-digital habits exhibited by individuals with mental retardation.

    PubMed Central

    Long, E S; Miltenberger, R G; Ellingson, S A; Ott, S M

    1999-01-01

    We investigated whether a simplified habit reversal treatment eliminates fingernail biting and related oral-digital habits exhibited by individuals with mild to moderate mental retardation. Although simplified habit reversal did little to decrease the target behaviors for 3 of 4 participants, simplified habit reversal plus additional treatment procedures decreased the behavior to near-zero levels for all participants. These procedures included remote prompting, remote contingencies involving differential reinforcement plus response cost, and differential reinforcement of nail growth. Limitations of habit reversal for individuals with mental retardation along with directions for future research involving therapist-mediated treatment procedures, particularly those involving remote prompting and remote contingencies, are discussed. PMID:10513029

  19. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    SciTech Connect

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-11-01

    Continuous responses (e.g., body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if log-normality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available, as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption influences BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within-dose-group variance is small, while the log-normality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation studies are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption.
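    Characterizing a log-normal distribution from summarized data rests on the method-of-moments relationship between a reported arithmetic mean ± standard deviation and the log-scale parameters; a minimal sketch (the numbers are hypothetical, not from the study):

```python
import math

def lognormal_params_from_summary(mean, sd):
    """Method-of-moments log-scale parameters (mu, sigma) recovered
    from a reported arithmetic mean and standard deviation."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

# Hypothetical summarized response: mean 250, SD 25 (e.g. body weight in g).
mu, sigma = lognormal_params_from_summary(250.0, 25.0)

# Round-trip check: the implied log-normal reproduces the summary statistics.
m_back = math.exp(mu + sigma ** 2 / 2)
s_back = m_back * math.sqrt(math.exp(sigma ** 2) - 1.0)
print(round(m_back, 6), round(s_back, 6))  # 250.0 25.0
```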

  20. Assessing the robustness of quantitative fatty acid signature analysis to assumption violations

    USGS Publications Warehouse

    Bromaghin, Jeffrey; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.

    2016-01-01

    In most QFASA applications, investigators will generally have some knowledge of the prey available to predators and be able to assess the completeness of prey signature data and sample additional prey as necessary. Conversely, because calibration coefficients are derived from feeding trials with captive animals and their values may be sensitive to consumer physiology and nutritional status, their applicability to free-ranging animals is difficult to establish. We therefore recommend that investigators first make any improvements to the prey signature data that seem warranted and then base estimation on the Aitchison distance measure, as it appears to minimize risk from violations of the assumption that is most difficult to verify.
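    The Aitchison distance recommended above compares two compositional signatures after a centered log-ratio transform; a minimal sketch with hypothetical four-component signatures (real fatty acid signatures have dozens of components):

```python
import math

def aitchison_distance(x, y):
    """Aitchison distance between two compositions (all parts > 0):
    Euclidean distance between their centered log-ratio transforms."""
    gx = math.exp(sum(math.log(v) for v in x) / len(x))  # geometric means
    gy = math.exp(sum(math.log(v) for v in y) / len(y))
    return math.sqrt(sum((math.log(xi / gx) - math.log(yi / gy)) ** 2
                         for xi, yi in zip(x, y)))

prey = [0.40, 0.30, 0.20, 0.10]   # hypothetical prey signature
pred = [0.35, 0.30, 0.25, 0.10]   # hypothetical predator signature
print(round(aitchison_distance(prey, pred), 4))
print(aitchison_distance(prey, prey))  # 0.0 for identical signatures
```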

  1. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    SciTech Connect

    Tidball, Rick; Bluestein, Joel; Rodriguez, Nick; Knoke, Stu

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
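    The levelized cost of energy mentioned above is commonly computed by annualizing capital cost with a capital recovery factor and spreading it over annual generation; a simplified sketch with illustrative inputs (the project's actual cost and performance assumptions are in the report, not here):

```python
def lcoe_per_mwh(capital_per_kw, fixed_om_per_kw_yr, variable_om_per_mwh,
                 fuel_per_mwh, capacity_factor, rate=0.07, lifetime_yr=30):
    """Simplified LCOE in $/MWh: annualized capital plus fixed O&M,
    divided by annual generation, plus per-MWh variable O&M and fuel."""
    crf = rate * (1 + rate) ** lifetime_yr / ((1 + rate) ** lifetime_yr - 1)
    mwh_per_kw_yr = 8760 * capacity_factor / 1000.0  # annual MWh per kW
    return ((capital_per_kw * crf + fixed_om_per_kw_yr) / mwh_per_kw_yr
            + variable_om_per_mwh + fuel_per_mwh)

# Hypothetical gas plant: $1000/kW capital, $15/kW-yr fixed O&M,
# $3/MWh variable O&M, $30/MWh fuel, 85% capacity factor.
print(round(lcoe_per_mwh(1000, 15, 3, 30, 0.85), 1))
```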

  2. Commentary: profiling by appearance and assumption: beyond race and ethnicity.

    PubMed

    Sapién, Robert E

    2010-04-01

    In this issue, Acquaviva and Mintz highlight issues regarding racial profiling in medicine and how it is perpetuated through medical education: Physicians are taught to make subjective determinations of race and/or ethnicity in case presentations, and such assumptions may affect patient care. The author of this commentary believes that the discussion should be broadened to include profiling on the basis of general appearance. The author reports personal experiences as someone who has profiled and been profiled by appearance, sometimes by skin color and sometimes by other physical attributes. In the two cases detailed here, patient care could have been affected had the author not become aware of his practices in such situations. The author advocates raising awareness of profiling in the broader sense through training. PMID:20354369

  3. Deconstructing Community for Conservation: Why Simple Assumptions are Not Sufficient.

    PubMed

    Waylen, Kerry Ann; Fischer, Anke; McGowan, Philip J K; Milner-Gulland, E J

    2013-01-01

    Many conservation policies advocate engagement with local people, but conservation practice has sometimes been criticised for a simplistic understanding of communities and social context. To counter this, this paper explores social structuring and its influences on conservation-related behaviours at the site of a conservation intervention near Pipar forest, within the Seti Khola valley, Nepal. Qualitative and quantitative data from questionnaires and Rapid Rural Appraisal demonstrate how links between groups directly and indirectly influence behaviours of conservation relevance (including existing and potential resource-use and proconservation activities). For low-status groups the harvesting of resources can be driven by others' preference for wild foods, whilst perceptions of elite benefit-capture may cause reluctance to engage with future conservation interventions. The findings reiterate the need to avoid relying on simple assumptions about 'community' in conservation, and particularly the relevance of understanding relationships between groups, in order to understand natural resource use and implications for conservation. PMID:23956483

  4. Linear irreversible heat engines based on local equilibrium assumptions

    NASA Astrophysics Data System (ADS)

    Izumida, Yuki; Okuda, Koji

    2015-08-01

    We formulate an endoreversible finite-time Carnot cycle model based on the assumptions of local equilibrium and constant energy flux, where the efficiency and the power are expressed in terms of the thermodynamic variables of the working substance. By analyzing the entropy production rate caused by the heat transfer in each isothermal process during the cycle, and using the endoreversible condition applied to the linear response regime, we identify the thermodynamic flux and force of the present system and obtain a linear relation that connects them. We calculate the efficiency at maximum power in the linear response regime by using the linear relation, which agrees with the Curzon-Ahlborn (CA) efficiency known as the upper bound in this regime. This reason is also elucidated by rewriting our model into the form of the Onsager relations, where our model turns out to satisfy the tight-coupling condition leading to the CA efficiency.
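    The Curzon-Ahlborn efficiency cited above is simply eta_CA = 1 - sqrt(Tc/Th); a quick numerical check with illustrative temperatures, also showing its linear-response expansion eta_C/2 + eta_C^2/8 + ...:

```python
import math

def carnot_efficiency(t_hot, t_cold):
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    """Efficiency at maximum power of the endoreversible Carnot cycle."""
    return 1.0 - math.sqrt(t_cold / t_hot)

t_hot, t_cold = 500.0, 300.0  # illustrative reservoir temperatures (K)
eta_c = carnot_efficiency(t_hot, t_cold)
eta_ca = curzon_ahlborn_efficiency(t_hot, t_cold)
# In the linear response regime eta_CA agrees with eta_C/2 to leading order.
print(round(eta_ca, 4))                      # 0.2254
print(round(eta_c / 2 + eta_c ** 2 / 8, 4))  # 0.22
```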

  5. Special education and the regular education initiative: basic assumptions.

    PubMed

    Jenkins, J R; Pious, C G; Jewell, M

    1990-04-01

    The regular education initiative (REI) is a thoughtful response to identified problems in our system for educating low-performing children, but it is not a detailed blueprint for changing the system. Educators must achieve consensus on what the REI actually proposes. The authors infer from the REI literature five assumptions regarding the roles and responsibilities of elementary regular classroom teachers, concluding that these teachers and specialists form a partnership, but the classroom teachers are ultimately in charge of the instruction of all children in their classrooms, including those who are not succeeding in the mainstream. A discussion of the target population and of several partnership models further delineates REI issues and concerns. PMID:2185027

  6. Experimental assessment of unvalidated assumptions in classical plasticity theory.

    SciTech Connect

    Brannon, Rebecca Moss; Burghardt, Jeffrey A.; Bauer, Stephen J.; Bronowski, David R.

    2009-01-01

    This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

  7. Uncovering Metaethical Assumptions in Bioethical Discourse across Cultures.

    PubMed

    Sullivan, Laura Specker

    2016-03-01

    Much of bioethical discourse now takes place across cultures. This does not mean that cross-cultural understanding has increased. Many cross-cultural bioethical discussions are marked by entrenched disagreement about whether and why local practices are justified. In this paper, I argue that a major reason for these entrenched disagreements is that problematic metaethical commitments are hidden in these cross-cultural discourses. Using the issue of informed consent in East Asia as an example of one such discourse, I analyze two representative positions in the discussion and identify their metaethical commitments. I suggest that the metaethical assumptions of these positions result from their shared method of ethical justification: moral principlism. I then show why moral principlism is problematic in cross-cultural analyses and propose a more useful method for pursuing ethical justification across cultures. PMID:27157111

  8. Assumptions, ambiguities, and possibilities in interdisciplinary population health research.

    PubMed

    Whitfield, Kyle; Reid, Colleen

    2004-01-01

    The rhetoric of "interdisciplinary," "multi-disciplinary," and "transdisciplinary" permeates many population health research projects, funding proposals, and strategic initiatives. Working across, with, and between disciplines is touted as a way to advance knowledge, answer more complex questions, and work more meaningfully with users of research. From our own experiences and involvement in the 2003 CIHR Institute for Public and Population Health's Summer Institute, interdisciplinary population health research (IPHR) remains ambiguously defined and poorly understood. In this commentary, we critically explore some characteristics and ongoing assumptions associated with IPHR and propose questions to ensure a more deliberate research process. It is our hope that population health researchers and the CIHR will consider these questions to help strengthen IPHR. PMID:15622792

  9. Dynamic Group Diffie-Hellman Key Exchange under standard assumptions

    SciTech Connect

    Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David

    2002-02-14

    Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model.
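    The core of group Diffie-Hellman is that commuting exponentiations let n principals arrive at a shared g^(x1...xn); a toy, unauthenticated three-member sketch with illustrative (insecure) parameters, omitting the authentication and dynamic membership operations the paper actually analyzes:

```python
import secrets

# Toy parameters: a Mersenne prime and a small base, fine for illustration
# but NOT for real security, which needs vetted groups plus authentication.
p = 2 ** 61 - 1
g = 3

# Each member holds a private exponent.
xa, xb, xc = (secrets.randbelow(p - 2) + 2 for _ in range(3))

# Because modular exponentiation commutes, every member reaches
# g^(xa*xb*xc) mod p by applying its own secret last to the values
# the other members publish.
k_a = pow(pow(pow(g, xb, p), xc, p), xa, p)
k_b = pow(pow(pow(g, xc, p), xa, p), xb, p)
k_c = pow(pow(pow(g, xa, p), xb, p), xc, p)
assert k_a == k_b == k_c  # all three derive the same group secret
print("shared secret established")
```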

  10. Evaluation and Application of Andragogical Assumptions to the Adult Online Learning Environment

    ERIC Educational Resources Information Center

    Blondy, Laurie C.

    2007-01-01

    The usefulness and application of andragogical assumptions has long been debated by adult educators. The assumptions of andragogy are often criticized due to the lack of empirical evidence to support them, even though several educational theories are represented within the assumptions. In adult online education, these assumptions represent an…

  11. 20 CFR 416.1090 - Assumption when we make a finding of substantial failure.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Determinations of Disability Assumption of Disability... responsibility for performing the disability determination function from the State agency, whether the assumption... of assumption. The date of any partial or complete assumption of the disability...

  12. 20 CFR 416.1090 - Assumption when we make a finding of substantial failure.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Determinations of Disability Assumption of Disability... responsibility for performing the disability determination function from the State agency, whether the assumption... of assumption. The date of any partial or complete assumption of the disability...

  13. 20 CFR 416.1090 - Assumption when we make a finding of substantial failure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Determinations of Disability Assumption of Disability... responsibility for performing the disability determination function from the State agency, whether the assumption... of assumption. The date of any partial or complete assumption of the disability...

  14. 20 CFR 416.1090 - Assumption when we make a finding of substantial failure.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Determinations of Disability Assumption of Disability... responsibility for performing the disability determination function from the State agency, whether the assumption... of assumption. The date of any partial or complete assumption of the disability...

  15. 20 CFR 416.1090 - Assumption when we make a finding of substantial failure.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Determinations of Disability Assumption of Disability... responsibility for performing the disability determination function from the State agency, whether the assumption... of assumption. The date of any partial or complete assumption of the disability...

  16. Simplified Ion Thruster Xenon Feed System for NASA Science Missions

    NASA Technical Reports Server (NTRS)

    Snyder, John Steven; Randolph, Thomas M.; Hofer, Richard R.; Goebel, Dan M.

    2009-01-01

    The successful implementation of ion thruster technology on the Deep Space 1 technology demonstration mission paved the way for its first use on the Dawn science mission, which launched in September 2007. Both Deep Space 1 and Dawn used a "bang-bang" xenon feed system which has proven to be highly successful. This type of feed system, however, is complex with many parts and requires a significant amount of engineering work for architecture changes. A simplified feed system, with fewer parts and less engineering work for architecture changes, is desirable to reduce the feed system cost to future missions. An attractive new path for ion thruster feed systems is based on new components developed by industry in support of commercial applications of electric propulsion systems. For example, since the launch of Deep Space 1 tens of mechanical xenon pressure regulators have successfully flown on commercial spacecraft using electric propulsion. In addition, active proportional flow controllers have flown on the Hall-thruster-equipped Tacsat-2, are flying on the ion thruster GOCE mission, and will fly next year on the Advanced EHF spacecraft. This present paper briefly reviews the Dawn xenon feed system and those implemented on other xenon electric propulsion flight missions. A simplified feed system architecture is presented that is based on assembling flight-qualified components in a manner that will reduce non-recurring engineering associated with propulsion system architecture changes, and is compared to the NASA Dawn standard. The simplified feed system includes, compared to Dawn, passive high-pressure regulation, a reduced part count, reduced complexity due to cross-strapping, and reduced non-recurring engineering work required for feed system changes. A demonstration feed system was assembled using flight-like components and used to operate a laboratory NSTAR-class ion engine. Feed system components integrated into a single-string architecture successfully operated

  17. Halo-independent direct detection analyses without mass assumptions

    NASA Astrophysics Data System (ADS)

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew

    2015-10-01

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the mχ-σn plane. Recently methods which are independent of the DM halo velocity distribution have been developed which present results in the vmin-g̃ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from vmin to nuclear recoil momentum (pR), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h̃(pR). The entire family of conventional halo-independent g̃(vmin) plots for all DM masses are directly found from the single h̃(pR) plot through a simple rescaling of axes. By considering results in h̃(pR) space, one can determine if two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g̃(vmin) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.
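    The change of variables underlying the method is the elastic-scattering kinematic relation v_min = p_R / (2 mu), with mu the DM-nucleus reduced mass, so a single curve in p_R maps onto the v_min axis for any DM mass by a pure rescaling; a numerical sketch with illustrative values for a silicon target:

```python
# v_min = p_R / (2 * mu) in natural units, converted to km/s at the end.
# The masses and recoil momentum below are illustrative, not fitted values.
C_KMS = 2.998e5  # speed of light in km/s

def reduced_mass_gev(m_chi_gev, m_nucleus_gev):
    return m_chi_gev * m_nucleus_gev / (m_chi_gev + m_nucleus_gev)

def vmin_kms(p_r_mev, m_chi_gev, m_nucleus_gev):
    mu = reduced_mass_gev(m_chi_gev, m_nucleus_gev)
    return (p_r_mev / 1000.0) / (2.0 * mu) * C_KMS

M_SI = 26.16  # approximate silicon nucleus mass in GeV
# The same recoil momentum corresponds to a different v_min for each DM
# mass; this is exactly the axis rescaling between h(p_R) and g(v_min).
for m_chi in (7.0, 10.0, 50.0):
    print(m_chi, round(vmin_kms(20.0, m_chi, M_SI), 1))
```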

  18. Stochastic uncertainty models for the luminance consistency assumption.

    PubMed

    Corpetti, Thomas; Mémin, Etienne

    2012-02-01

    In this paper, a stochastic formulation of the brightness consistency used in many computer vision problems involving dynamic scenes (for instance, motion estimation or point tracking) is proposed. Usually, this model, which assumes that the luminance of a point is constant along its trajectory, is expressed in a differential form through the total derivative of the luminance function. This differential equation linearly links the point velocity to the spatial and temporal gradients of the luminance function. However, when dealing with images, the available information only holds at discrete time and on a discrete grid. In this paper, we formalize the image luminance as a continuous function transported by a flow known only up to some uncertainties related to such a discretization process. Relying on stochastic calculus, we define a formulation of the luminance function preservation in which these uncertainties are taken into account. From such a framework, it can be shown that the usual deterministic optical flow constraint equation corresponds to our stochastic evolution under some strong constraints. These constraints can be relaxed by imposing a weaker temporal assumption on the luminance function and also in introducing anisotropic intensity-based uncertainties. We also show that these uncertainties can be computed at each point of the image grid from the image data and hence provide meaningful information on the reliability of the motion estimates. To demonstrate the benefit of such a stochastic formulation of the brightness consistency assumption, we have considered a local least-squares motion estimator relying on this new constraint. This new motion estimator significantly improves the quality of the results. PMID:21791410
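    The deterministic optical flow constraint that the stochastic model generalizes, Ix*u + Iy*v + It = 0, can be solved by local least squares in the spirit of the estimator the authors build on; a self-contained sketch on synthetic data (the paper's intensity-based uncertainty terms are omitted here):

```python
import numpy as np

def local_flow(img0, img1):
    """Least-squares solution of Ix*u + Iy*v + It = 0 over the whole
    (small) window, i.e. the deterministic brightness-constancy estimate."""
    iy, ix = np.gradient((img0 + img1) / 2.0)  # spatial gradients
    it = img1 - img0                           # temporal difference
    A = np.stack([ix.ravel(), iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -it.ravel(), rcond=None)
    return u, v

# Synthetic pair: a smooth quadratic intensity field translated by (1, 2).
y, x = np.mgrid[0:32, 0:32].astype(float)
bump = lambda cx, cy: 0.01 * ((x - cx) ** 2 + (y - cy) ** 2)
u, v = local_flow(bump(8, 8), bump(9, 10))
print(round(u, 2), round(v, 2))  # close to the true displacement (1, 2)
```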

  19. Halo-independent direct detection analyses without mass assumptions

    SciTech Connect

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m{sub χ}−σ{sub n} plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the v{sub min}−g-tilde plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v{sub min} to nuclear recoil momentum (p{sub R}), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h-tilde(p{sub R}). The entire family of conventional halo-independent g-tilde(v{sub min}) plots for all DM masses are directly found from the single h-tilde(p{sub R}) plot through a simple rescaling of axes. By considering results in h-tilde(p{sub R}) space, one can determine if two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g-tilde(v{sub min}) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.

  20. Halo-independent direct detection analyses without mass assumptions

    SciTech Connect

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the mχ – σn plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the vmin – g~ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from vmin to nuclear recoil momentum (pR), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h~(pR). The entire family of conventional halo-independent g~(vmin) plots for all DM masses are directly found from the single h~(pR) plot through a simple rescaling of axes. By considering results in h~(pR) space, one can determine if two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g~(vmin) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.

  1. Halo-independent direct detection analyses without mass assumptions

    DOE PAGES Beta

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the mχ – σn plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the vmin – g~ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from vmin to nuclear recoil momentum (pR), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h~(pR). The entire family of conventional halo-independent g~(vmin) plots for all DM masses are directly found from the single h~(pR) plot through a simple rescaling of axes. By considering results in h~(pR) space, one can determine if two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g~(vmin) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.

  2. A simplified electrostatic model for hydrolase catalysis.

    PubMed

    Pessoa Filho, Pedro de Alcantara; Prausnitz, John M

    2015-07-01

    Toward the development of an electrostatic model for enzyme catalysis, the active site of the enzyme is represented by a cavity whose surface (and beyond) is populated by electric charges as determined by pH and the enzyme's structure. The electric field in the cavity is obtained from electrostatics and a suitable computer program. The key chemical bond in the substrate, at its ends, has partial charges with opposite signs determined from published force-field parameters. The electric field attracts one end of the bond and repels the other, causing bond tension. If that tension exceeds the attractive force between the atoms, the bond breaks; the enzyme is then a successful catalyst. To illustrate this very simple model, based on numerous assumptions, some results are presented for three hydrolases: hen egg-white lysozyme, bovine trypsin and bovine ribonuclease. Attention is given to the effect of pH. PMID:25881958
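The model's scission criterion can be caricatured in a few lines: the field exerts opposite forces on the bond's partial charges, and the bond breaks when that tension exceeds the interatomic attractive force. A toy sketch under that reading of the abstract (all numbers and names are hypothetical, not the authors' program):

```python
# Toy version of the scission criterion described in the abstract: the field
# pulls the bond's partial charges (+q, -q) in opposite directions; if that
# tension exceeds the attractive interatomic force, the bond breaks.
# SI units; every value below is hypothetical.

def bond_tension(q_partial, field_along_bond):
    """Stretching force (N) on a bond whose end atoms carry charges +/- q_partial
    (C) in a uniform field component (V/m) aligned with the bond axis."""
    return q_partial * field_along_bond

def bond_breaks(q_partial, field_along_bond, max_bond_force):
    """Scission criterion: field-induced tension exceeds the bond's strength."""
    return bond_tension(q_partial, field_along_bond) > max_bond_force

# Hypothetical numbers: a 0.4 e partial charge in a 1e10 V/m active-site field,
# compared against a ~1 nN rupture force.
e = 1.602176634e-19  # elementary charge, C
breaks = bond_breaks(0.4 * e, 1.0e10, 1.0e-9)
```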

  3. Simplified Modeling of Oxidation of Hydrocarbons

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Harstad, Kenneth

    2008-01-01

    A method of simplified computational modeling of oxidation of hydrocarbons is undergoing development. This is one of several developments needed to enable accurate computational simulation of turbulent, chemically reacting flows. At present, accurate computational simulation of such flows is difficult or impossible in most cases because (1) the numbers of grid points needed for adequate spatial resolution of turbulent flows in realistically complex geometries are beyond the capabilities of typical supercomputers now in use and (2) the combustion of typical hydrocarbons proceeds through decomposition into hundreds of molecular species interacting through thousands of reactions. Hence, the combination of detailed reaction-rate models with the fundamental flow equations yields flow models that are computationally prohibitive. Further, a reduction of at least an order of magnitude in the dimension of reaction kinetics is one of the prerequisites for feasibility of computational simulation of turbulent, chemically reacting flows. In the present method of simplified modeling, all molecular species involved in the oxidation of hydrocarbons are classified as either light or heavy; heavy molecules are those having 3 or more carbon atoms. The light molecules are not subject to meaningful decomposition, and the heavy molecules are considered to decompose into only 13 specified constituent radicals, a few of which are listed in the table. One constructs a reduced-order model, suitable for use in estimating the release of heat and the evolution of temperature in combustion, from a base comprising the 13 constituent radicals plus a total of 26 other species that include the light molecules and related light free radicals. Then rather than following all possible species through their reaction coordinates, one follows only the reduced set of reaction coordinates of the base. The behavior of the base was examined in test computational simulations of the combustion of

  4. New weak keys in simplified IDEA

    NASA Astrophysics Data System (ADS)

    Hafman, Sari Agustini; Muhafidzah, Arini

    2016-02-01

    Simplified IDEA (S-IDEA) is a simplified version of the International Data Encryption Algorithm (IDEA) and a useful teaching tool to help students understand IDEA. In 2012, Muryanto and Hafman found a weak key class in S-IDEA by using the one-round differential characteristic (0, ν, 0, ν) → (0, 0, ν, ν) on the first round to produce the input difference (0, 0, ν, ν) on the fifth round. Because Muryanto and Hafman used only three one-round differential characteristics, we conducted research to find new one-round differential characteristics and used them to produce new weak key classes of S-IDEA. To find new one-round differential characteristics of S-IDEA, we applied a multiplication mod 2^16+1 on the input difference and combinations of the active sub-keys Z1, Z4, Z5, Z6. New classes of weak keys are obtained by combining all of these characteristics and using them to construct two new full-round differential characteristics of S-IDEA, with or without the 4th-round sub-key. In this research, we found six new one-round differential characteristics and combined them to construct two new full-round differential characteristics of S-IDEA. When the two new full-round differential characteristics are used and the 4th-round sub-key is required, we obtain 2 new weak key classes, of size 2^13 and 2^8. When the two new full-round differential characteristics are used but the 4th-round sub-key is not required, the weak key class of 2^13 grows to 2^21 and that of 2^8 to 2^10. A membership test cannot be applied to recover the key bits in those weak key classes; the unknown key bits can only be recovered by a brute force attack. The simulation result indicates that the key bits can be recovered with a longest computation time of 0.031 ms.
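The multiplication modulo 2^16+1 referred to above is the standard IDEA group operation, in which the all-zero word encodes the value 2^16. A sketch of that operation, parameterized over word size as an illustration (the paper's own implementation details are not given):

```python
def idea_mul(a, b, bits=16):
    """IDEA-style multiplication modulo 2**bits + 1, where the all-zero word
    encodes the value 2**bits (word size is parameterized here as an
    illustration; IDEA itself uses bits=16)."""
    modulus = (1 << bits) + 1
    a = a if a != 0 else (1 << bits)
    b = b if b != 0 else (1 << bits)
    r = (a * b) % modulus
    return r % (1 << bits)  # maps the result 2**bits back to its 0 encoding
```

Because 2^16+1 is prime, every word is invertible under this operation, which is what makes it useful both as a cipher building block and for propagating differences through a characteristic.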

  5. Design for a simplified cochlear implant system.

    PubMed

    An, Soon Kwan; Park, Se-Ik; Jun, Sang Beom; Lee, Choong Jae; Byun, Kyung Min; Sung, Jung Hyun; Wilson, Blake S; Rebscher, Stephen J; Oh, Seung Ha; Kim, Sung June

    2007-06-01

    A simplified cochlear implant (CI) system would be appropriate for widespread use in developing countries. Here, we describe a CI that we have designed to realize such a concept. The system implements 8 channels of processing and stimulation using the continuous interleaved sampling (CIS) strategy. A generic digital signal processing (DSP) chip is used for the processing, and the filtering functions are performed with a fast Fourier transform (FFT) of a microphone or other input. Data derived from the processing are transmitted through an inductive link using pulse width modulation (PWM) encoding and amplitude shift keying (ASK) modulation. The same link is used in the reverse direction for backward telemetry of electrode and system information. A custom receiver-stimulator chip has been developed that demodulates incoming data using pulse counting and produces charge balanced biphasic pulses at 1000 pulses/s/electrode. This chip is encased in a titanium package that is hermetically sealed using a simple but effective method. A low cost metal-silicon hybrid mold has been developed for fabricating an intracochlear electrode array with 16 ball-shaped stimulating contacts. PMID:17554817
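The FFT-based filtering stage of the CIS strategy can be sketched as splitting a frame's power spectrum into per-channel band energies. A minimal illustration (the contiguous equal-width band split and frame length are assumptions, not the device's actual electrode mapping):

```python
import numpy as np

def fft_channel_energies(frame, n_channels=8):
    """Sketch of the CIS filtering stage: per-channel band energies from the
    one-sided power spectrum of an audio frame (the equal-width band split
    is an assumption, not the device's actual design)."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spectrum[1:], n_channels)  # drop the DC bin
    return np.array([band.sum() for band in bands])

# A pure tone in FFT bin 100 of a 256-sample frame: the 128 non-DC bins split
# into 8 bands of 16, so the tone should dominate channel index 6.
n = np.arange(256)
energies = fft_channel_energies(np.cos(2 * np.pi * 100 * n / 256))
```

In a full CIS processor these band energies would then be compressed and used to set the amplitudes of the interleaved biphasic pulse trains.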

  6. Simplified tube models for entangled supramolecular polymers

    NASA Astrophysics Data System (ADS)

    Boudara, Victor; Read, Daniel

    2015-03-01

    This presentation describes current efforts investigating the non-linear rheology of entangled, supramolecular polymeric materials. We describe two recently developed models: 1) We have developed a simplified model for the rheology of entangled telechelic star polymers, based on a pre-averaged orientation tensor, a stretch equation, and a stretch-dependent probability of detachment of the sticker. In both the linear and non-linear regimes, we produce maps of the whole parameter space, indicating the parameter values for which qualitative changes in response to flow are predicted. Results in the linear rheology regime are consistent with previous, more detailed models and are in qualitative agreement with experimental data. 2) Using the same modelling framework, we investigate entangled linear polymers with stickers along the backbone. We use a set of coupled equations to describe the stretch between successive stickers, and equations similar to those of our star model for attachment/detachment of the sticky groups. This model is applicable to industrial polymers such as entangled thermoplastic elastomers, or to functionalised model linear polymers. The work leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA Grant Agreement No. 607937 (SUPOLEN).

  7. Combustion Safety Simplified Test Protocol Field Study

    SciTech Connect

    Brand, L; Cautley, D.; Bohac, D.; Francisco, P.; Shen, L.; Gloss, S.

    2015-11-05

    "9Combustions safety is an important step in the process of upgrading homes for energy efficiency. There are several approaches used by field practitioners, but researchers have indicated that the test procedures in use are complex to implement and provide too many false positives. Field failures often mean that the house is not upgraded until after remediation or not at all, if not include in the program. In this report the PARR and NorthernSTAR DOE Building America Teams provide a simplified test procedure that is easier to implement and should produce fewer false positives. A survey of state weatherization agencies on combustion safety issues, details of a field data collection instrumentation package, summary of data collected over seven months, data analysis and results are included. The project provides several key results. State weatherization agencies do not generally track combustion safety failures, the data from those that do suggest that there is little actual evidence that combustion safety failures due to spillage from non-dryer exhaust are common and that only a very small number of homes are subject to the failures. The project team collected field data on 11 houses in 2015. Of these homes, two houses that demonstrated prolonged and excessive spillage were also the only two with venting systems out of compliance with the National Fuel Gas Code. The remaining homes experienced spillage that only occasionally extended beyond the first minute of operation. Combustion zone depressurization, outdoor temperature, and operation of individual fans all provide statistically significant predictors of spillage.

  8. Simplified Dynamic Analysis of Grinders Spindle Node

    NASA Astrophysics Data System (ADS)

    Demec, Peter

    2014-12-01

    The contribution deals with the simplified dynamic analysis of a surface grinding machine spindle node. The dynamic analysis is based on the transfer matrix method, which is essentially a matrix form of the method of initial parameters. The advantage of the described method, despite the seemingly complex mathematical apparatus, is primarily that it does not require costly commercial finite element software to solve the problem. All calculations can be made, for example, in MS Excel, which is advantageous especially in the initial stages of constructing the spindle node, for rapid assessment of the suitability of its design. After the entire structure of the spindle node is detailed, it is still necessary to perform a refined dynamic analysis in an FEM environment, which requires the necessary skills and experience and is therefore economically demanding. This work was developed within grant project KEGA No. 023TUKE-4/2012 Creation of a comprehensive educational - teaching material for the article Production technique using a combination of traditional and modern information technology and e-learning.
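The transfer matrix method propagates a state vector (deflection, slope, bending moment, shear) across shaft segments by plain matrix multiplication, which is why it fits in a spreadsheet. A sketch with a standard massless-segment field matrix and a lumped-mass point matrix (sign conventions and names are assumptions of this illustration, not the paper's formulation):

```python
import numpy as np

def field_matrix(l, EI):
    """Transfer (field) matrix of a massless elastic beam segment of length l
    and bending stiffness EI, acting on the state [w, phi, M, Q] (sign
    conventions are an assumption of this sketch)."""
    return np.array([
        [1.0, l,   l**2 / (2*EI), l**3 / (6*EI)],
        [0.0, 1.0, l / EI,        l**2 / (2*EI)],
        [0.0, 0.0, 1.0,           l],
        [0.0, 0.0, 0.0,           1.0],
    ])

def point_mass_matrix(m, omega):
    """Point matrix of a lumped mass m at circular frequency omega: it adds
    an inertial shear jump proportional to the deflection."""
    P = np.eye(4)
    P[3, 0] = m * omega**2
    return P

def overall_transfer(matrices):
    """Chain segment/point matrices from one end of the spindle to the other."""
    T = np.eye(4)
    for M in matrices:
        T = M @ T
    return T
```

A useful consistency check is that two consecutive field matrices compose into the field matrix of the combined length, so refining the segmentation does not change the static result.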

  9. Simplified Optics and Controls for Laser Communications

    NASA Technical Reports Server (NTRS)

    Chen, Chien-Chung; Hemmati, Hamid

    2006-01-01

    A document discusses an architecture of a spaceborne laser communication system that provides for a simplified control subsystem that stabilizes the line of sight in a desired direction. Heretofore, a typical design for a spaceborne laser communication system has called for a high-bandwidth control loop, a steering mirror and associated optics, and a fast steering-mirror actuator to stabilize the line of sight in the presence of vibrations. In the present architecture, the need for this fast steering-mirror subsystem is eliminated by mounting the laser-communication optics on a disturbance-free platform (DFP) that suppresses coupling of vibrations to the optics by 60 dB. Taking advantage of microgravitation, in the DFP, the optical assembly is free-flying relative to the rest of the spacecraft, and a low-spring-constant pointing control subsystem exerts small forces to regulate the position and orientation of the optics via voice coils. All steering is effected via the DFP, which can be controlled in all six degrees of freedom relative to the spacecraft. A second control loop, closed around a position sensor and the spacecraft attitude-control system, moves the spacecraft as needed to prevent mechanical contact with the optical assembly.

  10. Simplified methods for calculating photodissociation rates

    NASA Technical Reports Server (NTRS)

    Shimazaki, T.; Ogawa, T.; Farrell, B. C.

    1977-01-01

    Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. A significant difference sometimes appears in calculations of the individual band, but the total transmission and the total dissociation coefficients integrated over the entire SR (solar radiation) band region agree well between the methods. The ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the method. A simpler method is developed for the purpose of reducing the computation time and computer memory size necessary for storing coefficients of the equations. The new method can reduce the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, and yet the result agrees within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.
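The dissociation coefficients discussed above are band sums of the form J = Σi σi·Fi·Ti, which is also where the solar-flux ambiguities enter directly. A trivial sketch of that sum (placeholder values; not the Hudson-Mahle coefficients):

```python
def photodissociation_rate(cross_sections, solar_fluxes, transmissions):
    """Band-summed dissociation coefficient J = sum_i sigma_i * F_i * T_i.
    The abstract's comparison concerns how well this total agrees between
    methods even when individual bands differ; the inputs here are
    placeholder per-band values, not real SR-band data."""
    return sum(s * f * t for s, f, t in
               zip(cross_sections, solar_fluxes, transmissions))
```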

  11. Interferometric phase reconstruction using simplified coherence network

    NASA Astrophysics Data System (ADS)

    Zhang, Kui; Song, Ruiqing; Wang, Hui; Wu, Di; Wang, Hua

    2016-09-01

    Interferometric time-series analysis techniques, which extend the traditional differential radar interferometry, have demonstrated a strong capability for monitoring ground surface displacement. Such techniques are able to obtain the temporal evolution of ground deformation within millimeter accuracy by using a stack of synthetic aperture radar (SAR) images. In order to minimize decorrelation between stacked SAR images, the phase reconstruction technique has been developed recently. The main idea of this technique is to reform phase observations along a SAR stack by taking advantage of a maximum likelihood estimator which is defined on the coherence matrix estimated from each target. However, the phase value of a coherence matrix element might be considerably biased when its corresponding coherence is low. In this case, it will turn into an outlying sample affecting the corresponding phase reconstruction process. In order to avoid this problem, a new approach is developed in this paper. This approach considers a coherence matrix element to be an arc in a network. A so-called simplified coherence network (SCN) is constructed to decrease the negative impact of outlying samples. Moreover, a pointed iterative strategy is designed to resolve the transformed phase reconstruction problem defined on an SCN. For validation purposes, the proposed method is applied to 29 real SAR images. The results demonstrate that the proposed method has an excellent computational efficiency and could obtain more reliable phase reconstruction solutions compared to the traditional method based on the phase triangulation algorithm.
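The coherence matrix on which the maximum-likelihood phase reconstruction is defined is conventionally estimated from a stack of complex SAR samples over a local window. A minimal sketch of that standard estimator (the paper's SCN construction itself is not reproduced here):

```python
import numpy as np

def coherence_matrix(stack):
    """Sample coherence matrix of one target across N SAR images, estimated
    from a window of neighbouring samples (rows = images, cols = window
    pixels). A standard estimator; the abstract's simplified coherence
    network is built on top of such matrices."""
    num = stack @ stack.conj().T                 # pairwise cross products
    power = np.sqrt(np.abs(np.diag(num)))        # per-image amplitudes
    return num / np.outer(power, power)

# Random complex stack: 4 images, 64-sample estimation window.
rng = np.random.default_rng(1)
stack = rng.normal(size=(4, 64)) + 1j * rng.normal(size=(4, 64))
C = coherence_matrix(stack)
```

By Cauchy-Schwarz every off-diagonal magnitude is at most 1; the low-magnitude entries are exactly the ones whose phases the abstract flags as potential outliers.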

  12. Simplified liquid oxygen propellant conditioning concepts

    NASA Technical Reports Server (NTRS)

    Cleary, N. L.; Holt, K. A.; Flachbart, R. H.

    1995-01-01

    Current liquid oxygen feed systems waste propellant and use hardware, unnecessary during flight, to condition the propellant at the engine turbopumps prior to launch. Simplified liquid oxygen propellant conditioning concepts are being sought for future launch vehicles. During a joint program, four alternative propellant conditioning options were studied: (1) passive recirculation; (2) low bleed through the engine; (3) recirculation lines; and (4) helium bubbling. The test configuration for this program was based on a vehicle design which used a main recirculation loop that was insulated on the downcomer and uninsulated on the upcomer. This produces a natural convection recirculation flow. The test article for this program simulated a feedline which ran from the main recirculation loop to the turbopump. The objective was to measure the temperature profile of this test article. Several parameters were varied from the baseline case to determine their effects on the temperature profile. These parameters included: flow configuration, feedline slope, heat flux, main recirculation loop velocity, pressure, bleed rate, helium bubbling, and recirculation lines. The heat flux, bleed rate, and recirculation configurations produced the greatest changes from the baseline temperature profile. However, the temperatures in the feedline remained subcooled. Any of the options studied could be used in future vehicles.

  13. Simplified Analysis of Pulse Detonation Rocket Engine Blowdown Gasdynamics and Performance

    NASA Technical Reports Server (NTRS)

    Morris, Christopher I.

    2001-01-01

    -state rocket engine is provided. The effect of constant-gamma and equilibrium chemistry assumptions is also examined. Additionally, in order to form an assessment of the accuracy of the model, the flowfield time history is compared to experimental data from Stanford University.

  14. Estimating ETAS: the effects of truncation, missing data, and model assumptions

    NASA Astrophysics Data System (ADS)

    Seif, Stefanie; Mignan, Arnaud; Zechar, Jeremy; Werner, Maximilian; Wiemer, Stefan

    2016-04-01

    The Epidemic-Type Aftershock Sequence (ETAS) model is widely used to describe the occurrence of earthquakes in space and time, but there has been little discussion of the limits of, and influences on, its estimation. What has been established is that ETAS parameter estimates are influenced by missing data (e.g., earthquakes are not reliably detected during lively aftershock sequences) and by simplifying assumptions (e.g., that aftershocks are isotropically distributed). In this article, we investigate the effect of truncation: how do parameter estimates depend on the cut-off magnitude, Mcut, above which parameters are estimated? We analyze catalogs from southern California and Italy and find that parameter variations as a function of Mcut are caused by (i) changing sample size (which affects, e.g., Omori's c constant) or (ii) an intrinsic dependence on Mcut (as Mcut increases, absolute productivity and background rate decrease). We also explore the influence of another form of truncation - the finite catalog length - that can bias estimators of the branching ratio. The branching-ratio estimator is also a function of Omori's p-value; the true branching ratio is underestimated by 45% to 5% for 1.05 < p < 1.2. Finite sample size affects the variation of the branching ratio estimates. Moreover, we investigate the effect of missing aftershocks and find that the ETAS productivity parameters (α and K0) and Omori's c-value are significantly changed only for a low Mcut = 2.5. We further find that conventional estimation errors for these parameters, inferred from simulations that do not account for aftershock incompleteness, are underestimated by, on average, a factor of six.
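For reference, the ETAS conditional intensity combining the background rate μ, the productivity parameters (K0, α), and the Omori kernel (c, p) discussed above can be written in a few lines (a textbook form of the model; parameter values used below are arbitrary):

```python
import math

def etas_intensity(t, events, mu, K0, alpha, c, p, m0):
    """ETAS conditional intensity
        lambda(t) = mu + sum_{t_i < t} K0 * exp(alpha * (m_i - m0)) * (t - t_i + c)**(-p)
    with background rate mu, productivity (K0, alpha), Omori kernel (c, p),
    and reference magnitude m0; events is a list of (t_i, m_i) pairs."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K0 * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate
```

Fitting (K0, α, c, p, μ) by maximizing the log-likelihood of this intensity over a catalog is exactly the estimation step whose sensitivity to Mcut and catalog length the abstract examines.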

  15. Potentialities of TEC topping: A simplified view of parametric effects

    NASA Technical Reports Server (NTRS)

    Morris, J. F.

    1980-01-01

    An examination of the benefits of thermionic-energy-conversion (TEC)-topped power plants and of methods of increasing conversion efficiency is presented. Reductions in the cost of TEC modules yield direct decreases in the cost of electricity (COE) from TEC-topped central station power plants. Simplified COE, overall-efficiency charts are presented to illustrate this trend. Additional capital-cost diminution results from designing more compact furnaces with the considerably increased heat transfer rates allowable and desirable for high temperature TEC and heat pipes. Such improvements can evolve from the protection from hot corrosion and slag, as well as the thermal expansion compatibilities, offered by silicon-carbide clads on TEC-heating surfaces. Greater efficiencies and far fewer modules are possible with high-temperature, high-power-density TEC: this decreases capital and fuel costs much more and substantially increases electric power outputs for fixed fuel inputs. In addition to more electricity, less pollution, and lower costs, TEC topping used directly in coal-combustion products contributes balance-of-payments gains.

  16. On Some Unwarranted Tacit Assumptions in Cognitive Neuroscience†

    PubMed Central

    Mausfeld, Rainer

    2011-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input–output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings. PMID:22435062

  17. Finite Element Simulations to Explore Assumptions in Kolsky Bar Experiments.

    SciTech Connect

    Crum, Justin

    2015-08-05

    The chief purpose of this project has been to develop a set of finite element models that attempt to explore some of the assumptions in the experimental set-up and data reduction of the Kolsky bar experiment. In brief, the Kolsky bar, sometimes referred to as the split Hopkinson pressure bar, is an experimental apparatus used to study the mechanical properties of materials at high strain rates. Kolsky bars can be constructed to conduct experiments in tension or compression, both of which are studied in this paper. The basic operation of the tension Kolsky bar is as follows: compressed air is inserted into the barrel that contains the striker; the striker accelerates towards the left and strikes the left end of the barrel, producing a tensile stress wave that propagates first through the barrel and then down the incident bar, into the specimen, and finally the transmission bar. In the compression case, the striker instead travels to the right and impacts the incident bar directly. As the stress wave travels through an interface (e.g., the incident bar to specimen connection), a portion of the pulse is transmitted and the rest reflected. The incident pulse, as well as the transmitted and reflected pulses, are picked up by two strain gauges installed on the incident and transmitted bars as shown. By interpreting the data acquired by these strain gauges, the stress/strain behavior of the specimen can be determined.
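The strain-gauge data reduction described in the last sentence is conventionally done with the one-wave Kolsky equations: specimen strain rate from the reflected pulse and stress from the transmitted pulse. A sketch under those standard assumptions (a textbook reduction, not the project's finite element models; all symbols are the usual bar quantities):

```python
import numpy as np

def kolsky_one_wave(eps_reflected, eps_transmitted, dt, c0, E, A_bar, A_spec, L_spec):
    """Classical one-wave Kolsky-bar reduction: specimen strain rate from the
    reflected pulse, strain by time integration, and stress from the
    transmitted pulse. Inputs: gauge records (strain vs. time), sample
    spacing dt, bar wave speed c0, bar modulus E, bar/specimen cross-section
    areas, and specimen gauge length L_spec."""
    eps_r = np.asarray(eps_reflected, dtype=float)
    eps_t = np.asarray(eps_transmitted, dtype=float)
    strain_rate = -2.0 * c0 / L_spec * eps_r
    strain = np.cumsum(strain_rate) * dt
    stress = E * (A_bar / A_spec) * eps_t
    return strain_rate, strain, stress

# Example records: constant pulses over four samples (hypothetical values).
strain_rate, strain, stress = kolsky_one_wave(
    [-0.001] * 4, [0.0005] * 4, dt=1e-6,
    c0=5000.0, E=200e9, A_bar=2.0, A_spec=1.0, L_spec=0.01)
```

The equilibrium assumption behind this reduction (equal forces on both specimen faces) is one of the assumptions the project's finite element models are designed to probe.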

  18. Assumptions of the primordial spectrum and cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Shafieloo, Arman; Souradeep, Tarun

    2011-10-01

    The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large structures, depend on a set of cosmological parameters, as well as the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best-fit parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained allowing free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS.
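The "assumed functional form of the PPS" is typically the power law P(k) = As (k/k0)^(ns-1). A one-line sketch of that parameterization (the pivot scale k0 = 0.05 Mpc^-1 is a conventional choice, not specified in the abstract):

```python
def primordial_power_spectrum(k, A_s, n_s, k_pivot=0.05):
    """Power-law PPS, P(k) = A_s * (k / k_pivot)**(n_s - 1); this is the
    'assumed functional form' whose influence on parameter estimation the
    abstract examines (k in Mpc^-1; the pivot value is a conventional choice)."""
    return A_s * (k / k_pivot) ** (n_s - 1.0)
```

A free-form PPS, by contrast, would replace this two-parameter curve with independently estimated values on a grid of k, which is the alternative the abstract advocates exploring.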

  19. Are waves of relational assumptions eroding traditional analysis?

    PubMed

    Meredith-Owen, William

    2013-11-01

    The author designates as 'traditional' those elements of psychoanalytic presumption and practice that have, in the wake of Fordham's legacy, helped to inform analytical psychology and expand our capacity to integrate the shadow. It is argued that this element of the broad spectrum of Jungian practice is in danger of erosion by the underlying assumptions of the relational approach, which is fast becoming the new establishment. If the maps of the traditional landscape of symbolic reference (primal scene, Oedipus et al.) are disregarded, analysts are left with only their own self-appointed authority with which to orientate themselves. This self-centric epistemological basis of the relationalists leads to a revision of 'analytic attitude' that may be therapeutic but is not essentially analytic. This theme is linked to the perennial challenge of balancing differentiation and merger and traced back, through Chasseguet-Smirgel, to its roots in Genesis. An endeavour is made to illustrate this within the Journal convention of clinically based discussion through a commentary on Colman's (2013) avowedly relational treatment of the case material presented in his recent Journal paper 'Reflections on knowledge and experience' and through an assessment of Jessica Benjamin's (2004) relational critique of Ron Britton's (1989) transference embodied approach. PMID:24237206

  20. Validity of conventional assumptions concerning flexible response. Research report

    SciTech Connect

    Gutierrez, M.J.

    1989-01-01

    The North Atlantic Treaty Organization (NATO) is an alliance for collective defense. Made up of 16 countries, NATO has been a successful alliance: there has been no war in Europe since 1945. In 1967, NATO adopted the strategy of flexible response, a strategy that depends on conventional, tactical nuclear, and strategic nuclear weapons to deter a Warsaw Pact attack. Although successful, NATO is suffering from an erosion in conventional strength, yet it continues to make assumptions about its conventional capabilities to meet the requirements of the flexible response strategy. In the present-day world of NATO there is limited funding, a fact that is not likely to change in the foreseeable future, and limited funding makes it impossible to buy all the conventional force structure needed to support the current strategy ideally, which is also unlikely to change. This paper shows limitations in some of the ways NATO assumes it can conventionally perform its mission. It is the author's position that NATO should modernize its conventional thinking to bring it more in line with the realities of the situation NATO finds itself in today.

  1. Observing gravitational-wave transient GW150914 with minimal assumptions

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chatterji, S.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, M.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R. T.; De Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. 
C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Haas, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinder, I.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. 
B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Laguna, P.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. 
A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Page, J.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. 
I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-06-01

    The gravitational-wave signal GW150914 was first identified on September 14, 2015, by searches for short-duration gravitational-wave transients. These searches identify time-correlated transients in multiple detectors with minimal assumptions about the signal morphology, allowing them to be sensitive to gravitational waves emitted by a wide range of sources including binary black hole mergers. Over the observational period from September 12 to October 20, 2015, these transient searches were sensitive to binary black hole mergers similar to GW150914 to an average distance of ˜600 Mpc . In this paper, we describe the analyses that first detected GW150914 as well as the parameter estimation and waveform reconstruction techniques that initially identified GW150914 as the merger of two black holes. We find that the reconstructed waveform is consistent with the signal from a binary black hole merger with a chirp mass of ˜30 M⊙ and a total mass before merger of ˜70 M⊙ in the detector frame.
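The quoted chirp mass follows from the component masses through the standard relation M_chirp = (m1*m2)^(3/5) / (m1+m2)^(1/5). A minimal sketch, assuming an equal-mass 35 + 35 solar-mass split (a round-number assumption consistent with the quoted ~70 solar-mass total, not a value stated in the abstract):

```python
def chirp_mass(m1, m2):
    """Chirp mass in the same units as m1 and m2 (here, solar masses)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Assumed equal-mass 35 + 35 solar-mass binary (~70 total in the detector frame):
print(round(chirp_mass(35.0, 35.0), 1))  # ~30, matching the quoted chirp mass
```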

  2. 26 CFR 1.41-9 - Alternative simplified credit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Alternative simplified credit. 1.41-9 Section 1.41-9 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY INCOME TAX INCOME TAXES Credits Against Tax § 1.41-9 Alternative simplified credit. For further guidance, see § 1.41-9T....

  3. 26 CFR 1.41-9 - Alternative simplified credit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 1 2011-04-01 2009-04-01 true Alternative simplified credit. 1.41-9 Section 1.41-9 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY INCOME TAX INCOME TAXES Credits Against Tax § 1.41-9 Alternative simplified credit. For further guidance, see § 1.41-9T....

  4. 26 CFR 1.41-9 - Alternative simplified credit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... June 9, 2011, see § 1.41-9T as contained in 26 CFR part 1, revised April 1, 2011. ... 26 Internal Revenue 1 2014-04-01 2013-04-01 true Alternative simplified credit. 1.41-9 Section 1... Credits Against Tax § 1.41-9 Alternative simplified credit. (a) Determination of credit. At the...

  5. 12 CFR 3.211 - Simplified supervisory formula approach (SSFA).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Simplified supervisory formula approach (SSFA). 3.211 Section 3.211 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY CAPITAL ADEQUACY STANDARDS Risk-Weighted Assets-Market Risk § 3.211 Simplified supervisory formula approach (SSFA). (a) General requirements. To use...

  6. 12 CFR 217.144 - Simplified supervisory formula approach (SSFA).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 2 2014-01-01 2014-01-01 false Simplified supervisory formula approach (SSFA... Simplified supervisory formula approach (SSFA). (a) General requirements for the SSFA. To use the SSFA to determine the risk weight for a securitization exposure, a Board-regulated institution must have data...

  7. 12 CFR 217.211 - Simplified supervisory formula approach (SSFA).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 2 2014-01-01 2014-01-01 false Simplified supervisory formula approach (SSFA... Simplified supervisory formula approach (SSFA). (a) General requirements. To use the SSFA to determine the... the weight for each exposure) total capital requirement of the underlying exposures calculated...

  8. 12 CFR 324.211 - Simplified supervisory formula approach (SSFA).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Simplified supervisory formula approach (SSFA). 324.211 Section 324.211 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY CAPITAL ADEQUACY OF FDIC-SUPERVISED INSTITUTIONS Risk-Weighted Assets-Market Risk § 324.211 Simplified supervisory...

  9. 7 CFR 273.25 - Simplified Food Stamp Program.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 4 2014-01-01 2014-01-01 false Simplified Food Stamp Program. 273.25 Section 273.25 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM CERTIFICATION OF ELIGIBLE HOUSEHOLDS Program Alternatives § 273.25 Simplified...

  10. 7 CFR 273.25 - Simplified Food Stamp Program.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 4 2013-01-01 2013-01-01 false Simplified Food Stamp Program. 273.25 Section 273.25 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM CERTIFICATION OF ELIGIBLE HOUSEHOLDS Program Alternatives § 273.25 Simplified...

  11. 7 CFR 273.25 - Simplified Food Stamp Program.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Simplified Food Stamp Program. 273.25 Section 273.25 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM CERTIFICATION OF ELIGIBLE HOUSEHOLDS § 273.25 Simplified Food Stamp Program....

  12. Naturally Simplified Input, Comprehension, and Second Language Acquisition.

    ERIC Educational Resources Information Center

    Ellis, Rod

    This article examines the concept of simplification in second language (SL) learning, reviewing research on the simplified input that both naturalistic and classroom SL learners receive. Research indicates that simplified input, particularly if derived from naturally occurring interactions, does aid comprehension but has not been shown to…

  13. 26 CFR 1.41-9 - Alternative simplified credit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... June 9, 2011, see § 1.41-9T as contained in 26 CFR part 1, revised April 1, 2011. ... 26 Internal Revenue 1 2013-04-01 2013-04-01 false Alternative simplified credit. 1.41-9 Section 1... Credits Against Tax § 1.41-9 Alternative simplified credit. (a) Determination of credit. At the...

  14. Communication: A simplified coupled-cluster Lagrangian for polarizable embedding.

    PubMed

    Krause, Katharina; Klopper, Wim

    2016-01-28

    A simplified coupled-cluster Lagrangian, which is linear in the Lagrangian multipliers, is proposed for the coupled-cluster treatment of a quantum mechanical system in a polarizable environment. In the simplified approach, the amplitude equations are decoupled from the Lagrangian multipliers and the energy obtained from the projected coupled-cluster equation corresponds to a stationary point of the Lagrangian. PMID:26827193
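For orientation, the standard single-reference coupled-cluster Lagrangian is itself linear in the multipliers; the paper's contribution is to retain that structure when the Hamiltonian acquires an environment-dependent (polarizable-embedding) contribution. A textbook-form sketch of the vacuum case, not the paper's specific embedding Lagrangian:

```latex
% Vacuum coupled-cluster Lagrangian, linear in the multipliers \Lambda:
\mathcal{L}(t,\lambda)
  = \langle \Phi_0 \vert (1+\Lambda)\, e^{-T} H e^{T} \vert \Phi_0 \rangle
% Stationarity with respect to each multiplier recovers the amplitude equations,
% decoupled from the multipliers themselves:
\frac{\partial \mathcal{L}}{\partial \lambda_\mu} = 0
  \;\Longrightarrow\;
  \langle \Phi_\mu \vert e^{-T} H e^{T} \vert \Phi_0 \rangle = 0
```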

  15. Simplified models for heat transfer in rooms

    NASA Astrophysics Data System (ADS)

    Graca, Guilherme C. C. Carrilho Da

    Buildings protect their occupants from the outside environment. As a semi-enclosed environment, buildings tend to contain the internally generated heat and air pollutants, as well as the solar and conductive heat gains that can occur in the facade. In the warmer months of the year this generally leads to overheating, creating a need for a cooling system. Ventilation air replaces contaminated air in the building and is often used as the dominant medium for heat transfer between indoor and outdoor environments. The goal of the research presented in this thesis is to develop a better understanding of the important parameters in the performance of ventilation systems and to develop simplified convective heat transfer models. The general approach used in this study seeks to capture the dominant physical processes for these problems with first order accuracy, and develop simple models that show the correct system behavior trends. Dimensional analysis, in conjunction with simple momentum and energy conservation, scaled model experiments and numerical simulations, is used to improve airflow and heat transfer rate predictions in both single and multi room ventilation systems. This study includes the three commonly used room ventilation modes: mixing, displacement and cross-ventilation. A new modeling approach to convective heat transfer between the building and the outside is presented: the concept of equivalent room heat transfer coefficient. The new model quantifies the reduction in heat transfer between ventilation air and internal room surfaces caused by limited thermal capacity and temperature variation of the air for the three modes studied. Particular emphasis is placed on cross-ventilation, and on the development of a simple model to characterize the airflow patterns that occur in this case. The implementation of the models in a building thermal simulation software tool is presented as well as comparisons between model predictions, experimental results and complex

  16. Cosmology without Einstein's assumption that inertial mass produces gravity

    NASA Astrophysics Data System (ADS)

    Ellis, Homer G.

    2015-06-01

    Giving up Einstein's assumption, implicit in his 1916 field equations, that inertial mass, even in its appearance as energy, is equivalent to active gravitational mass and therefore is a source of gravity allows revising the field equations to a form in which a positive cosmological constant is seen to (mis)represent a uniform negative net mass density of gravitationally attractive and gravitationally repulsive matter. Field equations with both positive and negative active gravitational mass densities of both primordial and continuously created matter, incorporated along with two scalar fields to 'relax the constraints' on the spacetime geometry, yield cosmological solutions that exhibit inflation, deceleration, coasting, acceleration, and a 'big bounce' instead of a 'big bang,' and provide good fits to a Hubble diagram of Type Ia supernovae data. The repulsive matter is identified as the back sides of the 'drainholes' introduced by the author in 1973 as solutions of those same field equations. Drainholes (prototypical examples of 'traversable wormholes') are topological tunnels in space which gravitationally attract on their front, entrance sides, and repel more strongly on their back, exit sides. The front sides serve both as the gravitating cores of the visible, baryonic particles of primordial matter and as the continuously created, invisible particles of the 'dark matter' needed to hold together the large-scale structures seen in the universe; the back sides serve as the misnamed 'dark energy' driving the current acceleration of the expansion of the universe. Formation of cosmic voids, walls, filaments and nodes is attributed to expulsion of drainhole entrances from regions populated by drainhole exits and accumulation of the entrances on boundaries separating those regions.

  17. The assumption of equilibrium in models of migration.

    PubMed

    Schachter, J; Althaus, P G

    1993-02-01

    In recent articles Evans (1990) and Harrigan and McGregor (1993) (hereafter HM) scrutinized the equilibrium model of migration presented in a 1989 paper by Schachter and Althaus. This model used standard microeconomics to analyze gross interregional migration flows based on the assumption that gross flows are in approximate equilibrium. HM criticized the model as theoretically untenable, while Evans summoned empirical as well as theoretical objections. HM claimed that equilibrium of gross migration flows could be ruled out on theoretical grounds. They argued that the absence of net migration requires that either all regions have equal populations or that unsustainable regional migration propensities must obtain. In fact some moves are inter- and others are intraregional. It does not follow, however, that the number of interregional migrants will be larger for the more populous region. Alternatively, a country could be divided into a large number of small regions that have equal populations. With uniform propensities to move, each of these analytical regions would experience in equilibrium zero net migration. Hence, the condition that net migration equal zero is entirely consistent with unequal distributions of population across regions. The criticisms of Evans were based both on flawed reasoning and on misinterpretation of the results of a number of econometric studies. His reasoning assumed that the existence of demand shifts as found by Goldfarb and Yezer (1987) and Topel (1986) invalidated the equilibrium model. The equilibrium never really obtains exactly, but economic modeling of migration properly begins with a simple equilibrium model of the system. A careful reading of the papers Evans cited in support of his position showed that in fact they affirmed rather than denied the appropriateness of equilibrium modeling. Zero net migration together with nonzero gross migration is not theoretically incompatible with regional heterogeneity of population, wages, or
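The compatibility of zero net migration with nonzero gross flows between regions of unequal population can be made concrete with a toy calculation (my own construction, not from the paper): whenever gross flows are symmetric in origin and destination, every region's net migration vanishes.

```python
# Toy illustration (hypothetical numbers): gravity-style gross flows
# proportional to origin*destination population are symmetric, so net
# migration is zero everywhere even though populations differ widely.
pops = {"A": 1_000_000, "B": 250_000, "C": 4_000_000}
c = 1e-8  # common migration propensity (hypothetical constant)

# Gross flow from i to j; symmetric because pops[i]*pops[j] == pops[j]*pops[i].
flow = {(i, j): (pops[i] * pops[j]) * c for i in pops for j in pops if i != j}

net = {i: sum(flow[j, i] - flow[i, j] for j in pops if j != i) for i in pops}
gross = {i: sum(flow[i, j] for j in pops if j != i) for i in pops}

print(net)    # zero net migration in every region
print(gross)  # yet gross out-migration is nonzero and unequal across regions
```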

  18. Application of a simplified theory of ELF propagation to a simplified worldwide model of the ionosphere

    NASA Astrophysics Data System (ADS)

    Behroozi-Toosi, A. B.; Booker, H. G.

    1980-12-01

    The simplified theory of ELF wave propagation in the Earth-ionosphere transmission line developed by Booker (1980) is applied to a simplified worldwide model of the ionosphere. The theory, which compares the local vertical refractive-index gradient with the local wavelength in order to classify each altitude into regions of low and high gradient, is used for a model of electron and negative-ion profiles in the D and E regions below 150 km. Attention is given to the frequency dependence of ELF propagation at a middle latitude under daytime conditions, the daytime latitude dependence of ELF propagation at the equinox, the effects of sunspot, seasonal, and diurnal variations on propagation, nighttime propagation neglecting and including propagation above 100 km, and the effect on daytime ELF propagation of a sudden ionospheric disturbance. The numerical values obtained by the method for the propagation velocity and attenuation rate are shown to be in general agreement with the analytic Naval Ocean Systems Center computer program. It is concluded that the method employed gives more physical insight into the propagation processes than other methods, while requiring less effort and providing maximal accuracy.

  19. Highly efficient blue and warm white organic light-emitting diodes with a simplified structure

    NASA Astrophysics Data System (ADS)

    Li, Xiang-Long; Ouyang, Xinhua; Chen, Dongcheng; Cai, Xinyi; Liu, Ming; Ge, Ziyi; Cao, Yong; Su, Shi-Jian

    2016-03-01

    Two blue fluorescent emitters were utilized to construct simplified organic light-emitting diodes (OLEDs) and the remarkable difference in device performance was carefully illustrated. A maximum current efficiency of 4.84 cd A^-1 (corresponding to a quantum efficiency of 4.29%) with a Commission Internationale de l'Eclairage (CIE) coordinate of (0.144, 0.127) was achieved by using N,N-diphenyl-4″-(1-phenyl-1H-benzo[d]imidazol-2-yl)-[1,1′:4′,1″-terphenyl]-4-amine (BBPI) as a non-doped emission layer of the simplified blue OLEDs without carrier-transport layers. In addition, simplified fluorescent/phosphorescent (F/P) hybrid warm white OLEDs without carrier-transport layers were fabricated by utilizing BBPI as (1) the blue emitter and (2) the host of a complementary yellow phosphorescent emitter (PO-01). A maximum current efficiency of 36.8 cd A^-1 and a maximum power efficiency of 38.6 lm W^-1 were achieved as a result of efficient energy transfer from the host to the guest and good triplet exciton confinement on the phosphorescent molecules. The blue and white OLEDs are among the most efficient simplified fluorescent blue and F/P hybrid white devices, and their performance is even comparable to that of most previously reported complicated multi-layer devices with carrier-transport layers.

  20. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
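The term-reduction idea can be sketched generically. The following is not NASA's algorithm but a hedged illustration of one common guard against overfitting: backward elimination, which drops a candidate regression term whenever doing so improves adjusted R-squared.

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R-squared of an ordinary least-squares fit of y on columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def reduce_terms(X, y, names):
    """Backward elimination: remove a term whenever that raises adjusted R^2."""
    keep = list(range(X.shape[1]))
    best = adjusted_r2(X[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in keep:
            trial = [k for k in keep if k != j]
            score = adjusted_r2(X[:, trial], y)
            if score > best:
                best, keep, improved = score, trial, True
                break  # restart the scan over the reduced term set
    return [names[k] for k in keep]

# Hypothetical data: y truly depends on the constant and x only.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 80)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, 80)
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
print(reduce_terms(X, y, ["1", "x", "x^2", "x^3"]))
```

The genuinely informative terms ("1" and "x") survive the reduction because removing either inflates the residual sum of squares far more than the parameter-count penalty can offset.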

  2. Simplified Models for Dark Matter and Missing Energy Searches at the LHC

    SciTech Connect

    Abdallah, Jalal; Ashkenazi, Adi; Boveia, Antonio; Busoni, Giorgio; De Simone, Andrea; Doglioni, Caterina; Efrati, Aielet; Etzion, Erez; Gramling, Johanna; Jacques, Thomas; Lin, Tongyan; Morgante, Enrico; Papucci, Michele; Penning, Bjoern; Riotto, Antonio Walter; Rizzo, Thomas; Salek, David; Schramm, Steven; Slone, Oren; Soreq, Yotam; Vichi, Alessandro; Volansky, Tomer; Yavin, Itay; Zhou, Ning; Zurek, Kathryn

    2014-10-01

    The study of collision events with missing energy as searches for the dark matter (DM) component of the Universe is an essential part of the extensive program looking for new physics at the LHC. Given the unknown nature of DM, the interpretation of such searches should be made broad and inclusive. This report reviews the usage of simplified models in the interpretation of missing energy searches. We begin with a brief discussion of the utility and limitations of the effective field theory approach to this problem. The bulk of the report is then devoted to several different simplified models and their signatures, including s-channel and t-channel processes. A common feature of simplified models for DM is the presence of additional particles that mediate the interactions between the Standard Model and the particle that makes up DM. We consider these in detail and emphasize the importance of their inclusion as final states in any coherent interpretation. We also review some of the experimental progress in the field, new signatures, and other aspects of the searches themselves. We conclude with comments and recommendations regarding the use of simplified models in Run-II of the LHC.

  3. Asymptotic derivation and numerical investigation of time-dependent simplified PN equations

    NASA Astrophysics Data System (ADS)

    Olbrant, E.; Larsen, E. W.; Frank, M.; Seibold, B.

    2013-04-01

    The steady-state simplified PN (SPN) approximations to the linear Boltzmann equation have been proven to be asymptotically higher-order corrections to the diffusion equation in certain physical systems. In this paper, we present an asymptotic analysis for the time-dependent simplified PN equations up to N=3. Additionally, SPN equations of arbitrary order are derived in an ad hoc way. The resulting SPN equations are hyperbolic and differ from those investigated in a previous work by some of the authors. In two space dimensions, numerical calculations for the PN and SPN equations are performed. We simulate neutron distributions of a moving rod and present results for a benchmark problem, known as the checkerboard problem. The SPN equations are demonstrated to yield significantly more accurate results than diffusion approximations. In addition, for sufficiently low values of N, they are shown to be more efficient than PN models of comparable cost.
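    For orientation, the lowest-order member of the simplified PN hierarchy, SP1, coincides with the time-dependent diffusion equation; this is a standard result, not taken from the paper, whose higher-order hyperbolic SPN systems generalize it:

    ```latex
    \[
    \frac{1}{v}\,\frac{\partial \phi}{\partial t}
    \;-\;\nabla\!\cdot\!\Big(\frac{1}{3\sigma_t}\,\nabla\phi\Big)
    \;+\;\sigma_a\,\phi \;=\; Q ,
    \]
    ```

    where \(\phi\) is the scalar flux, \(v\) the particle speed, \(\sigma_t\) and \(\sigma_a\) the total and absorption cross sections, and \(Q\) the source. The higher-order SPN equations add corrections to this diffusion limit.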

  4. Impact Rates on Giant Planet Satellites: Checking Our Assumptions

    NASA Astrophysics Data System (ADS)

    Dones, Henry C.; Zahnle, Kevin J.; Levison, Harold F.

    2014-11-01

    The giant planets gravitationally scatter comets more often than they accrete them. Levison and Duncan (1997) and Levison et al. (2000) estimated that only about 2% of ecliptic comets struck a planet, with most ultimately being ejected into interstellar space by Jupiter. Impact rates on even the biggest moons are a factor of 10,000 smaller yet. Thus, even with fast computers and orbital integrators, determining impact rates on moons directly is impractical. A statistical approach is required. Shoemaker and Wolfe (1982) used Öpik's equations (Öpik 1951, Kessler 1981), which, for this application, give the impact probability of a small body with a satellite on a circular orbit in terms of the small body's planetocentric pericenter distance, orbital eccentricity (greater than 1 except for temporarily bound Shoemaker-Levy 9-like comets), and inclination. Zahnle et al. (1998, 2003) performed Monte Carlo simulations that implement Öpik's equations and tabulated impact rates for a wide variety of satellites. We have confirmed these results with a semi-analytic approach. However, both Zahnle et al. and we assume that the orbital distribution of ecliptic comets that cross the Hill sphere of a planet is isotropic in the frame of the planet. Isotropy is a reasonable approximation for the typical low-to-moderate heliocentric eccentricities and inclinations of ecliptic comets (Levison et al. 2000), but the magnitude of the error incurred by this assumption is unknown. We will present the results of orbital integrations in which we assume a distribution of heliocentric elements for a vast number of giant planet-crossing or -approaching comets and will follow their orbits through the planet's Hill sphere. We will compute impact rates with satellites and will compare our results with those assuming an isotropic distribution of impactors. References: Kessler, D.J. 1981. Icarus 48, 39. Levison, H.F., Duncan, M.J. 1997. Icarus 127, 13. Levison, H.F., et al. 2000. Icarus 143, 415.
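    One ingredient of such impact-rate estimates is standard and easy to check numerically: gravitational focusing enhances a satellite's impact cross section according to σ = πR²(1 + v_esc²/v_∞²). The sketch below uses Ganymede-like values purely for illustration; they are not the authors' inputs.

    ```python
    import math

    def focused_cross_section(radius_m, mass_kg, v_inf_ms):
        """Impact cross section with gravitational focusing (standard formula).

        sigma = pi * R^2 * (1 + v_esc^2 / v_inf^2), with v_esc = sqrt(2 G M / R).
        """
        G = 6.674e-11                       # gravitational constant [m^3 kg^-1 s^-2]
        v_esc2 = 2.0 * G * mass_kg / radius_m
        return math.pi * radius_m**2 * (1.0 + v_esc2 / v_inf_ms**2)

    # Ganymede-like body: focusing matters most for slow encounters.
    R, M = 2.634e6, 1.48e23                 # radius [m], mass [kg]
    for v_inf in (1e3, 5e3, 20e3):          # encounter speeds at infinity [m/s]
        sigma = focused_cross_section(R, M, v_inf)
        print(v_inf, sigma / (math.pi * R**2))   # enhancement over geometric
    ```

    At a 1 km/s encounter speed the cross section is enhanced several-fold, while at 20 km/s the correction is nearly negligible, which is why the assumed velocity distribution of impactors matters.
    
    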

  5. Simplified model for fouling of a pleated membrane filter

    NASA Astrophysics Data System (ADS)

    Sanaei, Pejman; Cummings, Linda

    2014-11-01

    Pleated filter cartridges are widely used to remove undesired impurities from a fluid. A filter membrane is sandwiched between porous support layers, then pleated and packed into an annular cylindrical cartridge. Although this arrangement offers a high ratio of surface filtration area to volume, the filter performance (measured, e.g., by the graph of total flux versus throughput for a given pressure drop) is not as good as that of a flat filter membrane. The reasons for this difference in performance are currently unclear, but likely factors include the additional resistance of the porous support layers upstream and downstream of the membrane, the pleat packing density (PPD), and possible damage to the membrane during the pleating process. To investigate this, we propose a simplified mathematical model of the filtration within a single pleat. We consider the fluid dynamics through the membrane and support layers, and propose a model by which the pores of the membrane become fouled (i) by particles smaller than the membrane pore size; and (ii) by particles larger than the pores. We present some simulations of our model, investigating how flow and fouling differ not only between flat and pleated membranes, but also for support layers with different permeability profiles. NSF DMS-1261596.
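    The two fouling mechanisms named above can be caricatured with a dimensionless toy model. Everything here (the rate constants, the r⁴ Hagen-Poiseuille scaling of per-pore flow) is an illustrative assumption, not the authors' model: small particles shrink the pore radius, large particles seal pores outright, and both deposit in proportion to the local flow.

    ```python
    # Dead-end fouling caricature (all parameters hypothetical, dimensionless):
    #   (i)  pore constriction by particles smaller than the pore, and
    #   (ii) pore blocking by particles larger than the pore.
    dt, t_end = 0.1, 60.0
    r, n_open = 1.0, 1.0           # pore radius, fraction of open pores
    alpha, beta = 0.005, 0.01      # constriction and blocking rate constants
    flux_hist = []
    t = 0.0
    while t < t_end:
        q = n_open * r**4          # Hagen-Poiseuille: per-pore flow ~ r^4 at fixed dP
        r -= alpha * q * dt        # small particles deposit in proportion to flow
        n_open -= beta * q * n_open * dt   # large particles seal open pores
        flux_hist.append(q)
        t += dt
    print(flux_hist[0], flux_hist[-1])     # flux decays as the membrane fouls
    ```

    Plotting total flux against cumulative throughput from such a model gives the kind of performance curve the abstract uses to compare flat and pleated configurations.
    
    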

  6. Excitation-resolved fluorescence tomography with simplified spherical harmonics equations

    NASA Astrophysics Data System (ADS)

    Klose, Alexander D.; Pöschinger, Thomas

    2011-03-01

    Fluorescence tomography (FT) reconstructs the three-dimensional (3D) fluorescent reporter probe distribution inside biological tissue. These probes target molecules of biological function, e.g. cell surface receptors or enzymes, and emit fluorescence light upon illumination with an external light source. The fluorescence light is detected on the tissue surface and a source reconstruction algorithm based on the simplified spherical harmonics (SPN) equations calculates the unknown 3D probe distribution inside tissue. While current FT approaches require multiple external sources at a defined wavelength range, the proposed FT method uses only a white light source with tunable wavelength selection for fluorescence stimulation and further exploits the spectral dependence of tissue absorption for the purpose of 3D tomographic reconstruction. We will show the feasibility of the proposed hyperspectral excitation-resolved fluorescence tomography method with experimental data. In addition, we will demonstrate the performance and limitations of such a method under ideal and controlled conditions by means of a digital mouse model and synthetic measurement data. Moreover, we will address issues regarding the required amount of wavelength intervals for fluorescent source reconstruction. We will explore the impact of assumed spatially uniform and nonuniform optical parameter maps on the accuracy of the fluorescence source reconstruction. Last, we propose a spectral re-scaling method for overcoming the observed limitations in reconstructing accurate source distributions in optically non-uniform tissue when assuming only uniform optical property maps for the source reconstruction process.

  7. Simplified training for hazardous materials management in developing countries

    SciTech Connect

    Braithwaite, J.

    1994-12-31

    There are thousands of dangerous situations happening daily in developing countries around the world involving untrained workers and hazardous materials. There are very few if any agencies in developing countries that are charged with ensuring safe and healthful working conditions. In addition to the problem of regulation and enforcement, there are potential training problems due to the level of literacy and degree of scientific background of these workers. Many of these workers are refugees from poorly developed countries who are willing to work no matter what the conditions. Training methods (standards) accepted as state of the art in the United States and other developed countries may not work well under the conditions found in developing countries. Because these methods may not be appropriate, new and novel ways to train workers quickly, precisely and economically in hazardous materials management should be developed. One approach is to develop training programs that use easily recognizable graphics with minimal verbal instruction, programs similar to the type used to teach universal international driving regulations and safety. The program as outlined in this paper could be tailored to any size of plant and any hazardous material handling or exposure situation. The situation in many developing countries is critical; the development of simplified training methods for workers exposed to hazardous materials holds valuable market potential and offers many underdeveloped countries an opportunity to develop indigenous expertise in hazardous materials management.

  8. Simplified two and three dimensional HTTR benchmark problems

    SciTech Connect

    Zhan Zhang; Dingkang Zhang; Justin M. Pounders; Abderrafi M. Ougouag

    2011-05-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole-core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high temperature gas-cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and pin fission density distributions for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  9. Simplified models for dark matter searches at the LHC

    NASA Astrophysics Data System (ADS)

    Abdallah, Jalal; Araujo, Henrique; Arbey, Alexandre; Ashkenazi, Adi; Belyaev, Alexander; Berger, Joshua; Boehm, Celine; Boveia, Antonio; Brennan, Amelia; Brooke, Jim; Buchmueller, Oliver; Buckley, Matthew; Busoni, Giorgio; Calibbi, Lorenzo; Chauhan, Sushil; Daci, Nadir; Davies, Gavin; De Bruyn, Isabelle; De Jong, Paul; De Roeck, Albert; de Vries, Kees; Del Re, Daniele; De Simone, Andrea; Di Simone, Andrea; Doglioni, Caterina; Dolan, Matthew; Dreiner, Herbi K.; Ellis, John; Eno, Sarah; Etzion, Erez; Fairbairn, Malcolm; Feldstein, Brian; Flaecher, Henning; Feng, Eric; Fox, Patrick; Genest, Marie-Hélène; Gouskos, Loukas; Gramling, Johanna; Haisch, Ulrich; Harnik, Roni; Hibbs, Anthony; Hoh, Siewyan; Hopkins, Walter; Ippolito, Valerio; Jacques, Thomas; Kahlhoefer, Felix; Khoze, Valentin V.; Kirk, Russell; Korn, Andreas; Kotov, Khristian; Kunori, Shuichi; Landsberg, Greg; Liem, Sebastian; Lin, Tongyan; Lowette, Steven; Lucas, Robyn; Malgeri, Luca; Malik, Sarah; McCabe, Christopher; Mete, Alaettin Serhan; Morgante, Enrico; Mrenna, Stephen; Nakahama, Yu; Newbold, Dave; Nordstrom, Karl; Pani, Priscilla; Papucci, Michele; Pataraia, Sophio; Penning, Bjoern; Pinna, Deborah; Polesello, Giacomo; Racco, Davide; Re, Emanuele; Riotto, Antonio Walter; Rizzo, Thomas; Salek, David; Sarkar, Subir; Schramm, Steven; Skubic, Patrick; Slone, Oren; Smirnov, Juri; Soreq, Yotam; Sumner, Timothy; Tait, Tim M. P.; Thomas, Marc; Tomalin, Ian; Tunnell, Christopher; Vichi, Alessandro; Volansky, Tomer; Weiner, Neal; West, Stephen M.; Wielers, Monika; Worm, Steven; Yavin, Itay; Zaldivar, Bryan; Zhou, Ning; Zurek, Kathryn

    2015-09-01

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For the s-channel, spin-0 and spin-1 mediators are discussed, as well as realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  10. The Business Environment for Housing Officers: Assumptions for the 1980's.

    ERIC Educational Resources Information Center

    Bishop, Welker; Schuh, John H.

    In November 1979 the National Association of College and University Business Officers (NACUBO) published a list of assumptions about the business environment within which college and university administrators would operate during the 1980's. The assumptions were divided into two categories: general external economic assumptions, and those that…

  11. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defined in 12 CFR 303.2(g). (d) Evidence of assumption. The receipt by the FDIC of an accurate... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive...

  12. Exploring the Influence of Ethnicity, Age, and Trauma on Prisoners' World Assumptions

    ERIC Educational Resources Information Center

    Gibson, Sandy

    2011-01-01

    In this study, the author explores world assumptions of prisoners, how these assumptions vary by ethnicity and age, and whether trauma history affects world assumptions. A random sample of young and old prisoners, matched for prison location, was drawn from the New Jersey Department of Corrections prison population. Age and ethnicity had…

  13. Philosophy of Technology Assumptions in Educational Technology Leadership: Questioning Technological Determinism

    ERIC Educational Resources Information Center

    Webster, Mark David

    2013-01-01

    Scholars have emphasized that decisions about technology can be influenced by philosophy of technology assumptions, and have argued for research that critically questions technological determinist assumptions. Empirical studies of technology management in fields other than K-12 education provided evidence that philosophy of technology assumptions,…

  14. School Principals' Assumptions about Human Nature: Implications for Leadership in Turkey

    ERIC Educational Resources Information Center

    Sabanci, Ali

    2008-01-01

    This article considers principals' assumptions about human nature in Turkey and the relationship between the assumptions held and the leadership style adopted in schools. The findings show that school principals hold Y-type assumptions and prefer a relationship-oriented style in their relations with assistant principals. However, both principals…

  15. Educational Technology as a Subversive Activity: Questioning Assumptions Related to Teaching and Leading with Technology

    ERIC Educational Resources Information Center

    Kruger-Ross, Matthew J.; Holcomb, Lori B.

    2012-01-01

    The use of educational technologies is grounded in the assumptions of teachers, learners, and administrators. Assumptions are choices that structure our understandings and help us make meaning. Current advances in Web 2.0 and social media technologies challenge our assumptions about teaching and learning. The intersection of technology and…

  16. 77 FR 28477 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-15

    ... title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in the... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in...

  17. 78 FR 11093 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-15

    ... Employee Retirement Income Security Act of 1974. The interest assumptions in the regulation are also... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates...

  18. 75 FR 69588 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-15

    ... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in December...

  19. 78 FR 68739 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    ... title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in the... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates...

  20. 77 FR 8730 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-15

    ... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates...

  1. 77 FR 48855 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-15

    ... of the Employee Retirement Income Security Act of 1974. The interest assumptions in the regulation... 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits... to prescribe interest assumptions under the regulation for valuation dates in September 2012....

  2. 77 FR 2015 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-13

    ... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in February 2012. The interest assumptions are used for paying benefits under terminating...

  3. 77 FR 74353 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-14

    ... of the Employee Retirement Income Security Act of 1974. The interest assumptions in the regulation... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates...

  4. 76 FR 8649 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-15

    ... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in March...

  5. 78 FR 42009 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-15

    ... Employee Retirement Income Security Act of 1974. The interest assumptions in the regulation are also... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates...

  6. 78 FR 22192 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-15

    ... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in May 2013. The interest assumptions are used for paying benefits under terminating single-employer...

  7. 78 FR 62426 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... of the Employee Retirement Income Security Act of 1974. The interest assumptions in the regulation... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates...

  8. 7 CFR 765.402 - Transfer of security and loan assumption on same rates and terms.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Transfer of security and loan assumption on same rates... of Security and Assumption of Debt § 765.402 Transfer of security and loan assumption on same rates... obligated on the note inherits the security property; (b) A family member of the borrower or an...

  9. 7 CFR 765.402 - Transfer of security and loan assumption on same rates and terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Transfer of security and loan assumption on same rates... of Security and Assumption of Debt § 765.402 Transfer of security and loan assumption on same rates... obligated on the note inherits the security property; (b) A family member of the borrower or an...

  10. Review of simplified Pseudo-two-Dimensional models of lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Rajabloo, Barzin; Désilets, Martin; Lacroix, Marcel

    2016-09-01

    Over the last decade, many efforts have been deployed to develop models for the prediction, the control, the optimization and the parameter estimation of Lithium-ion (Li-ion) batteries. It appears that the most successful electrochemical-based model for Li-ion batteries is the Pseudo-two-Dimensional (P2D) model. Because its governing equations are complex, this model cannot be used in real-time applications like Battery Management Systems (BMSs). To remedy the situation, several investigations have been carried out to simplify the P2D model. Mathematical and physical techniques are employed to reduce the order of the P2D governing equations. The present paper is a review of the studies on the modeling of Li-ion batteries with simplified P2D models. The assumptions on which these models rest are stated, the calculation methods are examined, the advantages and the drawbacks of the models are discussed and their applications are presented. Suggestions for overcoming the shortcomings of the models are made. Challenges and future directions in the modeling of Li-ion batteries are also discussed.
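    A widely used member of this family of simplifications is the single particle model, whose core is Fickian diffusion in one representative spherical electrode particle. The explicit finite-volume sketch below is a minimal illustration with made-up parameter values, not any specific model from the review:

    ```python
    import numpy as np

    # Spherical solid-phase diffusion under a constant surface lithium flux,
    # the core of a single-particle simplification of the P2D model.
    # All parameter values are illustrative.
    N = 50                        # radial shells
    R = 5e-6                      # particle radius [m]
    D = 1e-14                     # solid diffusivity [m^2/s]
    j = 1e-8                      # inward surface flux [mol m^-2 s^-1]
    dr = R / N
    r = np.linspace(0.0, R, N + 1)                    # shell face radii
    A = 4.0 * np.pi * r**2                            # face areas
    V = 4.0 * np.pi / 3.0 * (r[1:]**3 - r[:-1]**3)    # shell volumes
    c = np.full(N, 1000.0)                            # concentration [mol m^-3]
    dt = 0.2 * dr**2 / D                              # explicit stability margin
    for _ in range(2000):
        F = np.zeros(N + 1)                           # outward Fick flux at faces
        F[1:-1] = -D * (c[1:] - c[:-1]) / dr
        F[-1] = -j                                    # lithium enters at the surface
        c += dt * (A[:-1] * F[:-1] - A[1:] * F[1:]) / V   # conservative update
    print(float(c[-1]), float(c.mean()))              # surface exceeds the mean
    ```

    The finite-volume form conserves mass exactly, and the surface concentration running ahead of the particle average is what couples, in a full model, to the electrode overpotential.
    
    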

  11. A Simplified Coupled Structural-Flowfield Analysis Of Solid Rocket Motors Ignition Transient

    NASA Astrophysics Data System (ADS)

    Cavallini, E.; Favini, B.; Serraglia, F.; Di Giacinto, M.; Steelant, J.

    2011-05-01

    The ignition transient of a solid rocket motor can be characterized by strong unsteady phenomena such as wave propagation and pressure oscillations inside the combustion chamber. Depending on their frequencies and amplitudes, these oscillations can generate undesirable effects on the launcher, such as thrust fluctuations and transient loads on structures and/or payload equipment. This paper presents a simplified flow-field/structural model based on a quasi-1D unsteady model of the ignition transient internal ballistics (SPIT) coupled with a simplified structural model able to account for the radial dynamics of the grain and SRM casing, under the assumption of standard linear elastic behavior of the structure. The parametric analysis performed with the coupled internal ballistics/structural model shows and evaluates some effects on the chamber pressurization rate when the elastic modulus of the grain propellant is reduced toward small values. Concerning the dynamic aspects of the fluid/structural coupled system, a small but clear coupling between the acoustic flow-field phenomena and the structural dynamics is possible, especially, as expected, when both fundamental oscillatory phenomena fall in the same frequency range. The results of the parametric analysis with the fluid-structural model are shown for two solid rocket motors of the new European launcher VEGA: P80FW, the first solid stage, and Zefiro 9, an earlier version of the third solid stage.
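    The feedback loop described above can be caricatured with a lumped model: injected mass pressurizes the chamber, pressure pushes the elastic wall outward, and the moving wall enlarges the volume, feeding back on the pressure. All values below are illustrative placeholders, not VEGA motor data, and the sketch omits nozzle outflow and acoustics entirely.

    ```python
    # Lumped fluid/structure caricature of chamber pressurization
    # (illustrative values only; semi-implicit Euler keeps the oscillator stable).
    RT = 1.0e6          # specific gas constant * temperature [J/kg]
    V0 = 0.1            # initial free chamber volume [m^3]
    Aw = 1.0            # effective wall area [m^2]
    mdot = 5.0          # igniter/combustion mass addition [kg/s]
    ms, cs, ks = 50.0, 2.0e3, 5.0e8   # wall mass [kg], damping, stiffness
    m, x, v = 0.12, 0.0, 0.0          # gas mass [kg], wall displacement, velocity
    p0 = m * RT / V0                  # initial chamber pressure [Pa]
    dt = 1.0e-5
    p_hist = []
    for _ in range(20000):            # 0.2 s of transient
        V = V0 + Aw * x               # wall motion changes chamber volume
        p = m * RT / V                # isothermal ideal gas
        a = (Aw * (p - p0) - cs * v - ks * x) / ms
        v += a * dt                   # semi-implicit Euler: velocity first,
        x += v * dt                   # then position
        m += mdot * dt                # no nozzle outflow in this sketch
        p_hist.append(p)
    print(p_hist[0], p_hist[-1], x)
    ```

    Softening the structure (lowering ks) in this toy model slows the pressurization for a given mass addition, which is the qualitative effect the paper's parametric study examines with a far more detailed quasi-1D ballistic model.
    
    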

  12. Comparative assessment of parametric neuroreceptor mapping approaches based on the simplified reference tissue model using [¹¹C]ABP688 PET.

    PubMed

    Seo, Seongho; Kim, Su J; Kim, Yu K; Lee, Jee-Young; Jeong, Jae M; Lee, Dong S; Lee, Jae S

    2015-12-01

    In recent years, several linearized model approaches for fast and reliable parametric neuroreceptor mapping based on dynamic nuclear imaging have been developed from the simplified reference tissue model (SRTM) equation. All the methods share the basic SRTM assumptions, but use different schemes to alleviate the effect of noise in dynamic-image voxels. Thus, this study aimed to compare those approaches in terms of their performance in parametric image generation. We used the basis function method and MRTM2 (multilinear reference tissue model with two parameters), which require a division process to obtain the distribution volume ratio (DVR). In addition, a linear model with the DVR as a model parameter (multilinear SRTM) was used in two forms: one based on linear least squares and the other based on extension of total least squares (TLS). Assessment using simulated and actual dynamic [(11)C]ABP688 positron emission tomography data revealed their equivalence with the SRTM, except for different noise susceptibilities. In the DVR image production, the two multilinear SRTM approaches achieved better image quality and regional compatibility with the SRTM than the others, with slightly better performance in the TLS-based method. PMID:26243707
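    The TLS-based variant mentioned above differs from ordinary least squares by admitting noise in the regressors as well as in the response. Below is the standard SVD construction of total least squares on synthetic errors-in-variables data; it is a generic sketch, not the paper's specific SRTM operational equation.

    ```python
    import numpy as np

    def tls(X, y):
        """Total least squares via SVD (standard Golub-Van Loan construction).

        Finds b minimizing joint perturbations of X and y such that
        (X + dX) b = y + dy, via the right singular vector of [X y]
        belonging to the smallest singular value.
        """
        Z = np.column_stack([X, y])
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        v = Vt[-1]                      # direction of smallest singular value
        return -v[:-1] / v[-1]          # solve [X y] @ [b; -1] ~ 0

    # With noise in both x and y, plain least squares is attenuated toward
    # zero, while TLS recovers the true slope (here 2.0) more faithfully.
    rng = np.random.default_rng(1)
    x_true = rng.uniform(0, 10, 2000)
    y = 2.0 * x_true + rng.standard_normal(2000)   # noisy response
    x_obs = x_true + rng.standard_normal(2000)     # noisy regressor
    X = x_obs[:, None]
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    b_tls = tls(X, y)
    print(b_ols[0], b_tls[0])
    ```

    The same trade-off drives the comparison in the abstract: ordinary linear schemes are biased by noise in the reference-region term, and the TLS extension reduces that bias.
    
    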

  13. Simplified circuit corrects faults in parallel binary information channels

    NASA Technical Reports Server (NTRS)

    Goldberg, J.

    1966-01-01

    Corrective circuit prevents erroneous output signals arising from the possible failure of any single element in interconnected parallel binary information channels. The circuit is simplified and economical because it does not use redundant channels.

  14. Photographic and drafting techniques simplify method of producing engineering drawings

    NASA Technical Reports Server (NTRS)

    Provisor, H.

    1968-01-01

    Combination of photographic and drafting techniques has been developed to simplify the preparation of three dimensional and dimetric engineering drawings. Conventional photographs can be converted to line drawings by making copy negatives on high contrast film.

  15. Modular chassis simplifies packaging and interconnecting of circuit boards

    NASA Technical Reports Server (NTRS)

    Arens, W. E.; Boline, K. G.

    1964-01-01

    A system of modular chassis structures has simplified the design for mounting a number of printed circuit boards. This design is structurally adaptable to computer and industrial control system applications.

  16. A simplified dynamic model of the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Duyar, Ahmet; Eldem, Vasfi; Merrill, Walter; Guo, Ten-Huei

    1991-01-01

    A simplified model is presented of the space shuttle main engine (SSME) dynamics valid within the range of operation of the engine. This model is obtained by linking the linearized point models obtained at 25 different operating points of SSME. The simplified model was developed for use with a model-based diagnostic scheme for failure detection and diagnostics studies, as well as control design purposes.

  17. Elaboration of simplified vinca alkaloids and phomopsin hybrids.

    PubMed

    Ngo, Quoc Anh; Roussi, Fanny; Thoret, Sylviane; Guéritte, Françoise

    2010-03-01

    Nine simplified vinca alkaloid and phomopsin A hybrids, in which the vindoline moiety has been replaced by a simpler scaffold, have been elaborated to evaluate their activity in the inhibition of tubulin polymerization. This article deals with the synthesis of various simplified vinca alkaloids, using a stereoselective coupling of catharanthine with reactive aromatic compounds and methanol, as well as their subsequent condensation with a large peptide chain mimicking that of phomopsin A. Biological evaluation and molecular modeling studies are also reported. PMID:20659111

  18. Simplified Correction Of Errors In Reed-Solomon Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1990-01-01

    New decoder realized by simplified pipeline architecture. Simplified procedure for correction of errors and erasures in Reed-Solomon codes expected to result in simpler decoding equipment. Development widens commercial applicability of Reed-Solomon codes, used to correct bursts of errors in digital communication and recording systems. Improved decoder is less complex: more regular, simpler, and suitable for implementation in both VLSI and software.

  19. Evaluation of Simplified Models for Estimating Public Dose from Spent Nuclear Fuel Shipments

    SciTech Connect

    Connolly, Kevin J.; Radulescu, Georgeta

    2015-01-01

    This paper investigates the dose rate as a function of distance from a representative high-capacity SNF rail-type transportation cask. It uses the SCALE suite of radiation transport modeling and simulation codes to determine neutron and gamma radiation dose rates. The SCALE-calculated dose rate is compared with the simplified analytical methods historically used for these calculations. The SCALE dose rate calculation presented in this paper employs a very detailed transportation cask model (e.g., pin-by-pin modeling of the fuel assembly) and a new hybrid computational transport method. Because it includes pin-level heterogeneity and models ample air and soil outside the cask to simulate scattering of gamma and neutron radiation, this detailed SCALE model is expected to yield more accurate results than previously used models, which made more simplistic assumptions (e.g., fuel assembly treated as a point or line source, simple 1-D model of the environment outside the cask). The results in this paper are preliminary; as improved models are developed and validated, the results may change as estimates are refined and better information leads to more accurate assumptions.
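
    For context, the simplified analytical methods mentioned above are of the following flavor: a point source with inverse-square geometric falloff, exponential attenuation, and a buildup factor. All numbers here are hypothetical and are not taken from the cask analysis.

```python
import math

# Point-source sketch: dose rate falls off as 1/r^2, with exponential
# attenuation in air and a (here constant) buildup factor. The source
# strength and attenuation coefficient are hypothetical, not cask data.

def point_source_dose_rate(source, r_m, mu_per_m=0.005, buildup=1.0):
    """Relative dose rate at distance r_m (meters) from a point source."""
    return buildup * source * math.exp(-mu_per_m * r_m) / (4.0 * math.pi * r_m**2)

# Doubling the distance: inverse-square alone would give a factor of 4;
# air attenuation makes the ratio slightly larger.
ratio = point_source_dose_rate(1e12, 2.0) / point_source_dose_rate(1e12, 4.0)
```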

  20. Simplified models for same-spin new physics scenarios

    NASA Astrophysics Data System (ADS)

    Edelhäuser, Lisa; Krämer, Michael; Sonneveld, Jory

    2015-04-01

    Simplified models are an important tool for the interpretation of searches for new physics at the LHC. They are defined by a small number of new particles together with a specific production and decay pattern. The simplified models adopted in the experimental analyses thus far have been derived from supersymmetric theories, and they have been used to set limits on supersymmetric particle masses. We investigate the applicability of such simplified supersymmetric models to a wider class of new physics scenarios, in particular those with same-spin Standard Model partners. We focus on the pair production of quark partners and analyze searches for jets and missing energy within a simplified supersymmetric model with scalar quarks and a simplified model with spin-1/2 quark partners. Despite sizable differences in the detection efficiencies due to the spin of the new particles, the limits on particle masses are found to be rather similar. We conclude that the supersymmetric simplified models employed in current experimental analyses also provide a reliable tool to constrain same-spin BSM scenarios.

  1. Simplifying the complexity of pipe flow.

    PubMed

    Barkley, Dwight

    2011-07-01

    Transitional pipe flow is modeled as a one-dimensional excitable and bistable medium. Models are presented in two variables, turbulence intensity and mean shear, that evolve according to established properties of transitional turbulence. A continuous model captures the essence of the puff-slug transition as a change from excitability to bistability. A discrete model, which additionally incorporates turbulence locally as a chaotic repeller, reproduces almost all large-scale features of transitional pipe flow. In particular, it captures metastable localized puffs, puff splitting, slugs, localized edge states, a continuous transition to sustained turbulence via spatiotemporal intermittency (directed percolation), and a subsequent increase in turbulence fraction toward uniform, featureless turbulence. PMID:21867306
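
    The flavor of such one-dimensional excitable-medium modeling can be sketched with a generic FitzHugh-Nagumo-type reaction-diffusion system; this is not Barkley's specific turbulence-intensity/mean-shear model, and all parameters are invented.

```python
import numpy as np

# Generic 1-D excitable reaction-diffusion sketch (FitzHugh-Nagumo type):
# a fast "activator" q diffuses along the axis and a slow recovery
# variable u suppresses it, supporting localized propagating pulses
# reminiscent of puffs. Parameters are invented, NOT Barkley's model.
n, dt, dx, D = 200, 0.05, 1.0, 1.0
a, b, eps = 0.3, 1.0, 0.02
q = np.zeros(n)
u = np.zeros(n)
q[95:105] = 1.0                      # localized initial "puff"

for _ in range(2000):
    lap = (np.roll(q, 1) - 2.0 * q + np.roll(q, -1)) / dx**2   # periodic ends
    q = q + dt * (q * (1.0 - q) * (q - a) - u + D * lap)       # fast excitation
    u = u + dt * eps * (b * q - u)                             # slow recovery
```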

  2. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

    Image guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist that use electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve a higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State-of-the-art methods are based on a complex series of error-prone coordinate system transformations, which makes them susceptible to error accumulation. By reducing the calibration steps to a single procedure we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for the experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher accuracy while additionally reducing the overall calibration complexity.

  3. Helioviewer: Simplifying Your Access to SDO Data

    NASA Astrophysics Data System (ADS)

    Hughitt, V. K.; Ireland, J.; Mueller, D.; Beck, J.; Lyon, D.; Dau, A.; Dietert, H.; Nuhn, M.; Dimitoglou, G.; Fleck, B.

    2010-12-01

    Over the past several years, the Helioviewer Project has evolved from a simple web application to display images of the sun into a suite of tools to visualize and interact with heterogeneous types of solar data. In addition to a modular and scalable back-end server, the Helioviewer Project now offers multiple browse clients; the original web application has been upgraded to support high-definition movie generation and feature and event overlays. For complex image processing and massive data volumes, there is a stand-alone desktop application, JHelioviewer. For a quick check of the latest images and events, there is an iPhone application, hqTouch. The project has expanded from the original SOHO images to include image data from SDO and event and feature data from the HEK. We are working on adding additional image data from other missions as well as spectral and time-series data. We will discuss the procedure through which interested parties may process their data for use with Helioviewer, including how to use JP2Gen to convert FITS files into Helioviewer-compliant JPEG 2000 images, how to setup a local instance of the Helioviewer server, and how to query Helioviewer in your own applications using a simple web API.

  4. ON MULTIMODE COMPUTATIONS FOR LATERALLY-HETEROGENEOUS STRUCTURES WITH VARIABLE SURFACE CURVATURE: MODAL LATERAL SCATTERING AND THE FUNDAMENTAL ASSUMPTION

    NASA Astrophysics Data System (ADS)

    Gurung, G.; Schwab, F. A.; Jo, B. G.

    2009-12-01

    Modern computational hardware and Internet (network) communications have made 3-D seismic mapping with complete multimode-multistructure computations feasible for laterally-heterogeneous structures with variable surface curvature; specifically, for a 10 x 10 km grid of surface locations with eight surface azimuths per location, surface dimensions of 3000 x 4000 km, depth from the surface to 800 km, and frequencies from 0.0005 to 0.1000 Hz with interval 0.0005 Hz. The four-part method involves: construction of an initial 3-D structure, static computations (the aspects of which have been presented earlier), wavefront-propagation computations (the modal, lateral-scattering aspects of which are treated here), and inversion for an improved structure. The static computations assign a full, azimuthally-dependent, propagating-mode (spheroidal and torsional) specification to each latitude-longitude location of the geographical region. The fundamental assumption for modal treatment of a 3-D varying structure with variable curvature is that each triplet (frequency, mode number, surface azimuthal direction of propagation) at a location can be assigned its own specific laterally-homogeneous structure and radius of surface curvature. The extent/cylinder of the true structure used for this is defined by the modal depth of penetration D, and the diameter, S = 1.5 D, of the surface sensing circle. The dependence of static-computation results on the fundamental assumption was fully dealt with in earlier presentations. This assumption also enters into the modal, lateral-scattering aspects of the wavefront-propagation computations. Under the "fundamental assumption", the structure associated with each triplet varies continuously along the surface raypath from epicenter to receiver, where each raypath point is associated with an effective, laterally-averaged structure.
This greatly simplifies the lateral-scattering problem because at any vertical plane the raypath crosses, the

  5. Prenatal Substance Use: Exploring Assumptions of Maternal Unfitness.

    PubMed

    Terplan, Mishka; Kennedy-Hendricks, Alene; Chisolm, Margaret S

    2015-01-01

    In spite of the growing knowledge and understanding of addiction as a chronic relapsing medical condition, individuals with substance use disorders (SUD) continue to experience stigmatization. Pregnant women who use substances suffer additional stigma as their use has the potential to cause fetal harm, calling into question their maternal fitness and often leading to punitive responses. Punishing pregnant women denies the integral interconnectedness of the maternal-fetal dyad. Linking substance use with maternal unfitness is not supported by the balance of the scientific evidence regarding the actual harms associated with substance use during pregnancy. Such linkage adversely impacts maternal, child, and family health by deterring pregnant women from seeking both obstetrical care and SUD treatment. Pregnant women who use substances deserve compassion and care, not pariah-status and punishment. PMID:26448685

  6. Prenatal Substance Use: Exploring Assumptions of Maternal Unfitness

    PubMed Central

    Terplan, Mishka; Kennedy-Hendricks, Alene; Chisolm, Margaret S

    2015-01-01

    In spite of the growing knowledge and understanding of addiction as a chronic relapsing medical condition, individuals with substance use disorders (SUD) continue to experience stigmatization. Pregnant women who use substances suffer additional stigma as their use has the potential to cause fetal harm, calling into question their maternal fitness and often leading to punitive responses. Punishing pregnant women denies the integral interconnectedness of the maternal-fetal dyad. Linking substance use with maternal unfitness is not supported by the balance of the scientific evidence regarding the actual harms associated with substance use during pregnancy. Such linkage adversely impacts maternal, child, and family health by deterring pregnant women from seeking both obstetrical care and SUD treatment. Pregnant women who use substances deserve compassion and care, not pariah-status and punishment. PMID:26448685

  7. Are assumptions of well-known statistical techniques checked, and why (not)?

    PubMed

    Hoekstra, Rink; Kiers, Henk A L; Johnson, Addie

    2012-01-01

    A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often-used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked, and that if they were, it was regularly by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regard to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746
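
    The test-based checking described above might, in a Python environment, look like the following; the data are synthetic, and the chosen tests (Shapiro-Wilk for normality, Levene for equal variances) are merely common examples, not those used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10.0, 2.0, 40)     # fictitious measurements
group_b = rng.normal(11.0, 2.0, 40)

# Normality of each group (Shapiro-Wilk); note such tests have low power
# in small samples, one reason purely test-based checking is criticized.
p_norm_a = stats.shapiro(group_a).pvalue
p_norm_b = stats.shapiro(group_b).pvalue

# Homogeneity of variances (Levene), an assumption of the classical
# two-sample t-procedure and of ANOVA.
p_var = stats.levene(group_a, group_b).pvalue
```

    Graphical checks (Q-Q plots, residual plots) are the usual complement to, or replacement for, such formal tests.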

  8. Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?

    PubMed Central

    Hoekstra, Rink; Kiers, Henk A. L.; Johnson, Addie

    2012-01-01

    A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often-used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked, and that if they were, it was regularly by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regard to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746

  9. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2016-07-01

    As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
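
    To make the SNN machinery concrete: an SNN is a triple (T, I, F) of truth, indeterminacy, and falsity degrees. The score function and component-wise aggregation below follow one common convention from the related intuitionistic-fuzzy literature; they are assumptions for illustration, not necessarily the novel operations this paper defines.

```python
# An SNN is a triple (T, I, F) of truth, indeterminacy and falsity
# degrees in [0, 1]. The score function and aggregation below follow one
# common convention from the intuitionistic-fuzzy literature; they are
# NOT necessarily the novel operations defined in this paper.

def score(t, i, f):
    """Higher is better: rewards truth, penalizes indeterminacy/falsity."""
    return (2.0 + t - i - f) / 3.0

def weighted_average(snns, weights):
    """Component-wise weighted average of SNNs (illustrative aggregator)."""
    return tuple(sum(w * s[k] for w, s in zip(weights, snns))
                 for k in range(3))

# Toy MCGDM step: rank two alternatives by score.
alternatives = {"A1": (0.7, 0.1, 0.2), "A2": (0.6, 0.3, 0.3)}
best = max(alternatives, key=lambda name: score(*alternatives[name]))
```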

  10. Simplified three-dimensional tissue clearing and incorporation of colorimetric phenotyping

    PubMed Central

    Sung, Kevin; Ding, Yichen; Ma, Jianguo; Chen, Harrison; Huang, Vincent; Cheng, Michelle; Yang, Cindy F.; Kim, Jocelyn T.; Eguchi, Daniel; Di Carlo, Dino; Hsiai, Tzung K.; Nakano, Atsushi; Kulkarni, Rajan P.

    2016-01-01

    Tissue clearing methods promise to provide exquisite three-dimensional imaging information; however, there is a need for simplified methods for lower resource settings and for non-fluorescence based phenotyping to enable light microscopic imaging modalities. Here we describe the simplified CLARITY method (SCM) for tissue clearing that preserves epitopes of interest. We imaged the resulting tissues using light sheet microscopy to generate rapid 3D reconstructions of entire tissues and organs. In addition, to enable clearing and 3D tissue imaging with light microscopy methods, we developed a colorimetric, non-fluorescent method for specifically labeling cleared tissues based on horseradish peroxidase conversion of diaminobenzidine to a colored insoluble product. The methods we describe here are portable and can be accomplished at low cost, and can allow light microscopic imaging of cleared tissues, thus enabling tissue clearing and imaging in a wide variety of settings. PMID:27498769

  11. Simplified absolute phase retrieval of dual-frequency fringe patterns in fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Lu, Jin; Mo, Rong; Sun, Huibin; Chang, Zhiyong; Zhao, Xiaxia

    2016-04-01

    In fringe projection profilometry, a simplified method is proposed to recover absolute phase maps of two-frequency fringe patterns by using a unique mapping rule. The mapping rule is designed from the rounded phase values to the fringe order of each pixel. Absolute phase can be recovered from the fringe order maps. Unlike existing techniques, in which the lowest frequency of dual- or multiple-frequency fringe patterns must correspond to a single fringe, the presented method removes this limitation and simplifies the procedure of phase unwrapping. Additionally, to handle issues including ambient light, shadow, sharp edges, step-height boundaries and surface reflectivity variations, a novel framework for automatically identifying and removing invalid phase values is also proposed. Simulations and experiments have been carried out to validate the performance of the proposed method.
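
    For contrast, the classic two-frequency temporal phase-unwrapping step that a mapping-rule method would replace can be sketched as follows, in the restricted case where the lower frequency is unity so its phase is absolute by construction; the frequency and phase ramp below are invented.

```python
import numpy as np

# Classic two-frequency temporal phase unwrapping with a unit low
# frequency (one fringe across the field, so its phase is already
# absolute). The high frequency and phase ramp are illustrative only.
f_high = 32.0
phi_abs = np.linspace(0.0, f_high * 2.0 * np.pi * 0.999, 400)  # true phase

wrap = lambda p: np.mod(p + np.pi, 2.0 * np.pi) - np.pi        # into [-pi, pi)
phi_h = wrap(phi_abs)              # measured wrapped high-frequency phase
phi_unit = phi_abs / f_high        # unit-frequency phase, absolute here

# Fringe order from the scaled-up unit-frequency phase, then recover
# the absolute high-frequency phase.
k = np.round((f_high * phi_unit - phi_h) / (2.0 * np.pi))
phi_rec = phi_h + 2.0 * np.pi * k
```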

  12. Transient Stress- and Strain-Based Hemolysis Estimation in a Simplified Blood Pump

    PubMed Central

    Pauli, L.; Nam, J.; Pasquali, M.; Behr, M.

    2014-01-01

    We compare two approaches to numerical estimation of mechanical hemolysis in a simplified blood pump model. The stress-based model relies on the instantaneous shear stress in the blood flow, whereas the strain-based model uses an additional tensor equation to relate distortion of red blood cells to a shear stress measure. We use the newly proposed least-squares finite element method (LSFEM) to prevent negative concentration fields and show a stable and volume-preserving LSFEM for the tensor equation. Application of both models to a simplified centrifugal blood pump at three different operating conditions shows that the stress-based model overestimates the rate of hemolysis. The strain-based model is found to deliver lower hemolysis rates since it incorporates a more detailed description of biophysical phenomena into the simulation process. PMID:23922311
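
    Stress-based estimates of the kind compared above are often built on a power-law damage correlation of the form HI = C * sigma^alpha * t^beta; the sketch below uses widely quoted Giersiepen-type constants for illustration and is not the paper's finite-element formulation.

```python
import math

# Power-law hemolysis correlation: percent of hemoglobin released after
# exposure to a constant scalar shear stress. The constants are the
# widely quoted Giersiepen-type values (stress in Pa, time in s, result
# in percent); this is a generic illustration, not the paper's model.
C, alpha, beta = 3.62e-5, 2.416, 0.785

def hemolysis_index(sigma_pa, t_s):
    """Percent hemoglobin release for stress sigma_pa [Pa] over t_s [s]."""
    return C * sigma_pa**alpha * t_s**beta

hi = hemolysis_index(100.0, 0.1)   # 100 Pa held for 0.1 s
```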

  13. Simplified three-dimensional tissue clearing and incorporation of colorimetric phenotyping.

    PubMed

    Sung, Kevin; Ding, Yichen; Ma, Jianguo; Chen, Harrison; Huang, Vincent; Cheng, Michelle; Yang, Cindy F; Kim, Jocelyn T; Eguchi, Daniel; Di Carlo, Dino; Hsiai, Tzung K; Nakano, Atsushi; Kulkarni, Rajan P

    2016-01-01

    Tissue clearing methods promise to provide exquisite three-dimensional imaging information; however, there is a need for simplified methods for lower resource settings and for non-fluorescence based phenotyping to enable light microscopic imaging modalities. Here we describe the simplified CLARITY method (SCM) for tissue clearing that preserves epitopes of interest. We imaged the resulting tissues using light sheet microscopy to generate rapid 3D reconstructions of entire tissues and organs. In addition, to enable clearing and 3D tissue imaging with light microscopy methods, we developed a colorimetric, non-fluorescent method for specifically labeling cleared tissues based on horseradish peroxidase conversion of diaminobenzidine to a colored insoluble product. The methods we describe here are portable and can be accomplished at low cost, and can allow light microscopic imaging of cleared tissues, thus enabling tissue clearing and imaging in a wide variety of settings. PMID:27498769

  14. Transient stress-based and strain-based hemolysis estimation in a simplified blood pump.

    PubMed

    Pauli, Lutz; Nam, Jaewook; Pasquali, Matteo; Behr, Marek

    2013-10-01

    We compare two approaches to numerical estimation of mechanical hemolysis in a simplified blood pump model. The stress-based model relies on the instantaneous shear stress in the blood flow, whereas the strain-based model uses an additional tensor equation to relate distortion of red blood cells to a shear stress measure. We use the newly proposed least-squares finite element method (LSFEM) to prevent negative concentration fields and show a stable and volume preserving LSFEM for the tensor equation. Application of both models to a simplified centrifugal blood pump at three different operating conditions shows that the stress-based model overestimates the rate of hemolysis. The strain-based model is found to deliver lower hemolysis rates because it incorporates a more detailed description of biophysical phenomena into the simulation process. PMID:23922311

  15. Comparison of risk-dominant scenario assumptions for several TRU waste facilities in the DOE complex

    SciTech Connect

    Foppe, T.L.; Marx, D.R.

    1999-06-01

    In order to gain a risk management perspective, the DOE Rocky Flats Field Office (RFFO) initiated a survey of other DOE sites regarding risks from potential accidents associated with transuranic (TRU) storage and/or processing facilities. Recently approved authorization basis documents at the Rocky Flats Environmental Technology Site (RFETS) have been based on the DOE Standard 3011 risk assessment methodology, with three qualitative estimates of frequency of occurrence and quantitative estimates of radiological consequences to the collocated worker and the public binned into three severity levels. Risk Class 1 and 2 events after application of controls to prevent or mitigate the accident are designated as risk-dominant scenarios. Accident Evaluation Guidelines for selection of Technical Safety Requirements (TSRs) are based on the frequency and consequence bin assignments to identify controls that can be credited to reduce risk to Risk Class 3 or 4, or that are credited for Risk Class 1 and 2 scenarios that cannot be further reduced. This methodology resulted in several risk-dominant scenarios for either the collocated worker or the public that warranted consideration of whether additional controls should be implemented. RFFO requested the survey because of these high estimates of risk, which are primarily due to design characteristics of RFETS TRU waste facilities (i.e., Butler-type buildings without a ventilation and filtration system, and a relatively short distance to the Site boundary). Accident analysis methodologies and key assumptions are being compared for the DOE sites responding to the survey. This includes the types of accidents that are risk-dominant (e.g., drum explosion, material handling breach, fires, natural phenomena, external events, etc.), source term evaluation (e.g., radionuclide material-at-risk, chemical and physical form, damage ratio, airborne release fraction, respirable fraction, leakpath factors), dispersion analysis (e.g., meteorological

  16. Cosmological perturbations and quasistatic assumption in f(R) theories

    NASA Astrophysics Data System (ADS)

    Chiu, Mu-Chen; Taylor, Andy; Shu, Chenggang; Tu, Hong

    2015-11-01

    f(R) gravity is one of the simplest theories of modified gravity to explain the accelerated cosmic expansion. Although it is usually assumed that the quasi-Newtonian approach (a combination of the quasistatic approximation and sub-Hubble limit) for cosmic perturbations is good enough to describe the evolution of large scale structure in f(R) models, some studies have suggested that this method is not valid for all f(R) models. Here, we show that in the matter-dominated era, the pressure and shear equations alone, which can be recast into four first-order equations to solve for cosmological perturbations exactly, are sufficient to solve for the Newtonian potential, Ψ, and the curvature potential, Φ. Based on these two equations, we are able to clarify how the exact linear perturbations fit into different limits. We find that the Compton length controls the quasistatic behaviors in f(R) gravity. In addition, regardless of the validity of the quasistatic approximation, a strong version of the sub-Hubble limit alone is sufficient to reduce the exact linear perturbations in any viable f(R) gravity to second order. Our findings disagree with some previous studies, in that we find little difference between our exact and quasi-Newtonian solutions even up to k = 10 c⁻¹H₀.

  17. Application of a Simplified Theory of ELF Propagation to a Simplified Worldwide Model of the Ionosphere

    NASA Astrophysics Data System (ADS)

    Behroozi-Toosi, Amir B.; Booker, Henry G.

    1983-05-01

    The approximate theory of ELF propagation in the Earth-ionosphere transmission line described by Booker (1980) is applied to a simplified worldwide model of the D and E regions, and of the Earth's magnetic field. At 1000 Hz by day, reflection is primarily from the gradient on the underside of the D region. At 300 Hz by day, reflection is primarily from the D region at low latitudes, but it is from the E region at high latitudes. Below 100 Hz by day, reflection is primarily from the gradient on the underside of the E region at all latitudes. By night, reflection from the gradient on the topside of the E region is important. There is then a resonant frequency (~300 Hz) at which the optical thickness of the E region for the whistler mode is half a wavelength. At the Schumann resonant frequency in the Earth-ionosphere cavity (~8 Hz) the nocturnal E region is almost completely transparent for the whistler mode and is semi-transparent for the Alfvén mode. Reflection then takes place from the F region. ELF propagation in the Earth-ionosphere transmission line by night is quite dependent on the magnitude of the drop in ionization density between the E and F regions. Nocturnal propagation at ELF therefore depends significantly on an ionospheric feature whose magnitude and variability are not well understood. A comparison is made with results based on the computer program of the United States Naval Ocean Systems Center.

  18. Estimating bacterial diversity for ecological studies: methods, metrics, and assumptions.

    PubMed

    Birtel, Julia; Walser, Jean-Claude; Pichon, Samuel; Bürgmann, Helmut; Matthews, Blake

    2015-01-01

    Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, here we address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds used during sequence clustering and when the same analysis was used on a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA Fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques. PMID:25915756
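
    The standard ecological metrics discussed above can be computed from an OTU abundance vector; the toy counts below are invented and merely make observed richness, Shannon diversity, and a Chao1 richness estimate concrete.

```python
import math

# Toy OTU abundance vector for one sample; a real analysis would start
# from clustered 16S reads. All counts are invented.
otu_counts = [120, 80, 40, 10, 5, 1, 1]

richness = len(otu_counts)                 # observed species richness

total = sum(otu_counts)
shannon = -sum((c / total) * math.log(c / total) for c in otu_counts)

# Chao1 corrects observed richness for unseen rare taxa using singleton
# (F1) and doubleton (F2) counts (bias-corrected form).
f1 = sum(1 for c in otu_counts if c == 1)
f2 = sum(1 for c in otu_counts if c == 2)
chao1 = richness + f1 * (f1 - 1) / (2.0 * (f2 + 1))
```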

  19. A simplified holography based superresolution system

    NASA Astrophysics Data System (ADS)

    Mudassar, Asloob Ahmad

    2015-12-01

    In this paper we propose a simple idea based on holography to achieve superresolution. The object is illuminated by three fibers which maintain the mutual coherence between the light waves. The object in-plane rotation along with fiber-based illumination is used to achieve superresolution. The object in a 4f optical system is illuminated by an on-axis fiber to make the central part of the object's spectrum pass through the limiting square aperture placed at the Fourier plane, and the corresponding hologram of the image is recorded at the image plane. The on-axis fiber is switched off and the two off-axis fibers (one positioned on the vertical axis and the other positioned on the diagonal) are switched on one by one for each orientation of the object position. Four orientations of object in-plane rotation are used, differing in angle by 90°. This allows the recording of eight holographic images in addition to the one recorded with the on-axis fiber. The three fibers are at the vertices of a right-angled isosceles triangle and are aligned toward the centre of the lens, following the fiber plane, to generate plane waves for object illumination. The nine holographic images are processed for construction of the object's original spectrum, the inverse of which gives the super-resolved image of the original object. Mathematical modeling and simulations are reported.

  20. A Simplified Model of Tropical Cyclone Intensification

    NASA Astrophysics Data System (ADS)

    Schubert, W. H.

    2015-12-01

    An axisymmetric model of tropical cyclone intensification is presented. The model is based on Salmon's wave-vortex approximation, which can describe flows with high Rossby number and low Froude number. After introducing an additional approximation designed to filter propagating inertia-gravity waves, the problem is reduced to the prediction of potential vorticity (PV) and the inversion of this PV to obtain the balanced wind and mass fields. This PV prediction/inversion problem is solved analytically for two types of forcing: a two-region model in which there is nonzero forcing in the cyclone core and zero forcing in the far field, and a three-region model in which there is nonzero forcing in both the cyclone core and the eyewall, with zero forcing in the far field. The solutions of the two-region model provide insight into why tropical cyclones can have long incubation times before rapid intensification and how the size of the mature vortex can be influenced by the size of the initial vortex. The solutions of the three-region model provide insight into the formation of hollow PV structures and the inward movement of angular momentum surfaces across the radius of maximum wind.

  1. Simplified Approach to Predicting Rough Surface Transition

    NASA Technical Reports Server (NTRS)

    Boyle, R. J.; Stripf, M.

    2009-01-01

    Turbine vane heat transfer predictions are given for smooth and rough vanes where the experimental data show transition moving forward on the vane as the surface roughness physical height increases. Consistent with smooth vane heat transfer, the transition moves forward for a fixed roughness height as the Reynolds number increases. Comparisons are presented with published experimental data. Some of the data are for a regular roughness geometry with a range of roughness heights, Reynolds numbers, and inlet turbulence intensities. The approach taken in this analysis is to treat the roughness in a statistical sense, consistent with what would be obtained from blades measured after exposure to actual engine environments. An approach is given to determine the equivalent sand grain roughness from the statistics of the regular geometry. This approach is guided by the experimental data. A roughness transition criterion is developed, and comparisons are made with experimental data over the entire range of experimental test conditions. Additional comparisons are made with experimental heat transfer data, where the roughness geometries are both regular and statistical. Using the developed analysis, heat transfer calculations are presented for the second stage vane of a high pressure turbine at hypothetical engine conditions.

  2. Simplified Approach to Predicting Rough Surface Transition

    NASA Technical Reports Server (NTRS)

    Boyle, Robert J.; Stripf, Matthias

    2009-01-01

    Turbine vane heat transfer predictions are given for smooth and rough vanes where the experimental data show transition moving forward on the vane as the surface roughness physical height increases. Consistent with smooth vane heat transfer, the transition moves forward for a fixed roughness height as the Reynolds number increases. Comparisons are presented with published experimental data. Some of the data are for a regular roughness geometry with a range of roughness heights, Reynolds numbers, and inlet turbulence intensities. The approach taken in this analysis is to treat the roughness in a statistical sense, consistent with what would be obtained from blades measured after exposure to actual engine environments. An approach is given to determine the equivalent sand grain roughness from the statistics of the regular geometry. This approach is guided by the experimental data. A roughness transition criterion is developed, and comparisons are made with experimental data over the entire range of experimental test conditions. Additional comparisons are made with experimental heat transfer data, where the roughness geometries are both regular and statistical. Using the developed analysis, heat transfer calculations are presented for the second stage vane of a high pressure turbine at hypothetical engine conditions.

  3. Simplified lysed-blood culture technique.

    PubMed Central

    Zierdt, C H

    1986-01-01

    A blood culture system was developed in which a lysing agent (either Tween 20, one of several other polyoxyethylene adducts, digitonin, or Triton X-100) is added to the blood culture medium. Of 33 Triton compounds, 9 lysed human blood, as did 7 of 21 polyoxyethylene compounds and digitonin, all at a concentration of 0.05%. Under the specific test conditions, three of the hemolytic polyoxyethylene compounds and digitonin had no inhibitory effect. All of the Triton compounds had at least some inhibitory effect on the most sensitive of the pathogenic bacteria that were tested, Streptococcus pneumoniae and Neisseria meningitidis. Because of results from previous studies, Triton X-100 was tested further, despite evidence in this study of its inhibition of bacteria. Of the 55 lysing agents tested, digitonin, Triton X-100, Brij 96, and Tween 20 were selected for further testing as additions to conventional culture broth. Comparative culture studies with bacteremic blood from infected rabbits were performed with the conventional blood culture, the Isolator system (Du Pont Co., Wilmington, Del.), and the new lysing medium. The new system has the advantages of lysis filtration and lysis centrifugation without the associated added cost and processing complexity. PMID:3958142

  4. Social Support, World Assumptions, and Exposure as Predictors of Anxiety and Quality of Life following a Mass Trauma

    PubMed Central

    Grills-Taquechel, Amie E.; Littleton, Heather L.; Axsom, Danny

    2011-01-01

    This study examined the influence of a mass trauma (the Virginia Tech campus shootings) on anxiety symptoms and quality of life, as well as the potential vulnerability/protective roles of world assumptions and social support. Pre-trauma adjustment data, collected in the six months prior to the shooting, were examined along with two-month post-shooting data in a sample of 298 female students enrolled at the university at the time of the shootings. Linear regression analyses revealed consistent predictive roles for world assumptions pertaining to control and self-worth, as well as for family support. In addition, for those more severely exposed to the shooting, greater belief in a lack of control over outcomes appeared to increase vulnerability to post-trauma physiological and emotional anxiety symptoms. Implications of the results for research and intervention following mass trauma are discussed. PMID:21236630

  5. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Humans are exposed to mixtures of environmental compounds. A regulatory assumption is that the mixtures of chemicals act in an additive manner. However, this assumption requires experimental validation. Traditional experimental designs (full factorial) require a large number of e...

  6. A simplified model for two phase face seal design

    NASA Technical Reports Server (NTRS)

    Lau, S. Y.; Hughes, W. F.; Basu, P.; Beatty, P. A.

    1990-01-01

    A simplified quasi-isothermal low-leakage laminar model for analyzing the stiffness and the stability characteristics of two-phase face seals with real fluids is developed. Sample calculations with this model for low-leakage operations are compared with calculations for high-leakage operations, performed using the adiabatic turbulent model of Beatty and Hughes (1987). It was found that the seal characteristics predicted using the two extreme models tend to overlap with each other, indicating that the simplified laminar model may be a useful tool for seal design. The effect of coning was investigated using the simplified model. The results show that, for the same balance, a coned seal has a higher leakage rate than a parallel face seal.

  7. A Simplified Model of The Electrical Asymmetry Effect

    NASA Astrophysics Data System (ADS)

    Keil, Douglas L.; Augustyniak, Edward; Sakiyama, Yukinori; Ni, Pavel

    2014-10-01

    Dual Frequency Capacitively Coupled Plasmas (DF CCP) have been used extensively in semiconductor processing. One of the most promising methods for extending CCP technology is the application of the Electrical Asymmetry Effect (EAE). Extensive studies of this effect have appeared in the literature and the effect can be claimed to be reasonably well understood. However, the complexity of the available models often makes them unwieldy for resolving engineering issues and for analysis of test data. In this work it is shown that most of the industrially important features of the EAE effect can be captured with a greatly simplified model. Although approximate, this simplified model enables relatively quick design guidance and simplifies analysis of test data. Electrical measurements of the EAE effect from a commercially relevant CCP plasma deposition tool are presented. These results show good agreement with the model and serve to illustrate the basic features of the model.

  8. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1985-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.
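    The abstract does not spell out ANSYPM's exact iteration, but a common simplified scheme in this family estimates the local inelastic stress-strain state from an elastic solution via Neuber's rule combined with a Ramberg-Osgood stress-strain curve. The sketch below illustrates that generic idea; the material constants are assumed, not the paper's:

```python
# Hedged sketch: estimate local stress/strain from an elastic stress using
# Neuber's rule, sigma * eps(sigma) = sigma_e**2 / E, with a Ramberg-Osgood
# curve, solved by bisection.  This is a generic simplified approach, not
# necessarily ANSYPM's exact procedure; E, K, n are illustrative.

E = 200e3           # Young's modulus, MPa (assumed)
K, n = 1000.0, 0.1  # Ramberg-Osgood hardening parameters (assumed)

def total_strain(sigma):
    """Elastic + plastic strain from the Ramberg-Osgood relation."""
    return sigma / E + (sigma / K) ** (1.0 / n)

def neuber_stress(sigma_elastic, tol=1e-6):
    """Local stress satisfying Neuber's rule, found by bisection."""
    target = sigma_elastic ** 2 / E      # sigma * eps product to match
    lo, hi = 1e-6, sigma_elastic         # true stress cannot exceed elastic
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * total_strain(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = neuber_stress(600.0)  # elastic stress of 600 MPa at the critical point
print(round(sigma, 1), round(total_strain(sigma), 5))
```

A full cyclic analysis would apply a correction like this incrementally around the load cycle, with a hardening model tracking the shifting yield surface, which is where the iterative and incremental character described in the abstract comes in.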

  9. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1984-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  10. Development of a simplified procedure for cyclic structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1984-01-01

    Development was extended of a simplified inelastic analysis computer program (ANSYMP) for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects can be calculated on the basis of stress relaxation at constant strain, creep at constant stress, or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials, and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite-element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite-element analysis.

  11. Simplifying EPID dosimetry for IMRT treatment verification

    SciTech Connect

    Pecharroman-Gallego, R.; Mans, Anton; Sonke, Jan-Jakob; Stroom, Joep C.; Olaciregui-Ruiz, Igor; Herk, Marcel van; Mijnheer, Ben J.

    2011-02-15

    Purpose: Electronic portal imaging devices (EPIDs) are increasingly used for IMRT dose verification, both pretreatment and in vivo. In this study, an earlier developed backprojection model has been modified to avoid the need for patient-specific transmission measurements and, consequently, leads to a faster procedure. Methods: Currently, the transmission, an essential ingredient of the backprojection model, is estimated from the ratio of EPID measurements with and without a phantom/patient in the beam. Thus, an additional irradiation to obtain "open images" under the same conditions as the actual phantom/patient irradiation is required. However, by calculating the transmission of the phantom/patient in the direction of the beam instead of using open images, this extra measurement can be avoided. This was achieved by using a model that includes the effect of beam hardening and the off-axis dependence of the EPID response on photon beam spectral changes. The parameters in the model were obtained empirically by performing EPID measurements using polystyrene slab phantoms of different thicknesses in 6, 10, and 18 MV photon beams. A theoretical analysis was performed to verify the sensitivity of the model to patient thickness changes. The new model was finally applied to the analysis of EPID dose verification measurements of step-and-shoot IMRT treatments of head and neck, lung, breast, cervix, prostate, and rectum patients. All measurements were carried out using Elekta SL20i linear accelerators equipped with a hydrogenated amorphous silicon EPID, and the IMRT plans were made using PINNACLE software (Philips Medical Systems). Results: The results showed generally good agreement with the dose determined using the old model applying the measured transmission. The average differences between EPID-based in vivo doses at the isocenter determined using the new transmission model and those using the measured transmission were 2.6±3.1%, 0.2±3.1%, and 2.2±3.9% for 47 patients
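    The core idea of replacing measured open images with a calculated transmission can be illustrated with a toy attenuation model. The parameterization and coefficients below are illustrative assumptions, not the authors' fitted form: the effective attenuation coefficient is allowed to fall with thickness to mimic beam hardening.

```python
# Hedged sketch: compute primary transmission from radiological thickness
# instead of measuring it with an extra "open image" irradiation.  One
# plausible (assumed) parameterization with a linear beam-hardening term:
#   T(t) = exp(-(mu0 - k * t) * t)
import math

MU0 = 0.049   # zero-thickness effective attenuation coefficient, 1/cm (assumed)
K = 0.0003    # beam-hardening coefficient, 1/cm^2 (assumed)

def transmission(thickness_cm):
    """Estimated primary transmission through water-equivalent material,
    with the effective mu decreasing as the beam hardens in thicker patients."""
    mu_eff = MU0 - K * thickness_cm
    return math.exp(-mu_eff * thickness_cm)

for t in (0.0, 10.0, 20.0, 30.0):
    print(t, round(transmission(t), 4))
```

Note that for thicker patients this model predicts a higher transmission than a fixed-mu exponential, which is the qualitative signature of beam hardening the paper's model is built to capture.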

  12. A simplified dynamic model of the T700 turboshaft engine

    NASA Technical Reports Server (NTRS)

    Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.

    1992-01-01

    A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state space models obtained at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real time diagnostic scheme for fault detection and diagnostics, as well as for open loop engine dynamics studies and closed loop control analysis utilizing a user generated control law.
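    The "linked linear models" idea can be sketched with a minimal gain-scheduling scheme: interpolate the state-space matrices between operating points and integrate. The matrix sizes, values, and scheduling variable below are illustrative assumptions, not the T700's identified models.

```python
# Hedged sketch: piecewise-linear (gain-scheduled) state-space model.
# Two hypothetical 2-state operating-point models, scheduled on a normalized
# power level p in [0, 1]; values are invented for illustration.
MODELS = {
    0.0: {"A": [[-1.0, 0.2], [0.0, -2.0]], "B": [[1.0], [0.5]]},
    1.0: {"A": [[-1.5, 0.1], [0.0, -3.0]], "B": [[1.2], [0.4]]},
}

def interp_matrix(m_lo, m_hi, w):
    """Entrywise linear interpolation between two matrices."""
    return [[(1 - w) * a + w * b for a, b in zip(ra, rb)]
            for ra, rb in zip(m_lo, m_hi)]

def step(x, u, p, dt=0.01):
    """One forward-Euler step of x' = A(p) x + B(p) u."""
    A = interp_matrix(MODELS[0.0]["A"], MODELS[1.0]["A"], p)
    B = interp_matrix(MODELS[0.0]["B"], MODELS[1.0]["B"], p)
    return [xi + dt * (sum(A[i][j] * x[j] for j in range(len(x))) + B[i][0] * u)
            for i, xi in enumerate(x)]

x = [0.0, 0.0]
for _ in range(1000):          # constant input at mid-power
    x = step(x, u=1.0, p=0.5)
print([round(v, 3) for v in x])  # settles to the interpolated steady state
```

In a real application each local model would come from system identification against the detailed nonlinear simulation, as the abstract describes, and the scheduling variable would be a measured engine quantity.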

  13. Simulation and simplified design studies of photovoltaic systems

    SciTech Connect

    Evans, D.L.; Facinelli, W.A.; Koehler, L.P.

    1980-09-01

    Results of TRNSYS simulations of photovoltaic systems with electrical storage are described. Studies of the sensitivity of system performance, in terms of the fraction of the electrical load supplied by the solar energy system, to variables such as array size, battery size, location, time of year, and load shape are reported. An accurate simplified method for predicting array output of max-power photovoltaic systems is presented. A second simplified method, which estimates the overall performance of max-power systems, is developed. Finally, a preliminary technique for predicting clamped-voltage system performance is discussed.

  14. Simplified pregnant woman models for the fetus exposure assessment

    NASA Astrophysics Data System (ADS)

    Jala, Marjorie; Conil, Emmanuelle; Varsier, Nadège; Wiart, Joe; Hadjem, Abdelhamid; Moulines, Éric; Lévy-Leduc, Céline

    2013-05-01

    In this paper, we introduce a study that we carried out in order to validate the use of a simplified pregnant woman model for the assessment of the fetus exposure to radio frequency waves. This simplified model, based on the use of a homogeneous tissue to replace most of the inner organs of the virtual mother, would allow us to deal with many issues that are raised because of the lack of pregnant woman models for numerical dosimetry. Using specific absorption rate comparisons, we show that this model could be used to estimate the fetus exposure to plane waves.

  15. Simplifying Probability Elicitation and Uncertainty Modeling in Bayesian Networks

    SciTech Connect

    Paulson, Patrick R; Carroll, Thomas E; Sivaraman, Chitra; Neorr, Peter A; Unwin, Stephen D; Hossain, Shamina S

    2011-04-16

    In this paper we contribute two methods that simplify the demands of knowledge elicitation for particular types of Bayesian networks. The first method simplifies the task of providing probabilities when the states that a random variable takes can be described by a new, fully ordered state set in which each state implies all the preceding states. The second method leverages the Dempster-Shafer theory of evidence to give the expert a way to express the degree of ignorance they feel about the estimates being provided.
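    The first method can be sketched concretely: when each state implies all preceding states, the expert can supply cumulative "probability of reaching at least state i" values, and per-state probabilities fall out by differencing. The state names and numbers below are invented for illustration.

```python
# Hedged sketch of ordered-state elicitation: convert an expert's
# non-increasing P(X >= state_i) values into a proper distribution
# P(X == state_i) by successive differences.

def to_distribution(reach_probs):
    """Convert non-increasing P(X >= state_i) values into P(X == state_i)."""
    assert all(a >= b for a, b in zip(reach_probs, reach_probs[1:])), \
        "cumulative probabilities must be non-increasing"
    padded = list(reach_probs) + [0.0]
    return [padded[i] - padded[i + 1] for i in range(len(reach_probs))]

# Expert: certain of at least "minor", 60% at least "moderate", 20% "severe"
probs = to_distribution([1.0, 0.6, 0.2])
print([round(p, 3) for p in probs])
```

The appeal for elicitation is that each question ("how likely is at least this much?") is monotone and independent of the others, rather than asking the expert to produce a set of numbers that must jointly sum to one.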

  16. Simplified ontologies allowing comparison of developmental mammalian gene expression

    PubMed Central

    Kruger, Adele; Hofmann, Oliver; Carninci, Piero; Hayashizaki, Yoshihide; Hide, Winston

    2007-01-01

    Model organisms represent an important resource for understanding the fundamental aspects of mammalian biology. Mapping of biological phenomena between model organisms is complex and if it is to be meaningful, a simplified representation can be a powerful means for comparison. The Developmental eVOC ontologies presented here are simplified orthogonal ontologies describing the temporal and spatial distribution of developmental human and mouse anatomy. We demonstrate the ontologies by identifying genes showing a bias for developmental brain expression in human and mouse. PMID:17961239

  17. Simplified methods of topical fluoride administration: effects in individuals with hyposalivation.

    PubMed

    Gabre, Pia; Moberg Sköld, Ulla; Birkhed, Dowen

    2013-01-01

    The aim was to compare fluoride (F) levels in individuals with normal salivary secretion and hyposalivation in connection with their use of F solutions and toothpaste. Seven individuals with normal salivation and nine with hyposalivation rinsed with 0.2% NaF solution for 1 minute. In addition, the individuals with hyposalivation performed the following: (i) 0.2% NaF rinsing for 20 seconds, (ii) rubbing the oral mucosa with a swab soaked in 0.2% NaF solution, and (iii) brushing with 5,000 ppm F (1.1% NaF) toothpaste. Subjects with hyposalivation reached approximately five times higher peak salivary F concentrations after 1 minute of rinsing with the F solution, as well as higher area under the curve (AUC) values. The simplified methods exhibited the same AUC values as 1 minute of rinsing. Brushing with 5,000 ppm F toothpaste resulted in higher AUC values than the simplified methods. F concentrations reached higher levels in individuals with hyposalivation than in those with normal salivation. The simplified methods tested showed effects similar to those of the conventional methods. PMID:23600981
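    The AUC comparisons above reduce to integrating salivary fluoride concentration over time. A minimal trapezoidal AUC over (time, concentration) samples makes this concrete; the data points below are invented for illustration, not the study's measurements.

```python
# Hedged sketch: trapezoidal area under a fluoride clearance curve.

def auc(samples):
    """Trapezoidal area under (time_min, concentration_ppm) samples."""
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for (t0, c0), (t1, c1) in zip(samples, samples[1:]))

# Hypothetical clearance curves after rinsing: minutes vs ppm F
normal = [(0, 0), (1, 60), (5, 10), (15, 2), (30, 0.5)]
hypo   = [(0, 0), (1, 300), (5, 80), (15, 20), (30, 5)]

print(round(auc(normal), 1), round(auc(hypo), 1))
```

The invented curves mimic the reported pattern: with less saliva to clear the fluoride, both the peak and the AUC are substantially higher in the hyposalivation group.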

  18. Dissecting jets and missing energy searches using $n$-body extended simplified models

    DOE PAGESBeta

    Cohen, Timothy; Dolan, Matthew J.; El Hedri, Sonia; Hirschauer, James; Tran, Nhan; Whitbeck, Andrew

    2016-08-04

    Simplified Models are a useful way to characterize new physics scenarios for the LHC. Particle decays are often represented using non-renormalizable operators that involve the minimal number of fields required by symmetries. Generalizing to a wider class of decay operators allows one to model a variety of final states. This approach, which we dub the $n$-body extension of Simplified Models, provides a unifying treatment of the signal phase space resulting from a variety of signals. In this paper, we present the first application of this framework in the context of multijet plus missing energy searches. The main result of this work is a global performance study with the goal of identifying which set of observables yields the best discriminating power against the largest Standard Model backgrounds for a wide range of signal jet multiplicities. Our analysis compares combinations of one, two and three variables, placing emphasis on the enhanced sensitivity gain resulting from non-trivial correlations. Utilizing boosted decision trees, we compare and classify the performance of missing energy, energy scale and energy structure observables. We demonstrate that including an observable from each of these three classes is required to achieve optimal performance. In conclusion, this work additionally serves to establish the utility of $n$-body extended Simplified Models as a diagnostic for unpacking the relative merits of different search strategies, thereby motivating their application to new physics signatures beyond jets and missing energy.

  19. Dissecting jets and missing energy searches using n-body extended simplified models

    NASA Astrophysics Data System (ADS)

    Cohen, Timothy; Dolan, Matthew J.; El Hedri, Sonia; Hirschauer, James; Tran, Nhan; Whitbeck, Andrew

    2016-08-01

    Simplified Models are a useful way to characterize new physics scenarios for the LHC. Particle decays are often represented using non-renormalizable operators that involve the minimal number of fields required by symmetries. Generalizing to a wider class of decay operators allows one to model a variety of final states. This approach, which we dub the n-body extension of Simplified Models, provides a unifying treatment of the signal phase space resulting from a variety of signals. In this paper, we present the first application of this framework in the context of multijet plus missing energy searches. The main result of this work is a global performance study with the goal of identifying which set of observables yields the best discriminating power against the largest Standard Model backgrounds for a wide range of signal jet multiplicities. Our analysis compares combinations of one, two and three variables, placing emphasis on the enhanced sensitivity gain resulting from non-trivial correlations. Utilizing boosted decision trees, we compare and classify the performance of missing energy, energy scale and energy structure observables. We demonstrate that including an observable from each of these three classes is required to achieve optimal performance. This work additionally serves to establish the utility of n-body extended Simplified Models as a diagnostic for unpacking the relative merits of different search strategies, thereby motivating their application to new physics signatures beyond jets and missing energy.

  20. Testing assumptions for conservation of migratory shorebirds and coastal managed wetlands

    USGS Publications Warehouse

    Collazo, Jaime A.; James Lyons; Herring, Garth

    2015-01-01

    Managed wetlands provide critical foraging and roosting habitats for shorebirds during migration; therefore, ensuring their availability is a priority action in shorebird conservation plans. Contemporary shorebird conservation plans rely on a number of assumptions about shorebird prey resources and migratory behavior to determine stopover habitat requirements. For example, the US Shorebird Conservation Plan for the Southeast-Caribbean region assumes that average benthic invertebrate biomass in foraging habitats is 2.4 g dry mass m−2 and that the dominant prey item of shorebirds in the region is Chironomid larvae. For effective conservation and management, it is important to test working assumptions and update predictive models that are used to estimate habitat requirements. We surveyed migratory shorebirds and sampled the benthic invertebrate community in coastal managed wetlands of South Carolina. We sampled invertebrates at three points in time representing early, middle, and late stages of spring migration, and concurrently surveyed shorebird stopover populations at approximately 7-day intervals throughout migration. We used analysis of variance by ranks to test for temporal variation in invertebrate biomass and density, and we used a model-based approach (linear mixed model and Monte Carlo simulation) to estimate mean biomass and density. There was little evidence of temporal variation in biomass or density during the course of spring shorebird migration, suggesting that shorebirds did not deplete invertebrate prey resources at our site. Estimated biomass was 1.47 g dry mass m−2 (95 % credible interval 0.13–3.55), approximately 39 % lower than the value used in the regional shorebird conservation plan. An additional 4728 ha (a 63 % increase) would be required if habitat objectives were derived from the biomass levels observed in our study. Polychaetes, especially Laeonereis culveri (2569 individuals m−2), were the most abundant prey in foraging
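    The habitat arithmetic in the abstract can be checked directly: required foraging area scales inversely with invertebrate biomass, so a lower measured biomass inflates the habitat objective proportionally. The baseline acreage below is not stated in the abstract; it is back-calculated from the reported 4728 ha / 63 % figures.

```python
# Worked check of the reported figures (inverse scaling of area with biomass).

PLAN_BIOMASS = 2.4        # g dry mass per m^2 assumed in the regional plan
MEASURED_BIOMASS = 1.47   # g dry mass per m^2 estimated in this study
ADDITIONAL_HA = 4728      # extra habitat reported as a 63% increase

shortfall = 1 - MEASURED_BIOMASS / PLAN_BIOMASS
area_multiplier = PLAN_BIOMASS / MEASURED_BIOMASS

print(round(shortfall * 100))               # ~39% lower biomass than assumed
print(round((area_multiplier - 1) * 100))   # ~63% more habitat required

# Baseline objective implied by the reported figures (not stated in abstract)
implied_baseline = ADDITIONAL_HA / (area_multiplier - 1)
print(round(implied_baseline))              # ~7473 ha
```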

  1. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required. PMID:26708965

  2. On TESOL '82. Pacific Perspectives on Language Learning and Teaching. II: Challenging Assumptions.

    ERIC Educational Resources Information Center

    Stevick, Earl W.; And Others

    This section of the TESOL convention volume challenges basic assumptions which are held by language teachers and researchers while at the same time providing other assumptions for professionals to challenge. The following papers are presented: (1) My View of "Teaching Languages: A Way and Ways," by E. Stevick; (2) "'I Got Religion!': Evangelism in…

  3. 47 CFR 76.913 - Assumption of jurisdiction by the Commission.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Assumption of jurisdiction by the Commission. 76.913 Section 76.913 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cable Rate Regulation § 76.913 Assumption...

  4. The Empirical Status of Empirically Supported Psychotherapies: Assumptions, Findings, and Reporting in Controlled Clinical Trials

    ERIC Educational Resources Information Center

    Westen, Drew; Novotny, Catherine M.; Thompson-Brenner, Heather

    2004-01-01

    This article provides a critical review of the assumptions and findings of studies used to establish psychotherapies as empirically supported. The attempt to identify empirically supported therapies (ESTs) imposes particular assumptions on the use of randomized controlled trial (RCT) methodology that appear to be valid for some disorders and…

  5. The Holmes Report: Epistemological Assumptions that Impact Art Teacher Assessment and Preparation.

    ERIC Educational Resources Information Center

    Maitland-Gholson, Jane

    1988-01-01

    Examines implicit assumptions about knowledge and learning found in the "Holmes Report" on teacher preparation. Considers the impact of these epistemological assumptions on conceptions of art knowledge, learning, and teacher assessment. Suggests that art education could provide a needed evaluative counterpoint to current trends toward simplistic…

  6. Robustness of the Polytomous IRT Model to Violations of the Unidimensionality Assumption.

    ERIC Educational Resources Information Center

    Dawadi, Bhaskar R.

    The robustness of the polytomous Item Response Theory (IRT) model to violations of the unidimensionality assumption was studied. A secondary purpose was to provide guidelines to practitioners to help in deciding whether to use an IRT model to analyze their data. In a simulation study, the unidimensionality assumption was deliberately violated by…

  7. 7 CFR 772.10 - Transfer and assumption-AMP loans.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Transfer and assumption-AMP loans. 772.10 Section 772..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS SERVICING MINOR PROGRAM LOANS § 772.10 Transfer and assumption—AMP loans. (a) Eligibility. The Agency may approve transfers and assumptions of AMP loans when: (1)...

  8. 7 CFR 772.10 - Transfer and assumption-AMP loans.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Transfer and assumption-AMP loans. 772.10 Section 772..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS SERVICING MINOR PROGRAM LOANS § 772.10 Transfer and assumption—AMP loans. (a) Eligibility. The Agency may approve transfers and assumptions of AMP loans when: (1)...

  9. 7 CFR 772.10 - Transfer and assumption-AMP loans.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Transfer and assumption-AMP loans. 772.10 Section 772..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS SERVICING MINOR PROGRAM LOANS § 772.10 Transfer and assumption—AMP loans. (a) Eligibility. The Agency may approve transfers and assumptions of AMP loans when: (1)...

  10. 7 CFR 772.10 - Transfer and assumption-AMP loans.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Transfer and assumption-AMP loans. 772.10 Section 772..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS SERVICING MINOR PROGRAM LOANS § 772.10 Transfer and assumption—AMP loans. (a) Eligibility. The Agency may approve transfers and assumptions of AMP loans when: (1)...

  11. 7 CFR 772.10 - Transfer and assumption-AMP loans.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Transfer and assumption-AMP loans. 772.10 Section 772..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS SERVICING MINOR PROGRAM LOANS § 772.10 Transfer and assumption—AMP loans. (a) Eligibility. The Agency may approve transfers and assumptions of AMP loans when: (1)...

  12. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Raudenbush, Stephen W.

    2011-01-01

    The purpose of this paper is to clarify the assumptions that must be met if this--multiple site, multiple mediator--strategy, hereafter referred to as "MSMM," is to identify the average causal effects (ATE) in the populations of interest. The authors' investigation of the assumptions of the multiple-mediator, multiple-site IV model demonstrates…

  13. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Fiscally sound operation and assumption of...: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation, as...

  14. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Fiscally sound operation and assumption of...: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation, as...

  15. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound...

  16. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound...

  17. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound...

  18. 77 FR 75549 - Allocation of Assets in Single-Employer Plans; Interest Assumptions for Valuing Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-21

    ... benefits for allocation purposes under ERISA section 4044. Assumptions under the asset allocation regulation are updated quarterly and are intended to reflect current conditions in the financial and annuity markets. This final rule updates the asset allocation interest assumptions for the first quarter...

  19. 7 CFR 765.403 - Transfer of security to and assumption of debt by eligible applicants.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... security with assumption of FLP debt, other than EM loans for physical or production losses, by transferees... in part 764 of this chapter for the type of loan being assumed; and (2) The outstanding loan balance... requirements. (d) Amount of assumption. The transferee must assume the lesser of: (1) The outstanding...

  20. 7 CFR 765.403 - Transfer of security to and assumption of debt by eligible applicants.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... security with assumption of FLP debt, other than EM loans for physical or production losses, by transferees... in part 764 of this chapter for the type of loan being assumed; and (2) The outstanding loan balance... requirements. (d) Amount of assumption. The transferee must assume the lesser of: (1) The outstanding...

  1. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown prophecy formula, and the correction for attenuation, as well as…
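    Both formulas named in the abstract depend on the uncorrelated-errors assumption. As a concrete reminder of what they compute, here is a minimal sketch of the two classical test-theory results (the function names are ours, not the paper's):

```python
def spearman_brown(rho: float, n: float) -> float:
    """Spearman-Brown prophecy formula: reliability of a test whose
    length is changed by a factor n, given current reliability rho."""
    return n * rho / (1 + (n - 1) * rho)

def correct_for_attenuation(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correction for attenuation: estimated correlation between true
    scores, given the observed correlation and the two reliabilities."""
    return r_xy / (rel_x * rel_y) ** 0.5
```

    For example, doubling a test with reliability 0.50 raises the predicted reliability to spearman_brown(0.5, 2) ≈ 0.67; both results become biased when item errors correlate, which is the paper's point.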

  2. Making Sense out of Sex Stereotypes in Advertising: A Feminist Analysis of Assumptions.

    ERIC Educational Resources Information Center

    Ferrante, Karlene

    Sexism and racism in advertising have been well documented, but feminist research aimed at social change must go beyond existing content analyses to ask how advertising is created. Analysis of the "mirror assumption" (advertising reflects society) and the "gender assumption" (advertising speaks in a male voice to female consumers) reveals the fact…

  3. 12 CFR 741.8 - Purchase of assets and assumption of liabilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Purchase of assets and assumption of... § 741.8 Purchase of assets and assumption of liabilities. (a) Any credit union insured by the National... interest in connection with an extension of credit to any member; or (3) Purchases of assets,...

  4. On Testing for Assumption Recognition. Statistical Report. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    Plumer, Gilbert E.

    This paper proposes criteria for determining necessary assumptions of arguments. In their book Evaluating Critical Thinking, S. Norris and R. Ennis (1989) state that although it is tempting to think that certain assumptions are logically necessary for an argument or position, they are not. Many writers of introductory logic texts and the authors…

  5. 76 FR 63836 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-14

    ...-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The interest... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates...

  6. 78 FR 8985 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-07

    ... Register of January 15, 2013 (78 FR 2881), a final rule amending PBGC's regulation on Benefits Payable in... CORPORATION 29 CFR Part 4022 Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for... prescribe interest assumptions under the regulation for valuation dates in February 2013. This...

  7. Making Foundational Assumptions Transparent: Framing the Discussion about Group Communication and Influence

    ERIC Educational Resources Information Center

    Meyers, Renee A.; Seibold, David R.

    2009-01-01

    In this article, the authors seek to augment Dean Hewes's (1986, 1996) intriguing bracketing and admirable larger effort to "return to basic theorizing in the study of group communication" by making transparent the foundational, and debatable, assumptions that underlie those models. Although these assumptions are addressed indirectly by Hewes, the…

  8. Teaching Lessons in Exclusion: Researchers' Assumptions and the Ideology of Normality

    ERIC Educational Resources Information Center

    Benincasa, Luciana

    2012-01-01

    Filling in a research questionnaire means coming into contact with the researchers' assumptions. In this sense filling in a questionnaire may be described as a learning situation. In this paper I carry out discourse analysis of selected questionnaire items from a number of studies, in order to highlight underlying values and assumptions, and their…

  9. A new scenario framework for climate change research: The concept of Shared Climate Policy Assumptions

    SciTech Connect

    Kriegler, Elmar; Edmonds, James A.; Hallegatte, Stephane; Ebi, Kristie L.; Kram, Tom; Riahi, Keywan; Winkler, Harald; Van Vuuren, Detlef

    2014-04-01

    The paper presents the concept of shared climate policy assumptions as an important element of the new scenario framework. Shared climate policy assumptions capture key climate policy dimensions such as the type and scale of mitigation and adaptation measures. They are not specified in the socio-economic reference pathways, and therefore introduce an important third dimension to the scenario matrix architecture. Climate policy assumptions will have to be made in any climate policy scenario, and can have a significant impact on the scenario description. We conclude that a meaningful set of shared climate policy assumptions is useful for grouping individual climate policy analyses and facilitating their comparison. Shared climate policy assumptions should be designed to be policy relevant, and as a set to be broad enough to allow a comprehensive exploration of the climate change scenario space.

  10. SIMPLIFIED RADIOIMMUNOASSAY FOR DETECTION OF HUMAN ROTAVIRUS IN STOOLS

    EPA Science Inventory

    A simplified radioimmunoassay (RIA) technique was developed to facilitate the diagnosis of human rotavirus in stools of infants with diarrhea. This microtiter solid-phase RIA utilizes as a critical reagent hyperimmune serum against a tissue culture-grown simian rotavirus that is ...

  11. Simplified Load-Following Control for a Fuel Cell System

    NASA Technical Reports Server (NTRS)

    Vasquez, Arturo

    2010-01-01

    A simplified load-following control scheme has been proposed for a fuel cell power system. The scheme could be used to control devices that are important parts of a fuel cell system but are sometimes characterized as parasitic because they consume some of the power generated by the fuel cells.

  12. Simplify Web Development for Faculty and Promote Instructional Design.

    ERIC Educational Resources Information Center

    Pedersen, David C.

    Faculty members are often overwhelmed with the prospect of implementing Web-based instruction. In an effort to simplify the process and incorporate some basic instructional design elements, the Educational Technology Team at Embry Riddle Aeronautical University created a course template for WebCT. Utilizing rapid prototyping, the template…

  13. Flat pack interconnection structure simplifies modular electronic assemblies

    NASA Technical Reports Server (NTRS)

    Katzin, L.

    1967-01-01

    Flat pack interconnection structure composed of stick modules simplifies modular electronic assemblies by allowing a single axis mother board. Two of the wiring planes are located in the stick module, which is the lower level of assembly, with the third wiring plane in the mother board.

  14. 12 CFR 3.144 - Simplified supervisory formula approach (SSFA).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Simplified supervisory formula approach (SSFA). 3.144 Section 3.144 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY CAPITAL ADEQUACY STANDARDS Risk-Weighted Assets-Internal Ratings-Based and Advanced Measurement Approaches Risk-Weighted Assets for...

  15. 12 CFR 324.144 - Simplified supervisory formula approach (SSFA).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Simplified supervisory formula approach (SSFA). 324.144 Section 324.144 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY CAPITAL ADEQUACY OF FDIC-SUPERVISED INSTITUTIONS Risk-Weighted Assets-Internal Ratings-Based and Advanced...

  16. 7 CFR 273.25 - Simplified Food Stamp Program.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... TANF plan as defined at 45 CFR 260.30. (3) Pure-TANF household means a household in which all members... CFR 260.31. (b) Limit on benefit reduction for mixed-TANF households under the SFSP. If a State agency... 7 Agriculture 4 2011-01-01 2011-01-01 false Simplified Food Stamp Program. 273.25 Section...

  17. 7 CFR 273.25 - Simplified Food Stamp Program.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... TANF plan as defined at 45 CFR 260.30. (3) Pure-TANF household means a household in which all members... CFR 260.31. (b) Limit on benefit reduction for mixed-TANF households under the SFSP. If a State agency... 7 Agriculture 4 2012-01-01 2012-01-01 false Simplified Food Stamp Program. 273.25 Section...

  18. Simplified seismic performance assessment and implications for seismic design

    NASA Astrophysics Data System (ADS)

    Sullivan, Timothy J.; Welch, David P.; Calvi, Gian Michele

    2014-08-01

    The last decade or so has seen the development of refined performance-based earthquake engineering (PBEE) approaches that now provide a framework for estimation of a range of important decision variables, such as repair costs, repair time and number of casualties. This paper reviews current tools for PBEE, including the PACT software, and examines the possibility of extending the innovative displacement-based assessment approach as a simplified structural analysis option for performance assessment. Details of the displacement-based seismic assessment method are reviewed and a simple means of quickly assessing multiple hazard levels is proposed. Furthermore, proposals for a simple definition of collapse fragility and relations between equivalent single-degree-of-freedom characteristics and multi-degree-of-freedom story drift and floor acceleration demands are discussed, highlighting needs for future research. To illustrate the potential of the methodology, performance measures obtained from the simplified method are compared with those computed using the results of incremental dynamic analyses within the PEER performance-based earthquake engineering framework, applied to a benchmark building. The comparison illustrates that the simplified method could be a very effective conceptual seismic design tool. The advantages and disadvantages of the simplified approach are discussed and potential implications of advanced seismic performance assessments for conceptual seismic design are highlighted through examination of different case study scenarios including different structural configurations.

  19. Cataloguing, Classification and Processing. A Simplified Guide for School Libraries.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg.

    Designed for use in school libraries, this manual outlines simplified procedures for cataloging, defined as the indexing of an item; for classification, defined as the assignment of a subject number to the item; and for processing, defined as the preparation of the item so that it can be borrowed from the library. The manual is divided into 10…

  20. 46 CFR 178.330 - Simplified stability proof test (SST).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (UNDER 100 GROSS TONS) INTACT STABILITY AND SEAWORTHINESS Intact Stability Standards § 178.330 Simplified... a flush deck or well deck vessel, the freeboard must be measured to the top of the weatherdeck at... measured to the top of the gunwale. (g) A ferry must also be tested in a manner acceptable to the...

  1. 46 CFR 178.330 - Simplified stability proof test (SST).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (UNDER 100 GROSS TONS) INTACT STABILITY AND SEAWORTHINESS Intact Stability Standards § 178.330 Simplified... a flush deck or well deck vessel, the freeboard must be measured to the top of the weatherdeck at... measured to the top of the gunwale. (g) A ferry must also be tested in a manner acceptable to the...

  2. 46 CFR 178.330 - Simplified stability proof test (SST).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (UNDER 100 GROSS TONS) INTACT STABILITY AND SEAWORTHINESS Intact Stability Standards § 178.330 Simplified... a flush deck or well deck vessel, the freeboard must be measured to the top of the weatherdeck at... measured to the top of the gunwale. (g) A ferry must also be tested in a manner acceptable to the...

  3. 46 CFR 178.330 - Simplified stability proof test (SST).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (UNDER 100 GROSS TONS) INTACT STABILITY AND SEAWORTHINESS Intact Stability Standards § 178.330 Simplified... a flush deck or well deck vessel, the freeboard must be measured to the top of the weatherdeck at... measured to the top of the gunwale. (g) A ferry must also be tested in a manner acceptable to the...

  4. A simplified ductile-brittle transition temperature tester

    NASA Technical Reports Server (NTRS)

    Arias, A.

    1973-01-01

    The construction and operation of a versatile, simplified bend tester is described. The tester is usable at temperatures from - 192 to 650 C in air. Features of the tester include a single test chamber for cryogenic or elevated temperatures, specimen alining support rollers, and either manual or motorized operation.

  5. Tricuspid balloon valvuloplasty: a more simplified approach using Inoue balloon.

    PubMed

    Patel, T M; Dani, S I; Shah, S C; Patel, T K

    1996-01-01

    We report a more simplified technique of balloon tricuspid valvuloplasty using the Inoue balloon set in a patient suffering from severe rheumatic tricuspid stenosis. We believe that this technique may be useful in difficult cases of tricuspid valvuloplasty. PMID:8770490

  6. How does a simplified-sequence protein fold?

    PubMed

    Guarnera, Enrico; Pellarin, Riccardo; Caflisch, Amedeo

    2009-09-16

    To investigate a putatively primordial protein we have simplified the sequence of a 56-residue alpha/beta fold (the immunoglobulin-binding domain of protein G) by replacing it with polyalanine, polythreonine, and diglycine segments at regions of the sequence that in the folded structure are alpha-helical, beta-strand, and turns, respectively. Remarkably, multiple folding and unfolding events are observed in a 15-micros molecular dynamics simulation at 330 K. The most stable state (populated at approximately 20%) of the simplified-sequence variant of protein G has the same alpha/beta topology as the wild-type but shows the characteristics of a molten globule, i.e., loose contacts among side chains and lack of a specific hydrophobic core. The unfolded state is heterogeneous and includes a variety of alpha/beta topologies but also fully alpha-helical and fully beta-sheet structures. Transitions within the denatured state are very fast, and the molten-globule state is reached in <1 micros by a framework mechanism of folding with multiple pathways. The native structure of the wild-type is more rigid than the molten-globule conformation of the simplified-sequence variant. The difference in structural stability and the very fast folding of the simplified protein suggest that evolution has enriched the primordial alphabet of amino acids mainly to optimize protein function by stabilization of a unique structure with specific tertiary interactions. PMID:19751679

  7. A Simplified Diagnostic Method for Elastomer Bond Durability

    NASA Technical Reports Server (NTRS)

    White, Paul

    2009-01-01

    A simplified method has been developed for determining bond durability under exposure to water or high humidity conditions. It uses a small number of test specimens with relatively short times of water exposure at elevated temperature. The method is also gravimetric; the only equipment being required is an oven, specimen jars, and a conventional laboratory balance.

  8. A Simplified Decision Support Approach for Evaluating Wetlands Ecosystem Services

    EPA Science Inventory

    We will be presenting a simplified approach to evaluating ecosystem services provided by freshwater wetlands restoration. Our approach is based on an existing functional assessment approach developed by Golet and Miller for the State of Rhode Island, and modified by Miller for ap...

  9. A Simplified Technique for Evaluating Human "CCR5" Genetic Polymorphism

    ERIC Educational Resources Information Center

    Falteisek, Lukáš; Cerný, Jan; Janštová, Vanda

    2013-01-01

    To involve students in thinking about the problem of AIDS (which is important in the view of nondecreasing infection rates), we established a practical lab using a simplified adaptation of Thomas's (2004) method to determine the polymorphism of HIV co-receptor CCR5 from students' own epithelial cells. CCR5 is a receptor involved in…

  10. How Does a Simplified-Sequence Protein Fold?

    PubMed Central

    Guarnera, Enrico; Pellarin, Riccardo; Caflisch, Amedeo

    2009-01-01

    To investigate a putatively primordial protein we have simplified the sequence of a 56-residue α/β fold (the immunoglobulin-binding domain of protein G) by replacing it with polyalanine, polythreonine, and diglycine segments at regions of the sequence that in the folded structure are α-helical, β-strand, and turns, respectively. Remarkably, multiple folding and unfolding events are observed in a 15-μs molecular dynamics simulation at 330 K. The most stable state (populated at ∼20%) of the simplified-sequence variant of protein G has the same α/β topology as the wild-type but shows the characteristics of a molten globule, i.e., loose contacts among side chains and lack of a specific hydrophobic core. The unfolded state is heterogeneous and includes a variety of α/β topologies but also fully α-helical and fully β-sheet structures. Transitions within the denatured state are very fast, and the molten-globule state is reached in <1 μs by a framework mechanism of folding with multiple pathways. The native structure of the wild-type is more rigid than the molten-globule conformation of the simplified-sequence variant. The difference in structural stability and the very fast folding of the simplified protein suggest that evolution has enriched the primordial alphabet of amino acids mainly to optimize protein function by stabilization of a unique structure with specific tertiary interactions. PMID:19751679

  11. Simplifying the Admissions Process: Legitimate Endeavor or Recruiting Gimmick?

    ERIC Educational Resources Information Center

    Bruker, Robert M.

    1977-01-01

    Improbable as it may seem, some admissions officers have been able to increase the size of their applicant pool; more effectively identify and serve student needs; and simplify their admissions process--all at the same time, and without increasing their office work load. (Author)

  12. Simplified physically based model of earthen embankment breaching

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A simplified physically based model has been developed to simulate the breaching processes of homogeneous and composite earthen embankments owing to overtopping and piping. The breach caused by overtopping flow is approximated as a flat broad-crested weir with a trapezoidal cross section, downstream ...
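    The overtopping approximation described above (breach treated as a flat broad-crested weir with a trapezoidal cross section) can be sketched as a simple discharge relation. This is only an illustration: the coefficients below are generic SI broad-crested-weir values, not the model's calibrated parameters, and the function name is ours:

```python
def breach_outflow(H: float, b: float, z: float,
                   c_rect: float = 1.7, c_tri: float = 1.35) -> float:
    """Discharge (m^3/s) through a trapezoidal breach treated as a
    broad-crested weir: a rectangular term for the flat bottom plus a
    triangular term for the two sloping sides.

    H       head above the breach bottom (m)
    b       breach bottom width (m)
    z       side slope (horizontal run per unit rise)
    c_rect  weir coefficient for the rectangular part (SI)
    c_tri   weir coefficient for the triangular part (SI)
    """
    return c_rect * b * H ** 1.5 + c_tri * z * H ** 2.5
```

    As the breach erodes, b, z, and H evolve in time, so a breaching model re-evaluates this relation at every time step.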

  13. Measuring Phantom Recollection in the Simplified Conjoint Recognition Paradigm

    ERIC Educational Resources Information Center

    Stahl, Christoph; Klauer, Karl Christoph

    2009-01-01

    False memories are sometimes strong enough to elicit recollective experiences. This phenomenon has been termed Phantom Recollection (PR). The Conjoint Recognition (CR) paradigm has been used to empirically separate PR from other memory processes. Recently, a simplification of the CR procedure has been proposed. We herein extend the simplified CR…

  14. The Choice of Traditional vs. Simplified Characters in US Classrooms

    ERIC Educational Resources Information Center

    Deng, Shi-zhong

    2009-01-01

    Which form of Chinese characters should be taught in Chinese language classes: traditional or simplified? The results of a questionnaire distributed to sections at the University of Florida show the reasons for students' preferences for one or the other form. In view of the students' awareness that traditional characters are more beneficial to…

  15. A Linguistic Analysis of Simplified and Authentic Texts

    ERIC Educational Resources Information Center

    Crossley, Scott A.; Louwerse, Max M.; McCarthy, Philip M.; McNamara, Danielle S.

    2007-01-01

    The opinions of second language learning (L2) theorists and researchers are divided over whether to use authentic or simplified reading texts as the means of input for beginning- and intermediate-level L2 learners. Advocates of both approaches cite the use of linguistic features, syntax, and discourse structures as important elements in support of…

  16. Modèle simplifié du comportement hygrothermique d'un chai

    NASA Astrophysics Data System (ADS)

    Battaglia, J.-L.; Jomaa, W.; Gounot, J.

    1996-11-01

    Coupling between heat and moisture transfer is often important in certain types of buildings and does not allow the two transfers to be treated independently. This is the case for the storehouses (chais) in which wine is aged in casks. Without strong assumptions, a knowledge-based model is too complex with regard to both the formulation and the solution of the equations. In this work, we develop a simplified model based on a preliminary study of the building concerned, together with a solution algorithm adapted to this configuration. The modelling assumptions are validated against experimental measurements of temperature and relative humidity taken over two periods of the year. A sensitivity analysis of the model to certain parameters is carried out. Finally, an application of the model to estimating the quantity of wine lost by evaporation over one year is presented.

  17. On Simplifying Features in OpenStreetMap database

    NASA Astrophysics Data System (ADS)

    Qian, Xinlin; Tao, Kunwang; Wang, Liang

    2015-04-01

    OpenStreetMap data is currently visualized through a tile server that stores map tiles rendered in advance from vector data. Tiled maps, however, lack functionality such as data editing and customized styling; enabling these advanced functions requires client-side processing and rendering of geospatial data. Given the voluminous size of the OpenStreetMap data, simply sending the results of region queries against the OSM database to the client is prohibitive. To make the OSM data retrieved from the database suitable for client-side reception and rendering, it must be filtered and simplified on the server side to limit its volume. We propose a database extension for the OSM database that makes it possible to simplify geospatial objects such as ways and relations during data queries. Several auxiliary tables and PL/pgSQL functions are presented that allow geospatial features to be simplified by omitting unimportant vertices. The database extension has five components: computation of vertex weights with polyline and polygon simplification algorithms; storage of vertex weights in auxiliary tables; filtering and selection of vertices against a threshold value during spatial queries; assembly of simplified geospatial objects from the filtered vertices; and updating of vertex weights after geospatial objects are edited. The extension is implemented on an OSM APIDB using PL/pgSQL. The experimental database contains a subset of the OSM database covering the United Kingdom, with about 100 million vertices occupying roughly 100 GB of disk. JOSM is used to retrieve data from the database through a revised data-access API and render the geospatial objects in real time. When serving simplified data to the client, the database allows the user to bound either the simplification error or the response time of each data query. Experimental results show the effectiveness and efficiency of the proposed methods in building a ...
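    The vertex-weighting step described in this abstract can be illustrated with a Visvalingam-Whyatt-style weight: each interior vertex is scored by the area of the triangle it forms with its neighbours, and a query keeps only vertices above a threshold. The one-pass sketch below mirrors what the PL/pgSQL functions would precompute and filter; it is our illustration, not the paper's code (the full algorithm re-scores neighbours after each removal):

```python
def triangle_area(p, q, r):
    """Area of the triangle spanned by three 2-D points."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def vertex_weights(points):
    """Weight each vertex of a polyline by the area of the triangle it
    forms with its neighbours.  Endpoints get infinite weight so they
    are never dropped (analogous to the precomputed weight column)."""
    inf = float("inf")
    w = [inf] * len(points)
    for i in range(1, len(points) - 1):
        w[i] = triangle_area(points[i - 1], points[i], points[i + 1])
    return w

def simplify(points, threshold):
    """Keep only vertices whose weight exceeds the threshold, as a
    server-side spatial query would filter against stored weights."""
    w = vertex_weights(points)
    return [p for p, wi in zip(points, w) if wi > threshold]
```

    A nearly collinear vertex contributes a tiny triangle and is dropped at modest thresholds, while a sharp corner survives, which is exactly the "omit unimportant vertices" behaviour the extension relies on.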

  18. Evaluation of simplified analytical models for CO2 plume movement and pressure buildup

    NASA Astrophysics Data System (ADS)

    Oruganti, Y.; Mishra, S.

    2011-12-01

    CO2 injection into the sub-surface is emerging as a viable technology for reducing anthropogenic CO2 emissions into the atmosphere. When large amounts of CO2 are sequestered, pressure buildup is an associated risk, along with plume movement beyond the injection domain. In this context, simple modeling tools become valuable assets in the preliminary screening and implementation phases of CO2 injection projects. This study presents an evaluation of two commonly used simplified analytical models for plume movement and pressure buildup: (1) the sharp-interface model of Nordbotten et al. (2005), with the corresponding pressure distribution solution of Mathias et al. (2008), and (2) the three-region model of Burton et al. (2008), based on fractional flow and steady-state pressure gradient considerations. The three-region model of Burton et al. assumes a constant-pressure outer boundary. In this study, we incorporate the radius of investigation of the pressure front as the transient pressure boundary, in order to represent an infinite-acting system. The sharp-interface model also assumes the system to be infinite-acting. Temperature and pressure conditions used in these models correspond to the "warm, shallow" and "cold, deep" aquifer conditions defined by Nordbotten et al. The saturation and pressure profiles, as well as the injection-well pressure buildup, predicted by the analytical models are compared with those from the numerical simulator STOMP in order to verify the simplified modeling assumptions. Both the STOMP results and the three-region model show two sharp fronts (the drying and two-phase fronts), and a good match is obtained between the front positions at any time. For the sharp-interface model, the vertically averaged gas saturation does not exhibit two sharp fronts as seen in the STOMP simulations, but shows a gradual change in saturation with radial distance over the two-phase region. The pressure profiles from STOMP and the analytical model are ...
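    For a rough sense of scale when comparing such models, a zeroth-order cylindrical volume balance gives the vertically averaged plume radius. This sketch assumes uniform saturation and no gravity override, so it is neither the Nordbotten et al. nor the Burton et al. solution, only a sanity check; the function name and symbols are ours:

```python
import math

def plume_radius(v_co2: float, phi: float, h: float, sg: float) -> float:
    """Vertically averaged CO2 plume radius (m) from a cylindrical
    volume balance: the injected volume v_co2 (m^3 at reservoir
    conditions) fills a disc of thickness h (m) in a medium of
    porosity phi at average gas saturation sg."""
    return math.sqrt(v_co2 / (math.pi * phi * h * sg))
```

    Real plumes spread farther at the top of the formation because of buoyancy, which is precisely the behaviour the sharp-interface and three-region models try to capture beyond this crude bound.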

  19. An Exploration of Dental Students' Assumptions About Community-Based Clinical Experiences.

    PubMed

    Major, Nicole; McQuistan, Michelle R

    2016-03-01

    The aim of this study was to ascertain which assumptions dental students recalled feeling prior to beginning community-based clinical experiences and whether those assumptions were fulfilled or challenged. All fourth-year students at the University of Iowa College of Dentistry & Dental Clinics participate in community-based clinical experiences. At the completion of their rotations, they write a guided reflection paper detailing the assumptions they had prior to beginning their rotations and assessing the accuracy of their assumptions. For this qualitative descriptive study, the 218 papers from three classes (2011-13) were analyzed for common themes. The results showed that the students had a variety of assumptions about their rotations. They were apprehensive about working with challenging patients, performing procedures for which they had minimal experience, and working too slowly. In contrast, they looked forward to improving their clinical and patient management skills and knowledge. Other assumptions involved the site (e.g., the equipment/facility would be outdated; protocols/procedures would be similar to the dental school's). Upon reflection, students reported experiences that both fulfilled and challenged their assumptions. Some continued to feel apprehensive about treating certain patient populations, while others found it easier than anticipated. Students were able to treat multiple patients per day, which led to increased speed and patient management skills. However, some reported challenges with time management. Similarly, students were surprised to discover some clinics were new/updated although some had limited instruments and materials. Based on this study's findings about students' recalled assumptions and reflective experiences, educators should consider assessing and addressing their students' assumptions prior to beginning community-based dental education experiences. PMID:26933101

  20. Troubling 'lived experience': a post-structural critique of mental health nursing qualitative research assumptions.

    PubMed

    Grant, A

    2014-08-01

    Qualitative studies in mental health nursing research deploying the 'lived experience' construct are often written on the basis of conventional qualitative inquiry assumptions. These include the presentation of the 'authentic voice' of research participants, related to their 'lived experience' and underpinned by a meta-assumption of the 'metaphysics of presence'. This set of assumptions is critiqued on the basis of contemporary post-structural qualitative scholarship. Implications for mental health nursing qualitative research emerging from this critique are described in relation to illustrative published work, and some benefits and challenges for researchers embracing post-structural sensibilities are outlined. PMID:24118139

  1. Learning disabilities theory and Soviet psychology: a comparison of basic assumptions.

    PubMed

    Coles, G S

    1982-09-01

    Critics both within and outside the Learning Disabilities (LD) field have pointed to the weaknesses of LD theory. Beginning with the premise that a significant problem of LD theory has been its failure to explore fully its fundamental assumptions, this paper examines a number of these assumptions about individual and social development, cognition, and learning. These assumptions are compared with a contrasting body of premises found in Soviet psychology, particularly in the works of Vygotsky, Leontiev, and Luria. An examination of the basic assumptions of LD theory and Soviet psychology shows that a major difference lies in their respective nondialectical and dialectical interpretation of the relationship of social factors and cognition, learning, and neurological development. PMID:7142423

  2. Life Detection with Minimal Assumptions — Setting an Abiotic Background for Mars

    NASA Astrophysics Data System (ADS)

    Steele, A.

    2016-05-01

    I set out a strategy for life detection on Mars with minimal assumptions and review the state of knowledge of Martian organic carbon in Martian meteorites. Analyses of Martian meteorites represent an invaluable "analogue" suite of samples for study.

  3. Washington International Renewable Energy Conference (WIREC) 2008 Pledges. Methodology and Assumptions Summary

    SciTech Connect

    Babiuch, Bill; Bilello, Daniel E.; Cowlin, Shannon C.; Mann, Margaret; Wise, Alison

    2008-08-01

    This report describes the methodology and assumptions used by NREL in quantifying the potential CO2 reductions resulting from more than 140 governments, international organizations, and private-sector representatives pledging to advance the uptake of renewable energy.

  4. Levels of Simplification. The Use of Assumptions, Restrictions, and Constraints in Engineering Analysis.

    ERIC Educational Resources Information Center

    Whitaker, Stephen

    1988-01-01

    Describes the use of assumptions, restrictions, and constraints in solving difficult analytical problems in engineering. Uses the Navier-Stokes equations as examples to demonstrate use, derivations, advantages, and disadvantages of the technique. (RT)

  5. Simplified scheme for entanglement preparation with Rydberg pumping via dissipation

    NASA Astrophysics Data System (ADS)

    Su, Shi-Lei; Guo, Qi; Wang, Hong-Fu; Zhang, Shou

    2015-08-01

    Inspired by recent work [Carr and Saffman, Phys. Rev. Lett. 111, 033607 (2013), 10.1103/PhysRevLett.111.033607], we propose a simplified scheme to prepare the two-atom maximally entangled states via dissipative Rydberg pumping. Compared with the former scheme, the simplified one involves fewer classical laser fields and Rydberg interactions, and the asymmetric Rydberg interactions are avoided. Master equation simulations demonstrate that the fidelity and the Clauser-Horne-Shimony-Holt correlation of the maximally entangled state could reach up to 0.999 and 2.821, respectively, under certain conditions. Furthermore, we extend the underlying physical ideas to the preparation of the three-dimensional entangled state, and the numerical simulations show that, in theory, both the fidelity and the negativity of the desired entanglement could be very close to unity under certain conditions.
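As a toy illustration of the master-equation simulations mentioned above (not the paper's actual multi-atom Rydberg Hamiltonian or jump operators, which are assumptions away from this sketch), the following integrates a single-qubit Lindblad equation whose dissipator drives the system toward a target state, the same mechanism dissipative state preparation relies on:

```python
import numpy as np

def lindblad_rhs(rho, H, L):
    """d(rho)/dt for one jump operator L (hbar = 1, unit decay rate)."""
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

# Toy two-level system: pure decay |1> -> |0>, no Hamiltonian drive.
H = np.zeros((2, 2), dtype=complex)
L = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus

rho = 0.5 * np.eye(2, dtype=complex)            # maximally mixed initial state
dt, steps = 0.01, 2000
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho, H, L)    # explicit Euler step

target = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
fidelity = np.real(np.trace(target @ rho))
print(round(fidelity, 3))   # dissipation pumps the qubit into |0>, fidelity -> 1
```

The stationary state of the dissipator is the target state, so fidelity approaches unity regardless of the initial state; in the actual scheme the engineered dissipation instead targets a two-atom maximally entangled state.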

  6. Simplified slow anti-coincidence circuit for Compton suppression systems.

    PubMed

    Al-Azmi, Darwish

    2008-08-01

    Slow coincidence circuits for anti-coincidence measurements have been considered for use in the Compton suppression technique. The simplified version of the slow circuit has been found to be fast enough and satisfactory, and it allows an easy system setup, particularly with the advantage of the automatic threshold setting of the low-level discrimination. A well-type NaI detector, used as the main detector and surrounded by a plastic guard detector, was arranged to investigate the performance of the Compton suppression spectrometer using the simplified slow circuit. The system has been tested to observe the improvement in the energy spectra for medium- to high-energy gamma-ray photons from terrestrial and environmental samples. PMID:18222698
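The veto logic such an anti-coincidence circuit applies in hardware can be sketched in software: main-detector events are rejected whenever a guard-detector event falls within the resolving-time window. The event times and window below are hypothetical, chosen only to illustrate the gating:

```python
def anti_coincidence(main_times, guard_times, window):
    """Keep main-detector events with no guard-detector event within
    +/- window (times in microseconds). Software analogue of the veto a
    slow anti-coincidence circuit applies; illustrative only."""
    kept = []
    for t in main_times:
        # A linear scan suffices for a sketch; a real pipeline would bisect.
        vetoed = any(abs(t - g) <= window for g in guard_times)
        if not vetoed:
            kept.append(t)
    return kept

main = [1.0, 2.5, 4.0, 7.2, 9.9]   # hypothetical NaI event times (us)
guard = [2.4, 7.0]                 # Compton-scatter hits in the plastic guard
print(anti_coincidence(main, guard, window=0.5))   # -> [1.0, 4.0, 9.9]
```

Events at 2.5 us and 7.2 us are suppressed because the guard fired within 0.5 us of each, which is how the spectrometer removes partial-energy Compton events from the spectrum.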

  7. Simplified Building Models Extraction from Ultra-Light Uav Imagery

    NASA Astrophysics Data System (ADS)

    Küng, O.; Strecha, C.; Fua, P.; Gurdan, D.; Achtelik, M.; Doth, K.-M.; Stumpf, J.

    2011-09-01

    Generating detailed simplified building models such as the ones present on Google Earth is often a difficult and lengthy manual task, requiring advanced CAD software and a combination of ground imagery, LIDAR data and blueprints. Nowadays, UAVs such as the AscTec Falcon 8 have reached the maturity to offer an affordable, fast and easy way to capture large amounts of oblique images covering all parts of a building. In this paper we present a state-of-the-art photogrammetry and visual reconstruction pipeline provided by Pix4D applied to medium resolution imagery acquired by such UAVs. The key element of simplified building models extraction is the seamless integration of the outputs of such a pipeline for a final manual refinement step in order to minimize the amount of manual work.

  8. Simplified analytical model of penetration with lateral loading -- User's guide

    SciTech Connect

    Young, C.W.

    1998-05-01

    The SAMPLL (Simplified Analytical Model of Penetration with Lateral Loading) computer code was originally developed in 1984 to realistically yet economically predict penetrator/target interactions. Since the code's inception, its use has spread throughout the conventional and nuclear penetrating weapons community. During the penetrator/target interaction, the resistance of the material being penetrated imparts both lateral and axial loads on the penetrator. These loads cause changes to the penetrator's motion (kinematics). SAMPLL uses empirically based algorithms, formulated from an extensive experimental data base, to replicate the loads the penetrator experiences during penetration. The lateral loads resulting from angle of attack and trajectory angle of the penetrator are explicitly treated in SAMPLL. The loads are summed and the kinematics calculated at each time step. SAMPLL has been continually improved, and the current version, Version 6.0, can handle cratering and spall effects, multiple target layers, penetrator damage/failure, and complex penetrator shapes. Version 6 uses the latest empirical penetration equations, and also automatically adjusts the penetrability index for certain target layers to account for layer thickness and confinement. This report describes the SAMPLL code, including assumptions and limitations, and includes a user's guide.
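The sum-the-loads-then-update-kinematics loop described above can be sketched generically. SAMPLL's empirical load algorithms are not given in the abstract, so the resistance law and every number below are hypothetical: this is a minimal axial-only Poncelet-style integrator, not the SAMPLL model.

```python
import math

def penetration_depth(m, v0, a, b, dt=1e-5):
    """Explicit time-stepping of a Poncelet-style resistance F = a + b*v**2
    (illustrative stand-in for an empirical target-load model). At each step
    the load is evaluated, summed into an acceleration, and the kinematics
    (velocity, position) are updated."""
    v, x = v0, 0.0
    while v > 0:
        accel = -(a + b * v * v) / m    # axial deceleration from the load
        v += accel * dt
        x += max(v, 0.0) * dt
    return x

# Hypothetical case: 10 kg penetrator at 300 m/s into a soft target.
depth = penetration_depth(m=10.0, v0=300.0, a=5.0e4, b=2.0)
print(round(depth, 2))   # matches the closed form (m/2b)*ln(1 + b*v0^2/a)
```

For this simple law the loop can be checked against the analytic depth, about 3.82 m here; SAMPLL's value lies in doing the same bookkeeping with empirically calibrated lateral and axial loads.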

  9. A simplified approach for the computation of steady two-phase flow in inverted siphons.

    PubMed

    Diogo, A Freire; Oliveira, Maria C

    2016-01-15

    Hydraulic, sanitary, and sulfide control conditions of inverted siphons, particularly in large wastewater systems, can be substantially improved by continuous air injection at the base of the inclined rising branch. This paper presents a simplified approach that was developed for the two-phase flow of the rising branch using the energy equation for a steady pipe flow, based on the average fluid fraction, the observed slippage between phases, and an isothermal assumption. As in a conventional siphon design, open channel steady uniform flow is assumed in the inlet and outlet chambers, corresponding to the wastewater hydraulic characteristics in the upstream and downstream sewers, and the descending branch operates in steady uniform single-phase pipe flow. The proposed approach is tested and compared with data obtained in an experimental siphon setup with two plastic barrels of different diameters operating separately as in a single-barrel siphon. Although the formulations developed are very simple, the results show a good adjustment for the set of parameters used and conditions tested and are promising mainly for sanitary siphons with relatively moderate heights of the ascending branch. PMID:26517278
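A minimal sketch of the kind of steady energy balance the abstract describes: the hydrostatic head of the rising branch is reduced by the average gas fraction, plus a Darcy-Weisbach friction term on the mixture velocity. This assumes a homogeneous no-slip mixture (the paper additionally accounts for observed slippage between phases), and the function name and all numbers are hypothetical:

```python
import math

def rising_branch_head(dz, length, d, q_w, q_a, alpha, f=0.02, g=9.81):
    """Required driving head (m of water) for an air-injected rising branch.
    Homogeneous-mixture sketch: static head scaled by the liquid fraction
    (1 - alpha) plus Darcy-Weisbach friction on the mixture velocity.
    Not the paper's exact formulation."""
    area = math.pi * d ** 2 / 4
    v_mix = (q_w + q_a) / area                       # homogeneous mixture velocity
    static = (1 - alpha) * dz                        # reduced hydrostatic head
    friction = f * (length / d) * v_mix ** 2 / (2 * g) * (1 - alpha)
    return static + friction

# Hypothetical siphon: 3 m rise, 10 m pipe, 0.2 m diameter,
# 0.03 m^3/s of wastewater with 0.01 m^3/s of injected air (alpha = 0.25).
h = rising_branch_head(dz=3.0, length=10.0, d=0.2, q_w=0.03, q_a=0.01, alpha=0.25)
print(round(h, 3))
```

The computed head is below the single-phase static head of 3 m, which is the qualitative point of air injection in the rising branch: the lighter mixture is easier to lift.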

  10. A statistical analysis of the dependency of closure assumptions in cumulus parameterization on the horizontal resolution

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    1994-01-01

    Simulated data from the UCLA cumulus ensemble model are used to investigate the quasi-universal validity of closure assumptions used in existing cumulus parameterizations. A closure assumption is quasi-universally valid if it is sensitive neither to convective cloud regimes nor to horizontal resolutions of large-scale/mesoscale models. The dependency of three types of closure assumptions, as classified by Arakawa and Chen, on the horizontal resolution is addressed in this study. Type I is the constraint on the coupling of the time tendencies of large-scale temperature and water vapor mixing ratio. Type II is the constraint on the coupling of cumulus heating and cumulus drying. Type III is a direct constraint on the intensity of a cumulus ensemble. The macroscopic behavior of simulated cumulus convection is first compared with the observed behavior in view of Type I and Type II closure assumptions using 'quick-look' and canonical correlation analyses. It is found that they are statistically similar to each other. The three types of closure assumptions are further examined with simulated data averaged over selected subdomain sizes ranging from 64 to 512 km. It is found that the dependency of Type I and Type II closure assumptions on the horizontal resolution is very weak and that Type III closure assumption is somewhat dependent upon the horizontal resolution. The influences of convective and mesoscale processes on the closure assumptions are also addressed by comparing the structures of canonical components with the corresponding vertical profiles in the convective and stratiform regions of cumulus ensembles analyzed directly from simulated data. The implication of these results for cumulus parameterization is discussed.
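The canonical correlation analysis used in the study can be illustrated with the standard QR/SVD construction: canonical correlations are the singular values of the product of orthonormal bases for the two centered data sets. The synthetic data below merely stand in for the simulated large-scale fields; they are not the study's data.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two data sets (rows = samples),
    via centering, QR, and an SVD: the textbook construction, shown
    only to illustrate the analysis technique."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))       # shared latent signal in both data sets
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
r = canonical_correlations(X, Y)
print(r.round(2))   # leading correlation close to 1, second near 0
```

A strong leading canonical correlation between, say, cumulus heating and drying profiles is the kind of evidence used to judge whether a Type II closure assumption holds across cloud regimes.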

  11. NGNP: High Temperature Gas-Cooled Reactor Key Definitions, Plant Capabilities, and Assumptions

    SciTech Connect

    Wayne Moe

    2013-05-01

    This document provides key definitions, plant capabilities, and inputs and assumptions related to the Next Generation Nuclear Plant to be used in ongoing efforts related to the licensing and deployment of a high temperature gas-cooled reactor. These definitions, capabilities, and assumptions were extracted from a number of NGNP Project sources such as licensing related white papers, previously issued requirement documents, and preapplication interactions with the Nuclear Regulatory Commission (NRC).

  12. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... defined in 12 CFR 303.2(g). (d) Evidence of assumption. The receipt by the FDIC of an accurate... depository institution in default, as defined in section 3(x)(1) of the FDI Act (12 U.S.C. 1813(x)(1)), and... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C....

  13. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... defined in 12 CFR 303.2(g). (d) Evidence of assumption. The receipt by the FDIC of an accurate... depository institution in default, as defined in section 3(x)(1) of the FDI Act (12 U.S.C. 1813(x)(1)), and... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C....

  14. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... defined in 12 CFR 303.2(g). (d) Evidence of assumption. The receipt by the FDIC of an accurate... depository institution in default, as defined in section 3(x)(1) of the FDI Act (12 U.S.C. 1813(x)(1)), and... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C....

  15. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... defined in 12 CFR 303.2(g). (d) Evidence of assumption. The receipt by the FDIC of an accurate... depository institution in default, as defined in section 3(x)(1) of the FDI Act (12 U.S.C. 1813(x)(1)), and... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C....

  16. Dark matter phenomenology of GUT inspired simplified models

    NASA Astrophysics Data System (ADS)

    Arcadi, Giorgio

    2016-05-01

    We discuss some aspects of dark matter phenomenology, in particular those related to direct detection and collider searches, in models in which a fermionic dark matter candidate interacts with SM fermions through spin-1 mediators. Contrary to conventional simplified models, we consider fixed assignments of the couplings of the (Z′) mediator, according to theoretically motivated embeddings. This allows us to predict signals at future experimental facilities which can be used to test and possibly discriminate between different realizations.

  17. Simplifying the writing process for the novice writer.

    PubMed

    Redmond, Mary Connie

    2002-10-01

    Nurses take responsibility for reading information to update their professional knowledge and to meet relicensure requirements. However, nurses are less enthusiastic about writing for professional publication. This article explores the reluctance of nurses to write, the reasons why writing for publication is important to the nursing profession, the importance of mentoring to potential writers, and basic information about simplifying the writing process for novice writers. PMID:12384898

  18. Velocity profiles in a hot jet by simplified RELIEF

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Raman Excitation + Laser Induced Electronic Fluorescence (RELIEF) is a double resonance velocimetry technique in which oxygen molecules are vibrationally excited via stimulated Raman scattering at a specific location within a flow field. After suitable time delay, typically 1-10 microseconds, the displacement of the tagged molecules is determined by laser induced fluorescence imaging. Providing support for the installation of simplified RELIEF flow tagging instrumentation at NASA LaRC was the principal goal of this research.

  19. The pentabox Master Integrals with the Simplified Differential Equations approach

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Costas G.; Tommasini, Damiano; Wever, Christopher

    2016-04-01

    We present the calculation of massless two-loop Master Integrals relevant to five-point amplitudes with one off-shell external leg, and derive the complete set of planar Master Integrals with five on-mass-shell legs that contribute to many 2 → 3 amplitudes of interest at the LHC, for instance three-jet production and γ, V, H + 2 jets, based on the Simplified Differential Equations approach.
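Schematically, the Simplified Differential Equations approach parametrizes the external momenta by a single variable $x$ so that the vector of Master Integrals $G(x,\epsilon)$ satisfies a first-order system in $x$; in the favorable (Fuchsian, $\epsilon$-factorized) cases this takes the form

```latex
\frac{\partial}{\partial x}\, G(x,\epsilon)
  \;=\; \epsilon \sum_i \frac{M_i}{x - l_i}\, G(x,\epsilon),
```

with constant matrices $M_i$ and letters $l_i$, solved order by order in $\epsilon$ in terms of Goncharov polylogarithms. The specific form shown is an assumption for illustration; the systems treated in the paper may differ in detail.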

  20. Simplified renormalizable T' model for tribimaximal mixing and Cabibbo angle

    NASA Astrophysics Data System (ADS)

    Frampton, Paul H.; Kephart, Thomas W.; Matsuzaki, Shinya

    2008-10-01

    In a simplified renormalizable model where the neutrinos have Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixings tan²θ12 = 1/2, θ13 = 0, θ23 = π/4 and with flavor symmetry T′ there is a corresponding prediction where the quarks have Cabibbo-Kobayashi-Maskawa (CKM) mixings tan 2Θ12 = √2/3, Θ13 = 0, Θ23 = 0.
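Taking the quoted relations to be tan²θ12 = 1/2 for the PMNS angle and tan 2Θ12 = √2/3 for the CKM angle (an assumption about the intended notation of the abstract), a quick numerical check recovers the familiar angles:

```python
import math

# tan^2(theta_12) = 1/2 is the tribimaximal solar angle (sin^2 = 1/3);
# tan(2*Theta_12) = sqrt(2)/3 is read here as the Cabibbo-angle prediction.
theta12 = math.atan(math.sqrt(0.5))            # PMNS solar mixing angle
Theta12 = 0.5 * math.atan(math.sqrt(2) / 3)    # CKM 1-2 mixing angle

print(round(math.degrees(theta12), 2))   # ~35.26 deg, i.e. sin^2(theta12) = 1/3
print(round(math.degrees(Theta12), 2))   # ~12.62 deg, near the measured ~13 deg
```

The neutrino relation reproduces the tribimaximal value sin²θ12 = 1/3 exactly, and the quark relation lands close to the measured Cabibbo angle, which is the point of the T′ prediction.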