Salinas, Cristian A; Searle, Graham E; Gunn, Roger N
2015-02-01
Reference tissue models have gained significant traction over the last two decades as the methods of choice for the quantification of brain positron emission tomography data because they balance quantitative accuracy with less invasive procedures. The principal advantage is the elimination of the need to perform arterial cannulation of the subject to measure blood and metabolite concentrations for input function generation. In particular, the simplified reference tissue model (SRTM) has been widely adopted as it uses a simplified model configuration with only three parameters that typically produces good fits to the kinetic data and a stable parameter estimation process. However, the model's simplicity and its ability to generate good fits to the data, even when the model assumptions are not met, can lead to misplaced confidence in binding potential (BPND) estimates. Computer simulations were used to study the bias introduced in BPND estimates as a consequence of violating each of the four core SRTM model assumptions. Violation of each model assumption led to bias in BPND (both over and underestimation). Careful assessment of the bias in SRTM BPND should be performed for new tracers and applications so that an appropriate decision about its applicability can be made. PMID:25425078
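As a minimal numerical sketch of the SRTM operational equation referenced above (the parameter values and the reference-region curve below are illustrative assumptions, not the paper's simulation setup):

```python
import numpy as np

# SRTM sketch: C_T(t) = R1*C_R(t) + k2*(1 - R1/(1+BPND)) * [C_R (*) exp(-k2*t/(1+BPND))]
# where (*) denotes convolution. All numeric values here are toy assumptions.

def srtm_target_curve(t, c_ref, r1, k2, bp_nd):
    """Target-tissue time-activity curve under SRTM (discrete convolution)."""
    dt = t[1] - t[0]
    k2a = k2 / (1.0 + bp_nd)                       # apparent efflux rate constant
    kernel = np.exp(-k2a * t)
    conv = np.convolve(c_ref, kernel)[:len(t)] * dt
    return r1 * c_ref + k2 * (1.0 - r1 / (1.0 + bp_nd)) * conv

t = np.linspace(0.0, 90.0, 901)                       # minutes
c_ref = 10.0 * (np.exp(-0.1 * t) - np.exp(-1.0 * t))  # toy reference-region curve
c_t = srtm_target_curve(t, c_ref, r1=1.0, k2=0.15, bp_nd=1.5)
```

With R1 = 1 and BPND = 0 the target curve reduces to the reference curve, which is a quick sanity check on the implementation.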
SU-E-T-293: Simplifying Assumption for Determining Sc and Sp
King, R; Cheung, A; Anderson, R; Thompson, G; Fletcher, M
2014-06-01
Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc). Scp(mlc=jaw) is measured in phantom and Sc(jaw) is measured in air, with Sp=Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist, and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions. If the output functions Sc(jaw) and Sp(mlc) were equal to each other, and thus each equal to Scp(mlc=jaw)^0.5, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate the assumption that Sc=Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw)^0.5. Square-field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29. Both Sc and Sp functions were then evaluated as to their ability to predict these measurements. Results: Both methods produced qualitatively similar results with <4% error for all cases and >3% error in 1 case. The conventional method produced 2 cases with >2% error, while the square-root method produced only 1 such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied, the simplifying assumption that Sc = Sp is justified.
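The two decompositions can be compared directly in a short numerical sketch (the diagonal Scp and in-air Sc values below are toy numbers, not the paper's 6 MV measurements):

```python
import numpy as np

# Toy diagonal measurements Scp(mlc=jaw) and in-air Sc(jaw); illustrative only.
scp_diag = np.array([0.93, 0.97, 1.00, 1.03, 1.04])    # 2x2 ... 29x29 fields
sc_in_air = np.array([0.96, 0.98, 1.00, 1.015, 1.02])

# Conventional decomposition: Sp = Scp / Sc
sp_conv = scp_diag / sc_in_air

# Simplifying assumption: Sc = Sp = Scp(mlc=jaw)**0.5
sc_sqrt = np.sqrt(scp_diag)

def predict_scp(sc, sp, i_jaw, i_mlc):
    """Predict Scp(mlc, jaw) as the separable product Sc(jaw) * Sp(mlc)."""
    return sc[i_jaw] * sp[i_mlc]
```

Both factorizations reproduce the measured diagonal Scp(mlc=jaw) exactly by construction; they differ only in their off-diagonal predictions, which is what the benchmark measurements test.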
English as an Additional Language: Assumptions and Challenges
ERIC Educational Resources Information Center
Mistry, Malini; Sood, Krishan
2010-01-01
The number of pupils who have English as an Additional Language (EAL) in our English schools is increasing with an increased influx of migrants from Europe. This paper investigates how schools are addressing the needs of these children. Using survey and interviews with teachers and paraprofessionals (teaching assistants and bilingual assistants),…
Feasibility of a simplified fuel additive evaluation protocol
Lister, S.J.; Hunzinger, R.D.; Taghizadeh, A.
1998-12-31
This report describes the work carried out during the four stages of the first phase of a project that involved the determination of the feasibility of replacing the Association of American Railroads Recommended Practice (ARRP) 503 protocol for testing diesel fuel oil additives with a new procedure using the single cylinder research engine SCRE-251 as the laboratory test engine, which tests for both engine performance and emissions compliance. The report begins with a review of the literature on fuel additive testing, then reviews the new US Environmental Protection Agency regulations regarding locomotive diesel emissions. This is followed by a review of the ARRP 503 protocol and the proposed new procedure, a comparison of the ARRP 503 test engines and the SCRE-251, and a study of the SCRE-251's ability to represent a multi-cylinder medium-speed diesel engine. Appendices include fuel additive manufacturers' information sheets.
Gordon, Christopher J; Herr, David W; Gennings, Chris; Graff, Jaimie E; McMurray, Matthew; Stork, LeAnna; Coffey, Todd; Hamm, Adam; Mack, Cina M
2006-01-01
Most toxicity data are based on studies using single compounds. This study assessed if there is an interaction between mixtures of the anticholinesterase insecticides chlorpyrifos (CHP) and carbaryl (CAR) using hypothermia and cholinesterase (ChE) inhibition as toxicological endpoints. Core temperature (T(c)) was continuously monitored by radiotelemetry in adult Long-Evans rats administered CHP at doses ranging from 0 to 50mg/kg and CAR doses of 0-150 mg/kg. The temperature index (TI), an integration of the change in T(c) over a 12h period, was quantified. Effects of mixtures of CHP and CAR in 2:1 and 1:1 ratios on the TI were examined and the data analyzed using a statistical model designed to assess significant departures from additivity for chemical mixtures. CHP and CAR elicited a marked hypothermia and dose-related decrease in the TI. The TI response to a 2:1 ratio of CHP:CAR was significantly less than that predicted by additivity. The TI response to a 1:1 ratio of CHP and CAR was not significantly different from the predicted additivity. Plasma and brain ChE activity were measured 4h after dosing with CHP, CAR, and mixtures in separate groups of rats. There was a dose-additive interaction for the inhibition of brain ChE for the 2:1 ratio, but an antagonistic effect for the 1:1 ratio. The 2:1 and 1:1 mixtures had an antagonistic interaction on plasma ChE. Overall, the departures from additivity for the physiological (i.e., temperature) and biochemical (i.e., ChE inhibition) endpoints for the 2:1 and 1:1 mixtures studies did not coincide as expected. An interaction between CHP and CAR appears to depend on the ratio of compounds in the mixture as well as the biological endpoint. PMID:16182429
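In the spirit of the mixture analysis above, a minimal dose-additivity check can be sketched with the Loewe interaction index. This is a generic formulation, not the authors' statistical model, and the ED50 values in the example are hypothetical, not measured CHP/CAR potencies:

```python
# Loewe additivity sketch: for a dose pair (d1, d2) that jointly produces a
# chosen reference effect, the interaction index is d1/ED50_1 + d2/ED50_2.
# Index = 1 indicates dose additivity, < 1 synergy, > 1 antagonism.

def loewe_index(d1, d2, ed50_1, ed50_2):
    return d1 / ed50_1 + d2 / ed50_2

# Example (hypothetical potencies): a 2:1 mixture at half of each compound's
# ED50 is exactly additive.
index = loewe_index(25.0, 12.5, 50.0, 25.0)
```

The same index can be evaluated over a grid of mixture ratios, which is roughly what a departure-from-additivity analysis formalizes statistically.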
NASA Astrophysics Data System (ADS)
Teng, Chong; Ashby, Kathryn; Phan, Nam; Pal, Deepankar; Stucker, Brent
2016-08-01
The objective of this study was to provide guidance on material specifications for powders used in laser powder bed fusion based additive manufacturing (AM) processes. The methodology was to investigate how different material property assumptions in a simulation affect meltpool prediction and, by corollary, how different material properties affect meltpool formation in AM processes. The sensitivity of meltpool variations to each material property can be used as a guide to help drive future research and to help prioritize material specifications in requirements documents. By identifying which material properties have the greatest effect on outcomes, metrology can be tailored to focus on those properties which matter most, thus reducing costs by eliminating unnecessary testing and property characterizations. Furthermore, this sensitivity study provides insight into which properties require more accurate measurements, thus motivating development of new metrology methods to measure those properties accurately.
Swaminathan, M
1997-01-01
Indian women do not have to be told the benefits of breast feeding or "rescued from the clutches of wicked multinational companies" by international agencies. There is no proof that breast feeding has declined in India; in fact, a 1987 survey revealed that 98% of Indian women breast feed. Efforts to promote breast feeding among the middle classes rely on such initiatives as the "baby friendly" hospital where breast feeding is promoted immediately after birth. This ignores the 76% of Indian women who give birth at home. Blaming this unproved decline in breast feeding on multinational companies distracts attention from more far-reaching and intractable effects of social change. While the Infant Milk Substitutes Act is helpful, it also deflects attention from more pressing issues. Another false assumption is that Indian women are abandoning breast feeding to comply with the demands of employment, but research indicates that most women give up employment for breast feeding, despite the economic cost to their families. Women also seek work in the informal sector to secure the flexibility to meet their child care responsibilities. Instead of being concerned about "teaching" women what they already know about the benefits of breast feeding, efforts should be made to remove the constraints women face as a result of their multiple roles and to empower them with the support of families, governmental policies and legislation, employers, health professionals, and the media. PMID:12321627
Sensitivity Analysis Without Assumptions
VanderWeele, Tyler J.
2016-01-01
Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder. PMID:26841057
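The bounding factor described above can be sketched in a few lines. The parameter names (rr_eu for the exposure-confounder association, rr_ud for the confounder-outcome association) follow common sensitivity-analysis notation and are assumptions of this sketch:

```python
# Sensitivity-analysis sketch with the two parameters described above.
# rr_eu: risk ratio relating the exposure to the unmeasured confounder
# rr_ud: risk ratio relating the confounder to the outcome

def bounding_factor(rr_eu, rr_ud):
    """Maximum factor by which confounding of this strength can distort
    an observed risk ratio."""
    return (rr_eu * rr_ud) / (rr_eu + rr_ud - 1.0)

def can_explain_away(observed_rr, rr_eu, rr_ud):
    """True if a confounder of this strength could fully explain the
    observed association (reduce the risk ratio to 1)."""
    return bounding_factor(rr_eu, rr_ud) >= observed_rr
```

For example, a confounder doubly associated with both exposure and outcome (rr_eu = rr_ud = 2) yields a bounding factor of 4/3, so it could explain away an observed risk ratio of 1.3 but not one of 1.5.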
Rearchitecting IT: Simplify. Simplify
ERIC Educational Resources Information Center
Panettieri, Joseph C.
2006-01-01
Simplifying and securing an IT infrastructure is not easy. It frequently requires rethinking years of hardware and software investments, and a gradual migration to modern systems. Even so, writes the author, universities can take six practical steps to success: (1) Audit software infrastructure; (2) Evaluate current applications; (3) Centralize…
Zoumalan, Christopher I; Roostaeian, Jason
2016-01-01
Blepharoplasty remains one of the most common aesthetic procedures performed today. Its popularity stems partly from the ability to consistently make significant improvements in facial aesthetics with a relatively short operation that carries an acceptable risk profile. In this article, the authors attempt to simplify the approach to both upper and lower lid blepharoplasty and provide an algorithm based on the individual findings for any given patient. The recent trend with both upper and lower lid blepharoplasty has been toward greater volume preservation and at times volume augmentation. A simplified approach to upper lid blepharoplasty focuses on removal of excess skin and judicious removal of periorbital fat. Avoidance of a hollow upper sulcus has been emphasized and the addition of volume with either fat grafting or fillers can be considered. Lower lid blepharoplasty can use a transcutaneous or a transconjunctival approach to address herniated fat pads while blending the lid-cheek junction through release of the orbitomalar ligament and volume augmentation with fat (by repositioning and/or grafting) or injectable fillers. Complications with upper lid blepharoplasty are typically minimal, particularly with conservative skin removal and volume preservation techniques. Lower lid blepharoplasty, conversely, can lead to more serious complications, including lid malposition, and therefore should be approached with great caution. Nevertheless, through an algorithmic approach that meets the needs of each individual patient, the approach to blepharoplasty may be simplified with consistent and predictable results. PMID:26710052
Simplified Vicarious Radiometric Calibration
NASA Technical Reports Server (NTRS)
Stanley, Thomas; Ryan, Robert; Holekamp, Kara; Pagnutti, Mary
2010-01-01
ground target areas having different reflectance values. The target areas can be natural or artificial and must be large enough to minimize adjacent-pixel contamination effects. The radiative coupling between the atmosphere and the terrain needs to be approximately the same for the two targets. This condition can be met for relatively uniform backgrounds when the distance between the targets is within a few hundred meters. For each target area, the radiance leaving the ground in the direction of the satellite is measured with a radiometrically calibrated spectroradiometer. Using the radiance measurements from the two targets, atmospheric adjacency and atmospheric scattering effects can be subtracted, thereby eliminating many assumptions about the atmosphere and the radiative interaction between the atmosphere and the terrain. In addition, the radiometrically calibrated spectroradiometer can be used with a known reflectance target to estimate atmospheric transmission and diffuse-to-global ratios without the need for ancillary sun photometers. Several comparisons between the simplified method and traditional techniques were found to agree within a few percent. Hence, the simplified method reduces the overall complexity of performing vicarious calibrations and can serve as a method for validating traditional radiative transfer models.
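The two-target differencing at the heart of the simplified method amounts to a two-equation linear solve. The symbols below are generic assumptions for illustration, not the authors' notation:

```python
# Sketch: for each target, L_sensor = L_path + t * L_ground, with the same
# path radiance L_path and transmittance t for both targets. Subtracting the
# two equations removes the additive atmospheric term.

def solve_two_targets(l_sensor_1, l_sensor_2, l_ground_1, l_ground_2):
    """Recover transmittance t and path radiance from paired measurements."""
    t = (l_sensor_1 - l_sensor_2) / (l_ground_1 - l_ground_2)
    l_path = l_sensor_1 - t * l_ground_1
    return t, l_path

# Toy check: with t = 0.8 and L_path = 5, ground radiances of 50 and 20
# produce at-sensor radiances of 45 and 21.
t, l_path = solve_two_targets(45.0, 21.0, 50.0, 20.0)
```

This is why the method needs two targets of different reflectance: with only one, t and L_path cannot be separated.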
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
Huggins, J.K.
1994-12-31
The use of computers, like any technological activity, is not content-neutral. Users of computers constantly interact with assumptions regarding worthwhile activity which are embedded in any computing system. Directly questioning these assumptions in the context of computing allows us to develop an understanding of responsible computing.
ERIC Educational Resources Information Center
Baskas, Richard S.
2011-01-01
The purpose of this study is to examine Knowles' theory of andragogy and his six assumptions of how adults learn while providing evidence to support two of his assumptions based on the theory of andragogy. As no single theory explains how adults learn, it can best be assumed that adults learn through the accumulation of formal and informal…
Mathematical models of Ebola-Consequences of underlying assumptions.
Feng, Zhilan; Zheng, Yiqiang; Hernandez-Ceron, Nancy; Zhao, Henry; Glasser, John W; Hill, Andrew N
2016-07-01
Mathematical models have been used to study Ebola disease transmission dynamics and control for the recent epidemics in West Africa. Many of the models used in these studies are based on the model of Legrand et al. (2007), and most failed to accurately project the outbreak's course (Butler, 2014). Although there could be many reasons for this, including incomplete and unreliable data on Ebola epidemiology and lack of empirical data on how disease-control measures quantitatively affect Ebola transmission, we examine the underlying assumptions of the Legrand model, and provide alternate formulations that are simpler and provide additional information regarding the epidemiology of Ebola during an outbreak. We developed three models with different assumptions about disease stage durations, one of which simplifies to the Legrand model while the others have more realistic distributions. Control and basic reproduction numbers for all three models are derived and shown to provide threshold conditions for outbreak control and prevention. PMID:27130854
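As a baseline for the stage-duration assumptions discussed above, here is a minimal SEIR sketch with exponentially distributed stage durations, the assumption that the alternate formulations relax. Parameter values are illustrative, not fitted to any Ebola outbreak:

```python
# Minimal SEIR integration (forward Euler). Exponential stage durations are
# implicit in the constant per-capita rates sigma (latency) and gamma
# (removal). beta, sigma, gamma below are illustrative assumptions.

def seir(beta=0.3, sigma=1 / 9.0, gamma=1 / 7.0,
         n=1e6, i0=10.0, days=300, dt=0.1):
    s, e, i, r = n - i0, 0.0, i0, 0.0
    for _ in range(int(days / dt)):
        inf_flow = beta * s * i / n * dt   # S -> E (new infections)
        lat_flow = sigma * e * dt          # E -> I (end of latency)
        rem_flow = gamma * i * dt          # I -> removed
        s -= inf_flow
        e += inf_flow - lat_flow
        i += lat_flow - rem_flow
        r += rem_flow
    return s, e, i, r

s, e, i, r = seir()
r0 = 0.3 * 7.0   # basic reproduction number beta/gamma = 2.1 for these rates
```

Because each flow is subtracted from one compartment and added to another, the total population is conserved; with R0 > 1 the epidemic takes off, which the threshold conditions in the paper formalize.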
Testing Our Fundamental Assumptions
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-06-01
fundamental assumptions. A recent focus set in the Astrophysical Journal Letters, titled Focus on Exploring Fundamental Physics with Extragalactic Transients, consists of multiple published studies doing just that. Testing General Relativity: Several of the articles focus on the 4th point above. By assuming that the delay in photon arrival times is only due to the gravitational potential of the Milky Way, these studies set constraints on the deviation of our galaxy's gravitational potential from what GR would predict. The study by He Gao et al. uses the different photon arrival times from gamma-ray bursts to set constraints at eV-GeV energies, and the study by Jun-Jie Wei et al. complements this by setting constraints at keV-TeV energies using photons from high-energy blazar emission. Photons or neutrinos from different extragalactic transients each set different upper limits on delta gamma, the post-Newtonian parameter, vs. particle energy or frequency. This is a test of Einstein's equivalence principle: if the principle is correct, delta gamma would be exactly zero, meaning that photons of different energies move at the same velocity through a vacuum. [Tingay & Kaplan 2016] S.J. Tingay & D.L. Kaplan make the case that measuring the time delay of photons from fast radio bursts (FRBs; transient radio pulses that last only a few milliseconds) will provide even tighter constraints if we are able to accurately determine distances to these FRBs. And Adi Musser argues that the large-scale structure of the universe plays an even greater role than the Milky Way's gravitational potential, allowing for even stricter testing of Einstein's equivalence principle. The ever-narrower constraints from these studies all support GR as a correct set of rules through which to interpret our universe. Other Tests of Fundamental Physics: In addition to the above tests, Xue-Feng Wu et al. show that FRBs can be used to provide severe constraints on the rest mass of the photon, and S. Croft et al.
even touch on what we
Teaching Practices: Reexamining Assumptions.
ERIC Educational Resources Information Center
Spodek, Bernard, Ed.
This publication contains eight papers, selected from papers presented at the Bicentennial Conference on Early Childhood Education, that discuss different aspects of teaching practices. The first two chapters reexamine basic assumptions underlying the organization of curriculum experiences for young children. Chapter 3 discusses the need to…
Neuron Model with Simplified Memristive Ionic Channels
NASA Astrophysics Data System (ADS)
Hegab, Almoatazbellah M.; Salem, Noha M.; Radwan, Ahmed G.; Chua, Leon
2015-06-01
A simplified neuron model is introduced to mimic the action potential generated by the famous Hodgkin-Huxley equations by using the genetic optimization algorithm. Comparison with different neuron models is investigated, and it is confirmed that the sodium and potassium channels in our simplified neuron model are made out of memristors. In addition, the channel equations in the simplified model may be adjusted to introduce a simplified memristor model that is in accordance with the theoretical conditions of the memristive systems.
Stelmach, Ewelina; Szymczycha-Madeja, Anna; Pohl, Pawel
2016-04-15
A direct analysis of instant coffee brews with HR-CS-FAAS spectrometry to determine the total Ca, Fe, Mg and Mn content has been developed and validated. The proposed method is simple and fast and delivers good analytical performance: its accuracy is within -3% to 3%, its precision 2-3%, and its detection limits 0.03, 0.04, 0.004 and 0.01 mg l(-1) for Ca, Fe, Mg and Mn, respectively. In addition, Ca, Fe, Mg and Mn bioaccessibility in instant coffee brews was measured by means of in vitro gastrointestinal digestion with the use of simulated gastric and intestinal juice solutions. Absorption of metals in the intestinal villi was simulated by means of ultrafiltration over a semi-permeable membrane with a molecular weight cut-off of 5 kDa. Ca, Fe, Mg and Mn concentrations in permeates of instant coffee gastrointestinal incubates were measured with HR-CS-FAAS spectrometry. PMID:26616965
Impact of unseen assumptions on communication of atmospheric carbon mitigation options
NASA Astrophysics Data System (ADS)
Elliot, T. R.; Celia, M. A.; Court, B.
2010-12-01
With the rapid access and dissemination of information made available through online and digital pathways, there is need for a concurrent openness and transparency in communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes, reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publically available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of
Bounds on the microanalyzer array assumption
NASA Astrophysics Data System (ADS)
Vaughn, Israel J.; Alenin, Andrey S.; Tyo, J. Scott
2016-05-01
Micropolarizer arrays are occasionally used in partial Stokes, full Stokes, and Mueller matrix polarimeters. When treating modulated polarimeters as linear systems, specific assumptions are made about the Dirac delta functional forms generated in the channel space by micropolarizer arrays. These assumptions are 1) infinitely fine sampling both spatially and temporally and 2) infinite array sizes. When these assumptions are lifted and the physical channel shapes are computed, channel shapes become dependent on both the physical pixel area and shape, as well as the array size. We show that under certain circumstances the Dirac delta function approximation is not valid, and give some bounding terms to compute when the approximation is valid, i.e., which array and pixel sizes must be used for the Dirac delta function approximation to hold. Additionally, we show how the physical channel shape changes as a function of array and pixel size, for a conventional 0°, 45°, -45°, 90° superpixel micropolarizer array configuration.
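As a hedged one-dimensional illustration of why finite array size broadens the channels (a sketch of the effect, not the authors' computation): embedding a finite alternating-analyzer pattern in a larger field and taking its spectrum yields a finite-width Dirichlet-kernel peak at the carrier frequency rather than a Dirac delta.

```python
import numpy as np

# 1D sketch: an alternating 0/90-degree analyzer pattern modulates the s1
# Stokes component at the Nyquist frequency (cos(2*theta) = +1, -1, ...).
# Zero-padding models a finite array embedded in a larger scene; the
# resulting channel is a finite-width Dirichlet kernel, not a delta.

def channel_spectrum(n_superpixels, pad_factor=8):
    pattern = np.tile([1.0, -1.0], n_superpixels)   # cos(2*theta) weights
    padded = np.zeros(2 * n_superpixels * pad_factor)
    padded[:len(pattern)] = pattern
    return np.abs(np.fft.fft(padded))

spec = channel_spectrum(8)   # 16-pixel array embedded in a 128-sample field
```

The peak sits at the Nyquist bin of the pattern, but its energy leaks into neighboring bins; as the array grows, the mainlobe narrows toward the Dirac delta approximation, which is the limiting behavior the bounds above quantify.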
Sampling Assumptions in Inductive Generalization
ERIC Educational Resources Information Center
Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.
2012-01-01
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated. Previous…
Stealth Supersymmetry simplified
NASA Astrophysics Data System (ADS)
Fan, JiJi; Krall, Rebecca; Pinner, David; Reece, Matthew; Ruderman, Joshua T.
2016-07-01
In Stealth Supersymmetry, bounds on superpartners from direct searches can be notably weaker than in standard supersymmetric scenarios, due to suppressed missing energy. We present a set of simplified models of Stealth Supersymmetry that motivate 13 TeV LHC searches. We focus on simplified models within the Natural Supersymmetry framework, in which the gluino, stop, and Higgsino are assumed to be lighter than other superpartners. Our simplified models exhibit novel decay patterns that differ significantly from topologies of the Minimal Supersymmetric Standard Model, with and without R-parity. We determine limits on stops and gluinos from searches at the 8 TeV LHC. Existing searches constitute a powerful probe of Stealth Supersymmetry gluinos with certain topologies. However, we identify simplified models where the gluino can be considerably lighter than 1 TeV. Stops are significantly less constrained in Stealth Supersymmetry than the MSSM, and we have identified novel stop decay topologies that are completely unconstrained by existing LHC searches.
Learning Assumptions for Compositional Verification
NASA Technical Reports Server (NTRS)
Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)
2002-01-01
Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.
Simplified analysis of a generalized bias test for fabrics with two families of inextensible fibres
NASA Astrophysics Data System (ADS)
Cuomo, M.; dell'Isola, F.; Greco, L.
2016-06-01
Two tests for woven fabrics with orthogonal fibres are examined using simplified kinematic assumptions. The aim is to analyse how different constitutive assumptions may affect the response of the specimen. The fibres are considered inextensible, and the kinematics of 2D continua with inextensible chords due to Rivlin is adopted. In addition to two forms of strain energy depending on the shear deformation, two forms of energy depending on the gradient of shear are also examined. It is shown that this energy can account for the bending of the fibres. In addition to the standard bias extension test, a modified test has been examined, in which the head of the specimen is rotated rather than translated. In this case more bending occurs, so that the results of the simulations carried out with the different energy models differ more than what has been found for the BE test.
Can Computers Simplify Admissions?
ERIC Educational Resources Information Center
Bruker, Robert M.
1978-01-01
Based on experience with a simplified admissions concept, Southern Illinois University is satisfied that the admissions process has been made easier for prospective students, high school counselors, and admissions staff. The computer does not make decisions regarding admission of a student, but it reduces workloads for everyone concerned. (Author)
Faulty assumptions for repository requirements
Sutcliffe, W G
1999-06-03
Long-term performance requirements for a geologic repository for spent nuclear fuel and high-level waste are based on assumptions concerning water use and subsequent deaths from cancer due to ingesting water contaminated with radioisotopes ten thousand years in the future. This paper argues that the assumptions underlying these requirements are faulty for a number of reasons. First, in light of the inevitable technological progress, including efficient desalination of water, over the next ten thousand years, it is inconceivable that a future society would drill for water near a repository. Second, even today we would not use water without testing its purity. Third, today many types of cancer are curable, and with the rapid progress in medical technology in general, and the prevention and treatment of cancer in particular, it is improbable that cancer caused by ingesting contaminated water will be a significant killer in the far future. This paper reviews the performance requirements for geological repositories and comments on the difficulties in proving compliance in the face of inherent uncertainties. The already tiny long-term risk posed by a geologic repository is presented and contrasted with contemporary everyday risks. A number of examples of technological progress, including cancer treatments, are advanced. The real and significant costs resulting from the overly conservative requirements are then assessed. Examples are given of how money (and political capital) could be put to much better use to save lives today and in the future. It is concluded that although a repository represents essentially no long-term risk, monitored retrievable dry storage (above or below ground) is the current best alternative for spent fuel and high-level nuclear waste.
Assumptions of the QALY procedure.
Carr-Hill, R A
1989-01-01
The Quality Adjusted Life Year (QALY) has been proposed as a useful index for those managing the provision of health care because it enables the decision-maker to compare the 'value' of different health care programmes and in a way which, potentially at least, reflects social preferences about the appropriate pattern of provision. The index depends on a combination of a measure of morbidity and the risk of mortality. Methodological debate has tended to concentrate on the technicalities of producing a scale of health; and philosophical argument has concentrated on the ethics of interpersonal comparison. There is little recognition of the fragility of the theoretical assumptions underpinning the proposed combination of morbidity and risk of mortality. The context in which the proposed indices are being developed is examined in Section 2. Whilst most working in the field of health measurement eschew over-simplification, it is clear that the application of micro-economics to management is greatly facilitated if a single index can be agreed. The various approaches to combining morbidity and mortality are described in Section 3. The crucial assumptions concern the measurement and valuation of morbidity; the procedures used for scaling morbidity with mortality; and the role of risk. The nature of the valuations involved are examined in Section 4. It seems unlikely that they could ever be widely acceptable; the combination with death and perfect health poses particular problems; and aggregation across individuals compounds the problem. There are also several technical difficulties of scaling and of allowing for risk which have been discussed elsewhere and so are only considered briefly in Section 5 of this paper.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2762872
Simplified Parallel Domain Traversal
Erickson III, David J
2011-01-01
Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO{sub 2} and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.
Investigations in a Simplified Bracketed Grid Approach to Metrical Structure
ERIC Educational Resources Information Center
Liu, Patrick Pei
2010-01-01
In this dissertation, I examine the fundamental mechanisms and assumptions of the Simplified Bracketed Grid Theory (Idsardi 1992) in two ways: first, by comparing it with Parametric Metrical Theory (Hayes 1995), and second, by implementing it in the analysis of several case studies in stress assignment and syllabification. Throughout these…
75 FR 81459 - Simplified Proceedings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... Simplified Proceedings in certain civil penalty proceedings. 75 FR 28223. The Commission explained that since... simplify the procedures for handling certain civil penalty proceedings. DATES: The final rule takes effect... to deal with that burgeoning caseload, the Commission is considering methods to simplify...
Microbial life detection with minimal assumptions
NASA Astrophysics Data System (ADS)
Kounaves, Samuel P.; Noll, Rebecca A.; Buehler, Martin G.; Hecht, Michael H.; Lankford, Kurt; West, Steven J.
2002-02-01
To produce definitive and unambiguous results, any life detection experiment must make minimal assumptions about the nature of extraterrestrial life. The only criterion that fits this definition is the ability to reproduce and in the process create a disequilibrium in the chemical and redox environment. The Life Detection Array (LIDA), an instrument proposed for the 2007 NASA Mars Scout Mission, and in the future for the Jovian moons, enables such an experiment. LIDA responds to minute biogenic chemical and physical changes in two identical 'growth' chambers. The sensitivity is provided by two differentially monitored electrochemical sensor arrays. Growth in one of the chambers alters the chemistry and ionic properties and results in a signal. This life detection system makes minimal assumptions: that after addition of water the microorganism replicates and in the process will produce small changes in its immediate surroundings by consuming, metabolizing, and excreting a number of molecules and/or ionic species. The experiment begins by placing a homogenized split-sample of soil or water into each chamber, adding water if soil, sterilizing via high temperature, and equilibrating. In the absence of any microorganism in either chamber, no signal will be detected. The inoculation of one chamber with even a few microorganisms which reproduce will create a sufficient disequilibrium in the system (compared to the control) to be detectable. Replication of the experiment and positive results would lead to a definitive conclusion of biologically induced changes. The split sample and the nanogram inoculation eliminate chemistry as a causal agent.
Scenarios Based on Shared Socioeconomic Pathway Assumptions
NASA Astrophysics Data System (ADS)
Edmonds, J.
2013-12-01
scenario with at least 8.5 Wm-2. To address this problem each SSP scenario can be treated as a reference scenario, to which emissions mitigation policies can be applied to create a set of RCP replications. These RCP replications have the underlying SSP socio-economic assumptions in addition to policy assumptions and radiative forcing levels consistent with the CMIP5 products. We report quantitative results of initial experiments from the five participating groups.
A simplified model for glass formation
NASA Technical Reports Server (NTRS)
Uhlmann, D. R.; Onorato, P. I. K.; Scherer, G. W.
1979-01-01
A simplified model of glass formation based on the formal theory of transformation kinetics is presented, which describes the critical cooling rates implied by the occurrence of glassy or partly crystalline bodies. In addition, an approach based on the nose of the time-temperature-transformation (TTT) curve as an extremum in temperature and time has provided a relatively simple relation between the activation energy for viscous flow in the undercooled region and the temperature of the nose of the TTT curve. Using this relation together with the simplified model, it now seems possible to predict cooling rates using only the liquidus temperature, glass transition temperature, and heat of fusion.
A discussion of assumptions and solution approaches of infiltration into a cracked soil
Technology Transfer Automated Retrieval System (TEKTRAN)
A model for predicting rain infiltration into a swelling/shrinking/cracking soil was proposed (Römkens, M.J.M., and S. N. Prasad., 2006, Agricultural Water Management. 86:196-205). Several simplifying assumptions were made. The model consists of a two-component process of Darcian matrix flow and Hor...
Assumptions to the Annual Energy Outlook
2015-01-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... modification of the economic assumptions concerning salary and wage growth to take into account the combined... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405...
The Assumptive Worlds of Fledgling Administrators.
ERIC Educational Resources Information Center
Marshall, Catherine; Mitchell, Barbara A.
1991-01-01
Studies school-site administrators' understanding about ways of gaining/maintaining power, control, and predictability. Multisite study data concerning assistant principals identify rules of the game for four micropolitical (site-level assumptive world) domains. Assumptive worlds create avoidance of value conflicts and risky change, group-think…
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Methodological assumptions. 436.14 Section 436.14 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall discount to present values the...
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Methodological assumptions. 436.14 Section 436.14 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall discount to present values the...
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic...
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic...
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic...
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic...
Teaching Critical Thinking by Examining Assumptions
ERIC Educational Resources Information Center
Yanchar, Stephen C.; Slife, Brent D.
2004-01-01
We describe how instructors can integrate the critical thinking skill of examining theoretical assumptions (e.g., determinism and materialism) and implications into psychology courses. In this instructional approach, students formulate questions that help them identify assumptions and implications, use those questions to identify and examine the…
Assessment of calibration assumptions under strong climate changes
NASA Astrophysics Data System (ADS)
Van Schaeybroeck, Bert; Vannitsem, Stéphane
2016-02-01
Climate model calibration relies on different working hypotheses. The simplest bias correction or delta change methods assume the invariance of bias under climate change. Recent works have questioned this hypothesis and proposed linear bias changes with respect to the forcing. However, when the system experiences larger forcings, these schemes could fail. Calibration assumptions are tested within a simplified framework in the context of an intermediate complexity model for which the reference (or "reality") differs from the model by a single parametric model error and climate change is emulated by largely different CO2 forcings. It appears that calibration does not add value since the variation of bias under climate change is nonmonotonic for almost all variables and large compared to the climate change and the bias, except for the global temperature and sea ice area. For precipitation, calibration provides added value both globally and regionally. The calibration methods used fail to correct climate variability.
Further evidence for the EPNT assumption
NASA Technical Reports Server (NTRS)
Greenberger, Daniel M.; Bernstein, Herbert J.; Horne, Michael; Zeilinger, Anton
1994-01-01
We recently proved a theorem extending the Greenberger-Horne-Zeilinger (GHZ) Theorem from multi-particle systems to two-particle systems. This proof depended upon an auxiliary assumption, the EPNT assumption (Emptiness of Paths Not Taken). According to this assumption, if there exists an Einstein-Podolsky-Rosen (EPR) element of reality that determines that a path is empty, then there can be no entity associated with the wave that travels this path (pilot-waves, empty waves, etc.) and reports information to the amplitude, when the paths recombine. We produce some further evidence in support of this assumption, which is certainly true in quantum theory. The alternative is that such a pilot-wave theory would have to violate EPR locality.
Critical Thinking: Distinguishing between Inferences and Assumptions.
ERIC Educational Resources Information Center
Elder, Linda; Paul, Richard
2002-01-01
Outlines the differences between inferences and assumptions in critical thinking processes. Explains that as students develop critical intuitions, they increasingly notice how their point of view shapes their experiences. (AUTH/NB)
Code of Federal Regulations, 2013 CFR
2013-10-01
... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...
Code of Federal Regulations, 2010 CFR
2010-10-01
... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...
Code of Federal Regulations, 2014 CFR
2014-10-01
... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...
Code of Federal Regulations, 2012 CFR
2012-10-01
... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...
Code of Federal Regulations, 2011 CFR
2011-10-01
... (3 CFR, 1966-1970 Comp., p. 820), and other emergency plans regarding the allocation and use of... COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of...
Revisiting the Simplified Bernoulli Equation
Heys, Jeffrey J; Holyoak, Nicole; Calleja, Anna M; Belohlavek, Marek; Chaliki, Hari P
2010-01-01
Background: The assessment of the severity of aortic valve stenosis is done by either invasive catheterization or non-invasive Doppler echocardiography in conjunction with the simplified Bernoulli equation. The catheter measurement is generally considered more accurate, but the procedure is also more likely to have dangerous complications. Objective: The focus here is on examining computational fluid dynamics as an alternative method for analyzing the echo data and determining whether it can provide results similar to the catheter measurement. Methods: An in vitro heart model with a rigid orifice is used as a first step in comparing three approaches: echocardiographic data analyzed with the simplified Bernoulli equation, catheterization, and echocardiographic data analyzed with computational fluid dynamics (i.e., the Navier-Stokes equations). Results: For a 0.93 cm2 orifice, the maximum pressure gradient predicted by either the simplified Bernoulli equation or computational fluid dynamics was not significantly different from the experimental catheter measurement (p > 0.01). For a smaller 0.52 cm2 orifice, there was a small but significant difference (p < 0.01) between the simplified Bernoulli equation and the computational fluid dynamics simulation, with the computational fluid dynamics simulation giving better agreement with experimental data for some turbulence models. Conclusion: For this simplified, in vitro system, the use of computational fluid dynamics provides an improvement over the simplified Bernoulli equation, with the biggest improvement being seen at higher valvular stenosis levels. PMID:21625471
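The simplified Bernoulli equation referenced in this abstract is the standard clinical approximation ΔP ≈ 4v², which converts a Doppler-measured peak jet velocity (m/s) into a peak pressure gradient (mmHg). As a minimal sketch (not code from the paper), the relation can be expressed as:

```python
def simplified_bernoulli(v_max):
    """Peak instantaneous pressure gradient (mmHg) from peak jet
    velocity (m/s), using the clinical approximation dP = 4 * v^2.
    Viscous losses and proximal velocity are neglected."""
    return 4.0 * v_max ** 2

# Example: a 4 m/s aortic jet implies a ~64 mmHg peak gradient,
# consistent with severe aortic stenosis.
print(simplified_bernoulli(4.0))  # 64.0
```

Note the neglected terms (proximal velocity, viscous and unsteady losses) are exactly what the paper's computational fluid dynamics approach recovers, which is why the two methods diverge at smaller orifice areas.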
Life Support Baseline Values and Assumptions Document
NASA Technical Reports Server (NTRS)
Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.
2015-01-01
The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.
Assessing Statistical Model Assumptions under Climate Change
NASA Astrophysics Data System (ADS)
Varotsos, Konstantinos V.; Giannakopoulos, Christos; Tombrou, Maria
2016-04-01
The majority of studies assess climate change impacts on air-quality using chemical transport models coupled to climate models in an off-line mode, for various horizontal resolutions and different present and future time slices. A complementary approach is based on present-day empirical relations between air-pollutants and various meteorological variables which are then extrapolated to the future. However, the extrapolation relies on various assumptions, such as that these relationships will retain their main characteristics in the future. In this study we focus on the ozone-temperature relationship. It is well known that among a number of meteorological variables, temperature is found to exhibit the highest correlation with ozone concentrations. This has led, in the past years, to the development and application of statistical models with which the potential impact of increasing future temperatures on various ozone statistical targets was examined. To examine whether the ozone-temperature relationship retains its main characteristics under warmer temperatures we analyze the relationship during the heatwave events of 2003 and 2006 in Europe. More specifically, we use available gridded daily maximum temperatures (E-OBS) and hourly ozone observations from different non-urban stations (EMEP) within the areas that were impacted by the two heatwave events. In addition, we compare the temperature distributions of the two events with temperatures from two different future time periods, 2021-2050 and 2071-2100, from a number of regional climate models developed under the framework of the Cordex initiative (http://www.cordex.org) with a horizontal resolution of 12 x 12km, based on different IPCC RCPs emissions scenarios. A statistical analysis is performed on the ozone-temperature relationship for each station and for the two aforementioned years, which are then compared against the ozone-temperature relationships obtained from the rest of the available data series.
Publish unexpected results that conflict with assumptions
Technology Transfer Automated Retrieval System (TEKTRAN)
Some widely held scientific assumptions have been discredited, whereas others are just inappropriate for many applications. Sometimes, a widely-held analysis procedure takes on a life of its own, forgetting the original purpose of the analysis. The peer-reviewed system makes it difficult to get a pa...
Parenting the Musically Gifted: Assumptions and Issues.
ERIC Educational Resources Information Center
Flohr, John W.
1987-01-01
Commonly held assumptions about musical giftedness in children are disputed. Several issues are examined, including how musical giftedness is defined, the availability of community resources, parental encouragement versus pressure, and potential emotional and behavioral problems. Some suggestions useful for the parents of musically gifted children…
Artificial Intelligence: Underlying Assumptions and Basic Objectives.
ERIC Educational Resources Information Center
Cercone, Nick; McCalla, Gordon
1984-01-01
Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…
Assumptions of Multiple Regression: Correcting Two Misconceptions
ERIC Educational Resources Information Center
Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason
2013-01-01
In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…
Classroom Instruction: Background, Assumptions, and Challenges
ERIC Educational Resources Information Center
Wolery, Mark; Hemmeter, Mary Louise
2011-01-01
In this article, the authors focus on issues of instruction in classrooms. Initially, a brief definitional and historic section is presented. This is followed by a discussion of four assumptions about the current state of affairs: (a) evidence-based practices should be identified and used, (b) children's phase of performance should dictate…
24 CFR 58.4 - Assumption authority.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., decision-making, and action that would otherwise apply to HUD under NEPA and other provisions of law that... environmental review, decision-making and action for programs authorized by the Native American Housing... separate decision regarding assumption of responsibilities for each of these Acts and communicate...
Causal Mediation Analysis: Warning! Assumptions Ahead
ERIC Educational Resources Information Center
Keele, Luke
2015-01-01
In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…
Extracurricular Business Planning Competitions: Challenging the Assumptions
ERIC Educational Resources Information Center
Watson, Kayleigh; McGowan, Pauric; Smith, Paul
2014-01-01
Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…
Culturally Biased Assumptions in Counseling Psychology
ERIC Educational Resources Information Center
Pedersen, Paul B.
2003-01-01
Eight clusters of culturally biased assumptions are identified for further discussion from Leong and Ponterotto's (2003) article. The presence of cultural bias demonstrates that cultural bias is so robust and pervasive that is permeates the profession of counseling psychology, even including those articles that effectively attack cultural bias…
29 CFR 4044.53 - Mortality assumptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for one person is in pay status on the valuation date, and if the payment of a death benefit after the... (c) of this section to represent the mortality of the death beneficiary. (c) Healthy lives. If the... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death...
24 CFR 58.4 - Assumption authority.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., decision-making, and action that would otherwise apply to HUD under NEPA and other provisions of law that... environmental review, decision-making and action for programs authorized by the Native American Housing... separate decision regarding assumption of responsibilities for each of these Acts and communicate...
Mexican-American Cultural Assumptions and Implications.
ERIC Educational Resources Information Center
Carranza, E. Lou
The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as being…
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall... the Life Cycle Costing Manual for the Federal Energy Management Program (NIST 85-3273) and determined... of the fiscal year in the Annual Supplement to the Life Cycle Costing Manual for the Federal...
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall... the Life Cycle Costing Manual for the Federal Energy Management Program (NIST 85-3273) and determined... of the fiscal year in the Annual Supplement to the Life Cycle Costing Manual for the Federal...
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall... the Life Cycle Costing Manual for the Federal Energy Management Program (NIST 85-3273) and determined... of the fiscal year in the Annual Supplement to the Life Cycle Costing Manual for the Federal...
Assumptive Worldviews and Problematic Reactions to Bereavement
ERIC Educational Resources Information Center
Currier, Joseph M.; Holland, Jason M.; Neimeyer, Robert A.
2009-01-01
Forty-two individuals who had lost an immediate family member in the prior 2 years and 42 nonbereaved matched controls completed the World Assumptions Scale (Janoff-Bulman, 1989) and the Symptom Checklist-10-Revised (Rosen et al., 2000). Results showed that bereaved individuals were significantly more distressed than nonbereaved matched controls,…
Critically Challenging Some Assumptions in HRD
ERIC Educational Resources Information Center
O'Donnell, David; McGuire, David; Cross, Christine
2006-01-01
This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…
Deep Borehole Field Test Requirements and Controlled Assumptions.
Hardin, Ernest
2015-07-01
This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. ACKNOWLEDGEMENTS: This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.
Simplified High-Power Inverter
NASA Technical Reports Server (NTRS)
Edwards, D. B.; Rippel, W. E.
1984-01-01
Solid-state inverter simplified by use of single gate-turnoff device (GTO) to commutate multiple silicon controlled rectifiers (SCR's). By eliminating conventional commutation circuitry, GTO reduces cost, size and weight. GTO commutation applicable to inverters of greater than 1-kilowatt capacity. Applications include emergency power, load leveling, drives for traction and stationary polyphase motors, and photovoltaic-power conditioning.
75 FR 28223 - Simplified Proceedings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-20
... From the Federal Register Online via the Government Publishing Office ] FEDERAL MINE SAFETY AND HEALTH REVIEW COMMISSION 29 CFR Part 2700 Simplified Proceedings AGENCY: Federal Mine Safety and Health Review Commission. ACTION: Notice of proposed rulemaking. SUMMARY: The Federal Mine Safety and...
Simplifying the Water Poverty Index
ERIC Educational Resources Information Center
Cho, Danny I.; Ogwang, Tomson; Opio, Christopher
2010-01-01
In this paper, principal components methodology is used to derive simplified and cost-effective indexes of water poverty. Using a well-known data set for 147 countries from which an earlier five-component water poverty index comprising "Resources," "Access," "Capacity," "Use" and "Environment" was constructed, we find that a simplified…
A note on the assumption of quasiequilibrium in semiconductor junction devices
NASA Technical Reports Server (NTRS)
Von Roos, O.
1977-01-01
It is shown that the quasi-equilibrium theory for p-n junctions, as originally proposed by Shockley (1949), does not apply under conditions involving an application of comparatively low external voltages. A numerical example indicates that the quasi-equilibrium assumption must be discarded as soon as the voltage is increased beyond a certain critical value, although the system may still be in a low-level injection regime. It is currently not known which set of simplifying assumptions may replace the quasi-equilibrium assumptions. Possible analytic simplification relations applicable to moderate or high injection levels can, perhaps, be based on an approach considered by Mari (1968) and Choo (1971, 1972).
Caring for Caregivers: Challenging the Assumptions.
Williams, A Paul; Peckham, Allie; Kuluski, Kerry; Lum, Janet; Warrick, Natalie; Spalding, Karen; Tam, Tommy; Bruce-Barrett, Cindy; Grasic, Marta; Im, Jennifer
2015-01-01
Informal and mostly unpaid caregivers - spouses, family, friends and neighbours - play a crucial role in supporting the health, well-being, functional independence and quality of life of growing numbers of persons of all ages who cannot manage on their own. Yet, informal caregiving is in decline; falling rates of engagement in caregiving are compounded by a shrinking caregiver pool. How should policymakers respond? In this paper, we draw on a growing international literature, along with findings from community-based studies conducted by our team across Ontario, to highlight six common assumptions about informal caregivers and what can be done to support them. These include the assumption that caregivers will be there to take on an increasing responsibility; that caregiving is only about an aging population; that money alone can do the job; that policymakers can simply wait and see; that front-line care professionals should be left to fill the policy void; and that caregivers should be addressed apart from cared-for persons and formal care systems. While each assumption has a different focus, all challenge policymakers to view caregivers as key players in massive social and political change, and to respond accordingly. PMID:26626112
A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States
NASA Technical Reports Server (NTRS)
Ryff, Luiz Carlos
1996-01-01
A feasible experiment is discussed which allows us to prove Bell's theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without the introduction of additional assumptions related to hidden variable states. Only assumptions based on direct experimental observation are needed.
Simplified environmental study on innovative bridge structure.
Bouhaya, Lina; Le Roy, Robert; Feraille-Fresnet, Adélaïde
2009-03-15
The aim of this paper is to present a simplified life cycle assessment of an innovative bridge structure, made of wood and ultra-high-performance concrete, which combines mechanical performance with minimum environmental impact. The environmental analysis was conducted from cradle to grave using the Life Cycle Assessment method. It was restricted to energy release and greenhouse gas emissions. Assumptions are detailed for each step of the analysis. For the wood end-of-life, three scenarios were proposed: dumping, burning, and recycling. Results show that most of the energy is needed in the production phase, which represents 73.4% of the total amount. Analysis shows that renewable energy accounts for about 70% of the production energy. Wood, through its biomass CO2, contributes positively to the environmental impact. It was concluded that no scenario can be the winner on both impacts. Indeed, end-of-life wood recycling gives the best impact on CO2 release, whereas burning wood, despite its remarkable energy impact, is the worst. According to the emphasis given to each impact, designers will be able to choose one or the other. PMID:19368215
A Simplified Adiabatic Compression Apparatus
NASA Astrophysics Data System (ADS)
Moloney, Michael J.; McGarvey, Albert P.
2007-10-01
Mottmann described an excellent way to measure the ratio of specific heats for air (γ = Cp/Cv) by suddenly compressing a plastic 2-liter bottle. His arrangement can be simplified so that no valves are involved and only a single connection needs to be made. This is done by adapting the plastic cap of a 2-liter plastic bottle so it connects directly to a Vernier Software Gas Pressure Sensor and the LabPro interface.
Simplifying plasma chemistry via ILDM
NASA Astrophysics Data System (ADS)
Rehman, T.; Kemaneci, E.; Graef, W.; van Dijk, J.
2016-02-01
A plasma fluid model containing a large number of chemical species and reactions yields a high computational load. One of the methods to overcome this difficulty is to apply Chemical Reduction Techniques as used in combustion engineering. The chemical reduction technique that we study here is ILDM (Intrinsic Lower Dimensional Manifold). The ILDM method is used to simplify an argon plasma model and then a comparison is made with a CRM (Collisional Radiative Model).
Project M: An Assessment of Mission Assumptions
NASA Technical Reports Server (NTRS)
Edwards, Alycia
2010-01-01
Project M is a mission Johnson Space Center is working on to send an autonomous humanoid robot (also known as Robonaut 2) to the moon in 1,000 days. The robot will travel in a lander, fueled by liquid oxygen and liquid methane, and land on the moon, avoiding any hazardous obstacles. It will perform tasks like maintenance, construction, and simple student experiments. This mission is also being used as inspiration for new advancements in technology. I am considering three of the design assumptions that contribute to determining the mission feasibility: maturity of robotic technology, launch vehicle determination, and the LOX/methane-fueled spacecraft.
Gas/Aerosol partitioning: a simplified method for global modeling
NASA Astrophysics Data System (ADS)
Metzger, S. M.
2000-09-01
The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol-associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures, partly composed of acids (e.g. H2SO4, HNO3), their salts (e.g. (NH4)2SO4, NH4NO3, respectively), and water. Because these acids and salts are highly hygroscopic, the water associated with aerosols in humid environments often exceeds the total dry aerosol mass. Both the total dry aerosol mass and the aerosol-associated water are important for the role of atmospheric aerosols in climate change simulations. Still, multicomponent aerosols are not yet routinely calculated within global atmospheric chemistry or climate models. The reason is that these particles, especially volatile aerosol compounds, require a complex and computationally expensive thermodynamic treatment. For instance, the aerosol-associated water depends on the composition of the aerosol, which is determined by the gas/liquid/solid partitioning, in turn strongly dependent on temperature, relative humidity, and the presence of pre-existing aerosol particles. Based on such thermodynamic relations, a simplified method has been derived. This method rests on the assumptions generally made in the modeling of multicomponent aerosols, but uses an alternative approach for the calculation of the aerosol activity and activity coefficients. This alternative approach relates activity coefficients to the ambient relative humidity, according to the vapor pressure reduction and the generalization of Raoult's law. This relationship, or simplification, is a consequence of the assumption that the aerosol composition and the aerosol-associated water are in thermodynamic equilibrium with the ambient relative humidity, which determines the solute activity and, hence, the activity coefficients of a multicomponent aerosol mixture.
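The equilibrium simplification described in this abstract can be illustrated with a short, hedged sketch: assuming the aerosol is in thermodynamic equilibrium with the ambient air, the water activity equals the fractional relative humidity, and ideal Raoult's law then links aerosol water directly to the solute content. This is a toy illustration of the general idea only, not the thesis's actual parameterization; the function name and all numbers are invented for the example.

```python
# Toy sketch (not the thesis's method): at equilibrium the water
# activity a_w of the aerosol equals the ambient relative humidity
# (RH, as a fraction), and ideal Raoult's law gives x_water = a_w,
# so aerosol water follows directly from the moles of solute.

M_WATER = 18.015  # molar mass of water, g/mol

def aerosol_water_mass(n_solute_mol: float, rh: float) -> float:
    """Aerosol water (g) for an ideal solute in equilibrium with RH (0-1)."""
    if not 0.0 < rh < 1.0:
        raise ValueError("RH must lie strictly between 0 and 1")
    a_w = rh                      # equilibrium assumption: a_w = RH
    x_w = a_w                     # ideal Raoult's law: x_water = a_w
    n_water = n_solute_mol * x_w / (1.0 - x_w)
    return n_water * M_WATER

# Aerosol water grows sharply as RH approaches 1 (deliquesced aerosol).
for rh in (0.5, 0.8, 0.95):
    print(rh, aerosol_water_mass(1e-6, rh))
```

Note how the hygroscopic growth diverges as RH approaches saturation, which is why aerosol water can exceed the dry aerosol mass in humid environments.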
Alternative monotonicity assumptions for improving bounds on natural direct effects.
Chiba, Yasutaka; Taguri, Masataka
2013-01-01
Estimating the direct effect of a treatment on an outcome is often the focus of epidemiological and clinical research, when the treatment has more than one specified pathway to the defined outcome. Even if the total effect is unconfounded, the direct effect is not identified when unmeasured variables affect the intermediate and outcome variables. Therefore, bounds on direct effects have been presented via linear programming under two common definitions of direct effects: controlled and natural. Here, we propose bounds on natural direct effects without using linear programming, because such bounds on controlled direct effects have already been proposed. To derive narrow bounds, we introduce two monotonicity assumptions that are weaker than those in previous studies and another monotonicity assumption. Furthermore, we do not assume that an outcome variable is binary, whereas previous studies have made that assumption. An additional advantage of our bounds is that the bounding formulas are extremely simple. The proposed bounds are illustrated using a randomized trial for coronary heart disease. PMID:23893690
Simplified tools for evaluating domestic ventilation systems
Maansson, L.G.; Orme, M.
1999-07-01
Within an International Energy Agency (IEA) project, Annex 27, experts from 8 countries (Canada, France, Italy, Japan, The Netherlands, Sweden, UK and USA) have developed simplified tools for evaluating domestic ventilation systems during the heating season. Tools for building and user aspects, thermal comfort, noise, energy, life cycle cost, reliability and indoor air quality (IAQ) have been devised. The results can be used both for dwellings at the design stage and after construction. The tools lead to immediate answers and indications about the consequences of different choices that may arise during discussion with clients. This paper presents an introduction to these tools. Example applications of the simplified indoor air quality and energy tools are also provided. The IAQ tool accounts for constant emission sources, CO2, cooking products, tobacco smoke, condensation risks, humidity levels (i.e., for judging the risk of mould and house dust mites), and pressure difference (for identifying the risk of radon or landfill spillage entering the dwelling, or problems with indoor combustion appliances). An elaborated set of design parameters was worked out that resulted in about 17,000 combinations. By using multivariate analysis it was possible to reduce this to 174 combinations for IAQ. In addition, a sensitivity analysis was made using 990 combinations. The results from all the runs were used to develop a simplified tool, as well as quantifying equations relying on the design parameters. A computerized energy tool has also been developed within this project, which takes into account air tightness, climate, window airing pattern, outdoor air flow rate and heat exchange efficiency.
Simplified Model for iLIDS IDD
NASA Technical Reports Server (NTRS)
Lewis, James L.
2010-01-01
The NASA Docking System (NDS) Project has provided simplified volumetric models for use by potential host vehicles to assess vehicle integration. It should be noted that the JSC-65795 NDS Interface Definition Document (IDD) takes precedence over this simplified model. The simplified model serves as a graphical representation only. It is therefore important to state that dimensions and tolerances are to be taken from the IDD document and supersede any measurements derived from the provided simplified model geometry.
Hidden assumptions and the placebo effect.
Campbell, Anthony
2009-06-01
Whether, or how far, acupuncture effects can be explained as due to the placebo response is clearly an important issue, but there is an underlying philosophical assumption implicit in much of the debate, which is often ignored. Much of the argument is cast in terms which suggest that there is an immaterial mind hovering above the brain and giving rise to spurious effects. This model derives from Cartesian dualism which would probably be rejected by nearly all those involved, but it is characteristic of "folk psychology" and seems to have an unconscious influence on much of the terminology that is used. The majority of philosophers today reject dualism and this is also the dominant trend in science. Placebo effects, on this view, must be brain effects. It is important for modern acupuncture practitioners to keep this in mind when reading research on the placebo question. PMID:19502463
Simplifying microbial electrosynthesis reactor design
Giddings, Cloelle G. S.; Nevin, Kelly P.; Woodward, Trevor; Lovley, Derek R.; Butler, Caitlyn S.
2015-01-01
Microbial electrosynthesis, an artificial form of photosynthesis, can efficiently convert carbon dioxide into organic commodities; however, this process has only previously been demonstrated in reactors that have features likely to be a barrier to scale-up. Therefore, the possibility of simplifying reactor design by both eliminating potentiostatic control of the cathode and removing the membrane separating the anode and cathode was investigated with biofilms of Sporomusa ovata. S. ovata reduces carbon dioxide to acetate, acting as the microbial catalyst with plain graphite stick cathodes as the electron donor. In traditional ‘H-cell’ reactors, where the anode and cathode chambers were separated with a proton-selective membrane, the rates and coulombic efficiencies of microbial electrosynthesis remained high when electron delivery at the cathode was powered with a direct-current power source rather than with the potentiostat-poised cathode utilized in previous studies. A membrane-less reactor with a direct-current power source, with the cathode and anode positioned to avoid oxygen exposure at the cathode, retained high rates of acetate production as well as high coulombic and energetic efficiencies. The finding that microbial electrosynthesis is feasible without a membrane separating the anode from the cathode, coupled with a direct-current power source supplying the energy for electron delivery, is expected to greatly simplify future reactor design and lower construction costs. PMID:26029199
Flat sheet metal girders with very thin metal web. Part I : general theories and assumptions
NASA Technical Reports Server (NTRS)
Wagner, Herbert
1931-01-01
The object of this report is to develop the structural analysis of sheet metal girders, and it should for that reason be considered solely from this standpoint. The ensuing methods are based on the assumption of infinitely low bending stiffness of the metal web. This simplifies the basis of calculation to such an extent that many questions of great practical importance can be examined which otherwise could not be included in any analysis of the bending stiffness of the buckled plate. This report refers to such points as the safety against buckling of uprights, the effect of bending flexibility of spars, spars not set parallel, etc.
Simplified compact containment BWR plant
Heki, H.; Nakamaru, M.; Tsutagawa, M.; Hiraiwa, K.; Arai, K.; Hida, T.
2004-07-01
The reactor concept considered in this paper has a small power output, a compact containment and a simplified BWR configuration with comprehensive safety features. The Compact Containment Boiling Water Reactor (CCR), which is being developed with matured BWR technologies together with innovative systems/components, is expected to prove attractive in world energy markets due to its flexibility in regard to both energy demands and site conditions, its high potential for reducing investment risk and its safety features facilitating public acceptance. The flexibility is achieved by CCR's small power output of the 300 MWe class and its capability for long operating cycles (refueling intervals). CCR is expected to be attractive from the viewpoint of investment due to its simplification/innovation in design, such as natural circulation core cooling with a bottom-located short core, internal upper-entry control rod drives (CRDs) with ring-type dryers and a simplified ECCS system with a high-pressure containment concept. The natural circulation core eliminates recirculation pumps and the maintenance of such pumps. The internal upper-entry CRDs reduce the height of the reactor pressure vessel (RPV) and consequently the height of the primary containment vessel (PCV). The safety features mainly consist of a large water inventory above the core without large penetrations below the top of the core, a passive cooling system using an isolation condenser (IC), a passive autocatalytic recombiner and in-vessel retention (IVR) capability. The large inventory increases the system response time in the case of design-basis accidents, including loss-of-coolant accidents. The IC suppresses PCV pressure by steam condensation without any AC power. The recombiner decreases hydrogen concentration in the PCV in the case of a severe accident. IVR could be attained by cooling the molten core inside the RPV if the core should be damaged by loss of core coolability. The feasibility of the CCR safety system has been confirmed by LOCA
Model investigation overthrows assumptions of watershed research
NASA Astrophysics Data System (ADS)
Schultz, Colin
2012-04-01
A 2009 study revealed serious flaws in a standard technique used by hydrological researchers to understand how changes in watershed land use affect stream flow behaviors, such as peak flows. The study caused academics and government agencies alike to rethink decades of watershed research and prompted Kuraś et al. to reinvestigate a number of long-standing assumptions in watershed research using a complex and well-validated computer model that accounts for a range of internal watershed dynamics and hydrologic processes. For the test site at 241 Creek in British Columbia, Canada, the authors found not only that deforestation increased the severity of floods but also that it had a scaling influence on both the magnitudes and frequencies of the floods. The model showed that the larger the flood, the more its magnitude was amplified by deforestation, with 10- to 100-year-return-period floods increasing in size by 9%-25%. Following a simulated removal of half of the watershed's trees, the authors found that 10-year-return-period floods occurred twice as often, while 100-year-return-period events became 5-6.7 times more frequent. This proportional relationship between the increase in flood magnitudes and frequencies following deforestation and the size of the flood runs counter to the prevailing wisdom in hydrological science.
Culturally grounded review of research assumptions.
Hufford, D J
1996-07-01
In this article 11 assumptions underlying many discussions of alternative medicine are discussed and critiqued: that (1) cultural factors merely constitute noise in research data that can be removed by proper design; (2) the only proper goal of alternative medicine research is the incorporation of effective practices into medicine; (3) physicians are the primary consumers of good alternative medicine research; (4) control of pathology is the sole measure of the effectiveness of alternative medicine; (5) effects on pathology can be fully separated from effects on perception or quality of life; (6) effects on individual health should be the sole focus of alternative medical research; (7) medicine is aware of all sicknesses appropriate for alternative medicine research; (8) subjective data are less valuable than objective data; (9) the best leads for research come from recognizable systems with advocates; (10) more "modern-looking," highly articulated forms are necessarily better research "bets"; and (11) all good candidates for alternative medicine research are recognized as health practices by those who use them. PMID:8795922
Simplified Radioimmunoassay for Diagnostic Serology
Hutchinson, Harriet D.; Ziegler, Donald W.
1972-01-01
A simplified, indirect radioimmunoassay is described for Escherichia coli, vaccinia virus, and herpesvirus. The antigens were affixed to glass cover slips; thus both the primary and secondary reactions take place on the cover slips, and the unbound antiserum is easily separated from the bound antiserum by rinsing. Rabbit or human immune sera were reacted with the antigens, and the primary immune complex was quantitated by a secondary reaction with 125I-indicator globulin (anti-rabbit or anti-human). A direct relationship between the antiserum concentration and the 125I absorption was established. Variations in titers were detectable, and the titers were comparable to complement fixation titers. Homologous and heterologous reactions were distinguishable. The method affords an objective, quantitative, and qualitative evaluation of antibody, and results are reproducible. PMID:4344958
NASA Astrophysics Data System (ADS)
Daci, N.; De Bruyn, I.; Lowette, S.; Tytgat, M. H. G.; Zaldivar, B.
2015-11-01
The existence of Dark Matter (DM) in the form of Strongly Interacting Massive Particles (SIMPs) may be motivated by astrophysical observations that challenge the classical Cold DM scenario. Other observations greatly constrain, but do not completely exclude, the SIMP alternative. The signature of SIMPs at the LHC may consist of neutral, hadron-like, trackless jets produced in pairs. We show that the absence of charged content can provide a very efficient tool to suppress dijet backgrounds at the LHC, thus enhancing the sensitivity to a potential SIMP signal. We illustrate this using a simplified SIMP model and present a detailed feasibility study based on simulations, including a dedicated detector response parametrization. We evaluate the expected sensitivity to various signal scenarios and tentatively consider the exclusion limits on the SIMP elastic cross section with nucleons.
Simplified Analysis Model for Predicting Pyroshock Responses on Composite Panel
NASA Astrophysics Data System (ADS)
Iwasa, Takashi; Shi, Qinzhong
A simplified analysis model based on frequency response analysis and wave propagation analysis was established for predicting the Shock Response Spectrum (SRS) on a composite panel subjected to pyroshock loadings. The complex composite panel was modeled as an isotropic single-layer panel as defined in the NASA Lewis Method. Through an impact excitation test on a composite panel with no equipment mounted, it was shown that the simplified analysis model could accurately estimate the SRS as well as the acceleration peak values in both the near and far field. In addition, through simulation of actual pyroshock tests on an actual satellite system, the simplified analysis model was shown to be applicable to predicting actual pyroshock responses, while bringing forth several technical issues for estimating pyroshock test specifications in early design stages.
Assumptions and ambiguities in nonplanar acoustic soliton theory
Verheest, Frank; Hellberg, Manfred A.
2014-02-15
There have been many recent theoretical investigations of the nonlinear evolution of electrostatic modes with cylindrical or spherical symmetry. Through a reductive perturbation analysis based on a quasiplanar stretching, a modified form of the Korteweg-de Vries or related equation is derived, containing an additional term which is linear in the electrostatic potential and singular at time t = 0. Unfortunately, these analyses contain several restrictive assumptions and ambiguities which are normally neither properly explained nor discussed, and severely limit the applicability of the technique. Most glaring are the use of plane-wave stretchings, the assumption that shape-preserving cylindrical modes can exist and that, although time is homogeneous, the origin of time (which can be chosen arbitrarily) needs to be avoided. Hence, only in the domain where the nonlinear modes are quasiplanar, far from the axis of cylindrical or from the origin of spherical symmetry can acceptable but unexciting results be obtained. Nonplanar nonlinear modes are clearly an interesting topic of research, as some of these phenomena have been observed in experiments. However, it is argued that a proper study of such modes needs numerical simulations rather than ill-suited analytical approximations.
Testing the assumptions of linear prediction analysis in normal vowels
NASA Astrophysics Data System (ADS)
Little, M. A.
This paper develops an improved surrogate data test to show experimental evidence, for all the simple vowels of US English, for both male and female speakers, that Gaussian linear prediction analysis, a ubiquitous technique in current speech technologies, cannot extract all the dynamical structure of real speech time series. The test provides robust evidence undermining the validity of these linear techniques, supporting the assumptions of dynamical nonlinearity and/or non-Gaussianity common to more recent, complex efforts at dynamical modelling of speech time series. However, an additional finding is that the classical assumptions cannot be ruled out entirely, and plausible evidence is given to explain the success of the linear Gaussian theory as a weak approximation to the true nonlinear/non-Gaussian dynamics. This supports the use of appropriate hybrid linear/nonlinear/non-Gaussian modelling. With a calibrated calculation of the statistic and a particular choice of experimental protocol, some of the known systematic problems of the method of surrogate data testing are circumvented, yielding results that support the conclusions to a high level of significance.
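The general machinery behind this kind of test can be sketched briefly: phase-randomised surrogates preserve the amplitude spectrum of the series (the linear Gaussian null hypothesis) while destroying any nonlinear structure, and a discriminating statistic computed on the data versus the surrogates then tests that null. The sketch below shows the generic technique on synthetic data, not the paper's calibrated statistic or protocol; the nonlinearity statistic and all numbers are illustrative choices.

```python
# Generic phase-randomised surrogate test (illustrative, not the
# paper's calibrated variant).
import numpy as np

rng = np.random.default_rng(0)

def phase_randomised_surrogate(x: np.ndarray) -> np.ndarray:
    """Surrogate with the same power spectrum but random Fourier phases."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.size)
    phases[0] = 0.0     # keep DC component real
    phases[-1] = 0.0    # keep Nyquist component real (even-length input)
    surr = np.abs(spec) * np.exp(1j * phases)
    return np.fft.irfft(surr, n=x.size)

def time_reversal_asymmetry(x: np.ndarray, lag: int = 1) -> float:
    """A simple nonlinearity statistic; ~0 for linear Gaussian processes."""
    return float(np.mean((x[lag:] - x[:-lag]) ** 3))

# Illustrative data: a quadratically transformed AR(1) series.
n = 4096
ar = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    ar[t] = 0.9 * ar[t - 1] + eps[t]
data = ar + 0.4 * ar ** 2

stats = [time_reversal_asymmetry(phase_randomised_surrogate(data))
         for _ in range(99)]
print("data statistic:", time_reversal_asymmetry(data))
print("surrogate statistic range:", min(stats), max(stats))
```

If the data's statistic falls outside the surrogate distribution, the linear Gaussian null is rejected at the corresponding rank-based significance level.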
Explaining the Pleistocene megafaunal extinctions: Models, chronologies, and assumptions
Brook, Barry W.; Bowman, David M. J. S.
2002-01-01
Understanding of the Pleistocene megafaunal extinctions has been advanced recently by the application of simulation models and new developments in geochronological dating. Together these have been used to posit a rapid demise of megafauna due to over-hunting by invading humans. However, we demonstrate that the results of these extinction models are highly sensitive to implicit assumptions concerning the degree of prey naivety to human hunters. In addition, we show that in Greater Australia, where the extinctions occurred well before the end of the last Ice Age (unlike the North American situation), estimates of the duration of coexistence between humans and megafauna remain imprecise. Contrary to recent claims, the existing data do not prove the “blitzkrieg” model of overkill. PMID:12417761
Finite Element Modeling of a Cylindrical Contact Using Hertzian Assumptions
NASA Technical Reports Server (NTRS)
Knudsen, Erik
2003-01-01
The turbine blades in the high-pressure fuel turbopump/alternate turbopump (HPFTP/AT) are subjected to hot gases rapidly flowing around them. This flow excites vibrations in the blades. Naturally, one has to worry about resonance, so a damping device was added to dissipate some energy from the system. The foundation is now laid for a very complex problem. The damper is in contact with the blade, so there are contact stresses (both normal and tangential) to contend with. Since these stresses can be very high, it is not difficult to yield the material. Friction is another nonlinearity, and the blade is made of a nickel-based single-crystal superalloy that is orthotropic. A few approaches exist to solve such a problem, and computer models using contact elements have been built with friction, plasticity, etc. These models are quite cumbersome and require many hours to solve just one load case and material orientation. A simpler approach is required. Ideally, the model should be simplified so the analysis can be conducted faster. When working with contact problems, determining the contact patch and the stresses in the material are the main concerns. Closed-form solutions for non-conforming bodies made of isotropic materials, developed by Hertz, are readily available. More involved solutions for 3-D cases using different materials are also available. The question is this: can Hertzian solutions be applied, or superimposed, to more complicated problems, like those involving anisotropic materials? That is the point of the investigation here. If these results agree with the more complicated computer models, then the analytical solutions can be used in lieu of the numerical solutions that take a very long time to process. As time goes on, the analytical solution will eventually have to include things like friction and plasticity. The models in this report use no contact elements and are essentially an applied load problem using Hertzian assumptions to
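The closed-form Hertzian line-contact solution referred to above can be sketched in a few lines. These are the standard isotropic formulas for a cylinder pressed against a flat (contact half-width and peak semi-elliptical pressure); the material values and load below are generic illustrative inputs, not the blade-damper values from this report.

```python
# Classical Hertzian cylinder-on-flat (line) contact, isotropic bodies.
import math

def effective_modulus(E1, nu1, E2, nu2):
    """Effective contact modulus E* for two isotropic bodies (Pa)."""
    return 1.0 / ((1.0 - nu1 ** 2) / E1 + (1.0 - nu2 ** 2) / E2)

def hertz_line_contact(p_load, radius, e_star):
    """Half-width b (m) and peak pressure p0 (Pa) of the contact strip.

    p_load : normal load per unit axial length (N/m)
    radius : effective cylinder radius (m)
    e_star : effective contact modulus (Pa)
    """
    b = math.sqrt(4.0 * p_load * radius / (math.pi * e_star))
    p0 = 2.0 * p_load / (math.pi * b)   # peak of semi-elliptical pressure
    return b, p0

# Illustrative case: steel cylinder (R = 5 mm) on a steel flat, 100 kN/m.
e_star = effective_modulus(210e9, 0.3, 210e9, 0.3)
b, p0 = hertz_line_contact(100e3, 5e-3, e_star)
print(f"half-width = {b * 1e6:.1f} um, peak pressure = {p0 / 1e9:.2f} GPa")
```

A built-in consistency check: integrating the semi-elliptical pressure distribution over the contact strip, p0·π·b/2 must recover the applied line load.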
48 CFR 453.213 - Simplified Acquisition and other simplified purchase procedures (AD-838).
Code of Federal Regulations, 2011 CFR
2011-10-01
... other simplified purchase procedures (AD-838). 453.213 Section 453.213 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE CLAUSES AND FORMS FORMS Prescription of Forms 453.213 Simplified Acquisition and other simplified purchase procedures (AD-838). Form AD-838, Purchase Order, is prescribed...
Nichols, J.D.; Stokes, S.L.; Hines, J.E.; Conroy, M.J.
1982-01-01
We examined the problem of heterogeneous survival and recovery rates in bird banding estimation models. We suggest that positively correlated subgroup survival and recovery probabilities may result from winter banding operations and that this situation will produce positively biased survival rate estimates. The magnitude of the survival estimate bias depends on the proportion of the population in each subgroup. Power of the suggested goodness-of-fit test to reject the inappropriate model for heterogeneous data sets was low for all situations examined and was poorest for positively related subgroup survival and recovery rates. Despite the magnitude of some of the biases reported and the relative inability to detect heterogeneity, we suggest that levels of heterogeneity normally encountered in real data sets will produce relatively small biases of average survival rates.
Lens window simplifies TDL housing
NASA Technical Reports Server (NTRS)
Robinson, D. M.; Rowland, C. W.
1979-01-01
Lens window seal in tunable-diode-laser housing replaces plane-parallel window. Lens seals housing and acts as optical-output coupler, thus eliminating need for additional reimaging or collimating optics.
Impact of actuarial assumptions on pension costs: A simulation analysis
NASA Astrophysics Data System (ADS)
Yusof, Shaira; Ibrahim, Rose Irnawaty
2013-04-01
This study investigates the sensitivity of pension costs to changes in the underlying assumptions of a hypothetical pension plan, in order to gain a perspective on the relative importance of the various actuarial assumptions via a simulation analysis. Simulation analyses are used to examine the impact of actuarial assumptions on pension costs. Two actuarial assumptions are considered in this study: mortality rates and interest rates. To calculate pension costs, the Accrued Benefit Cost Method with the constant amount (CA) and constant percentage of salary (CS) modifications is used. The mortality assumptions and the implied mortality experience of the plan can potentially have a significant impact on pension costs. Interest rate assumptions, by contrast, are inversely related to pension costs. The results of the study have important implications for analysts of pension costs.
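The inverse relation between the interest-rate assumption and pension cost can be illustrated with a minimal present-value sketch; the benefit amount, survival curve, and rates below are hypothetical, not taken from the study.

```python
# Present value of a survival-contingent benefit stream: a toy illustration
# of how pension cost responds to mortality and interest-rate assumptions.
# All numbers here are hypothetical, not from the study.

def annuity_pv(annual_benefit, survival_probs, interest_rate):
    """PV of benefits paid at the end of each year t, contingent on survival."""
    v = 1.0 / (1.0 + interest_rate)
    return sum(annual_benefit * p * v ** (t + 1)
               for t, p in enumerate(survival_probs))

# Hypothetical 20-year survival curve (probability of being alive in year t).
survival = [0.99 ** (t + 1) for t in range(20)]

pv_low  = annuity_pv(10_000, survival, 0.03)   # low discount-rate assumption
pv_high = annuity_pv(10_000, survival, 0.06)   # high discount-rate assumption

# A higher assumed interest rate discounts future benefits more heavily,
# lowering the present value and hence the pension cost.
assert pv_high < pv_low
```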
NASA Astrophysics Data System (ADS)
Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui
2015-11-01
Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, the sensitivity of FSI simulations in patient-specific IAs is comprehensively investigated using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets, and then we remove these simplifications stepwise until the most comprehensive FSI simulation is reached. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of IA FSI simulations to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).
NASA Technical Reports Server (NTRS)
Bursik, J. W.; Hall, R. M.
1980-01-01
The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures, where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant, two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high pressure case.
Simplified Models for Dark Matter Model Building
NASA Astrophysics Data System (ADS)
DiFranzo, Anthony Paul
The largest mass component of the universe is a longstanding mystery to the physics community. As a glaring source of new physics beyond the Standard Model, there is a large effort to uncover the quantum nature of dark matter. Many probes have been formed to search for this elusive matter, cultivating a rich environment for the phenomenologist. In addition to the primary probes (colliders, direct detection, and indirect detection), each with its own complexities, there is a plethora of prospects to illuminate our unanswered questions. In this work, phenomenological techniques for studying dark matter and other possible hints of new physics will be discussed. This work primarily focuses on the use of Simplified Models, which are intended to be a compromise between generality and validity of the theoretical description. They are often used to parameterize a particular search, develop a well-defined sense of complementarity between searches, or motivate new search strategies. Explicit examples of such models and how they may be used will be the highlight of each chapter.
Simplified method for nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Kaufman, A.
1983-01-01
A simplified inelastic analysis computer program was developed for predicting the stress-strain history of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a simulated plasticity hardening model. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, and different materials and plasticity models. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.
Experimental demonstration of simplified quantum process tomography.
Wu, Z; Li, S; Zheng, W; Peng, X; Feng, M
2013-01-14
The essential tool to characterize the dynamics of an open quantum system is quantum process tomography (QPT). Although standard QPT methods are hard to scale, a simplified QPT approach is available if we have the prior knowledge that the system Hamiltonian commutes with the system-environment interaction Hamiltonian. Using a nuclear magnetic resonance (NMR) quantum simulator, we experimentally simulate dephasing channels to demonstrate the simplified QPT as well as the standard QPT method as a comparison. The experimental results agree well with our predictions, which confirms the validity and better efficiency of the simplified QPT. PMID:23320694
A simplified Reynolds stress model for unsteady turbulent boundary layers
NASA Technical Reports Server (NTRS)
Fan, Sixin; Lakshminarayana, Budugur
1993-01-01
A simplified Reynolds stress model has been developed for the prediction of unsteady turbulent boundary layers. By assuming that the net transport of Reynolds stresses is locally proportional to the net transport of the turbulent kinetic energy, the time dependent full Reynolds stress model is reduced to a set of ordinary differential equations. These equations contain only time derivatives and can be readily integrated in a time dependent boundary layer or Navier-Stokes code. The turbulent kinetic energy and dissipation rate needed for the model are obtained by solving the k-epsilon equations. This simplified Reynolds stress turbulence model (SRSM) does not use the eddy viscosity assumption, which may not be valid for unsteady turbulent flows. The anisotropy of both the steady and the unsteady turbulent normal stresses can be captured by the SRSM model. Through proper damping of the shear stresses, the present model can be used in the near wall region of turbulent boundary layers. This model has been validated against data for steady and unsteady turbulent boundary layers, including periodic turbulent boundary layers subjected to a mean adverse pressure gradient. For the cases tested, the predicted unsteady velocity and turbulent stress components agree well with the experimental data. Comparison between the predictions from the SRSM model and a k-epsilon model is also presented.
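The closure assumption stated in the abstract, that the net transport of each Reynolds stress is locally proportional to that of the turbulent kinetic energy, can be written schematically as follows; this is a hedged reconstruction in generic notation, not necessarily the authors' exact formulation.

```latex
% Schematic proportionality assumption (generic Rodi-type notation):
% the net transport (convection minus diffusion, D_ij) of each stress
% component is taken proportional to the net transport of k, with k and
% epsilon supplied by the k-epsilon equations.
\[
\frac{D\,\overline{u_i u_j}}{Dt} - \mathcal{D}_{ij}
\;\approx\;
\frac{\overline{u_i u_j}}{k}
\left( \frac{Dk}{Dt} - \mathcal{D}_k \right)
=
\frac{\overline{u_i u_j}}{k}\,\bigl(P_k - \varepsilon\bigr)
\]
```

Once $k$ and $\varepsilon$ are known from the two-equation model, relations of this form leave only time derivatives of the stresses, which is what allows the stress-transport PDEs to collapse to ordinary differential equations.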
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-01-01
We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis. PMID:26401064
3.6 Simplified methods for design
Nickell, R.E.; Yahr, G.T.
1981-01-01
Simplified design analysis methods for elevated temperature construction are classified and reviewed. Because the major impetus for developing elevated temperature design methodology during the past ten years has been the LMFBR program, considerable emphasis is placed upon results from this source. The operating characteristics of the LMFBR are such that cycles of severe transient thermal stresses can be interspersed with normal elevated temperature operational periods of significant duration, leading to a combination of plastic and creep deformation. The various simplified methods are organized into two general categories, depending upon whether it is the material, or constitutive, model that is reduced, or the geometric modeling that is simplified. Because the elastic representation of material behavior is so prevalent, an entire section is devoted to elastic analysis methods. Finally, the validation of the simplified procedures is discussed.
Simplified Rotation In Acoustic Levitation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.
1989-01-01
New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.
Assumptions of African-American Students about International Education Exchange.
ERIC Educational Resources Information Center
Fels, Michael D.
This study attempted to identify and compare some of the assumptions concerning international education exchange of first, the international education exchange community, and, second, the African-American student community. The study reviewed materials from published institutional literature for the assumptions held by the international education…
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 9 2013-07-01 2013-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 9 2014-07-01 2014-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 9 2011-07-01 2011-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 9 2012-07-01 2012-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...
46 CFR 174.070 - General damage stability assumptions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 7 2011-10-01 2011-10-01 false General damage stability assumptions. 174.070 Section 174.070 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY... Units § 174.070 General damage stability assumptions. For the purpose of determining compliance...
46 CFR 174.070 - General damage stability assumptions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 7 2010-10-01 2010-10-01 false General damage stability assumptions. 174.070 Section 174.070 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY... Units § 174.070 General damage stability assumptions. For the purpose of determining compliance...
14 CFR 29.473 - Ground loading conditions and assumptions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 29.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...
14 CFR 29.473 - Ground loading conditions and assumptions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 29.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...
14 CFR 29.473 - Ground loading conditions and assumptions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 29.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...
14 CFR 27.473 - Ground loading conditions and assumptions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 27.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...
14 CFR 27.473 - Ground loading conditions and assumptions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 27.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...
14 CFR 27.473 - Ground loading conditions and assumptions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Ground loading conditions and assumptions... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground Loads § 27.473 Ground loading conditions and assumptions. (a) For specified landing conditions, a...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Actuarial calculations and assumptions. 4231.10 Section... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date...
Where Are We Going? Planning Assumptions for Community Colleges.
ERIC Educational Resources Information Center
Maas, Rao, Taylor and Associates, Riverside, CA.
Designed to provide community college planners with a series of reference assumptions to consider in the planning process, this document sets forth assumptions related to finance (i.e., operational funds, capital funds, alternate funding sources, and campus financial operations); California state priorities; occupational trends; population (i.e.,…
29 CFR Appendix C to Part 4044 - Loading Assumptions
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Loading Assumptions C Appendix C to Part 4044 Labor... ASSETS IN SINGLE-EMPLOYER PLANS Pt. 4044, App. C Appendix C to Part 4044—Loading Assumptions If the total value of the plan's benefit liabilities (as defined in 29 U.S.C. § 1301(a)(16)), exclusive of...
10 CFR 71.83 - Assumptions as to unknown properties.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Assumptions as to unknown properties. 71.83 Section 71.83... Operating Controls and Procedures § 71.83 Assumptions as to unknown properties. When the isotopic abundance... fissile material in any package is not known, the licensee shall package the fissile material as if...
7 CFR 3575.88 - Transfers and assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Transfers and assumptions. 3575.88 Section 3575.88 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, DEPARTMENT OF AGRICULTURE GENERAL Community Programs Guaranteed Loans § 3575.88 Transfers and assumptions. (a) General....
Simplified Models for LHC New Physics Searches
Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R. Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; /more authors..
2012-06-01
This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ~50-500 pb^-1 of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.
Simplified models for LHC new physics searches
NASA Astrophysics Data System (ADS)
Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Sekhar Chivukula, R.; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven (Editor); Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; Freitas, Ayres; Gainer, James S.; Gershtein, Yuri; Gray, Richard; Gregoire, Thomas; Gripaios, Ben; Gunion, Jack; Han, Tao; Haas, Andy; Hansson, Per; Hewett, JoAnne; Hits, Dmitry; Hubisz, Jay; Izaguirre, Eder; Kaplan, Jared; Katz, Emanuel; Kilic, Can; Kim, Hyung-Do; Kitano, Ryuichiro; Koay, Sue Ann; Ko, Pyungwon; Krohn, David; Kuflik, Eric; Lewis, Ian; Lisanti, Mariangela (Editor); Liu, Tao; Liu, Zhen; Lu, Ran; Luty, Markus; Meade, Patrick; Morrissey, David; Mrenna, Stephen; Nojiri, Mihoko; Okui, Takemichi; Padhi, Sanjay; Papucci, Michele; Park, Michael; Park, Myeonghun; Perelstein, Maxim; Peskin, Michael; Phalen, Daniel; Rehermann, Keith; Rentala, Vikram; Roy, Tuhin; Ruderman, Joshua T.; Sanz, Veronica; Schmaltz, Martin; Schnetzer, Stephen; Schuster, Philip (Editor); Schwaller, Pedro; Schwartz, Matthew D.; Schwartzman, Ariel; Shao, Jing; Shelton, Jessie; Shih, David; Shu, Jing; Silverstein, Daniel; Simmons, Elizabeth; Somalwar, Sunil; Spannowsky, Michael; Spethmann, Christian; Strassler, Matthew; Su, Shufang; Tait, Tim (Editor); Thomas, Brooks; Thomas, Scott; Toro, Natalia (Editor); Volansky, Tomer; Wacker, Jay (Editor); Waltenberger, Wolfgang; Yavin, Itay; Yu, Felix; Zhao, Yue; Zurek, Kathryn; LHC New Physics Working Group
2012-10-01
This document proposes a collection of simplified models relevant to the design of new-physics searches at the Large Hadron Collider (LHC) and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the ‘Topologies for Early LHC Searches’ workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ˜50-500 pb-1 of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.
Simplified Solutions for Activity Deposited on Moving Filter Media.
Smith, David L; Chabot, George E
2016-10-01
Simplified numerical solutions for particulate activity viewed on moving filter continuous air monitors are developed. The monitor configurations include both rectangular window (RW) and circular window (CW) types. The solutions are demonstrated first for a set of basic airborne radioactivity cases, for a series of concentration pulses, and for indicating the effects of step changes in reactor coolant system (RCS) leakage for a pressurized water reactor. The method is also compared to cases from the prior art. These simplified solutions have additional benefits: They are easily adaptable to multiple radionuclides, they will accommodate collection and detection efficiencies that vary in known ways across the collection area, and they also ease the solution programming. PMID:27575345
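For the rectangular-window (RW) case with a constant deposition rate, a discretized model of the kind described can be checked against the steady-state value; the deposition rate, decay constant, and window residence time below are hypothetical, and this sketch is an illustration of the idea rather than the paper's actual method.

```python
import math

# Discretized rectangular-window moving-filter model (hypothetical numbers).
# Filter increments collect activity at rate R while inside the detector
# window, decay with constant lam, and leave the view after residence time T.

def viewed_activity(R, lam, T, t_end, dt=0.01):
    """Total decay-corrected activity inside the window at time t_end."""
    ages = []                # ages of activity increments still in the window
    t = 0.0
    while t < t_end:
        ages.append(0.0)                                  # fresh deposit
        ages = [a + dt for a in ages if a + dt <= T]      # advance, drop departed
        t += dt
    return sum(R * dt * math.exp(-lam * a) for a in ages)

R, lam, T = 5.0, 0.1, 10.0        # deposit rate, decay constant, residence time
sim = viewed_activity(R, lam, T, t_end=30.0)

# Steady-state activity for a constant source: R * (1 - exp(-lam*T)) / lam.
analytic = R * (1.0 - math.exp(-lam * T)) / lam
assert abs(sim - analytic) / analytic < 0.01
```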
NASA Technical Reports Server (NTRS)
Hwang, S. Y.; Kaufman, A.
1985-01-01
Strain redistribution corrections were developed for a simplified inelastic analysis procedure to economically calculate material cyclic response at the critical location of a structure for life prediction purposes. The method was based on the assumption that the plastic region in the structure is local and the total strain history required for input can be defined from elastic finite element analyses. Cyclic stress-strain behavior was represented by a bilinear kinematic hardening model. The simplified procedure has been found to predict stress-strain response with reasonable accuracy for thermally cycled problems but needs improvement for mechanically load cycled problems. This study derived and incorporated Neuber type corrections in the simplified procedure to account for local total strain redistribution under cyclic mechanical loading. The corrected simplified method was exercised on a mechanically load cycled benchmark notched plate problem. Excellent agreement was found between the predicted material response and nonlinear finite element solutions for the problem. The simplified analysis computer program used 0.3 percent of the CPU time required for a nonlinear finite element analysis.
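A Neuber-type correction of the kind incorporated above can be sketched as follows; the Ramberg-Osgood constants and the bisection solver are illustrative assumptions, not the report's actual implementation.

```python
# Neuber-type correction sketch: given an elastic finite-element stress S_e,
# find the local elastic-plastic stress and strain whose product matches the
# elastic product S_e * (S_e / E) (Neuber's rule), using a Ramberg-Osgood
# stress-strain curve. Material constants below are hypothetical.

E = 200e3      # Young's modulus, MPa
K = 1200.0     # Ramberg-Osgood strength coefficient, MPa
n = 0.1        # Ramberg-Osgood hardening exponent

def strain(sigma):
    """Total (elastic + plastic) strain from the Ramberg-Osgood curve."""
    return sigma / E + (sigma / K) ** (1.0 / n)

def neuber_stress(S_e, tol=1e-8):
    """Solve sigma * strain(sigma) = S_e**2 / E by bisection."""
    target = S_e ** 2 / E
    lo, hi = 0.0, S_e          # plasticity can only lower the local stress
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * strain(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

S_e = 600.0                    # stress from the elastic solution, MPa
sigma = neuber_stress(S_e)
eps = strain(sigma)

assert sigma < S_e                               # local stress redistributes down
assert abs(sigma * eps - S_e ** 2 / E) < 1e-4    # Neuber product is preserved
```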
Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions
Drury, E.; Denholm, P.; Margolis, R.
2013-01-01
The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.
The Immoral Assumption Effect: Moralization Drives Negative Trait Attributions.
Meindl, Peter; Johnson, Kate M; Graham, Jesse
2016-04-01
Jumping to negative conclusions about other people's traits is judged as morally bad by many people. Despite this, across six experiments (total N = 2,151), we find that multiple types of moral evaluations--even evaluations related to open-mindedness, tolerance, and compassion--play a causal role in these potentially pernicious trait assumptions. Our results also indicate that moralization affects negative-but not positive-trait assumptions, and that the effect of morality on negative assumptions cannot be explained merely by people's general (nonmoral) preferences or other factors that distinguish moral and nonmoral traits, such as controllability or desirability. Together, these results suggest that one of the more destructive human tendencies--making negative assumptions about others--can be caused by the better angels of our nature. PMID:26984017
7 CFR 1980.476 - Transfer and assumptions.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., need not be consulted on a transfer and assumption case unless there is a change in loan terms. (p) If... on Line 24 as Net Collateral (Recovery). Approved protective advances and accrued interest...
7 CFR 1980.476 - Transfer and assumptions.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., need not be consulted on a transfer and assumption case unless there is a change in loan terms. (p) If... on Line 24 as Net Collateral (Recovery). Approved protective advances and accrued interest...
Supporting calculations and assumptions for use in WESF safety analysis
Hey, B.E.
1997-03-07
This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.
Development of long operating cycle simplified BWR
Heki, H.; Nakamaru, M.; Maruya, T.; Hiraiwa, K.; Arai, K.; Narabayash, T.; Aritomi, M.
2002-07-01
This paper describes an innovative plant concept for a long operating cycle simplified BWR (LSBWR). The plant concept addresses 1) a long operating cycle (3 to 15 years), 2) simplified systems and building, and 3) factory fabrication in modules. The long operating core design is based on medium-enriched U-235 with burnable poison. Simplified systems and building are realized by using natural circulation with a bottom-located core, an internal CRD, and a PCV with a passive system, together with an integrated reactor and turbine building. The LSBWR concept achieves a high degree of safety through IVR (In-Vessel Retention) capability, a large water inventory above the core region, and no PCV venting to the environment, owing to the PCCS (Passive Containment Cooling System) and an internal vent tank. The integrated building concept could realize a highly modular arrangement in a hull structure (ship frame structure), ease of seismic isolation, and high applicability of standardization and factory fabrication. (authors)
Simplified models for exotic BSM searches
NASA Astrophysics Data System (ADS)
Heisig, Jan; Lessa, Andre; Quertenmont, Loic
2015-12-01
Simplified models are a successful way of interpreting current LHC searches for models beyond the standard model (BSM). So far, simplified models have focused on topologies featuring a missing transverse energy (MET) signature. However, in some BSM theories other, more exotic, signatures occur. If a charged particle becomes long-lived on collider time scales, as is the case in parts of the SUSY parameter space, it leads to a very distinct signature. We present an extension of the computer package SModelS which includes simplified models for heavy stable charged particles (HSCP). As a physical application we investigate the CMSSM stau co-annihilation strip containing long-lived staus, which presents a potential solution to the Lithium problem. Applying both MET and HSCP constraints, we show that, for low values of tan β, all of this region of parameter space either violates Dark Matter constraints or is excluded by LHC searches.
Hypersonic Vehicle Propulsion System Simplified Model Development
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Raitano, Paul; Le, Dzu K.; Ouzts, Peter
2007-01-01
This document addresses the modeling task plan for the hypersonic GN&C GRC team members. The overall propulsion system modeling task plan is a multi-step process, and the task plan identified in this document addresses the first steps (short-term modeling goals). The procedures and tools produced from this effort will be useful for creating simplified dynamic models applicable to a hypersonic vehicle propulsion system. The document continues with the GRC short-term modeling goal. Next, a general description of the desired simplified model is presented along with simulations that are available to varying degrees. The simulations may be available in electronic form (FORTRAN, CFD, MatLab,...) or in paper form in published documents. Finally, roadmaps outlining possible avenues towards realizing the simplified model are presented.
Citizen preparedness for disasters: are current assumptions valid?
Uscher-Pines, Lori; Chandra, Anita; Acosta, Joie; Kellermann, Arthur
2012-06-01
US government programs and communications regarding citizen preparedness for disasters rest on several untested, and therefore unverified, assumptions. We explore the assumptions related to citizen preparedness promotion and argue that in spite of extensive messaging about the importance of citizen preparedness and countless household surveys purporting to track the preparedness activities of individuals and households, the role individual Americans are being asked to play is largely based on conventional wisdom. Recommendations for conceptualizing and measuring citizen preparedness are discussed. PMID:22700027
Heavy Flavor Simplified Models at the LHC
Essig, Rouven; Izaguirre, Eder; Kaplan, Jared; Wacker, Jay G.; /SLAC
2012-04-03
We consider a comprehensive set of simplified models that contribute to final states with top and bottom quarks at the LHC. These simplified models are used to create minimal search strategies that ensure optimal coverage of new heavy flavor physics involving the pair production of color octets and triplets. We provide a set of benchmarks that are representative of model space, which can be used by experimentalists to perform their own optimization of search strategies. For data sets larger than 1 fb⁻¹, same-sign dilepton and 3b search regions become very powerful. Expected sensitivities from existing and optimized searches are given.
Simplified cyclic structural analyses of SSME turbine blades
NASA Technical Reports Server (NTRS)
Kaufman, A.; Manderscheid, J. M.
1986-01-01
Anisotropic high-temperature alloys are used to meet the safety and durability requirements of turbine blades for high-pressure turbopumps in reusable space propulsion systems. The applicability to anisotropic components of a simplified inelastic structural analysis procedure developed at the NASA Lewis Research Center is assessed. The procedure uses as input the history of the total strain at the critical crack initiation location computed from elastic finite-element analyses. Cyclic heat transfer and structural analyses are performed for the first stage high-pressure fuel turbopump blade of the space shuttle main engine. The blade alloy is directionally solidified MAR-M 246 (nickel base). The analyses are based on a typical test stand engine cycle. Stress-strain histories for the airfoil critical location are computed using both the MARC nonlinear finite-element computer code and the simplified procedure. Additional cases are analyzed in which the material yield strength is arbitrarily reduced to increase the plastic strains and, therefore, the severity of the problem. Good agreement is shown between the predicted stress-strain solutions from the two methods. The simplified analysis uses about 0.02 percent (5 percent with the required elastic finite-element analyses) of the CPU time used by the nonlinear finite element analysis.
A simplified technique of performing splenorenal shunt (Omar's technique).
Shah, Omar Javed; Robbani, Irfan
2005-01-01
The splenorenal shunt procedure introduced by Robert Linton in 1947 is still used today in those regions of the world where portal hypertension is a common problem. However, because most surgeons find Linton's shunt procedure technically difficult, we felt that a simpler technique was needed. We present the surgical details and results of 20 splenorenal anastomosis procedures performed within a period of 30 months. Half of the patients (Group I) underwent Linton's conventional technique of splenorenal shunt; the other half (Group II) underwent a newly devised, simplified shunt technique. This new technique involves dissection of the fusion fascia of Toldt. The outcome of the 2 techniques was identical with respect to the reduction of preshunt portal pressure. However, our simplified technique was advantageous in that it significantly reduced the duration of surgery (P <0.001) and the amount of intraoperative blood loss (P <0.003). No patient died after either operation. Although Linton's splenorenal shunt is difficult and technically demanding, it is still routinely performed. The new technique described here, in addition to being simpler, helps achieve good vascular control, permits easier dissection of the splenic vein, enables an ideal anastomosis, decreases intraoperative blood loss, and reduces the duration of surgery. Therefore, we recommend the routine use of this simplified technique (Omar's technique) for the surgical treatment of portal hypertension. PMID:16429901
Food additives are substances that become part of a food product when they are added during the processing or making of that food. "Direct" food additives are often added during processing to: Add nutrients ...
Gaining Algorithmic Insight through Simplifying Constraints.
ERIC Educational Resources Information Center
Ginat, David
2002-01-01
Discusses algorithmic problem solving in computer science education, particularly algorithmic insight, and focuses on the relevance and effectiveness of the heuristic simplifying constraints which involves simplification of a given problem to a problem in which constraints are imposed on the input data. Presents three examples involving…
Simplified modeling for infiltration and radon entry
Sherman, M.H.
1992-08-01
Air leakage in the envelopes of residential buildings is the primary mechanism for providing ventilation to those buildings. For radon, the same mechanisms that drive the ventilation drive the radon entry. This paper attempts to provide a simplified physical model that can be used to understand the interactions between the building leakage distribution, the forces that drive infiltration and ventilation, and indoor radon concentrations. Combining ventilation and entry modeling allows an estimate of radon concentration and exposure to be made and demonstrates how changes in the envelope or ventilation system would affect it. This paper develops simplified modeling approaches for estimating both the ventilation rate and the radon entry rate based on the air tightness of the envelope and the driving forces. These approaches use conventional leakage values (i.e., effective leakage area) to quantify the air tightness and include natural and mechanical driving forces. The paper introduces a simplified parameter, the radon leakage area, that quantifies the resistance to radon entry. To be practical for dwellings, modeling of occupant exposures to indoor pollutants must be simple to use and must not require unreasonable input data. This paper presents the derivation of the simplified physical model and applies it to representative situations to explore the tendencies to be expected under different circumstances.
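The kind of combined infiltration/entry balance this abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's model: the function names, the orifice-flow approximation for the effective leakage area (ELA), and all parameter values are assumptions made here for demonstration.

```python
# Hypothetical sketch of a combined infiltration / radon-entry balance.
# All names and values are illustrative, not taken from the paper.

def ventilation_rate(ela_m2, stack_pa, wind_pa, fan_m3s=0.0):
    """Crude infiltration estimate: orifice flow through the effective
    leakage area driven by stack + wind pressure, plus any fan flow."""
    RHO_AIR = 1.2  # air density, kg/m^3
    dp = stack_pa + wind_pa
    q_natural = ela_m2 * (2.0 * dp / RHO_AIR) ** 0.5  # m^3/s
    return q_natural + fan_m3s

def indoor_radon(entry_rate_bq_s, q_m3s):
    """Steady-state well-mixed balance: entry = removal by ventilation,
    so the concentration is simply C = E / Q."""
    return entry_rate_bq_s / q_m3s  # Bq/m^3

q = ventilation_rate(ela_m2=0.05, stack_pa=2.0, wind_pa=1.0)
c = indoor_radon(entry_rate_bq_s=5.0, q_m3s=q)
```

Even this toy shows the abstract's point: tightening the envelope (smaller ELA) lowers Q and raises the radon concentration for a fixed entry rate, which is why ventilation and entry must be modeled together.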
Simplified Tutorial Programming for Interactive CAI.
ERIC Educational Resources Information Center
Jelden, D. L.
A validated instructional model generated on a large mainframe computer by the military was modified to a microcomputer format for use in programming tutorial computer assisted instruction (CAI) materials, and a simplified, compatible system of generating programs was identified--CP/M and MP/M from Digital Research Corporation. In order to…
Simplified Fabrication of Helical Copper Antennas
NASA Technical Reports Server (NTRS)
Petro, Andrew
2006-01-01
A simplified technique has been devised for fabricating helical antennas for use in experiments on radio-frequency generation and acceleration of plasmas. These antennas are typically made of copper (for electrical conductivity) and must have a specific helical shape and precise diameter.
Simplifying Data. USMES Beginning "How To" Set.
ERIC Educational Resources Information Center
Agro, Sally; And Others
In this set of three booklets on simplifying data, primary grade students learn how to round off data and to find the median and average from sets of data. The major emphasis in all Unified Sciences and Mathematics for Elementary Schools (USMES) units is on open-ended, long-range investigations of real problems. In most instances students learn…
Simplifying Data. USMES Intermediate "How To" Set.
ERIC Educational Resources Information Center
Agro, Sally; And Others
In this set of six booklets on simplifying data, intermediate grade students learn how to tell what data show, find the median/mean/mode from sets of data, find different kinds of ranges, and use key numbers to compare two sets of data. The major emphasis in all Unified Sciences and Mathematics for Elementary Schools (USMES) units is on…
Simplified Recipes for Day Care Centers.
ERIC Educational Resources Information Center
Asmussen, Patricia D.
The spiral-bound collection of 156 simplified recipes is designed to help those who prepare food for groups of children at day care centers. The recipes provide for 25 child-size servings to meet the nutritional needs and appetites of children from 2 to 6 years of age. The first section gives general information on ladle and scoop sizes, weights…
Simplified procedures for designing composite bolted joints
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1988-01-01
Simplified procedures are described to design and analyze single and multi-bolt composite joints. Numerical examples illustrate the use of these methods. Factors affecting composite bolted joints are summarized. References are cited where more detailed discussion is presented on specific aspects of composite bolted joints. Design variables associated with these joints are summarized in the appendix.
Simplified Aid For Crew Rescue (SAFR)
NASA Technical Reports Server (NTRS)
Fisher, H. Thomas
1990-01-01
Viewgraphs and discussion of a Crew Emergency Rescue System (CERS) are presented. Topics covered include: functional description; operational description; interfaces with other subsystems/elements; simplified aid for crew rescue (SACR) characteristics; potential resource requirements; logistics, repair, and resupply; potential performance improvements; and automation impact.
Spencer, Michael
1974-01-01
Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857
A VLSI architecture for simplified arithmetic Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.
1992-01-01
The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.
Simplifying CEA through Excel, VBA, and Subeq
NASA Technical Reports Server (NTRS)
Foster, Ryan
2004-01-01
Many people use chemical equilibrium programs for very different purposes, from refrigerators to light bulbs to rockets. A commonly used equilibrium program is CEA. CEA can take various inputs such as pressure, temperature, and volume along with numerous reactants and run them through equilibrium equations to obtain valuable output information, including the products formed and their relative amounts. A little over a year ago, Bonnie McBride created the program subeq with the goal of simplifying the calling of CEA. Subeq was also designed to be called by other programs, including Excel, through the use of Visual Basic for Applications (VBA). The largest advantage of using Excel is that it allows the user to input the information in a colorful and user-friendly environment while allowing VBA to run subeq, which is in the form of a FORTRAN DLL (Dynamic Link Library). Calling subeq in this form makes it much faster than if it were converted to VBA. Since subeq requires such large lists of reactant and product names, none of which can be passed in as an array, subeq had to be changed to accept very long strings of reactants and products. To pass this string and adjust the transfer of input and output parameters, the subeq DLL had to be changed. One program that does this is Compaq Visual FORTRAN, which allows DLLs to be edited, debugged, and compiled. Compaq Visual FORTRAN uses FORTRAN 90/95, which has additional features beyond those of FORTRAN 77. My goals this summer include finishing the Excel spreadsheet for subeq, which I started last summer, and putting it on the Internet so that others can use it without having to download my spreadsheet. To finish the spreadsheet I will need to work on debugging current options and problems. I will also work on making it as robust as possible, so that all errors that may arise will be clearly communicated to the user. New features will be added and old ones changed as I receive comments from people using the spreadsheet.
The steady-state assumption in oscillating and growing systems.
Reimers, Alexandra-M; Reimers, Arne C
2016-10-01
The steady-state assumption, which states that the production and consumption of metabolites inside the cell are balanced, is one of the key aspects that makes an efficient analysis of genome-scale metabolic networks possible. It can be motivated from two different perspectives. In the time-scales perspective, we use the fact that metabolism is much faster than other cellular processes such as gene expression. Hence, the steady-state assumption is derived as a quasi-steady-state approximation of the metabolism that adapts to the changing cellular conditions. In this article we focus on the second perspective, stating that on the long run no metabolite can accumulate or deplete. In contrast to the first perspective it is not immediately clear how this perspective can be captured mathematically and what assumptions are required to obtain the steady-state condition. By presenting a mathematical framework based on the second perspective we demonstrate that the assumption of steady-state also applies to oscillating and growing systems without requiring quasi-steady-state at any time point. However, we also show that the average concentrations may not be compatible with the average fluxes. In summary, we establish a mathematical foundation for the steady-state assumption for long time periods that justifies its successful use in many applications. Furthermore, this mathematical foundation also pinpoints unintuitive effects in the integration of metabolite concentrations using nonlinear constraints into steady-state models for long time periods. PMID:27363728
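The long-run argument summarized in this abstract can be written out in generic notation (the symbols below are standard conventions, not taken from the article): with stoichiometric matrix $S$, flux vector $v(t)$, and metabolite concentration vector $c(t)$,

```latex
\frac{d c(t)}{dt} = S\, v(t)
\quad\Longrightarrow\quad
\frac{c(T) - c(0)}{T} = S\, \bar v_T,
\qquad
\bar v_T = \frac{1}{T}\int_0^T v(t)\, dt .
```

If the concentrations stay bounded (no metabolite accumulates or depletes without limit), the left-hand side vanishes as $T \to \infty$, giving the steady-state condition $S\,\bar v = 0$ for the time-averaged fluxes — even if $v(t)$ oscillates at every instant, which is exactly the sense in which the assumption survives for oscillating systems.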
Why is it Doing That? - Assumptions about the FMS
NASA Technical Reports Server (NTRS)
Feary, Michael; Immanuel, Barshi; Null, Cynthia H. (Technical Monitor)
1998-01-01
In the glass cockpit, it's not uncommon to hear exclamations such as "why is it doing that?". Sometimes pilots ask "what were they thinking when they set it this way?" or "why doesn't it tell me what it's going to do next?". Pilots may hold a conceptual model of the automation that is the result of fleet lore, which may or may not be consistent with what the engineers had in mind. But what did the engineers have in mind? In this study, we present some of the underlying assumptions surrounding the glass cockpit. Engineers and designers make assumptions about the nature of the flight task; at the other end, instructor and line pilots make assumptions about how the automation works and how it was intended to be used. These underlying assumptions are seldom recognized or acknowledged. This study is an attempt to explicitly articulate such assumptions to better inform design and training developments. This work is part of a larger project to support training strategies for automation.
Development of a simplified biofilm model
NASA Astrophysics Data System (ADS)
Sarkar, Sushovan; Mazumder, Debabrata
2015-11-01
A simplified approach for analyzing the biofilm process and deriving a tractable model is presented. This simplified biofilm model formulated correlations between the substrate concentration in the influent/effluent and at the biofilm-liquid interface, along with the substrate flux and biofilm thickness. The model considers external mass transport according to Fick's law and steady-state substrate and biomass balances for the attached-growth microorganisms. Substrate utilization follows Monod growth kinetics, with boundary conditions imposed at the liquid-biofilm interface and at the attachment surface. The equations were solved numerically using the Runge-Kutta method, and an integrated computer program was developed accordingly. The model has been successfully applied in a distinct set of trials over a range of representative input variables. The model's performance was compared with existing methods, and it was found to be an easy, accurate method that can be used for the process design of biofilm reactors.
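The numerical core the abstract describes — a steady-state diffusion-reaction balance with Monod kinetics, integrated by Runge-Kutta — can be sketched as follows. This is not the authors' program: the governing form D·d²S/dz² = kXS/(Ks+S), the zero-flux condition at the attachment surface, and every parameter value are assumptions made here for illustration.

```python
# Illustrative sketch: classical RK4 integration of the steady-state
# biofilm substrate profile, D * d2S/dz2 = k * X * S / (Ks + S),
# from the attachment surface (zero flux) to the biofilm-liquid interface.
# Parameter values are made-up placeholders.

def monod_rhs(state, D, k, X, Ks):
    S, dS = state  # substrate concentration and its spatial gradient
    return (dS, k * X * S / (Ks + S) / D)

def rk4_profile(S_wall, L, n, D=1e-9, k=4e-5, X=10.0, Ks=0.01):
    h = L / n
    S, dS = S_wall, 0.0            # zero flux at the attachment surface
    for _ in range(n):
        k1 = monod_rhs((S, dS), D, k, X, Ks)
        k2 = monod_rhs((S + h/2*k1[0], dS + h/2*k1[1]), D, k, X, Ks)
        k3 = monod_rhs((S + h/2*k2[0], dS + h/2*k2[1]), D, k, X, Ks)
        k4 = monod_rhs((S + h*k3[0], dS + h*k3[1]), D, k, X, Ks)
        S  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dS += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return S, D * dS               # interface concentration and flux J = D dS/dz

S_if, flux = rk4_profile(S_wall=0.002, L=1e-4, n=100)
```

In a full model such as the paper's, the wall concentration would be adjusted (e.g., by shooting) until the interface values match the external mass-transfer correlation; here a single forward pass just illustrates the Runge-Kutta machinery.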
Simplified models of mixed dark matter
Cheung, Clifford; Sanford, David E-mail: dsanford@caltech.edu
2014-02-01
We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify ''blind spots'' at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors.
Two simplified procedures for predicting cyclic material response from a strain history
NASA Technical Reports Server (NTRS)
Kaufman, A.; Moreno, V.
1985-01-01
Simplified inelastic analysis procedures were developed at NASA Lewis and Pratt & Whitney Aircraft for predicting the stress-strain response at the critical location of a thermomechanically cycled structure. These procedures are intended primarily for use as economical structural analysis tools in the early design stages of aircraft engine hot section components where nonlinear finite-element analyses would be prohibitively expensive. Both simplified methods use as input the total strain history calculated from a linear elastic analysis. The elastic results are modified to approximate the characteristics of the inelastic cycle by incremental solution techniques. A von Mises yield criterion is used to determine the onset of active plasticity. The fundamental assumption of these methods is that the inelastic strain is local and constrained from redistribution by the surrounding elastic material.
A simplified solar cell array modelling program
NASA Technical Reports Server (NTRS)
Hughes, R. D.
1982-01-01
As part of the energy conversion/self sufficiency efforts of DSN engineering, it was necessary to have a simplified computer model of a solar photovoltaic (PV) system. This article describes the analysis and simplifications employed in the development of a PV cell array computer model. The analysis of the incident solar radiation, steady state cell temperature and the current-voltage characteristics of a cell array are discussed. A sample cell array was modelled and the results are presented.
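A current-voltage characteristic of the kind such a simplified array model rests on can be sketched with the textbook single-diode equation. This is a generic illustration, not the article's model: series and shunt resistances are neglected, and the parameter names and values are assumptions chosen here.

```python
# Minimal single-diode cell I-V sketch: I = Iph - I0 * (exp(V/ (n*k*T/q)) - 1).
# Series/shunt resistance neglected; all parameter values are illustrative.
import math

def cell_current(v, i_ph=3.0, i_0=1e-9, n=1.3, t_k=298.0):
    K_OVER_Q = 8.617e-5          # Boltzmann constant / charge, volts per kelvin
    vt = n * K_OVER_Q * t_k      # thermal voltage scaled by ideality factor
    return i_ph - i_0 * (math.exp(v / vt) - 1.0)

# Scan the curve to locate the maximum power point of one cell.
v_mp, p_mp = max(((v, v * cell_current(v))
                  for v in (i * 0.005 for i in range(140))),
                 key=lambda t: t[1])
```

Summing such cell curves over series/parallel strings, and making the photocurrent and temperature depend on incident irradiation, gives the array-level model the article simplifies.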
Simplified robot arm dynamics for control
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Paul, R. P.
1981-01-01
A brief summary and evaluation is presented on the use of symbolic state equation techniques in order to represent robot arm dynamics with sufficient accuracy for controlling arm motion. The use of homogeneous transformations and the Lagrangian formulation of mechanics offers a convenient frame for the derivation, analysis and simplification of complex robot dynamics equations. It is pointed out that simplified state equations can represent robot arm dynamics with good accuracy.
Simplified dichromated gelatin hologram recording process
NASA Technical Reports Server (NTRS)
Georgekutty, Tharayil G.; Liu, Hua-Kuang
1987-01-01
A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and it requires a period of processing time comparable with that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.
Simplified Linear Multivariable Control Of Robots
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1989-01-01
Simplified method developed to design control system that makes joints of robot follow reference trajectories. Generic design includes independent multivariable feedforward and feedback controllers. Feedforward controller based on inverse of linearized model of dynamics of robot and implements control law that contains only proportional and first and second derivatives of reference trajectories with respect to time. Feedback controller, which implements control law of proportional, first-derivative, and integral terms, makes tracking errors converge toward zero as time passes.
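The feedforward-plus-feedback structure described above can be illustrated on a toy one-joint linear model. This is a sketch under stated assumptions, not the paper's controller: the joint is modeled as a pure double integrator with inertia J, the feedforward is the inverse of that (assumed) model, and all gains are illustrative.

```python
# Toy 1-DOF sketch of feedforward (inverse linear model) plus
# proportional / derivative / integral feedback, tracking a sine.
# Plant: J * qdd = u. All parameter values are assumptions.
import math

J, DT = 2.0, 0.001                      # inertia, time step
KP, KD, KI = 400.0, 40.0, 100.0         # feedback gains

q = qd = e_int = 0.0
max_err = 0.0
for step in range(5000):                # 5 s of simulated time
    t = step * DT
    q_ref, qd_ref, qdd_ref = math.sin(t), math.cos(t), -math.sin(t)
    e = q_ref - q
    e_int += e * DT
    # feedforward from the inverse model, plus P/D/I feedback terms
    u = J * qdd_ref + KP * e + KD * (qd_ref - qd) + KI * e_int
    qd += (u / J) * DT                  # semi-implicit Euler on the joint
    q  += qd * DT
    if t > 1.0:
        max_err = max(max_err, abs(e))
```

With an accurate inverse model the feedforward does most of the work, and the feedback only has to drive the residual tracking error toward zero — the property the abstract attributes to the generic design.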
Transformation in Reverse: Naive Assumptions of an Urban Educator
ERIC Educational Resources Information Center
Hagiwara, Sumi; Wray, Susan
2009-01-01
The complexity of urban contexts is often subsumed into generalizations and deficit assumptions of urban communities and its members by those unfamiliar with urban culture. This is especially true for teachers seeking work in urban schools. This article addresses the complex interpretations of urban through the lens of a White male graduate…
7 CFR 4287.134 - Transfer and assumption.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 15 2012-01-01 2012-01-01 false Transfer and assumption. 4287.134 Section 4287.134 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE SERVICING Servicing Business and Industry...
7 CFR 4287.134 - Transfer and assumption.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 15 2013-01-01 2013-01-01 false Transfer and assumption. 4287.134 Section 4287.134 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE SERVICING Servicing Business and Industry...
7 CFR 4287.134 - Transfer and assumption.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 15 2014-01-01 2014-01-01 false Transfer and assumption. 4287.134 Section 4287.134 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE SERVICING Servicing Business and Industry...
7 CFR 4287.134 - Transfer and assumption.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 15 2011-01-01 2011-01-01 false Transfer and assumption. 4287.134 Section 4287.134 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE SERVICING Servicing Business and Industry...
Sensitivity Analysis for Hierarchical Models Employing "t" Level-1 Assumptions.
ERIC Educational Resources Information Center
Seltzer, Michael; Novak, John; Choi, Kilchan; Lim, Nelson
2002-01-01
Examines the ways in which level-1 outliers can impact the estimation of fixed effects and random effects in hierarchical models (HMs). Also outlines and illustrates the use of Markov Chain Monte Carlo algorithms for conducting sensitivity analyses under "t" level-1 assumptions, including algorithms for settings in which the degrees of freedom at…
46 CFR 172.087 - Cargo loading assumptions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 7 2014-10-01 2014-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...
46 CFR 172.087 - Cargo loading assumptions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 7 2012-10-01 2012-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...
46 CFR 172.087 - Cargo loading assumptions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 7 2013-10-01 2013-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...
46 CFR 172.087 - Cargo loading assumptions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 7 2010-10-01 2010-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...
46 CFR 172.087 - Cargo loading assumptions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 7 2011-10-01 2011-10-01 false Cargo loading assumptions. 172.087 Section 172.087 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO BULK CARGOES Special Rules Pertaining to a Barge That Carries a Hazardous Liquid...
On the "Independence of Trials-Assumption" in Geometric Distribution
ERIC Educational Resources Information Center
Al-Saleh, Mohammad Fraiwan
2008-01-01
In this note, it is shown through an example that the assumption of the independence of Bernoulli trials in the geometric experiment may unexpectedly not be satisfied. The example can serve as a suitable and useful classroom activity for students in introductory probability courses.
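The textbook setting the note questions can be made concrete with a short simulation: when the Bernoulli trials really are independent, the number of trials to the first success matches P(X = k) = (1-p)^(k-1)·p. The parameter values and seed below are illustrative choices.

```python
# Simulate a geometric experiment built from genuinely independent
# Bernoulli(p) trials and compare with the textbook PMF. Illustrative only.
import random

def geometric_trial(p, rng):
    """Count independent Bernoulli(p) trials up to the first success."""
    k = 1
    while rng.random() >= p:
        k += 1
    return k

rng = random.Random(0)
p, n = 0.3, 200_000
counts = {}
for _ in range(n):
    k = geometric_trial(p, rng)
    counts[k] = counts.get(k, 0) + 1

empirical_p1 = counts[1] / n      # should be close to p
theoretical_p2 = (1 - p) * p      # PMF value for k = 2
```

The note's point is that when the independence assumption fails (e.g., trials that influence one another), the empirical frequencies produced this way would no longer match the geometric PMF.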
Assumptions Underlying the Identification of Gifted and Talented Students
ERIC Educational Resources Information Center
Brown, Scott W.; Renzuli, Joseph S.; Gubbins, E. Jean; Siegle, Del; Zhang, Wanli; Chen, Ching-Hui
2005-01-01
This study examined a national sample of classroom teachers, teachers of the gifted, administrators, and consultants from rural, suburban, and urban areas regarding their assumptions about the gifted identification process. Respondents indicated the degree to which they agreed or disagreed with 20 items that reflected guidelines for a…
The quantum formulation derived from assumptions of epistemic processes
NASA Astrophysics Data System (ADS)
Helland, Inge S.
2015-04-01
Motivated by Quantum Bayesianism I give background for a general epistemic approach to quantum mechanics, where complementarity and symmetry are the only essential features. A general definition of a symmetric epistemic setting is introduced, and for this setting the basic Hilbert space formalism is arrived at under certain technical assumptions. Other aspects of ordinary quantum mechanics will be developed from the same basis elsewhere.
Educational Expansion in Ghana: Economic Assumptions and Expectations
ERIC Educational Resources Information Center
Rolleston, Caine; Oketch, Moses
2008-01-01
The neo-classical "human capital theory" continues to be invoked as part of the rationale for educational expansion in the developing world. While the theory provides a route from educational inputs to economic outputs in terms of increased incomes and standards of living, the route is contingent and relies upon a number of key assumptions. This…
7 CFR 1980.476 - Transfer and assumptions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 14 2011-01-01 2011-01-01 false Transfer and assumptions. 1980.476 Section 1980.476 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS-COOPERATIVE SERVICE, RURAL UTILITIES SERVICE, AND FARM SERVICE AGENCY, DEPARTMENT OF AGRICULTURE (CONTINUED) PROGRAM REGULATIONS...
7 CFR 1980.366 - Transfer and assumption.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 14 2013-01-01 2013-01-01 false Transfer and assumption. 1980.366 Section 1980.366 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS-COOPERATIVE SERVICE, RURAL UTILITIES SERVICE, AND FARM SERVICE AGENCY, DEPARTMENT OF AGRICULTURE (CONTINUED) PROGRAM REGULATIONS...
7 CFR 1980.366 - Transfer and assumption.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 14 2012-01-01 2012-01-01 false Transfer and assumption. 1980.366 Section 1980.366 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS-COOPERATIVE SERVICE, RURAL UTILITIES SERVICE, AND FARM SERVICE AGENCY, DEPARTMENT OF AGRICULTURE (CONTINUED) PROGRAM REGULATIONS...
14 CFR 27.473 - Ground loading conditions and assumptions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Ground loading conditions and assumptions. 27.473 Section 27.473 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground...
14 CFR 27.473 - Ground loading conditions and assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Ground loading conditions and assumptions. 27.473 Section 27.473 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Ground...
40 CFR 264.150 - State assumption of responsibility.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 26 2011-07-01 2011-07-01 false State assumption of responsibility. 264.150 Section 264.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE TREATMENT, STORAGE, AND...
40 CFR 264.150 - State assumption of responsibility.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility. 264.150 Section 264.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE TREATMENT, STORAGE, AND...
Errors in surface irrigation evaluation from incorrect model assumptions
Technology Transfer Automated Retrieval System (TEKTRAN)
Some two-dozen methods have been proposed in the literature for estimating an infiltration function from field measurements. These methods vary in their data requirements and analytical rigor, however most assume some functional form of the infiltration equations. The assumptions regarding the influ...
Viruses, Murphy's Law, and the Dangers of Assumptions....
ERIC Educational Resources Information Center
Lester, Dan
1999-01-01
An experienced library technology manager relates what happened in the wake of a serious library computer virus attack, which he accidentally unleashed. The narrative describes the combination of coincidences, mistakes, assumptions, and delays that caused the incident, and outlines the 10 key lessons learned. (AEF)
Philosophical Assumptions and Contemporary Research Perspectives: A Course Supplement.
ERIC Educational Resources Information Center
Fowler, Gene D.
To supplement course materials for classes in communication theory and research methods, this paper compares philosophical assumptions underlying three approaches to communication research: scientific, which stresses quantitative methods of analysis; humanistic, which encompasses many conflicting techniques but has as a common element--the…
The Assumptive World of Three State Policy Researchers.
ERIC Educational Resources Information Center
Sroufe, Gerald E.
1985-01-01
A critique of a research study regarding policy formation at the state level is presented, focusing on the "assumptive world" of the researchers. While the researchers have created a new vista for study in this area, there is a great need for improved methodology. (CB)
Ten Frequent Assumptions of Cultural Bias in Counseling.
ERIC Educational Resources Information Center
Pedersen, Paul
1987-01-01
Identifies 10 of the most frequently encountered examples of cultural bias that consistently emerge in the literature about multicultural counseling and development. Assumptions are described in the areas of normal behavior, individualism, limits of academic disciplines, dependence on abstract words, independence, client support systems, linear…
Spatial Angular Compounding for Elastography without the Incompressibility Assumption
Rao, Min; Varghese, Tomy
2007-01-01
Spatial-angular compounding is a new technique that enables the reduction of noise artifacts in ultrasound elastography. Previous results using spatial angular compounding, however, were based on the use of the tissue incompressibility assumption. Compounded elastograms were obtained from a spatially-weighted average of local strain estimated from radiofrequency echo signals acquired at different insonification angles. In this paper, we present a new method for reducing the noise artifacts in the axial strain elastogram utilizing a least-squares approach on the angular displacement estimates that does not use the incompressibility assumption. This method produces axial strain elastograms with higher image quality, compared to noncompounded axial strain elastograms, and is referred to as the least-squares angular-compounding approach for elastography. To distinguish between these two angular compounding methods, the spatial-angular compounding with angular weighting based on the tissue incompressibility assumption is referred to as weighted compounding. In this paper, we compare the performance of the two angular-compounding techniques for elastography using beam steering on a linear-array transducer. Quantitative experimental results demonstrate that least-squares compounding provides comparable but smaller improvements in both the elastographic signal-to-noise ratio and the contrast-to-noise ratio, as compared to the weighted-compounding method. Ultrasound simulation results suggest that the least-squares compounding method performs better and provides accurate and robust results when compared to the weighted compounding method, in the case where the incompressibility assumption does not hold. PMID:16761786
Evaluation of assumptions in soil moisture triple collocation analysis
Technology Transfer Automated Retrieval System (TEKTRAN)
Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...
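The covariance formulation of triple collocation lends itself to a compact sketch. The function name, synthetic signal, and noise scales below are illustrative assumptions, not details from the abstract:

```python
import numpy as np

def tc_error_variances(x, y, z):
    """Covariance-notation triple collocation: error variances of three
    mutually independent estimates of the same geophysical variable."""
    c = np.cov(np.vstack([x, y, z]))  # 3x3 sample covariance matrix
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex, ey, ez

# Synthetic check: one common signal plus mutually independent noise,
# so the TC assumptions hold by construction.
rng = np.random.default_rng(0)
truth = rng.normal(size=100_000)
x = truth + rng.normal(scale=0.1, size=truth.size)
y = truth + rng.normal(scale=0.2, size=truth.size)
z = truth + rng.normal(scale=0.3, size=truth.size)
print(tc_error_variances(x, y, z))  # close to (0.01, 0.04, 0.09)
```

If the error-independence or orthogonality assumptions the abstract mentions are violated (e.g., cross-correlated errors between two of the products), these estimates become biased.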
Making Predictions about Chemical Reactivity: Assumptions and Heuristics
ERIC Educational Resources Information Center
Maeyer, Jenine; Talanquer, Vicente
2013-01-01
Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students'…
Challenging Our Assumptions: Helping a Baby Adjust to Center Care.
ERIC Educational Resources Information Center
Elliot, Enid
2003-01-01
Contends that assumptions concerning infants' adjustment to child center care need to be tempered with attention to observation, thought, and commitment to each individual baby. Describes the Options Daycare program for pregnant teens and young mothers. Presents a case study illustrating the need for openness in strategy and planning for…
Woman's Moral Development in Search of Philosophical Assumptions.
ERIC Educational Resources Information Center
Sichel, Betty A.
1985-01-01
Examined is Carol Gilligan's thesis that men and women use different moral languages to resolve moral dilemmas, i.e., women speak a language of caring and responsibility, and men speak a language of rights and justice. Her thesis is not grounded with adequate philosophical assumptions. (Author/RM)
Assessing and Developing the Concept of Assumptions in Science Teachers.
ERIC Educational Resources Information Center
Yip, Din Yan
2001-01-01
Describes a method using small group and whole class discussions with guiding questions to enable teachers to construct successfully the concept of assumptions and develop a better appreciation of the nature and limitations of the process of scientific inquiry. (Author/SAH)
40 CFR 261.150 - State assumption of responsibility.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility. 261.150 Section 261.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Financial Requirements for Management...
Assessment of Complex Performances: Limitations of Key Measurement Assumptions.
ERIC Educational Resources Information Center
Delandshere, Ginette; Petrosky, Anthony R.
1998-01-01
Examines measurement concepts and assumptions traditionally used in educational assessment, using the Early Adolescence/English Language Arts assessment developed for the National Board for Professional Teaching Standards as a context. The use of numerical ratings in complex performance assessment is questioned. (SLD)
Male and Female Assumptions About Colleagues' Views of Their Competence.
ERIC Educational Resources Information Center
Heilman, Madeline E.; Kram, Kathy E.
1983-01-01
Compared the assumptions of 100 male and female employees about colleagues' views of their performance on a joint task. Results indicated women anticipated more blame for a joint failure, less credit for a joint success, and a work image of lesser effectiveness, regardless of the co-worker's sex. (JAC)
Quantum cryptography in real-life applications: Assumptions and security
NASA Astrophysics Data System (ADS)
Zhao, Yi
Quantum cryptography, or quantum key distribution (QKD), provides a means of unconditionally secure communication. The security is in principle based on the fundamental laws of physics. Security proofs show that if quantum cryptography is appropriately implemented, even the most powerful eavesdropper cannot decrypt the message from a cipher. The implementations of quantum crypto-systems in real life may not fully comply with the assumptions made in the security proofs. Such discrepancy between the experiment and the theory can be fatal to the security of a QKD system. In this thesis we address a number of these discrepancies. A perfect single-photon source is often assumed in many security proofs. However, a weak coherent source is widely used in a real-life QKD implementation. Decoy state protocols have been proposed as a novel approach to dramatically improve the performance of a weak coherent source based QKD implementation without jeopardizing its security. Here, we present the first experimental demonstrations of decoy state protocols. Our experimental scheme was later adopted by most decoy state QKD implementations. In the security proof of decoy state protocols as well as many other QKD protocols, it is widely assumed that a sender generates a phase-randomized coherent state. This assumption has been enforced in few implementations. We close this gap in two steps: First, we implement and verify the phase randomization experimentally; second, we prove the security of a QKD implementation without the coherent state assumption. In many security proofs of QKD, it is assumed that all the detectors on the receiver's side have identical detection efficiencies. We show experimentally that this assumption may be violated in a commercial QKD implementation due to an eavesdropper's malicious manipulation. Moreover, we show that the eavesdropper can learn part of the final key shared by the legitimate users as a consequence of this violation of the assumptions.
Tremaine, S C; Mills, A L
1987-12-01
The critical assumptions of the dilution method for estimating grazing rates of microzooplankton were tested by using a community from the sediment-water interface of Lake Anna, Va. Determination of the appropriate computational model was achieved by regression analysis; the exponential model was appropriate for bacterial growth at Lake Anna. The assumption that the change in grazing pressure is linearly proportional to the dilution factor was tested by analysis of variance with a lack-of-fit test. There was a significant (P < 0.0001) linear (P > 0.05) relationship between the dilution factor and time-dependent change in ln bacterial abundance. The assumption that bacterial growth is not altered by possible substrate enrichment in the dilution treatment was tested by amending diluted water with various amounts of dissolved organic carbon (either yeast extract or extracted carbon from lake sediments). Additions of carbon did not significantly alter bacterial growth rates during the incubation period (24 h). On the basis of these results, the assumptions of the dilution method proved to be valid for the system examined. PMID:16347507
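The linearity assumption tested above can be illustrated with a small regression sketch; the rates and dilution levels below are hypothetical numbers, not the Lake Anna data:

```python
import numpy as np

# Apparent (net) growth rate of bacteria, in 1/h, measured at each
# fraction of unfiltered, grazer-containing water (hypothetical values).
dilution_factor = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
apparent_growth = np.array([0.28, 0.26, 0.23, 0.21, 0.19])

# Dilution-method model: mu(D) = k - g*D, so a straight-line fit gives
# the intrinsic growth rate k as the intercept and the grazing rate g
# as the negated slope.
slope, intercept = np.polyfit(dilution_factor, apparent_growth, 1)
g, k = -slope, intercept
print(f"grazing rate g = {g:.3f} 1/h, intrinsic growth k = {k:.3f} 1/h")
```

A lack-of-fit test on this regression, as in the abstract, checks whether grazing pressure really scales linearly with the dilution factor.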
Impact of one-layer assumption on diffuse reflectance spectroscopy of skin
NASA Astrophysics Data System (ADS)
Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.
2015-02-01
Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, a model that relates the reflectance to the tissue properties is required. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.
Provably-secure (Chinese government) SM2 and simplified SM2 key exchange protocols.
Yang, Ang; Nam, Junghyun; Kim, Moonseong; Choo, Kim-Kwang Raymond
2014-01-01
We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863
Simplified Analysis of Pulse Detonation Rocket Engine Blowdown Gasdynamics and Performance
NASA Technical Reports Server (NTRS)
Morris, C. I.; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Pulse detonation rocket engines (PDREs) offer potential performance improvements over conventional designs, but represent a challenging modeling task. A simplified model for an idealized, straight-tube, single-shot PDRE blowdown process and thrust determination is described and implemented. In order to form an assessment of the accuracy of the model, the flowfield time history is compared to experimental data from Stanford University. Parametric studies of the effect of mixture stoichiometry, initial fill temperature, and blowdown pressure ratio on the performance of a PDRE are performed using the model. PDRE performance is also compared with a conventional steady-state rocket engine over a range of pressure ratios using similar gasdynamic assumptions.
Berglund, F
1978-01-01
The use of additives to food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives; Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulsifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutrients, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey when used for artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing or storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; and by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI
Experience with simplified inelastic analysis of piping designed for elevated temperature service
Severud, L.K.
1980-03-01
Screening rules and preliminary design of FFTF piping were developed in 1974 based on expected behavior and engineering judgment, approximate calculations, and a few detailed inelastic analyses of pipelines. This paper provides findings from six additional detailed inelastic analyses with correlations to the simplified analysis screening rules. In addition, simplified analysis methods for treating weldment local stresses and strains as well as fabrication induced flaws are described. Based on the FFTF experience, recommendations for future Code and technology work to reduce design analysis costs are identified.
Simplified quaternary signed-digit arithmetic and its optical implementation
NASA Astrophysics Data System (ADS)
Li, Guoqiang; Liu, Liren; Cheng, Huiquan; Jing, Hongmei
1997-02-01
A simplified two-step quaternary signed-digit addition algorithm is presented. In contrast to the previously reported techniques using a large number of six-variable or four-variable minterms, the proposed algorithm requires only 10 minterms in the first step and 6 minterms in the second step. Furthermore, our scheme uses only two variables for each minterm. Therefore, the information to be stored is greatly reduced and the system complexity is decreased. With a shared-content-addressable memory (SCAM), only one set of minterms needs to be stored, independent of the operand length, and consequently the system size does not increase with the number of operand digits. For optical implementation, an incoherent correlator based SCAM processor unit can be used to perform the two-step addition. The unit is very simple, easy to align and implement, and insensitive to the environment. An experimental result is given.
NASA Astrophysics Data System (ADS)
Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong
2011-12-01
We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to the workplace, going out for leisure activities and returning home. Under the assumptions that the individual travels at a constant speed and spends at least a minimum amount of time at home and at work, we prove that the daily moving area of an individual is an ellipse, and we obtain an exact solution for the gyration radius. The analytical solution captures the empirical observations well.
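An empirical counterpart of the gyration radius discussed above can be computed directly from a day's visited locations. The coordinates and function name below are hypothetical illustrations, not the paper's analytical solution:

```python
import numpy as np

def radius_of_gyration(positions):
    """r_g = sqrt(mean squared distance of visited positions from their
    centroid) -- the standard definition used in mobility statistics."""
    p = np.asarray(positions, dtype=float)
    return float(np.sqrt(((p - p.mean(axis=0)) ** 2).sum(axis=1).mean()))

# Hypothetical home -> work -> leisure -> home day, in km coordinates
day = [(0, 0), (8, 0), (5, 4), (0, 0)]
print(radius_of_gyration(day))
```

The paper's contribution is a closed-form expression for this quantity under its travel-speed and time-budget assumptions; the sketch above merely shows the quantity being solved for.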
NASA Astrophysics Data System (ADS)
You, Bo; Li, Fang
2016-08-01
This paper is concerned with the long-time behaviour of the two-dimensional non-autonomous simplified Ericksen-Leslie system for nematic liquid crystal flows introduced in Lin and Liu (Commun Pure Appl Math, 48:501-537, 1995) with a non-autonomous forcing bulk term and order parameter field boundary conditions. In this paper, we prove the existence of pullback attractors and estimate the upper bound of its fractal dimension under some suitable assumptions.
A Simplified Model of Choice Behavior under Uncertainty
Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu
2016-01-01
The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study aims to modify the Ahn et al. (2008) PU model to a simplified model and used the IGT performance of 145 subjects as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the influence of the parameters α, λ, and A has a hierarchical power structure in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay loss-shift rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715
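The roles of the parameters α and λ can be seen in a minimal sketch of a prospect-type utility function. This generic form and the sample payoffs are illustrative assumptions, not the exact Ahn et al. (2008) specification:

```python
def prospect_utility(x, alpha, lam):
    """Generic prospect-type utility: diminishing sensitivity via the
    exponent alpha, loss aversion via the multiplier lam."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# As alpha approaches zero, every gain compresses toward +1 and every
# loss toward -lam: only the sign of the payoff matters, which is
# consistent with the gain-stay / loss-shift strategy described above.
for alpha in (1.0, 0.5, 0.01):
    print(alpha, prospect_utility(100, alpha, 2.0), prospect_utility(-50, alpha, 2.0))
```

This also makes the reported hierarchy plausible: with α near zero, the fitted values of λ and A have little room left to influence the likelihood.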
NASA Astrophysics Data System (ADS)
Simonenko, V. A.; Gryaznykh, D. A.; Litvinenko, I. A.; Lykov, V. A.; Shushlebin, A. N.
2012-04-01
Some thermonuclear X-ray bursters exhibit a high-frequency (about 300 Hz or more) brightness modulation at the rising phase of some bursts. These oscillations are explained by inhomogeneous heating of the surface layer on a rapidly rotating neutron star due to the finite propagation speed of thermonuclear burning. We suggest and substantiate a mechanism of this propagation that is consistent with experimental data. Initially, thermonuclear ignition occurs in a small region of the neutron star surface layer. The burning products rapidly rise and spread in the upper atmospheric layers due to turbulent convection. The accumulation of additional matter leads to matter compression and ignition at the bottom of the layer. This determines the propagation of the burning front. To substantiate this mechanism, we use the simplifying assumptions about a helium composition of the neutron star atmosphere and its initial adiabatic structure with a density of 1.75 × 108 g cm-3 at the bottom. 2D numerical simulations have been performed using a modified particle method in the adiabatic approximation.
An epidemic model to evaluate the homogeneous mixing assumption
NASA Astrophysics Data System (ADS)
Turnes, P. P.; Monteiro, L. H. A.
2014-11-01
Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
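As a caricature of such ODE models (not the authors' exact three-variable system), a homogeneous-mixing SIS model without immunity can be integrated in a few lines; the rates below are hypothetical:

```python
# Homogeneous-mixing SIS sketch: s + i = 1, and recovered individuals
# return directly to the susceptible class (no immunity conferred).
beta, gamma = 0.5, 0.2   # hypothetical transmission / recovery rates
dt, i = 0.01, 0.01       # forward-Euler step and initial infectious fraction
for _ in range(100_000):  # integrate to t = 1000
    s = 1.0 - i
    i += dt * (beta * s * i - gamma * i)
print(i)  # converges to the endemic equilibrium 1 - gamma/beta = 0.6
```

In this caricature, lowering beta below gamma makes the infection die out, the transcritical bifurcation at the epidemic threshold; richer dynamics such as Hopf bifurcations require the extra structure of the authors' model.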
Systematic Model Building Based on Quark-Lepton Complementarity Assumptions
NASA Astrophysics Data System (ADS)
Winter, Walter
2008-02-01
In this talk, we present a procedure to systematically generate a large number of valid mass matrix textures from very generic assumptions. Compared to plain anarchy arguments, we postulate some structure for the theory, such as a possible connection between quarks and leptons, and a mechanism to generate flavor structure. We illustrate how this parameter space can be used to test the exclusion power of future experiments, and we point out that one can systematically generate embeddings in ZN product flavor symmetry groups.
Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.
Ahmadi Nasab Emran, Shahram
2016-06-01
In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes that they are futile. In this paper I will argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is no possibility, by definition, for a content-full moral discourse among moral strangers. It means that there is circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow for reaching a moral agreement, and concluding that content-full morality is impossible among moral strangers. I argue that assuming traditions as solid and immutable structures that insulate people across their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. As the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of other ways of moral knowledge, besides the foundationalist one. Then, I examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how the assumptions have shaped Engelhardt's critique of the alternatives for the possibility of content-full secular bioethics. PMID:26715286
Simplified Explosive Joining of Tubes to Fittings
NASA Technical Reports Server (NTRS)
Bement, L. J.; Bailey, J. W.; Perry, R.; Finch, M. S.
1987-01-01
Technique simplifies tube-to-fitting joining, as compared to fusion welding, and provides improvement on standard procedures used to join tubes explosively to tube fittings. Special tool inserted into tube to be joined. Tool allows strip of ribbon explosive to be placed right at joint. Ribbon explosive and mild detonating fuse allows use of smaller charge. Assembled tool storable, and process amenable to automation. Assembly of components, insertion of tool into weld site, and joining operation mechanized without human contact. Used to assemble components in nuclear reactors or in other environments hostile to humans.
Simplified stock markets described by number operators
NASA Astrophysics Data System (ADS)
Bagarello, F.
2009-06-01
In this paper we continue our systematic analysis of the operatorial approach previously proposed in an economic context and we discuss a mixed toy model of a simplified stock market, i.e. a model in which the price of the shares is given as an input. We deduce the time evolution of the portfolio of the various traders of the market, as well as of other observable quantities. As in a previous paper, we solve the equations of motion by means of a fixed point like approximation.
Simplified dynamic buckling assessment of steel containments
Farrar, C.R.; Duffey, T.A.; Renick, D.H.
1993-02-01
A simplified, three-degree-of-freedom analytical procedure for performing a response spectrum buckling analysis of a thin containment shell is developed. Two numerical examples with R/t values which bound many existing steel containments are used to illustrate the procedure. The role of damping on incipient buckling acceleration level is evaluated for a regulatory seismic spectrum using the two numerical examples. The zero-period acceleration level that causes incipient buckling in either of the two containments increases 31% when damping is increased from 1% to 4% of critical. Comparisons with finite element results on incipient buckling levels are favorable.
Chronic Meningitis: Simplifying a Diagnostic Challenge.
Baldwin, Kelly; Whiting, Chris
2016-03-01
Chronic meningitis can be a diagnostic dilemma for even the most experienced clinician. Many times, the differential diagnosis is broad and encompasses autoimmune, neoplastic, and infectious etiologies. This review will focus on a general approach to chronic meningitis to simplify the diagnostic challenges many clinicians face. The article will also review the most common etiologies of chronic meningitis in some detail including clinical presentation, diagnostic testing, treatment, and outcomes. By using a case-based approach, we will focus on the key elements of clinical presentation and laboratory analysis that will yield the most rapid and accurate diagnosis in these complicated cases. PMID:26888190
Rudolf Keller
2004-08-10
In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.
Harrup, Mason K; Rollins, Harry W
2013-11-26
An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.
Development of Generation System of Simplified Digital Maps
NASA Astrophysics Data System (ADS)
Uchimura, Keiichi; Kawano, Masato; Tokitsu, Hiroki; Hu, Zhencheng
In recent years, digital maps have been used in a variety of scenarios, including car navigation systems and map information services over the Internet. These digital maps are formed by multiple layers of maps of different scales; the map data most suitable for the specific situation are used. Currently, the production of map data of different scales is done by hand due to constraints related to processing time and accuracy. We conducted research concerning technologies for automatic generation of simplified map data from detailed map data. In the present paper, the authors propose the following: (1) a method to transform data related to streets, rivers, etc. containing widths into line data, (2) a method to eliminate the component points of the data, and (3) a method to eliminate data that lie below a certain threshold. In addition, in order to evaluate the proposed method, a user survey was conducted; in this survey we compared maps generated using the proposed method with the commercially available maps. From the viewpoint of the amount of data reduction and processing time, and on the basis of the results of the survey, we confirmed the effectiveness of the automatic generation of simplified maps using the proposed methods.
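Step (2), eliminating component points, is in the spirit of tolerance-based polyline simplification. A minimal Douglas-Peucker-style sketch is given below; the function and variable names are illustrative, not taken from the paper, and this is only one plausible realisation of such a point-elimination step.

```python
def simplify(points, tol):
    """Recursive Douglas-Peucker line simplification.
    Keeps the endpoints; drops interior points closer than `tol`
    to the chord joining the endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Perpendicular distance of each interior point to the chord
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__)
    if dists[imax] < tol:
        return [points[0], points[-1]]   # all interior points are negligible
    split = imax + 1
    left = simplify(points[:split + 1], tol)
    right = simplify(points[split:], tol)
    return left[:-1] + right             # avoid duplicating the split point

# A nearly straight polyline collapses to its endpoints at a coarse tolerance
line = [(0, 0), (1, 0.01), (2, -0.01), (3, 0)]
coarse = simplify(line, 0.1)
fine = simplify(line, 0.005)
```

At the coarse tolerance the interior wiggle is discarded; at the fine tolerance all four points survive, mirroring how different map scales keep different levels of detail.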
An experimental approach to a simplified model of human birth.
Lehn, Andrea M; Baumer, Alexa; Leftwich, Megan C
2016-07-26
This study presents a simplified experimental model of labor for the study of fetal lie and amniotic fluid properties. It mimics a ventouse (vacuum extraction) delivery to study the effect of amniotic fluid properties on force transfer to a passive fetus. The simplified vacuum delivery consists of a solid ovate spheroid being pulled from a passive, flexible spherical elastic shell filled with fluid. We compare the force necessary to remove the ovate fetus in fluids of varying properties. Additionally, the fetal lie (the angular deviation from maternal/fetal spinal alignment) is changed in 5° intervals and the pullout force is measured. In both the concentric and angled ovate experiments, the force to remove the fetus changes with the properties of the fluid occupying the space between the fetus and the uterus. Increasing the fluid viscosity by 35% decreases the maximum fetal removal force by up to 52.5%. Furthermore, while the force is dominated by the elastic force of the latex uterus, the properties of the amniotic fluid can significantly decrease the total removal force. This study demonstrates that the fluid components of a birth model can significantly alter the forces associated with fetus removal. This suggests that complete studies of human parturition should be designed to include both the material and fluid systems. PMID:26684434
Simplified signal processing for impedance spectroscopy with spectrally sparse sequences
NASA Astrophysics Data System (ADS)
Annus, P.; Land, R.; Reidla, M.; Ojarand, J.; Mughal, Y.; Min, M.
2013-04-01
The classical method for measurement of electrical bio-impedance involves excitation with a sinusoidal waveform. Sinusoidal excitation at fixed frequency points enables a wide variety of signal processing options, the most general of them being the Fourier transform. Multiplication with two quadrature waveforms at the desired frequency can easily be accomplished both in the analogue and in the digital domain; even the simplest quadrature square waves can be considered, which reduces the signal processing task in the analogue domain to synchronous switching followed by a low-pass filter, and in the digital domain requires only additions. So-called spectrally sparse excitation sequences (SSS), which have recently been introduced into the bio-impedance measurement domain, are a very reasonable choice when simultaneous multifrequency excitation is required. They have many good properties, such as ease of generation and a good crest factor compared to similar multisinusoids. So far, the discrete or fast Fourier transform has typically been considered for the signal processing step. Simplified methods would nevertheless reduce the computational burden and enable simpler, less costly, and less energy-hungry signal processing platforms. The accuracy of the measurement with SSS excitation when using different waveforms for quadrature demodulation will be compared in order to evaluate the feasibility of the simplified signal processing. A sigma-delta modulated sinusoid (a binary signal) is considered to be a good alternative for synchronous demodulation.
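The quadrature demodulation idea above can be sketched as follows. The sample rate, test frequency, and signal values are hypothetical, and a simple mean stands in for the low-pass filter; with the square-wave reference, every multiplication degenerates to a sign flip.

```python
import math

def demodulate(signal, fs, f, reference="sine"):
    """Estimate amplitude and phase of the component of `signal` at
    frequency f (Hz, sample rate fs) by multiplying with two quadrature
    references and averaging (a crude low-pass filter).
    reference="square" replaces sin/cos by their signs, so only
    additions (and sign flips) remain."""
    i_sum = q_sum = 0.0
    for k, x in enumerate(signal):
        ph = 2.0 * math.pi * f * k / fs
        i_ref, q_ref = math.cos(ph), math.sin(ph)
        if reference == "square":            # quadrature square waves
            i_ref = math.copysign(1.0, i_ref)
            q_ref = math.copysign(1.0, q_ref)
        i_sum += x * i_ref
        q_sum += x * q_ref
    n = len(signal)
    i, q = 2.0 * i_sum / n, 2.0 * q_sum / n
    return math.hypot(i, q), math.atan2(q, i)

# Hypothetical test signal: a 10 Hz cosine of amplitude 1.5 sampled at 1 kHz
fs, f = 1000, 10
sig = [1.5 * math.cos(2.0 * math.pi * f * k / fs) for k in range(fs)]
amp_sin, _ = demodulate(sig, fs, f, "sine")    # recovers the amplitude directly
amp_sq, _ = demodulate(sig, fs, f, "square")   # scaled by 4/pi
```

Square-wave demodulation of a sinusoid overestimates the amplitude by the fixed factor 4/π (the amplitude of a unit square wave's fundamental); that known scale factor can be calibrated out, so the interesting question, as the abstract notes, is the residual accuracy for SSS excitation.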
Miller, G Edward; Selden, Thomas M
2013-01-01
Objective To estimate 2012 tax expenditures for employer-sponsored insurance (ESI) in the United States and to explore the sensitivity of estimates to assumptions regarding the incidence of employer premium contributions. Data Sources Nationally representative Medical Expenditure Panel Survey data from the 2005–2007 Household Component (MEPS-HC) and the 2009–2010 Insurance Component (MEPS-IC). Study Design We use MEPS-HC workers to construct synthetic workforces for MEPS-IC establishments, applying the workers' marginal tax rates to the establishments' insurance premiums to compute the tax subsidy, in aggregate and by establishment characteristics. Simulation enables us to examine the sensitivity of ESI tax subsidy estimates to a range of scenarios for the within-firm incidence of employer premium contributions when workers have heterogeneous health risks and make heterogeneous plan choices. Principal Findings We simulate the total ESI tax subsidy for all active, civilian U.S. workers to be $257.4 billion in 2012. In the private sector, the subsidy disproportionately flows to workers in large establishments and establishments with predominantly high wage or full-time workforces. The estimates are remarkably robust to alternative incidence assumptions. Conclusions The aggregate value of the ESI tax subsidy and its distribution across firms can be reliably estimated using simplified incidence assumptions. PMID:23398400
Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions
NASA Astrophysics Data System (ADS)
Jiang, W.; Giroux, E.; Roth, H.; Yin, D.
2004-05-01
Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling results.
Spicer, D. S.; Bingham, R.; Harrison, R.
2013-05-01
The fundamental assumptions of conventional solar flare and coronal mass ejection (CME) theory are re-examined. In particular, the common theoretical assumption that magnetic energy that drives flares and CMEs can be stored in situ in the corona with sufficient energy density is found wanting. In addition, the observational constraint that flares and CMEs produce non-thermal electrons with fluxes of order 10^34-10^36 electrons s^-1, with energies of order 10-20 keV, must also be explained. This constraint, when imposed on the "standard model" for flares and CMEs, is found to miss the mark by many orders of magnitude. We suggest, in conclusion, there are really only two possible ways to explain the requirements of observations and theory: flares and CMEs are caused by mass-loaded prominences or driven directly by emerging magnetized flux.
Simplified SBLOCA Analysis of AP1000
Brown, William L.
2004-07-01
The AP1000 is a 1000 MWe advanced nuclear power plant design that uses passive safety features such as a multi-stage, automatic depressurization system (ADS) and gravity-driven, safety injection from core make-up tanks (CMTs) and an in-containment refueling water storage tank (IRWST) to mitigate SBLOCA events. The period of most safety significance for AP1000 during a SBLOCA event is typically associated with the actuation of the fourth stage of the ADS and subsequent transition from CMT to IRWST safety injection. As this period of a SBLOCA is generally of a quasi-steady nature, the integral performance of the AP1000 can be understood and evaluated with a simplified model of the reactor vessel, ADS, and safety injection from the CMTs and IRWST. The simplified model of the AP1000 consists of a series of steady state simulations that uses drift flux in the core region and homogeneous treatment of the core exit region including the ADS flow paths to generate a family of core flow demand curves as a function of system pressure (i.e. mass flow required to satisfy core cooling). These core flow demand curves are plotted against passive safety system supply curves from the CMTs and IRWST to demonstrate the adequacy of the integral performance of the AP1000 during the most important phase of a SBLOCA. (author)
Paleostress inversion: A multi-parametric geomechanical evaluation of the Wallace-Bott assumptions
NASA Astrophysics Data System (ADS)
Lejri, Mostfa; Maerten, Frantz; Maerten, Laurent; Soliva, Roger
2015-08-01
Wallace (1951) and Bott (1959) were the first to introduce the idea that the slip on each fault surface has the same direction and sense as the maximum shear stress resolved on that surface. However, this simplified hypothesis is questionable since fault mechanical interactions may induce slip reorientations. Earlier numerical geomechanical models confirmed that the slickenlines (slip vectors) are not necessarily parallel to the maximum resolved shear stress but are consistent with local stress perturbations. This leads us to ask to what extent the Wallace and Bott simplifications are reliable as a basis hypothesis for stress inversion from fault slip data. Here, a geomechanical multi-parametric study using a 3D boundary element method, covering (i) fault geometries such as intersected faults or corrugated fault surfaces, (ii) the full range of Andersonian states of stress, (iii) fault friction, (iv) fault fluid pressure, (v) the half-space effect and (vi) rock properties, is performed in order to understand the effect of each parameter on the misfit angle between geomechanical slip vectors and the resolved shear stresses. It is shown that significant misfit angles can be found under specific configurations invalidating the Wallace and Bott assumptions, even though fault friction tends to minimize the misfit. We therefore conclude that in such cases, stress inversions based on fault slip data should be interpreted with care.
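The Wallace-Bott prediction itself is a small linear-algebra computation: the predicted slip direction is the direction of maximum resolved shear stress on the fault plane. A minimal sketch follows; the stress values and fault geometry are hypothetical, chosen only to illustrate an Andersonian normal-faulting case.

```python
import numpy as np

def resolved_shear(stress, normal):
    """Unit vector of maximum resolved shear stress on a plane,
    i.e. the slip direction predicted by the Wallace-Bott hypothesis."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    traction = stress @ n                   # traction vector on the plane
    shear = traction - (traction @ n) * n   # subtract the normal component
    return shear / np.linalg.norm(shear)

# Hypothetical Andersonian normal-faulting stress: sigma1 vertical,
# principal axes aligned with x, y, z (compression negative).
sigma = np.diag([-10.0, -30.0, -50.0])
dip = np.radians(60.0)
normal = np.array([0.0, np.sin(dip), np.cos(dip)])  # plane strikes along x
slip = resolved_shear(sigma, normal)
```

For this symmetric configuration the resolved shear points exactly down-dip (pure normal slip); the geomechanical models in the abstract quantify how far real slip vectors deviate from this prediction when faults interact.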
Evaluating risk factor assumptions: a simulation-based approach
2011-01-01
Background Microsimulation models are an important tool for estimating the comparative effectiveness of interventions through prediction of individual-level disease outcomes for a hypothetical population. To estimate the effectiveness of interventions targeted toward high risk groups, the mechanism by which risk factors influence the natural history of disease must be specified. We propose a method for evaluating these risk factor assumptions as part of model-building. Methods We used simulation studies to examine the impact of risk factor assumptions on the relative rate (RR) of colorectal cancer (CRC) incidence and mortality for a cohort with a risk factor compared to a cohort without the risk factor using an extension of the CRC-SPIN model for colorectal cancer. We also compared the impact of changing age at initiation of screening colonoscopy for different risk mechanisms. Results Across CRC-specific risk factor mechanisms, the RR of CRC incidence and mortality decreased (towards one) with increasing age. The rate of change in RRs across age groups depended on both the risk factor mechanism and the strength of the risk factor effect. Increased non-CRC mortality attenuated the effect of CRC-specific risk factors on the RR of CRC when both were present. For each risk factor mechanism, earlier initiation of screening resulted in more life years gained, though the magnitude of life years gained varied across risk mechanisms. Conclusions Simulation studies can provide insight into both the effect of risk factor assumptions on model predictions and the type of data needed to calibrate risk factor models. PMID:21899767
A Comparison of the Free Ride and CISK Assumptions.
NASA Astrophysics Data System (ADS)
Strunge Pedersen, Torben
1991-08-01
In a recent paper Fraedrich and McBride have studied the relation between the `free ride' and CISK (conditional instability of the second kind) assumptions in a well-known two-layer model. Here the comparison is extended to a more general case. For this purpose the free ride and CISK assumptions are compared in linearized models with special emphasis on the small-scale limit. To this end a general solution of the linearized CISK problem is presented. The free ride can be interpreted both as a local and an integral constraint. It is shown within the context of analytic models that the CISK assumption satisfies the integrated free ride in the small-scale limit. However, interpreting the free ride as an integral constraint yields a solution that differs qualitatively from the CISK solution even though both satisfy the required balance. On the other hand, if the free ride is applied locally, the special constraint is obtained, which states that the nondimensional function must be unity at the top of the Ekman layer, and in this case the free ride and CISK solutions become identical in the small-scale limit. From this, it is concluded that the free ride is not identical to CISK, but rather it constitutes a special subset of the CISK solutions. Further, the general CISK solution, which differs from that of the free ride, actually satisfies the local free ride balance except at the lowest levels of the atmosphere. This breakdown of the balance appears to be in accordance with results based on observations.
User assumptions about information retrieval systems: Ethical concerns
Froehlich, T.J.
1994-12-31
Information professionals, whether designers, intermediaries, database producers, or vendors, bear some responsibility for the information that they make available to users of information systems. The users of such systems may tend to make many assumptions about the information that a system provides, such as believing: that the data are comprehensive, current, and accurate; that the information resources or databases have the same degree of quality and consistency of indexing; that the abstracts, if they exist, correctly and adequately reflect the content of the article; that there is consistency in the forms of author names or journal titles or in indexing within and across databases; that there is standardization in and across databases; that once errors are detected, they are corrected; that appropriate choices of databases or information resources are a relatively easy matter; etc. The truth is that few of these assumptions are valid in commercial, corporate, or organizational databases. However, given these beliefs and assumptions by many users, often promoted by information providers, information professionals should intervene where possible to warn users about the limitations and constraints of the databases they are using. With the growth of the Internet and end-user products (e.g., CD-ROMs), such interventions have significantly declined. In such cases, information should be provided on start-up or through interface screens, indicating to users the constraints and orientation of the system they are using. The principle of "caveat emptor" is naive and socially irresponsible: information professionals and systems have an obligation to provide some framework or context for the information that users are accessing.
Systematic Model Building Based on Quark-Lepton Complementarity Assumptions
Winter, Walter
2008-02-21
In this talk, we present a procedure to systematically generate a large number of valid mass matrix textures from very generic assumptions. Compared to plain anarchy arguments, we postulate some structure for the theory, such as a possible connection between quarks and leptons, and a mechanism to generate flavor structure. We illustrate how this parameter space can be used to test the exclusion power of future experiments, and we point out that one can systematically generate embeddings in Z_N product flavor symmetry groups.
Sensitivity of fine sediment source apportionment to mixing model assumptions
NASA Astrophysics Data System (ADS)
Cooper, Richard; Krueger, Tobias; Hiscock, Kevin; Rawlins, Barry
2015-04-01
Mixing models have become increasingly common tools for quantifying fine sediment redistribution in river catchments. The associated uncertainties may be modelled coherently and flexibly within a Bayesian statistical framework (Cooper et al., 2015). However, there is more than one way to represent these uncertainties because the modeller has considerable leeway in making error assumptions and model structural choices. In this presentation, we demonstrate how different mixing model setups can impact upon fine sediment source apportionment estimates via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges and subsurface material) under base flow conditions between August 2012 and August 2013 (Cooper et al., 2014). Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ~76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing prior parameter distributions, inclusion of covariance terms, incorporation of time-variant distributions and methods of proportion characterisation. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup and between a Bayesian and a popular Least Squares optimisation approach. Our OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon fine sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model setup prior to conducting fine sediment source apportionment investigations
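As a point of reference for the model structures compared above, the Least Squares optimisation variant of a sediment mixing model can be sketched as a constrained unmixing problem. The tracer signatures and mixture values below are hypothetical, not from the River Blackwater dataset, and the non-negativity handling is deliberately crude.

```python
import numpy as np

def apportion(sources, mixture):
    """Least-squares source apportionment: find non-negative source
    proportions, summing to one, whose mix of tracer signatures best
    matches the observed mixture signature."""
    S = np.asarray(sources, float).T              # rows: tracers, cols: sources
    m = np.asarray(mixture, float)
    w = 1e3                                       # weight on the sum-to-one constraint
    A = np.vstack([S, w * np.ones(S.shape[1])])
    b = np.append(m, w)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = np.clip(p, 0.0, None)                     # crude non-negativity
    return p / p.sum()

# Hypothetical signatures: two geochemical tracers, three sources
sources = [[10.0, 1.0],   # arable topsoil
           [4.0, 6.0],    # road verge
           [1.0, 12.0]]   # subsurface material
mixture = [2.8, 9.1]      # observed SPM signature
props = apportion(sources, mixture)   # subsurface dominates this mixture
```

A Bayesian setup replaces this point estimate with full posterior distributions over the proportions; the abstract's sensitivity analysis shows that the choice among such setups can shift median contributions by up to 21%.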
Diversion assumptions for high-powered research reactors
Binford, F.T.
1984-01-01
This study deals with diversion assumptions for high-powered research reactors -- specifically, MTR fuel; pool- or tank-type research reactors with light-water moderator; and water, beryllium, or graphite reflectors, and which have a power level of 25 MW(t) or more. The objective is to provide assistance to the IAEA in documentation of criteria and inspection observables related to undeclared plutonium production in the reactors described above, including: criteria for undeclared plutonium production, necessary design information for implementation of these criteria, verification guidelines including neutron physics and heat transfer, and safeguards measures to facilitate the detection of undeclared plutonium production at large research reactors.
Analysis of the proof test with power law assumptions
NASA Astrophysics Data System (ADS)
Hanson, Thomas A.
1994-03-01
Prooftesting optical fiber is required to assure a minimum strength over all lengths of fiber. This is done as the fiber is wound onto a spool by applying a tensile stress over a length of fiber as it passes a stress region. The failure of weak flaws assures a minimum strength of lengths that survive the test. Flaw growth is assumed to follow the power law. Distributions of initial flaw size are assumed to be of the Weibull type. Experimental data are presented to validate these assumptions.
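The proof-test logic can be illustrated with a small simulation, assuming Weibull-distributed initial strengths as in the abstract. All numerical values are hypothetical, and strength degradation of survivors by power-law slow crack growth during the test is neglected in this sketch.

```python
import math
import random

def weibull_strengths(n, scale, shape, seed=1):
    """Draw n initial fibre strengths from a Weibull distribution
    via inverse-CDF sampling."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]

def proof_test(strengths, proof_stress):
    """Survivors of the proof test: every length whose strength exceeds
    the applied proof stress."""
    return [s for s in strengths if s > proof_stress]

pop = weibull_strengths(100_000, scale=4.0, shape=2.0)   # hypothetical units
survivors = proof_test(pop, proof_stress=1.0)
# Survivors follow a truncated Weibull distribution with a guaranteed
# minimum strength of 1.0; the expected failure fraction is
# 1 - exp(-(proof/scale)**shape) = 1 - exp(-1/16), about 6%.
```

Accounting for power-law flaw growth during unloading would shift the guaranteed minimum slightly below the proof stress, which is why the assumed crack-growth and Weibull parameters need experimental validation, as the abstract describes.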
NASA Technical Reports Server (NTRS)
Kaufman, A.; Hwang, S. Y.
1985-01-01
Strain redistribution corrections were developed for a simplified inelastic analysis procedure to economically calculate material cyclic response at the critical location of a structure for life prediction purposes. The method was based on the assumption that the plastic region in the structure is local and the total strain history required for input can be defined from elastic finite-element analyses. Cyclic stress-strain behavior was represented by a bilinear kinematic hardening model. The simplified procedure predicts stress-strain response with reasonable accuracy for thermally cycled problems but needs improvement for mechanically load-cycled problems. Neuber-type corrections were derived and incorporated in the simplified procedure to account for local total strain redistribution under cyclic mechanical loading. The corrected simplified method was used on a mechanically load-cycled benchmark notched-plate problem. The predicted material response agrees well with the nonlinear finite-element solutions for the problem. The simplified analysis computer program required only 0.3% of the central processor unit time needed for a nonlinear finite-element analysis.
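A Neuber-type correction of the kind described pairs naturally with the bilinear kinematic material model: the product of local stress and strain at the notch is held equal to the elastic estimate. The sketch below solves the monotonic case; the notch factor and material constants are illustrative, not taken from the benchmark problem.

```python
import math

def neuber_bilinear(kt, s_nom, e_mod, sigma_y, h_mod):
    """Neuber-type correction with a bilinear kinematic material.
    Local stress * strain is set equal to the elastic estimate
    (kt * s_nom)**2 / e_mod.  The material follows sigma = e_mod * eps
    up to yield, then sigma = sigma_y + h_mod * (eps - sigma_y / e_mod)."""
    c = (kt * s_nom) ** 2 / e_mod          # Neuber constant, sigma * eps
    eps = math.sqrt(c / e_mod)
    if e_mod * eps <= sigma_y:             # notch root remains elastic
        return e_mod * eps, eps
    # Plastic branch: h*eps**2 + sigma_y*(1 - h/e)*eps - c = 0
    a = h_mod
    b = sigma_y * (1.0 - h_mod / e_mod)
    eps = (-b + math.sqrt(b * b + 4.0 * a * c)) / (2.0 * a)
    return sigma_y + h_mod * (eps - sigma_y / e_mod), eps

# Illustrative values in MPa: Kt = 3 notch under 200 MPa nominal stress
stress, strain = neuber_bilinear(3.0, 200.0, 200000.0, 400.0, 20000.0)
# The local stress falls well below the elastic estimate Kt*S = 600 MPa,
# while the local strain exceeds the elastic estimate of 0.003.
```

This captures the essence of the correction: under mechanical loading the notch root sheds stress and accumulates strain relative to the purely elastic solution, which is what the simplified procedure's uncorrected version failed to represent.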
Detailed and simplified nonequilibrium helium ionization in the solar atmosphere
Golding, Thomas Peter; Carlsson, Mats; Leenaarts, Jorrit
2014-03-20
Helium ionization plays an important role in the energy balance of the upper chromosphere and transition region. Helium spectral lines are also often used as diagnostics of these regions. We carry out one-dimensional radiation-hydrodynamics simulations of the solar atmosphere and find that the helium ionization is set mostly by photoionization and direct collisional ionization, counteracted by radiative recombination cascades. By introducing an additional recombination rate mimicking the recombination cascades, we construct a simplified three-level helium model atom consisting of only the ground states. This model atom is suitable for modeling nonequilibrium helium ionization in three-dimensional numerical models. We perform a brief investigation of the formation of the He I 10830 and He II 304 spectral lines. Both lines show nonequilibrium features that are not recovered with statistical equilibrium models, and caution should therefore be exercised when such models are used as a basis for interpreting observations.