ERIC Educational Resources Information Center
Nieuwland, Mante S.
2013-01-01
People can establish whether a sentence is hypothetically true even if what it describes can never be literally true given the laws of the natural world. Two event-related potential (ERP) experiments examined electrophysiological responses to sentences about unrealistic counterfactual worlds that require people to construct novel conceptual…
The asymptotic behaviour of parton distributions at small and large x.
Ball, Richard D; Nocera, Emanuele R; Rojo, Juan
2016-01-01
It has been argued from the earliest days of quantum chromodynamics that at asymptotically small values of x the parton distribution functions (PDFs) of the proton behave as x^a, where the values of a can be deduced from Regge theory, while at asymptotically large values of x the PDFs behave as (1-x)^b, where the values of b can be deduced from the Brodsky-Farrar quark counting rules. We critically examine these claims by extracting the exponents a and b from various global fits of parton distributions, analysing their scale dependence, and comparing their values to the naive expectations. We find that for valence distributions both Regge theory and the counting rules are confirmed, at least within uncertainties, while for sea quarks and gluons the results are less conclusive. We also compare results from various PDF fits for the structure function ratio F2^n/F2^p at large x, and caution against unrealistic uncertainty estimates due to overconstrained parametrisations.
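The effective-exponent extraction described in this abstract can be sketched numerically. The following is a minimal illustration, not the paper's procedure: it assumes a toy valence-like parametrization x f(x) = x^a (1-x)^b with made-up values of a and b, and checks that the logarithmic-derivative definitions of the effective exponents recover them in the asymptotic limits.

```python
import numpy as np

def toy_xpdf(x, a=0.8, b=3.0):
    # hypothetical valence-like shape: x*f(x) = x^a * (1 - x)^b
    return x**a * (1.0 - x)**b

def alpha_eff(x, xf, eps=1e-6):
    # small-x effective exponent: d ln(xf) / d ln x
    return (np.log(xf(x * (1 + eps))) - np.log(xf(x))) / np.log(1 + eps)

def beta_eff(x, xf, eps=1e-6):
    # large-x effective exponent: d ln(xf) / d ln(1 - x)
    x2 = 1.0 - (1.0 - x) * (1 + eps)
    return (np.log(xf(x2)) - np.log(xf(x))) / np.log(1 + eps)

print(alpha_eff(1e-5, toy_xpdf))   # ~0.8: recovers a as x -> 0
print(beta_eff(0.9999, toy_xpdf))  # ~3.0: recovers b as x -> 1
```

Applied to a fitted PDF rather than this toy shape, the same log-derivatives give the scale-dependent effective exponents that the paper compares against the Regge and counting-rule expectations.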
Combined Retrievals of Boreal Forest Fire Aerosol Properties with a Polarimeter and Lidar
NASA Technical Reports Server (NTRS)
Knobelspiesse, K.; Cairns, B.; Ottaviani, M.; Ferrare, R.; Haire, J.; Hostetler, C.; Obland, M.; Rogers, R.; Redemann, J.; Shinozuka, Y.;
2011-01-01
Absorbing aerosols play an important, but uncertain, role in the global climate. Much of this uncertainty is due to a lack of adequate aerosol measurements. While great strides have been made in observational capability in recent years and decades, it has become increasingly apparent that this development must continue. Scanning polarimeters have been designed to help resolve this issue by making accurate, multi-spectral, multi-angle polarized observations. This work involves the use of the Research Scanning Polarimeter (RSP). The RSP was designed as the airborne prototype for the Aerosol Polarimetry Sensor (APS), which was due to be launched as part of the (ultimately failed) NASA Glory mission. Field observations with the RSP, however, have established that simultaneous retrievals of aerosol absorption and vertical distribution over bright land surfaces are quite uncertain. We test a merger of RSP and High Spectral Resolution Lidar (HSRL) data with observations of boreal forest fire smoke, collected during the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. During ARCTAS, the RSP and HSRL instruments were mounted on the same aircraft, and validation data were provided by instruments on an aircraft flying a coordinated flight pattern. We found that the lidar data did indeed improve aerosol retrievals using an optimal estimation method, although not primarily because of the constraints imposed on the aerosol vertical distribution. The more useful piece of information from the HSRL was the total column aerosol optical depth, which was used to select the initial value (optimization starting point) of the aerosol number concentration.
When ground-based sun photometer network climatologies of number concentration were used as an initial value, we found that roughly half of the retrievals had unrealistic sizes and imaginary refractive indices, even though the retrieved spectral optical depths agreed within uncertainties with independent observations. The convergence to an unrealistic local minimum by the optimal estimator is related to the relatively low sensitivity to particles smaller than 0.1 μm at large optical thicknesses. Thus, optimization algorithms used for operational aerosol retrievals of the fine-mode size distribution, when the total optical depth is large, will require initial values generated from table look-ups that exclude unrealistic size/complex-index mixtures. External constraints from lidar on initial values used in the optimal estimation methods will also be valuable in reducing the likelihood of obtaining spurious retrievals.
Pressures in Tumuli: A Study of Tumuli Formation
NASA Technical Reports Server (NTRS)
Hansen, James E.
2005-01-01
Tumuli form via localized inflation in surface lava flows. These domed features have widths of 10-20 m, lengths of 10-150 m, and heights of 1-9 m. The axial fracture exposes a brittle crust overlying a ductilely deformed layer. The total crustal thickness is typically less than 1 m. Tumuli are observed on both terrestrial and martian lava flow surfaces, and provide insight into flow formation processes and rates. Past studies have estimated the inflation pressure using a bending model for a circular, thin elastic plate, assuming small deflection (Rossi and Gudmundsson, 1996). This formulation results in unrealistic pressures for some tumuli. We thus examine alternative models, including those with different shapes, bending of the ductile crust, large deflection, plastic deformation, and thick-plate bending. Using the thickness of the ductile crust in the equations for thin, circular plates reduces most pressures to reasonable values. Alternative plate shapes do not cause a significant reduction in inflation pressure. Although the large-deflection equations should be applicable based on the plate-thickness-to-tumulus-height ratios, they give even less realistic pressures. Tumuli with unrealistic pressures appear to have exceeded the critical bending moment, and have relatively thick crusts, requiring thick-plate bending models.
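The small-deflection thin-plate model referenced above can be sketched as follows. The formula (clamped circular plate under uniform load, w_max = p a^4 / 64 D) is the standard elastic-plate result; the material constants and tumulus dimensions below are assumed, illustrative values, not data from the study.

```python
def flexural_rigidity(E, h, nu):
    # D = E h^3 / (12 (1 - nu^2)) for a thin elastic plate
    return E * h**3 / (12.0 * (1.0 - nu**2))

def inflation_pressure(w_max, a, E, h, nu=0.25):
    # Clamped circular thin plate, uniform load: w_max = p a^4 / (64 D),
    # solved for p (small-deflection theory, as in Rossi and Gudmundsson 1996)
    D = flexural_rigidity(E, h, nu)
    return 64.0 * D * w_max / a**4

# Hypothetical tumulus: 3 m uplift, 10 m radius, 0.5 m crust, E ~ 10 GPa basalt
p = inflation_pressure(w_max=3.0, a=10.0, E=10e9, h=0.5)
print(f"{p/1e6:.1f} MPa")  # -> 2.1 MPa
```

A pressure of order megapascals far exceeds the ~0.1 MPa that a few metres of lava head could supply, illustrating why the thin-plate small-deflection formulation yields "unrealistic" pressures for some tumuli.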
Omega from the anisotropy of the redshift correlation function
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
Peculiar velocities distort the correlation function of galaxies observed in redshift space. In the large-scale, linear regime, the distortion takes a characteristic quadrupole plus hexadecapole form, with the amplitude of the distortion depending on the cosmological density parameter omega. Preliminary measurements are reported here of the harmonics of the correlation function in the CfA, SSRS, and IRAS 2 Jy redshift surveys. The observed behavior of the harmonics agrees qualitatively with the predictions of linear theory on large scales in every survey. However, real anisotropy in the galaxy distribution induces large fluctuations in samples which do not yet probe a sufficiently fair volume of the Universe. In the CfA 14.5 sample in particular, the Great Wall induces a large negative quadrupole, which taken at face value implies an unrealistically large omega of about 20. The IRAS 2 Jy survey, which covers a substantially larger volume than the optical surveys and is less affected by fingers-of-god, yields a more reliable and believable value, omega = 0.5 (+0.5, -0.25).
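The link between the quadrupole distortion and omega can be sketched via the closely related linear-theory (Kaiser) result for the redshift-space power spectrum; the paper itself works with correlation-function harmonics, so this is an illustration of the same physics, not its method. The measured ratio below is hypothetical, and unbiased galaxies (b = 1) are assumed.

```python
def quad_mono_ratio(beta):
    # Linear-theory quadrupole-to-monopole ratio for the redshift-space
    # power spectrum P_s(k, mu) = (1 + beta*mu^2)^2 P(k)  (Kaiser 1987)
    return (4*beta/3 + 4*beta**2/7) / (1 + 2*beta/3 + beta**2/5)

def beta_from_ratio(r, lo=0.0, hi=3.0, tol=1e-10):
    # invert by bisection; the ratio grows monotonically with beta here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if quad_mono_ratio(mid) < r else (lo, mid)
    return 0.5 * (lo + hi)

beta = beta_from_ratio(0.55)   # hypothetical measured quadrupole ratio
omega = beta ** (1 / 0.6)      # beta = Omega^0.6 / b, with bias b = 1
print(round(beta, 2), round(omega, 2))
```

The same inversion shows how a spuriously large (or negative) quadrupole, like the one induced by the Great Wall, maps onto a wildly unrealistic omega.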
Multiple Fault Isolation in Redundant Systems
NASA Technical Reports Server (NTRS)
Pattipati, Krishna R.; Patterson-Hine, Ann; Iverson, David
1997-01-01
Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope, the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.
Anticipation of the landing shock phenomenon in flight simulation
NASA Technical Reports Server (NTRS)
Mcfarland, Richard E.
1987-01-01
An aircraft landing may be described as a controlled crash because a runway surface is intercepted. In a simulation model, the transition from aerodynamic flight to weight on wheels involves a single computational cycle during which stiff differential equations are activated; with significant probability, the resulting initial conditions are unrealistic. This occurs because of the finite cycle time, during which large restorative forces accompany unrealistic initial oleo compressions. This problem was recognized a few years ago at Ames Research Center during simulation studies of a supersonic transport. The mathematical model of this vehicle severely taxed computational resources and required a large cycle time. The ground-strike problem was solved by a technique called anticipation equations, described here. Although extensively used, this technique has not been previously reported. The technique of anticipating a significant event is a useful tool in the general field of discrete flight simulation. For the differential equations representing a landing gear model, stiffness, rate of interception, and cycle time may combine to produce an unrealistic simulation of the continuum.
NASA Astrophysics Data System (ADS)
Filali, Walid; Sengouga, Nouredine; Oussalah, Slimane; Mari, Riaz H.; Jameel, Dler; Al Saqri, Noor Alhuda; Aziz, Mohsin; Taylor, David; Henini, Mohamed
2017-11-01
Forward and reverse current-voltage (I-V) characteristics of Ti/Au/n-Al0.33Ga0.67As/n-GaAs/n-Al0.33Ga0.67As multi-quantum well (MQW) Schottky diodes were measured over a range of temperatures from 20 to 400 K in steps of 20 K. The Schottky diode parameters were then extracted from these characteristics using the Cheung method, assuming a thermionic-emission conduction mechanism. The extracted ideality factors decrease with increasing temperature, but their values at low temperatures were found to be unrealistic. To explain this discrepancy, three assumptions were explored. First, an assumed inhomogeneous barrier height gave better parameters, especially the Richardson constant, but the ideality factor remained unrealistic at low temperatures. Second, numerical simulation demonstrated that defects, including interface states, are not responsible for the apparently unrealistic Schottky diode parameters. The third assumption was a tunnelling mechanism through the barrier in the low-temperature range; at these lower temperatures, tunnelling was more suitable for explaining the extracted parameter values.
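The Cheung extraction rests on the relation dV/d(ln I) = I R_s + n k T / q, which is linear in I. A minimal sketch on synthetic thermionic-emission data follows; the device parameters (n, Rs, I0) and the voltage range are hypothetical, chosen only so the fixed-point solution of the implicit diode equation converges.

```python
import numpy as np

q, k = 1.602e-19, 1.381e-23  # electron charge (C), Boltzmann constant (J/K)

def synth_iv(V, n, Rs, I0, T):
    # thermionic emission with series resistance:
    # I = I0 * exp(q*(V - I*Rs) / (n*k*T)), solved by fixed-point iteration
    I = np.full_like(V, 1e-12)
    for _ in range(200):
        I = I0 * np.exp(q * (V - I * Rs) / (n * k * T))
    return I

def cheung_fit(V, I, T):
    # Cheung & Cheung: dV/d(ln I) = I*Rs + n*k*T/q -> fit a line in I
    dV_dlnI = np.gradient(V, np.log(I))
    Rs, intercept = np.polyfit(I, dV_dlnI, 1)
    n = intercept * q / (k * T)
    return n, Rs

V = np.linspace(0.10, 0.45, 200)
I = synth_iv(V, n=1.5, Rs=50.0, I0=1e-9, T=300.0)
n_fit, Rs_fit = cheung_fit(V, I, T=300.0)
print(round(n_fit, 2), round(Rs_fit, 1))  # recovers roughly n = 1.5, Rs = 50
```

When the true transport is tunnelling rather than thermionic emission, this same fit returns the anomalously large low-temperature ideality factors the abstract calls unrealistic.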
48 CFR 1516.303-74 - Determining the value of in-kind contributions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Determining the value of in-kind contributions. 1516.303-74 Section 1516.303-74 Federal Acquisition Regulations System... depreciation, if any) at the time of donation. If the booked costs reflect unrealistic values when compared to...
Unrealistic optimism in advice taking: A computational account.
Leong, Yuan Chang; Zaki, Jamil
2018-02-01
Expert advisors often make surprisingly inaccurate predictions about the future, yet people heed their suggestions nonetheless. Here we provide a novel, computational account of this unrealistic optimism in advice taking. Across 3 studies, participants observed as advisors predicted the performance of a stock. Advisors varied in their accuracy, performing reliably above, at, or below chance. Despite repeated feedback, participants exhibited inflated perceptions of advisors' accuracy, and reliably "bet" on advisors' predictions more than their performance warranted. Participants' decisions tightly tracked a computational model that makes 2 assumptions: (a) people hold optimistic initial expectations about advisors, and (b) people preferentially incorporate information that adheres to their expectations when learning about advisors. Consistent with model predictions, explicitly manipulating participants' initial expectations altered their optimism bias and subsequent advice-taking. With well-calibrated initial expectations, participants no longer exhibited an optimism bias. We then explored crowdsourced ratings as a strategy to curb unrealistic optimism in advisors. Star ratings for each advisor were collected from an initial group of participants, which were then shown to a second group of participants. Instead of calibrating expectations, these ratings propagated and exaggerated the unrealistic optimism. Our results provide a computational account of the cognitive processes underlying inflated perceptions of expertise, and explore the boundary conditions under which they occur. We discuss the adaptive value of this optimism bias, and how our account can be extended to explain unrealistic optimism in other domains. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
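The model's two assumptions (optimistic initial expectations, preferential incorporation of expectation-congruent information) can be sketched with a simple asymmetric learning rule. The prior and learning rates below are illustrative, not the paper's fitted values.

```python
def learn_accuracy(outcomes, prior=0.7, lr_confirm=0.2, lr_disconfirm=0.05):
    # Asymmetric update: hits (which confirm the optimistic expectation)
    # pull the accuracy estimate up faster than misses pull it down.
    p = prior  # optimistic initial estimate of advisor accuracy
    for correct in outcomes:
        lr = lr_confirm if correct else lr_disconfirm
        p += lr * ((1.0 if correct else 0.0) - p)
    return p

# a chance-level advisor: strictly alternating hits and misses
chance_advisor = [i % 2 == 0 for i in range(100)]
print(round(learn_accuracy(chance_advisor), 2))  # -> 0.79, despite a 50% hit rate
```

The estimate settles near the fixed point of the asymmetric update (about 0.79 here) rather than the advisor's true 0.5 hit rate, reproducing the inflated accuracy perceptions the studies describe; with symmetric learning rates it would converge to 0.5.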
Taking Stock of Unrealistic Optimism.
Shepperd, James A; Klein, William M P; Waters, Erika A; Weinstein, Neil D
2013-07-01
Researchers have used terms such as unrealistic optimism and optimistic bias to refer to concepts that are similar but not synonymous. Drawing from three decades of research, we critically discuss how researchers define unrealistic optimism and we identify four types that reflect different measurement approaches: unrealistic absolute optimism at the individual and group level and unrealistic comparative optimism at the individual and group level. In addition, we discuss methodological criticisms leveled against research on unrealistic optimism and note that the criticisms are primarily relevant to only one type-the group form of unrealistic comparative optimism. We further clarify how the criticisms are not nearly as problematic even for unrealistic comparative optimism as they might seem. Finally, we note boundary conditions on the different types of unrealistic optimism and reflect on five broad questions that deserve further attention.
Assessing the consequences of unrealistic optimism: Challenges and recommendations.
Shepperd, James A; Pogge, Gabrielle; Howell, Jennifer L
2017-04-01
Of the hundreds of studies published on unrealistic optimism (i.e., expecting a better personal future than is reasonably likely), most have focused on demonstrating the phenomenon, examining boundary conditions, or documenting causes. Few studies have examined the consequences of unrealistic optimism. In this article, we provide an overview of the measurement of unrealistic optimism, review possible consequences, and identify numerous challenges confronting investigators attempting to understand the consequences. Assessing the consequences of unrealistic optimism is tricky, and ultimately probably impossible when researchers assess unrealistic optimism at the group level (which reveals if a group of people is displaying unrealistic optimism on average) rather than the individual level (which reveals whether a specific individual displays unrealistic optimism). We offer recommendations to researchers who wish to examine the consequences of unrealistic optimism. Copyright © 2016 Elsevier Inc. All rights reserved.
Trottier, Kathryn; Polivy, Janet; Herman, C Peter
2005-03-01
The false-hope syndrome suggests that unrealistic expectations about dieting set dieters up for failure and then promote renewed efforts at weight loss. Many dieters believe the inflated promises typical of diet advertisements, which may be the source of at least some of their unrealistic expectations. Diet advertisements promoting unrealistic expectations were expected to inspire restrained eaters to diet and lead to enhanced self-perceptions, relative to more circumspect advertisements. Female undergraduates rated their expectations in response to a control advertisement or to advertisements containing realistic, moderately unrealistic, or highly unrealistic promises of dieting. Participants then rated their self-perceptions and participated in an apparent "taste-test". Restrained eaters had higher expectations for themselves than did unrestrained eaters, and restrained and unrestrained eaters had similar expectations concerning dieting for others. Those who viewed the advertisements containing unrealistic expectations ate fewer cookies ad libitum than did those who viewed the realistic or control advertisements. This finding is consistent with the suggestion that unrealistic expectations contribute to the decision to change oneself. (c) 2005 by Wiley Periodicals, Inc.
Jefferson, Anneli; Bortolotti, Lisa; Kuzmanovic, Bojana
2017-04-01
Here we consider the nature of unrealistic optimism and other related positive illusions. We are interested in whether cognitive states that are unrealistically optimistic are belief states, whether they are false, and whether they are epistemically irrational. We also ask to what extent unrealistically optimistic cognitive states are fixed. Based on the classic and recent empirical literature on unrealistic optimism, we offer some preliminary answers to these questions, thereby laying the foundations for answering further questions about unrealistic optimism, such as whether it has biological, psychological, or epistemic benefits. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Zercher, Florian; Schmidt, Peter; Cieciuch, Jan; Davidov, Eldad
2015-01-01
Over the last decades, large international datasets such as the European Social Survey (ESS), the European Values Study (EVS) and the World Values Survey (WVS) have been collected to compare value means over multiple time points and across many countries. Yet analyzing comparative survey data requires the fulfillment of specific assumptions, i.e., that these values are comparable over time and across countries. Given the large number of groups that can be compared in repeated cross-national datasets, establishing measurement invariance has, however, been considered unrealistic. Indeed, studies which did assess it often failed to establish higher levels of invariance such as scalar invariance. In this paper we first introduce the newly developed approximate approach based on Bayesian structural equation modeling (BSEM) to assess cross-group invariance over countries and time points, and contrast the findings with the results of the traditional exact measurement invariance test. BSEM examines whether measurement parameters are approximately (rather than exactly) invariant. We apply BSEM to a subset of items measuring the universalism value from the Portrait Values Questionnaire (PVQ) in the ESS. The invariance of this value is tested simultaneously across 15 ESS countries over six ESS rounds with 173,071 respondents and 90 groups in total. Whereas the use of the traditional approach legitimates the comparison of latent means for only 37 groups, the Bayesian procedure allows latent mean comparisons across 73 groups. Thus, our empirical application demonstrates for the first time the BSEM test procedure on a particularly large set of groups. PMID:26089811
Mars-GRAM 2010: Improving the Precision of Mars-GRAM
NASA Technical Reports Server (NTRS)
Justh, H. L.; Justus, C. G.; Ramey, H. S.
2011-01-01
It was discovered during the Mars Science Laboratory (MSL) site selection process that the Mars Global Reference Atmospheric Model (Mars-GRAM), when used for sensitivity studies with Thermal Emission Spectrometer (TES) MapYear=0 and large optical depth values such as tau=3, is less than realistic. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte Carlo mode, to perform high-fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). Mars-GRAM 2005 has been validated against Radio Science data, and against both nadir and limb data from TES. Traditional Mars-GRAM options for representing the mean atmosphere along entry corridors include: (1) TES mapping year 0, with user-controlled dust optical depth and Mars-GRAM data interpolated from NASA Ames Mars General Circulation Model (MGCM) results driven by selected values of globally uniform dust optical depth, or (2) TES mapping years 1 and 2, with Mars-GRAM data coming from MGCM results driven by observed TES dust optical depth. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames MGCM. Above 80 km, Mars-GRAM is based on the University of Michigan Mars Thermospheric General Circulation Model (MTGCM). The MGCM results used for Mars-GRAM with MapYear=0 came from a run with a fixed value of tau=3 for the entire year at all locations. This choice of data has led to discrepancies that became apparent during recent sensitivity studies for MapYear=0 and large optical depths. Unrealistic energy absorption by time-invariant atmospheric dust leads to an unrealistic thermal energy balance on the polar caps. The outcome is an inaccurate cycle of condensation/sublimation of the polar caps and, as a consequence, an inaccurate cycle of total atmospheric mass and global-average surface pressure.
Under an assumption of unchanged temperature profile and hydrostatic equilibrium, a given percentage change in surface pressure would produce a corresponding percentage change in density at all altitudes. Consequently, the final result of a change in surface pressure is an imprecise atmospheric density at all altitudes.
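The surface-pressure/density scaling stated above follows directly from hydrostatic balance with a fixed temperature profile: pressure, and hence density, is proportional to the surface value at every altitude. A minimal numerical check (the gravity, gas constant, and temperature profile are assumed Mars-like values, purely for illustration):

```python
import math

def density_profile(p_surf, T_of_z, z_grid, g=3.71, R=191.8):
    # Piecewise-isothermal hydrostatic integration: dp/dz = -rho*g with
    # rho = p / (R*T). g and R are assumed Mars-like values (illustration).
    p = p_surf
    rho = [p / (R * T_of_z(z_grid[0]))]
    for i in range(1, len(z_grid)):
        dz = z_grid[i] - z_grid[i - 1]
        T = T_of_z(z_grid[i])
        p *= math.exp(-g * dz / (R * T))
        rho.append(p / (R * T))
    return rho

z = [i * 1000.0 for i in range(81)]      # 0-80 km
T = lambda z: 210.0 - 0.001 * z          # hypothetical fixed temperature profile
r1 = density_profile(610.0, T, z)        # nominal surface pressure (Pa)
r2 = density_profile(610.0 * 1.10, T, z) # +10% surface pressure
# density changes by the same 10% at every altitude
print(max(abs(b / a - 1.10) for a, b in zip(r1, r2)))  # ~0.0
```

Because the exponential attenuation factor depends only on T(z), it cancels in the ratio, so any error in the modeled surface pressure propagates as the same fractional density error at all altitudes.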
14 CFR 399.81 - Unrealistic or deceptive scheduling.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Unrealistic or deceptive scheduling. 399.81... Unrealistic or deceptive scheduling. (a) It is the policy of the Board to consider unrealistic scheduling of... to the advertising of scheduled performance, it is the policy of the Board to regard as an unfair or...
Bluffing promotes overconfidence on social networks
NASA Astrophysics Data System (ADS)
Li, Kun; Cong, Rui; Wu, Te; Wang, Long
2014-06-01
Overconfidence, a well-established bias, leads to unrealistic expectations and faulty assessments, so it remains puzzling why such self-deception is stable in human society. To investigate this problem, we draw on evolutionary game theory, which provides a theoretical framework for addressing the subtleties of cooperation among selfish individuals. Here we propose a spatial resource-competition model showing that, counter-intuitively, moderate rather than large values of the resource-to-cost ratio boost the overconfidence level most effectively. In contrast to theoretical results for infinite well-mixed populations, the network acts both as a "catalyst" and as a "depressant" in the spreading of overconfidence, especially when the resource-to-cost ratio lies in a certain range. Moreover, when bluffing is taken into consideration, overconfidence evolves to a higher level to counteract its detrimental effect, which may well explain the prosperity of this "erroneous" psychology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varble, Adam; Zipser, Edward J.; Fridlind, Ann M.
2014-12-18
Ten 3D cloud-resolving model (CRM) simulations and four 3D limited-area model (LAM) simulations of an intense mesoscale convective system observed on 23-24 January 2006 during the Tropical Warm Pool - International Cloud Experiment (TWP-ICE) are compared with each other and with observed radar reflectivity fields and dual-Doppler retrievals of vertical wind speeds, in an attempt to explain published results showing a high bias in simulated convective radar reflectivity aloft. This high bias results from large ice water content, which is a product of large, strong convective updrafts, although hydrometeor size distribution assumptions modulate the size of this bias. Making snow mass more realistically proportional to D^2 rather than D^3 eliminates unrealistically large snow reflectivities over 40 dBZ in some simulations. Graupel, unlike snow, produces high-biased reflectivity in all simulations, which is partly a result of parameterized microphysics, but also partly a result of overly intense simulated updrafts. Peak vertical velocities in deep convective updrafts are greater than dual-Doppler retrieved values, especially in the upper troposphere. Freezing of liquid condensate, often rain, lofted above the freezing level in simulated updraft cores greatly contributes to these excessive upper-tropospheric vertical velocities. The strongest simulated updraft cores are nearly undiluted, with some of the strongest showing supercell characteristics during the multicellular (pre-squall) stage of the event. Decreasing horizontal grid spacing from 900 to 100 meters slightly weakens deep updraft vertical velocity and moderately decreases the amount of condensate aloft, but not enough to match observational retrievals. Therefore, overly intense simulated updrafts may additionally be a product of unrealistic interactions between convective dynamics, parameterized microphysics, and the large-scale model forcing that promote different convective strengths than observed.
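The sensitivity of reflectivity to the snow mass-dimension exponent can be sketched analytically. This is an illustrative calculation, not the study's: it assumes Rayleigh-regime ice scattering (Z proportional to the sum of particle mass squared), an exponential size distribution, and a mass prefactor chosen so both mass laws carry the same ice water content, in which case the N0, slope, and IWC factors all cancel in the ratio.

```python
import math

def z_per_iwc2(b):
    # Rayleigh-regime ice reflectivity ~ sum of particle mass^2.
    # For N(D) = N0*exp(-lam*D) and m = a*D^b, with `a` chosen to fix the
    # ice water content, Z/IWC^2 is proportional to Gamma(2b+1)/Gamma(b+1)^2.
    return math.gamma(2 * b + 1) / math.gamma(b + 1) ** 2

ratio = z_per_iwc2(3) / z_per_iwc2(2)
print(f"{10 * math.log10(ratio):.1f} dB")  # -> 5.2 dB extra for m ~ D^3
```

Even at fixed ice water content, letting snow mass grow as D^3 instead of D^2 adds roughly 5 dB by weighting the large-particle tail more heavily, consistent with the D^2 change eliminating the spurious >40 dBZ snow reflectivities.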
Tight finite-key analysis for quantum cryptography
Tomamichel, Marco; Lim, Charles Ci Wen; Gisin, Nicolas; Renner, Renato
2012-01-01
Despite enormous theoretical and experimental progress in quantum cryptography, the security of most current implementations of quantum key distribution is still not rigorously established. One significant problem is that the security of the final key strongly depends on the number, M, of signals exchanged between the legitimate parties. Yet, existing security proofs are often only valid asymptotically, for unrealistically large values of M. Another challenge is that most security proofs are very sensitive to small differences between the physical devices used by the protocol and the theoretical model used to describe them. Here we show that these gaps between theory and experiment can be simultaneously overcome by using a recently developed proof technique based on the uncertainty relation for smooth entropies. PMID:22252558
Science, policy, and the transparency of values.
Elliott, Kevin C; Resnik, David B
2014-07-01
Opposing groups of scientists have recently engaged in a heated dispute over a preliminary European Commission (EC) report on its regulatory policy for endocrine-disrupting chemicals. In addition to the scientific issues at stake, a central question has been how scientists can maintain their objectivity when informing policy makers. Drawing from current ethical, conceptual, and empirical studies of objectivity and conflicts of interest in scientific research, we propose guiding principles for communicating scientific findings in a manner that promotes objectivity, public trust, and policy relevance. Both conceptual and empirical studies of scientific reasoning have shown that it is unrealistic to prevent policy-relevant scientific research from being influenced by value judgments. Conceptually, the current dispute over the EC report illustrates how scientists are forced to make value judgments about appropriate standards of evidence when informing public policy. Empirical studies provide further evidence that scientists are unavoidably influenced by a variety of potentially subconscious financial, social, political, and personal interests and values. When scientific evidence is inconclusive and major regulatory decisions are at stake, it is unrealistic to think that values can be excluded from scientific reasoning. Thus, efforts to suppress or hide interests or values may actually damage scientific objectivity and public trust, whereas a willingness to bring implicit interests and values into the open may be the best path to promoting good science and policy.
Ruthig, Joelle C; Gamblin, Bradlee W; Jones, Kelly; Vanderzanden, Karen; Kehn, Andre
2017-02-01
Researchers have spent considerable effort examining unrealistic absolute optimism and unrealistic comparative optimism, yet there is a lack of research exploring them concurrently. This longitudinal study repeatedly assessed unrealistic absolute and comparative optimism within a performance context over several months to identify the degree to which they shift as a function of proximity to performance and performance feedback, their associations with global individual difference and event-specific factors, and their link to subsequent behavioural outcomes. Results showed similar shifts in unrealistic absolute and comparative optimism based on proximity to performance and performance feedback. Moreover, increases in both types of unrealistic optimism were associated with better subsequent performance beyond the effect of prior performance. However, several differences were found between the two forms of unrealistic optimism in their associations with global individual difference factors and event-specific factors, highlighting the distinctiveness of the two constructs. © 2016 The British Psychological Society.
Competition and Cooperation: Evil Twins or Fated Lovers?
ERIC Educational Resources Information Center
Fitch, Frank; Loving, Greg
2007-01-01
The competing global forces of homogenizing commercialism and absolutist sectarianism continue to engender a regime of fear and have all but eclipsed what John Dewey called the democratic "habit of amicable cooperation." The values of cooperation are increasingly seen as "unrealistic" and even taken as signs of weakness in the…
Modeling Achievement Trajectories when Attrition Is Informative
ERIC Educational Resources Information Center
Feldman, Betsy J.; Rabe-Hesketh, Sophia
2012-01-01
In longitudinal education studies, assuming that dropout and missing data occur completely at random is often unrealistic. When the probability of dropout depends on covariates and observed responses (called "missing at random" [MAR]), or on values of responses that are missing (called "informative" or "not missing at random" [NMAR]),…
Lipkus, Isaac M; Scholl, Sarah; McQueen, Amy; Cerully, Jennifer; Harris, Peter R
2009-01-01
We examined whether self-affirmation would facilitate intentions to engage in colorectal cancer (CRC) screening among individuals who were off-schedule for CRC screening and who were categorized as unrealistically optimistic, realistic, or unrealistically pessimistic about their CRC risk. All participants received tailored risk feedback; in addition, one group received threatening social comparison information regarding their risk factors, a second received this information after a self-affirmation exercise, and a third was a no-treatment control. When participants were unrealistically optimistic about their CRC risk (determined by comparing their perceived comparative risk to calculations from a risk algorithm), they expressed greater interest in screening if they were self-affirmed (relative to controls). Non-affirmed unrealistic optimists expressed lower interest relative to controls, suggesting that they were responding defensively. Realistic participants and unrealistically pessimistic participants who were self-affirmed expressed relatively less interest in CRC screening, suggesting that self-affirmation can be helpful or hurtful depending on the accuracy of one’s risk perceptions. PMID:20204982
Physical Activity in Vietnam: Estimates and Measurement Issues.
Bui, Tan Van; Blizzard, Christopher Leigh; Luong, Khue Ngoc; Truong, Ngoc Le Van; Tran, Bao Quoc; Otahal, Petr; Srikanth, Velandai; Nelson, Mark Raymond; Au, Thuy Bich; Ha, Son Thai; Phung, Hai Ngoc; Tran, Mai Hoang; Callisaya, Michele; Gall, Seana
2015-01-01
Our aims were to provide the first national estimates of physical activity (PA) for Vietnam, and to investigate issues affecting their accuracy. Measurements were made using the Global Physical Activity Questionnaire (GPAQ) on a nationally-representative sample of 14706 participants (46.5% males, response 64.1%) aged 25-64 years selected by multi-stage stratified cluster sampling. Approximately 20% of Vietnamese people had no measurable PA during a typical week, but 72.9% (men) and 69.1% (women) met WHO recommendations for PA by adults for their age. On average, 52.0 (men) and 28.0 (women) Metabolic Equivalent Task (MET)-hours/week (largely from work activities) were reported. Work and total PA were higher in rural areas and varied by season. Less than 2% of respondents provided incomplete information, but an additional one-in-six provided unrealistically high values of PA. Those responsible for reporting errors included persons from rural areas and all those with unstable work patterns. Box-Cox transformation (with an appropriate constant added) was the most successful method of reducing the influence of large values, but energy-scaled values were most strongly associated with pathophysiological outcomes. Around seven-in-ten Vietnamese people aged 25-64 years met WHO recommendations for total PA, which was mainly from work activities and higher in rural areas. Nearly all respondents were able to report their activity using the GPAQ, but with some exaggerated values and seasonal variation in reporting. Data transformation provided plausible summary values, but energy-scaling fared best in association analyses.
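The Box-Cox step described in the abstract can be sketched as follows; the activity values, the added constant, and the printed summaries are illustrative assumptions, not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical weekly totals of physical activity (MET-hours/week),
# including a few unrealistically large values in the right tail.
met_hours = np.array([0.0, 5.0, 20.0, 28.0, 52.0, 60.0, 300.0, 500.0])

# Box-Cox requires strictly positive data, so add a small constant before
# transforming (the abstract notes "an appropriate constant added").
shift = 1.0
transformed, lam = stats.boxcox(met_hours + shift)

# The transform compresses the right tail: the ratio between the largest
# and a typical value shrinks substantially.
print(f"fitted lambda = {lam:.3f}")
print(f"raw max/median ratio:         {met_hours.max() / np.median(met_hours):.1f}")
print(f"transformed max/median ratio: {transformed.max() / np.median(transformed):.1f}")
```

Shifting by a constant is the standard workaround for zero-activity respondents; the fitted lambda then controls how strongly extreme values are damped.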
Impact of an Intimate Relationships Class on Unrealistic Relationship Beliefs
ERIC Educational Resources Information Center
Bass, Brenda L.; Drake, Teske R.; Linney, Kirsten D.
2007-01-01
Unrealistic relationship beliefs have been shown to be related to lower levels of relationship satisfaction. Yet, young adults often hold unrealistic or irrational beliefs about intimate relationships. The purpose of this study was to assess the effectiveness of an intimate relationships course in reducing young adults' irrational relationship…
Why Do New Teachers Leave? How Could They Stay?
ERIC Educational Resources Information Center
Simos, Elaine
2013-01-01
The author of this article posits that some teachers leave the profession because they entered it with unrealistic expectations, and that the reality of multiple preparations, unpaid orientation sessions, and large student loads is overburdening for new teachers. Many new teachers leave their positions because of the dissonance between their…
Development of Risk Uncertainty Factors from Historical NASA Projects
NASA Technical Reports Server (NTRS)
Amer, Tahani R.
2011-01-01
NASA is a good investment of federal funds and strives to provide the best value to the nation. NASA has consistently budgeted to unrealistic cost estimates, which is evident in the cost growth of many of its programs. In this investigation, NASA has been using available uncertainty factors from the Aerospace Corporation, Air Force, and Booz Allen Hamilton to develop projects' risk posture. NASA has no insight into the development of these factors and, as demonstrated here, this can lead to unrealistic risks in many NASA programs and projects (P/p). The primary contribution of this project is the development of NASA mission uncertainty factors, from actual historical NASA projects, to aid cost estimating as well as independent reviews, which provide NASA senior management with information and analysis to determine the appropriate decision regarding P/p. In general terms, this research project advances programmatic analysis for NASA projects.
Managing the Socioeconomic Impacts of Energy Development. A Guide for the Small Community.
ERIC Educational Resources Information Center
Armbrust, Roberta
Decisions concerning large-scale energy development projects near small communities or in predominantly rural areas are usually complex, requiring cooperation of all levels of government, as well as the general public and the private sector. It is unrealistic to expect the typical small community to develop capabilities to independently evaluate a…
R. Justin DeRose; John D. Shaw; Giorgio Vacchiano; James N. Long
2008-01-01
The Southern Variant of the Forest Vegetation Simulator (FVS-SN) is made up of individual submodels that predict tree growth, recruitment and mortality. Forest managers on Ft. Bragg, North Carolina, discovered biologically unrealistic longleaf pine (Pinus palustris) size-density predictions at large diameters when using FVS-SN to project red-cockaded...
ERIC Educational Resources Information Center
Andsager, Julie L.; Austin, Erica Weintraub; Pinkleton, Bruce E.
2001-01-01
Finds that: (1) perceived realism and themes that students could identify with are important factors in increasing the salience and persuasiveness of alcohol-related public service announcements (PSAs) among undergraduate students; (2) realistic but logic-based PSAs were not as effective as unrealistic but enjoyable ads; and (3) low production…
The Heliotail: Theory and Modeling
Pogorelov, N. V.
2016-05-31
Physical processes are discussed related to the heliotail, which is formed when the solar wind interacts with the local interstellar medium. Although astrotails are commonly observed, the heliotail observations are only indirect. As a consequence, the direct comparison of the observed astrophysical objects and the Sun is impossible. This requires proper theoretical understanding of the heliotail formation and evolution, and numerical simulations in sufficiently large computational boxes. In this paper, we review some previous results related to the heliotail flow and show new simulations which demonstrate that the solar wind collimation inside the Parker spiral field lines diverted by the heliopause toward the heliotail is unrealistic. On the contrary, solar cycle effects ensure that the solar wind density reaches its largest values near the solar equatorial plane. We also argue that a realistic heliotail should be very long to account for the observed anisotropy of 1-10 TeV cosmic rays.
Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths
NASA Technical Reports Server (NTRS)
Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.
2009-01-01
The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). It has been discovered during the Mars Science Laboratory (MSL) site selection process that Mars-GRAM, when used for sensitivity studies for MapYear=0 and large optical depth values such as tau=3, is less than realistic. A comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS) has been undertaken for locations of varying latitudes, Ls, and LTST on Mars. The preliminary results from this study have validated the Thermal Emission Spectrometer (TES) limb data. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). MGCM results that were used for Mars-GRAM with MapYear=0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. Unrealistic energy absorption by uniform atmospheric dust leads to an unrealistic thermal energy balance on the polar caps. The outcome is an inaccurate cycle of condensation/sublimation of the polar caps and, as a consequence, an inaccurate cycle of total atmospheric mass and global-average surface pressure. Under an assumption of unchanged temperature profile and hydrostatic equilibrium, a given percentage change in surface pressure would produce a corresponding percentage change in density at all altitudes. Consequently, the final result of a change in surface pressure is an imprecise atmospheric density at all altitudes.
To solve this pressure-density problem, a density factor value was determined for tau=0.3, 1, and 3 that will adjust the input values of MGCM MapYear=0 pressure and density to achieve a better match of Mars-GRAM MapYear=0 with MapYears 1 and 2 MGCM output at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented of the work underway to derive better multipliers by including possible variation with latitude and/or Ls. This is achieved by comparison of Mars-GRAM MapYear=0 output with TES limb data. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths. Answers may also be provided to the issues raised in a recent study by Desai (2008). Desai has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon EDL. The solution of this problem would be important to the Mars Program since future exploration of Mars by landers and rovers will require more accurate landing capabilities, especially for the proposed Mars Sample Return mission.
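The hydrostatic argument in the abstract (a given percentage change in surface pressure produces the same percentage change in density at every altitude, provided the temperature profile is unchanged) can be checked with a minimal sketch; the gravity, gas constant, and temperature profile below are illustrative assumptions, not Mars-GRAM values:

```python
import numpy as np

g = 3.71          # Mars surface gravity, m/s^2 (assumed)
R = 191.0         # specific gas constant for CO2-dominated air, J/(kg K) (assumed)
z = np.linspace(0.0, 80e3, 81)   # altitude grid, 0-80 km
T = 210.0 - 0.001 * z            # fixed (assumed) temperature profile, K

def density_profile(p_surface):
    """Integrate hydrostatic balance dp/dz = -rho*g with rho = p/(R*T)."""
    p = np.empty_like(z)
    p[0] = p_surface
    for i in range(1, len(z)):
        dz = z[i] - z[i - 1]
        scale_height = R * T[i - 1] / g
        p[i] = p[i - 1] * np.exp(-dz / scale_height)
    return p / (R * T)

rho_ref = density_profile(610.0)           # nominal surface pressure, Pa
rho_pert = density_profile(610.0 * 1.05)   # +5% surface pressure

# With T fixed, the pressure profile scales linearly with surface pressure,
# so the density ratio is 1.05 at every altitude level.
ratio = rho_pert / rho_ref
print(ratio.min(), ratio.max())
```

This is why an error in the global-average surface pressure cycle propagates into a density error at all altitudes, which the density factors are meant to correct.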
The differential effect of realistic and unrealistic counterfactual thinking on regret.
Sevdalis, Nick; Kokkinaki, Flora
2006-06-01
Research has established that realistic counterfactual thinking can determine the intensity and the content of people's affective reactions to decision outcomes and events. Not much is known, however, about the affective consequences of counterfactual thinking that is unrealistic (i.e., that does not correspond to the main causes of a negative outcome). In three experiments, we investigate the influence of realistic and unrealistic counterfactuals on experienced regret after negative outcomes. In Experiment 1, we found that participants who thought unrealistically about a poor outcome reported less regret than those who thought realistically about it. In Experiments 2a and 2b, we replicated this finding and we showed that the decrease in regret was associated with a shift in the causal attributions of the poor outcome. Participants who thought unrealistically attributed it more to external circumstances and less to their own behaviours than those who thought realistically about it. We discuss the implications of these findings for the role of counterfactuals as self-serving biases and the functionality of regret as a counterfactual emotion.
The need to control for regression to the mean in social psychology studies.
Yu, Rongjun; Chen, Li
2014-01-01
It is common in repeated measurements for extreme values at the first measurement to approach the mean at the subsequent measurement, a phenomenon called regression to the mean (RTM). If RTM is not fully controlled, it will lead to erroneous conclusions. The wide use of repeated measurements in social psychology creates a risk that an RTM effect will influence results. However, insufficient attention is paid to RTM in most social psychological research. Notable cases include studies on the phenomena of social conformity and unrealistic optimism (Klucharev et al., 2009, 2011; Sharot et al., 2011, 2012b; Campbell-Meiklejohn et al., 2012; Kim et al., 2012; Garrett and Sharot, 2014). In Study 1, 13 university students rated and re-rated the facial attractiveness of a series of female faces as a test of the social conformity effect (Klucharev et al., 2009). In Study 2, 15 university students estimated and re-estimated their risk of experiencing a series of adverse life events as a test of the unrealistic optimism effect (Sharot et al., 2011). Although these studies used methodologies similar to those used in earlier research, the social conformity and unrealistic optimism effects were no longer evident after controlling for RTM. Based on these findings we suggest several ways to control for the RTM effect in social psychology studies, such as adding the initial rating as a covariate in regression analysis, selecting a subset of stimuli for which the participants' initial ratings were matched across experimental conditions, and using a control group.
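A minimal simulation illustrates the covariate-adjustment suggestion: with no true change between measurements, selecting extreme initial ratings produces an apparent shift (pure RTM), which disappears once the initial rating is entered as a covariate. All numbers below are synthetic assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate ratings at two time points with NO real change:
# each observed rating = stable true score + independent measurement noise.
n = 5000
true_score = rng.normal(50, 10, n)
rating_t1 = true_score + rng.normal(0, 10, n)
rating_t2 = true_score + rng.normal(0, 10, n)

# Naive analysis: select items with extreme initial ratings and examine
# the change score. RTM alone produces an apparent "effect".
extreme = rating_t1 > 65
naive_change = (rating_t2[extreme] - rating_t1[extreme]).mean()

# Controlled analysis: regress t2 on t1 (initial rating as covariate) and
# test whether the extreme group departs from what t1 already predicts.
X = np.column_stack([np.ones(n), rating_t1])
beta, *_ = np.linalg.lstsq(X, rating_t2, rcond=None)
residual = rating_t2 - X @ beta
adjusted_change = residual[extreme].mean()

print(f"naive mean change in extreme group:   {naive_change:+.2f}")
print(f"covariate-adjusted change (residual): {adjusted_change:+.2f}")
```

The naive change score is clearly negative purely from RTM, while the covariate-adjusted change is near zero, mirroring the paper's point that apparent effects can vanish once RTM is controlled.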
Detecting a Non-Gaussian Stochastic Background of Gravitational Radiation
NASA Astrophysics Data System (ADS)
Drasco, Steve; Flanagan, Éanna É.
2002-12-01
We derive a detection method for a stochastic background of gravitational waves produced by events where the ratio of the average time between events to the average duration of an event is large. Such a signal would sound something like popcorn popping. Our derivation is based on the somewhat unrealistic assumption that the duration of an event is smaller than the detector time resolution.
Contact Transfer of VX from Contaminated Grass onto Army Combat Uniform
2017-01-01
intervals for agricultural workers who use pesticides. The reentry intervals are based on the available toxicity data, concentrations of chemicals used...for workers using some of the more toxic organophosphate pesticides. State regulators are free to set more stringent intervals. Watson suggested...report, the RASH method that uses RPF values for pesticide exposure of agricultural workers appears to be unrealistic for extrapolating to the exposure
ERIC Educational Resources Information Center
Garrison, Joshua
2009-01-01
Unrealistic as they may have been, television shows like Leave it to Beaver and Ozzie and Harriet served important social purposes during an age of tumult and anxiety. The domestic sit-coms of the 1950s played an educative function by reinforcing and disseminating traditional values at a time when forces of change were becoming quite disruptive.…
Tada, S; Tarbell, J M
2001-06-01
Interstitial flow through the subendothelial intima and media of an artery wall was simulated numerically to investigate the water flow distribution through fenestral pores which affects the wall shear stress on smooth muscle cells right beneath the internal elastic lamina (IEL). A two-dimensional analysis using the Brinkman model of porous media flow was performed. It was observed that the hydraulic permeability of the intimal layer should be much greater than that of the media in order to predict a reasonable magnitude for the pressure drop across the subendothelial intima and IEL (about 23 mostly at a 70 mm Hg luminal pressure). When Ki was set equal to the value in the media, this pressure drop was unrealistically high. Furthermore, the higher value of Ki produced a nearly uniform distribution of water flow through a simple array of fenestral pores all having the same diameters (1.2 microm), whereas when Ki was set at the value in the media, the flow distribution through fenestral pores was highly nonuniform and nonphysiologic. A deformable intima model predicted a nonuniform flow distribution at high pressure (180 mm Hg). Damage to the IEL was simulated by introducing a large fenestral pore (up to 17.8 microm) into the array. A dramatic increase in flow through the large pore was observed implying an altered fluid mechanical environment on the smooth muscle cells near the large pore which has implications for intimal hyperplasia and atherosclerosis. The model also predicted that the fluid shear stress on the bottom surface of an endothelial cell is on the order of 10 dyne/cm2, a level which can affect cell function.
NASA Astrophysics Data System (ADS)
Daley, T. J.; Barber, K. E.; Street-Perrott, F. A.; Loader, N. J.; Marshall, J. D.; Crowley, S. F.; Fisher, E. H.
2010-07-01
Stable isotope analyses of Sphagnum alpha-cellulose, precipitation and bog water from three sites across northwestern Europe (Raheenmore, Ireland, Walton Moss, northern England and Dosenmoor, northern Germany) over a total period of 26 months were used to investigate the nature of the climatic signal recorded by Sphagnum moss. The δ18O values of modern alpha-cellulose tracked precipitation more closely than bog water, with a mean isotopic fractionation factor α(cellulose-precipitation) of 1.0274 ± 0.001 (1σ) (≈27‰). Sub-samples of isolated Sphagnum alpha-cellulose were subsequently analysed from core WLM22, Walton Moss, northern England, yielding a Sphagnum-specific isotope record spanning the last 4300 years. The palaeo-record, calibrated using the modern data, provides evidence for large amplitude variations in the estimated oxygen isotope composition of precipitation during the mid- to late Holocene. Estimates of palaeotemperature change derived from statistical relationships between modern surface air temperatures and δ18O precipitation values for the British Isles give unrealistically large variation in comparison to proxies from other archives. We conclude that use of such relationships to calibrate mid-latitude palaeo-data must be undertaken with caution. The δ18O record from Sphagnum cellulose was highly correlated with a palaeoecologically-derived index of bog surface wetness (BSW), suggesting a common climatic driver.
ERIC Educational Resources Information Center
Woolman, David C.
Although the Shah of Iran should be admired for his efforts to use education to deal with formidable social challenges, his goal of producing a modern state in a single generation was unrealistic. Entrenched traditional values and unpredicted economic changes such as the need in 1977 to slow down Iran's rate of growth in the face of runaway…
NASA Astrophysics Data System (ADS)
Ekström, M.; Jones, P. D.; Fowler, H. J.; Lenderink, G.; Buishand, T. A.; Conway, D.
2007-04-01
Climate data for studies within the SWURVE (Sustainable Water: Uncertainty, Risk and Vulnerability in Europe) project, assessing the risk posed by future climatic change to various hydrological and hydraulic systems were obtained from the regional climate model HadRM3H, developed at the Hadley Centre of the UK Met Office. This paper gives some background to HadRM3H; it also presents anomaly maps of the projected future changes in European temperature, rainfall and potential evapotranspiration (PET, estimated using a variant of the Penman formula). The future simulations of temperature and rainfall, following the SRES A2 emissions scenario, suggest that most of Europe will experience warming in all seasons, with heavier precipitation in winter in much of western Europe (except for central and northern parts of the Scandinavian mountains) and drier summers in most parts of western and central Europe (except for the north-west and the eastern part of the Baltic Sea). Particularly large temperature anomalies (>6°C) are projected for north-east Europe in winter and for southern Europe, Asia Minor and parts of Russia in summer. The projected PET displayed very large increases in summer for a region extending from southern France to Russia. The unrealistically large values could be the result of an enhanced hydrological cycle in HadRM3H, affecting several of the input parameters to the PET calculation. To avoid problems with hydrological modelling schemes, PET was re-calculated, using empirical relationships derived from observational values of temperature and PET.
Dispositional Optimism and Therapeutic Expectations in Early Phase Oncology Trials
Jansen, Lynn A.; Mahadevan, Daruka; Appelbaum, Paul S.; Klein, William MP; Weinstein, Neil D.; Mori, Motomi; Daffé, Racky; Sulmasy, Daniel P.
2016-01-01
Purpose Prior research has identified unrealistic optimism as a bias that might impair informed consent among patient-subjects in early phase oncology trials. Optimism, however, is not a unitary construct; it can also be defined as a general disposition, or what is called dispositional optimism. We assessed whether dispositional optimism would be related to high expectations for personal therapeutic benefit reported by patient-subjects in these trials but not to the therapeutic misconception. We also assessed how dispositional optimism related to unrealistic optimism. Methods Patient-subjects completed questionnaires designed to measure expectations for therapeutic benefit, dispositional optimism, unrealistic optimism, and the therapeutic misconception. Results Dispositional optimism was significantly associated with higher expectations for personal therapeutic benefit (Spearman r=0.333, p<0.0001), but was not associated with the therapeutic misconception (Spearman r=−0.075, p=0.329). Dispositional optimism was weakly associated with unrealistic optimism (Spearman r=0.215, p=0.005). In multivariate analysis, both dispositional optimism (p=0.02) and unrealistic optimism (p<0.0001) were independently associated with high expectations for personal therapeutic benefit. Unrealistic optimism (p=0.0001), but not dispositional optimism, was independently associated with the therapeutic misconception. Conclusion High expectations for therapeutic benefit among patient-subjects in early phase oncology trials should not be assumed to result from misunderstanding of specific information about the trials. Our data reveal that these expectations are associated with either a dispositionally positive outlook on life or biased expectations about specific aspects of trial participation. Not all manifestations of optimism are the same, and different types of optimism likely have different consequences for informed consent in early phase oncology research. PMID:26882017
Dispositional optimism and therapeutic expectations in early-phase oncology trials.
Jansen, Lynn A; Mahadevan, Daruka; Appelbaum, Paul S; Klein, William M P; Weinstein, Neil D; Mori, Motomi; Daffé, Racky; Sulmasy, Daniel P
2016-04-15
Prior research has identified unrealistic optimism as a bias that might impair informed consent among patient-subjects in early-phase oncology trials. However, optimism is not a unitary construct; it also can be defined as a general disposition, or what is called dispositional optimism. The authors assessed whether dispositional optimism would be related to high expectations for personal therapeutic benefit reported by patient-subjects in these trials but not to the therapeutic misconception. The authors also assessed how dispositional optimism related to unrealistic optimism. Patient-subjects completed questionnaires designed to measure expectations for therapeutic benefit, dispositional optimism, unrealistic optimism, and the therapeutic misconception. Dispositional optimism was found to be significantly associated with higher expectations for personal therapeutic benefit (Spearman rank correlation coefficient [r], 0.333; P<.0001), but was not associated with the therapeutic misconception (Spearman r, -0.075; P = .329). Dispositional optimism was found to be weakly associated with unrealistic optimism (Spearman r, 0.215; P = .005). On multivariate analysis, both dispositional optimism (P = .02) and unrealistic optimism (P<.0001) were found to be independently associated with high expectations for personal therapeutic benefit. Unrealistic optimism (P = .0001), but not dispositional optimism, was found to be independently associated with the therapeutic misconception. High expectations for therapeutic benefit among patient-subjects in early-phase oncology trials should not be assumed to result from misunderstanding of specific information regarding the trials. The data from the current study indicate that these expectations are associated with either a dispositionally positive outlook on life or biased expectations concerning specific aspects of trial participation. 
Not all manifestations of optimism are the same, and different types of optimism likely have different consequences for informed consent in early-phase oncology research. © 2016 American Cancer Society.
A quality assured surface wind database in Eastern Canada
NASA Astrophysics Data System (ADS)
Lucio-Eceiza, E. E.; González-Rouco, J. F.; Navarro, J.; Beltrami, H.; Jiménez, P. A.; García-Bustamante, E.; Hidalgo, A.
2012-04-01
This work summarizes the results of a Quality Assurance (QA) procedure applied to wind data centred over a wide area in Eastern Canada. The region includes the provinces of Quebec, Prince Edward Island, New Brunswick, Nova Scotia, Newfoundland, Labrador and parts of the north-eastern U.S. (Maine, New Hampshire, Massachusetts, New York and Vermont). The data set consists of 527 stations compiled from three different sources: 344 land sites from Environment Canada (EC; 1940-2009), 40 buoys distributed over the East Coast and the Canadian Great Lakes provided by the Department of Fisheries and Oceans (DFO; 1988-2008), and 143 land sites over both eastern Canada and north-eastern U.S. provided by the National Center of Atmospheric Research (NCAR; 1975-2007). The complexity of the QA process is enhanced in this case by the variety of institutional observational protocols that lead to different temporal resolutions (hourly, 3-h and 6-h), unit systems (km/h in EC; m/s in DFO and knots in NCAR), time references (e.g. UTC, UTC+1, UTC-5, UTC-4), etc. Initial corrections comprised the establishment of common reference systems for time (UTC) and units (MKS). The QA applied to the resulting dataset is structured in three steps that involve the detection and correction of: manipulation errors (i.e. repetitions); unrealistic values and ranges in wind module and direction; and abnormally low (e.g. long constant periods) and high variations (e.g. extreme values and inhomogeneities). Results from the first step indicate 22 sites (8 EC; 14 DFO) showing temporal patterns that are unrealistically repeated across stations. After the QA is applied, the dataset will be subject to statistical and dynamical downscaling studies. The statistical approaches will allow for an understanding of the wind field variability related to changes in the large scale atmospheric circulation as well as their dependence on local/regional features like topography, land-sea contrasts, snow/ice presence, etc.
The dynamical downscaling will allow for process understanding assessments by performing high spatial resolution simulations with the WRF model. Finally, model validation will be targeted through the comparison with observations.
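Two of the QA steps described above, unit harmonisation and the screening for abnormally low variation, can be sketched in a few lines. The conversion factors follow from the stated unit systems; the run-length threshold is an illustrative assumption, not a value from the study:

```python
# Sketch of two QA steps: unit conversion to MKS and flagging of long
# constant runs ("abnormally low variation"). The 24-sample threshold is
# an illustrative choice, not the study's actual criterion.

TO_MS = {"EC": 1000.0 / 3600.0,   # km/h -> m/s
         "DFO": 1.0,              # already m/s
         "NCAR": 0.514444}        # knots -> m/s

def to_mks(speed, source):
    """Convert a wind speed to m/s given its originating network."""
    return speed * TO_MS[source]

def flag_constant_runs(series, min_run=24):
    """Return (start, end) index pairs where the value repeats for at
    least min_run consecutive samples -- a simple 'stuck sensor' screen."""
    runs, start = [], 0
    for i in range(1, len(series) + 1):
        if i == len(series) or series[i] != series[start]:
            if i - start >= min_run:
                runs.append((start, i - 1))
            start = i
    return runs
```

For an hourly EC record, `flag_constant_runs([10.0] * 30 + [12.0, 14.0])` would report the 30-sample constant stretch as a single suspicious run.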
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions about the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and to higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and unrealistically constrain their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
Moral contracts and the patient-physician relationship.
Rothbard, D
1984-01-01
Rothbart critically examines Robert Veatch's contractual model of the physician-patient relationship, which grounds a physician's obligations in a just decision procedure and requires a mutual, full disclosure of personal values and ethical principles. He sees Veatch's model as making unrealistic demands on both parties, and instead proposes a counseling-sanctioning model. In this model, two conceptions of individual autonomy, one creating a right to voluntary and independent decision making and the other addressing the ability to act freely, establish rights and duties for patient and physician. Rothbart argues that this model realistically represents the value-laden dimensions of medicine and mandates reasonable standards of patient care.
More Outstanding Nonsense: A Critique of Ofsted Criteria
ERIC Educational Resources Information Center
Richards, Colin
2015-01-01
The Office for Standards in Education's most recently published criteria for "outstanding" teaching are scrutinised and found wanting. They are seen as unrealistic for teachers to meet and equally unrealistic as criteria for use by inspectors. An explanation is offered as to why they are framed as they are and an alternative, more…
David B. South; Curtis L. VanderSchaaf; Larry D. Teeter
2006-01-01
Some researchers claim that continuously increasing intensive plantation management will increase profits and reduce the unit cost of wood production while others believe in the law of diminishing returns. We developed four hypothetical production models where yield is a function of silvicultural effort. Models that produced unrealistic results were (1) an exponential...
Particle astronomy and particle physics from the moon - The particle observatory
NASA Technical Reports Server (NTRS)
Wilson, Thomas L.
1990-01-01
Promising experiments from the moon using particle detectors are discussed, noting the advantage of the large flux collecting power Pc offered by the remote, stable environment of a lunar base. An observatory class of particle experiments is presented, based upon proposals at NASA's recent Stanford workshop. They vary from neutrino astronomy, particle astrophysics, and cosmic ray experiments to space physics and fundamental physics experiments such as proton decay and 'table-top' arrays. This research is background-limited on earth, and it is awkward and unrealistic in earth orbit, but is particularly suited for the moon where Pc can be quite large and the instrumentation is not subject to atmospheric erosion as it is (for large t) in low earth orbit.
New Exact Solutions of Relativistic Hydrodynamics for Longitudinally Expanding Fireballs
NASA Astrophysics Data System (ADS)
Csörgő, Tamás; Kasza, Gábor; Csanád, Máté; Jiang, Zefang
2018-06-01
We present new, exact, finite solutions of relativistic hydrodynamics for longitudinally expanding fireballs for arbitrary constant values of the speed of sound. These new solutions generalize earlier, longitudinally finite, exact solutions from an unrealistic to a reasonable equation of state, characterized by a temperature-independent (average) value of the speed of sound. Observables such as the rapidity density and the pseudorapidity density are evaluated analytically, resulting in simple, easy-to-fit formulae that can be matched to high-energy proton-proton and heavy-ion collision data at RHIC and LHC. In the longitudinally boost-invariant limit, these new solutions approach the Hwa-Bjorken solution and the corresponding rapidity distributions approach a rapidity plateau.
Weinberg, Seth H.; Smith, Gregory D.
2012-01-01
Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10^-17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of the fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume was unrealistically large and/or the kinetics of calcium binding were sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
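The deterministic-versus-stochastic question posed above can be illustrated with a toy birth-death process. This is my own minimal construction, not the authors' Markov model: ion count n in a small subspace, with influx inhibited by local calcium (the nonlinear feedback) and first-order efflux, simulated exactly with the Gillespie algorithm and compared with the ODE steady state:

```python
import math
import random

# Toy sketch (illustrative, not the paper's model): calcium-inhibited
# influx k_in * K / (K + n) and first-order efflux k_out * n. With this
# nonlinear feedback, the time-averaged stochastic ion count need not
# match the deterministic steady state when n is small.

def deterministic_steady_state(k_in, k_out, K):
    """Solve k_in * K / (K + n) = k_out * n for n >= 0 (quadratic root)."""
    return (-K + math.sqrt(K * K + 4.0 * k_in * K / k_out)) / 2.0

def stochastic_mean(k_in, k_out, K, t_end=500.0, seed=1):
    """Time-averaged ion count from an exact (Gillespie) simulation."""
    rng = random.Random(seed)
    n, t, area = 0, 0.0, 0.0
    while t < t_end:
        a_in = k_in * K / (K + n)      # influx propensity (inhibited by n)
        a_out = k_out * n              # efflux propensity
        dt = rng.expovariate(a_in + a_out)
        area += n * min(dt, t_end - t)  # accumulate integral of n dt
        t += dt
        if t < t_end:
            n += 1 if rng.random() < a_in / (a_in + a_out) else -1
    return area / t_end
```

Comparing `stochastic_mean(...)` against `deterministic_steady_state(...)` for small rate constants shows the kind of discrepancy the abstract describes.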
Dynamical role of Ekman pumping in rapidly rotating convection
NASA Astrophysics Data System (ADS)
Stellmach, Stephan; Julien, Keith; Cheng, Jonathan; Aurnou, Jonathan
2015-04-01
The exact nature of the mechanical boundary conditions (i.e. no-slip versus stress-free) is usually considered to be of secondary importance in the rapidly rotating parameter regime characterizing planetary cores. While they have considerable influence for the Ekman numbers achievable in today's global simulations, for planetary values both the viscous Ekman layers and the associated secondary flows are generally expected to become negligibly small. In fact, usually the main purpose of using stress-free boundary conditions in numerical dynamo simulations is to suppress unrealistically large friction and pumping effects. In this study, we investigate the influence of the mechanical boundary conditions on core convection systematically. By restricting ourselves to the idealized case of rapidly rotating Rayleigh-Bénard convection, we are able to combine results from direct numerical simulations (DNS), laboratory experiments and asymptotic theory into a coherent picture. Contrary to the general expectation, we show that the dynamical effects of Ekman pumping increase with decreasing Ekman number over the investigated parameter range. While stress-free DNS results converge to the asymptotic predictions, both no-slip simulations and laboratory experiments consistently reveal increasingly large deviations from the existing asymptotic theory based on dynamically passive Ekman layers. The implications of these results for core dynamics are discussed briefly.
Harris, Adam J. L.; de Molière, Laura; Soh, Melinda; Hahn, Ulrike
2017-01-01
One of the most accepted findings across psychology is that people are unrealistically optimistic in their judgments of comparative risk concerning future life events—they judge negative events as less likely to happen to themselves than to the average person. Harris and Hahn (2011), however, demonstrated how unbiased (non-optimistic) responses can result in data patterns commonly interpreted as indicative of optimism due to statistical artifacts. In the current paper, we report the results of 5 studies that control for these statistical confounds and observe no evidence for residual unrealistic optimism, even observing a ‘severity effect’ whereby severe outcomes were overestimated relative to neutral ones (Studies 3 & 4). We conclude that there is no evidence supporting an optimism interpretation of previous results using the prevalent comparison method. PMID:28278200
Murray, Sandra L.; Griffin, Dale W.; Derrick, Jaye L.; Harris, Brianna; Aloni, Maya; Leder, Sadie
2014-01-01
The authors examine whether unrealistically viewing a romantic partner as the image of one’s ideal partner accelerates or slows declines in marital satisfaction among newlyweds. A longitudinal study linked unrealistic idealization at the point of marriage to changes in satisfaction over the first three years of marriage. Overall, satisfaction declined markedly, consistent with past research. However, seeing a less-than-ideal partner as a reflection of one’s ideals predicted a certain level of immunity to the corrosive effects of time: People who initially idealized their partner highly experienced no declines in satisfaction. The obtained benefits of idealization remained in analyses that separately controlled for the positivity of partner perceptions and the possibility that better adjusted people might be in better relationships. PMID:21467549
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hongyi; Huang, Maoyi; Wigmosta, Mark S.
2011-12-24
Previous studies using the Community Land Model (CLM) focused on simulating land-atmosphere interactions and water balance at continental to global scales, with limited attention paid to its capability for hydrologic simulations at watershed or regional scales. This study evaluates the performance of CLM 4.0 (CLM4) for hydrologic simulations, and explores possible directions of improvement. Specifically, it is found that CLM4 tends to produce unrealistically large temporal variation of runoff for applications at a mountainous catchment in the Northwest United States where subsurface runoff is dominant, as well as at a few flux tower sites. We show that runoff simulations from CLM4 can be improved by: (1) increasing spatial resolution of the land surface representations; (2) calibrating parameter values; (3) replacing the subsurface formulation with a more general nonlinear function; and (4) implementing the runoff generation schemes from the Variable Infiltration Capacity (VIC) model. This study also highlights the importance of evaluating both the energy and water fluxes in the application of land surface models across multiple scales.
Accurate modeling and evaluation of microstructures in complex materials
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman
2018-02-01
Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always plausible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on successive calculation of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered, which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.
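The histogram-matching step invoked above can be illustrated by the generic rank-mapping technique (the paper's iterative framework is more involved; this is only the core idea): grey values of a generated realization are remapped so that its histogram reproduces that of the reference image.

```python
# Generic histogram matching by rank mapping (illustrative sketch, not
# the author's exact formulation). Both images are flattened to 1-D
# lists of equal length.

def histogram_match(realization, reference):
    """Remap values of `realization` so its sorted values equal those
    of `reference`, preserving the rank order of each pixel."""
    order = sorted(range(len(realization)), key=lambda i: realization[i])
    ref_sorted = sorted(reference)
    out = [0] * len(realization)
    for rank, idx in enumerate(order):
        out[idx] = ref_sorted[rank]  # rank-th smallest -> rank-th smallest
    return out

print(histogram_match([3, 1, 2], [10, 20, 30]))  # ranks are preserved
```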
StePS: Stereographically Projected Cosmological Simulations
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-05-01
StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modify the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and naive O(N²) force calculation; this arrives at a correlation function of the same quality more quickly than standard (tree or P³M) algorithms with similar spatial and mass resolution. The O(N²) force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.
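The direct-summation pattern behind an O(N²) force calculation looks like the sketch below. This is only the generic kernel: the real StePS code works in the stereographically projected geometry and on GPUs, and the softening length here is an illustrative parameter:

```python
# Generic direct-summation N-body kernel (the O(N^2) approach mentioned
# above), with Plummer-style softening and G = 1. Schematic only; StePS
# itself operates in the compactified geometry.

def pairwise_accels(positions, masses, eps=1e-3):
    """Return accelerations from softened Newtonian gravity."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps  # softening avoids singularities
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += masses[j] * dx[k] * inv_r3
    return acc
```

Because every pair is visited independently, the double loop maps directly onto one GPU thread per particle, which is the adaptability to graphics cards the description refers to.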
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong
2013-09-16
Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.
How unrealistic optimism is maintained in the face of reality.
Sharot, Tali; Korn, Christoph W; Dolan, Raymond J
2011-10-09
Unrealistic optimism is a pervasive human trait that influences domains ranging from personal relationships to politics and finance. How people maintain unrealistic optimism, despite frequently encountering information that challenges those biased beliefs, is unknown. We examined this question and found a marked asymmetry in belief updating. Participants updated their beliefs more in response to information that was better than expected than to information that was worse. This selectivity was mediated by a relative failure to code for errors that should reduce optimism. Distinct regions of the prefrontal cortex tracked estimation errors when those called for positive update, both in individuals who scored high and low on trait optimism. However, highly optimistic individuals exhibited reduced tracking of estimation errors that called for negative update in right inferior prefrontal gyrus. These findings indicate that optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.
NASA Astrophysics Data System (ADS)
Feng, Juan; Chen, Wen; Gong, Hainan; Ying, Jun; Jiang, Wenping
2018-06-01
The delayed impacts of the central Pacific (CP) El Niño on the East Asian summer monsoon (EASM) are evaluated by comparing historical runs from Coupled Model Intercomparison Project Phase 5 models against reanalysis data. In observations, an anomalous western North Pacific anticyclone (WNPAC), linking the CP El Niño to the EASM, forms due to the transition of sea surface temperature (SST) warming into SST cooling over the CP, which generates a WNPAC through a Gill-Matsuno response. In comparison with the observational result, only one-third of the models (the type-I models) capture a weaker and smaller WNPAC, whereas the other two-thirds (the type-II models) fail to reproduce a WNPAC. The simulation biases in both type-I and type-II models mainly arise from an unrealistic, long-lasting CP El Niño warming, which causes a north Indian Ocean SST warming bias in the models through an air-sea interaction process. This north Indian Ocean SST warming generates the WNPAC through capacitor effects, which differs from the WNPAC formation mechanism in observations; this discrepancy leads to the simulation biases in type-I models. In type-II models, the unrealistic CP El Niño warming persists into summer, producing an anomalous cyclone over the central-western Pacific. The opposing effects of the CP and north Indian Ocean SST warming on the WNP atmospheric circulation lead to the disappearance of the WNPAC; hence, large simulation biases are produced in type-II models. Further analysis demonstrates that the slow decay of the CP El Niño is caused by the unrealistically simulated climatological SST, which creates strong warm meridional oceanic advection and results in a sustained CP El Niño warming.
Reward and uncertainty in exploration programs
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1971-01-01
A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
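The repeated-trial procedure described above can be re-created schematically. All numbers and distributional choices below (lognormal reservoir sizes, success probability, unit price and per-well cost) are my own placeholders, not values from the paper; only the sampling structure follows the abstract:

```python
import random

# Monte Carlo sketch of the exploration-economics procedure: each trial
# draws independent outcomes for a drilling program and computes net
# return and unit production cost. All parameters are illustrative.

def one_trial(n_wells, p_success, price, cost_per_well, rng):
    # each well succeeds independently; each success draws a reservoir size
    sizes = [rng.lognormvariate(2.0, 1.0)
             for _ in range(n_wells) if rng.random() < p_success]
    total_oil = sum(sizes)
    total_cost = n_wells * cost_per_well
    net_return = price * total_oil - total_cost
    unit_cost = total_cost / total_oil if total_oil > 0 else float("inf")
    return net_return, unit_cost

def simulate(trials=10000, seed=0):
    rng = random.Random(seed)
    return [one_trial(10, 0.2, price=1.0, cost_per_well=5.0, rng=rng)
            for _ in range(trials)]
```

Histogramming the returned pairs over many trials approximates the probability density functions of net return and unit production cost, exactly as in the repeated-trial scheme the abstract outlines.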
O'Flynn-Magee, Kathy; Clauson, Marion
2013-09-01
Fair and consistent assessment, specifically grading, is crucial to teaching and learning scholarship and is a professional responsibility of nurse educators. Yet, many would agree that assessment is one of the most challenging aspects of their role. Despite differing beliefs, values, and meanings attributed to grading and grades, teachers' grading practices should be guided by principles and supported by policies. Inconsistent grading practices among educators, students' unrealistic expectations of grades, and a trend toward grade inflation may be contributing to both educators' and students' concerns. A teaching scholarship project that led to a research study explored nurse educators' beliefs, values, and practices related to the grading of written academic work. The purpose of this article is to share the findings and the resulting grading guidelines that were developed to support nurse educators' endeavors to enact equitable grading practices. Copyright 2013, SLACK Incorporated.
Mapel, D; Pearson, M
2002-08-01
Healthcare payers make decisions on funding for treatments for diseases, such as chronic obstructive pulmonary disease (COPD), on a population level, so require evidence of treatment success in appropriate populations, using usual routine care as the comparison for alternative management approaches. Such health outcomes evidence can be obtained from a number of sources. The 'gold standard' method for obtaining evidence of treatment success is usually taken as the randomized controlled prospective clinical trial. Yet the value of such studies in providing evidence for decision-makers can be questioned due to the restricted entry criteria limiting the ability to generalize to real life populations, narrow focus on individual parameters, use of placebo for comparison rather than usual therapy and unrealistic intense monitoring of patients. Evidence obtained from retrospective and observational studies can supplement that from randomized clinical trials, providing that care is taken to guard against bias and confounders. However, very large numbers of patients must be investigated if small differences between drugs and treatment approaches are to be detected. Administrative databases from healthcare systems provide an opportunity to obtain observational data on large numbers of patients. Such databases have shown that high healthcare costs in patients with COPD are associated with co-morbid conditions and current smoking status. Analysis of an administrative database has also shown that elderly patients with COPD who received inhaled corticosteroids within 90 days of discharge from hospital had 24% fewer repeat hospitalizations for COPD and were 29% less likely to die during the 1-year follow-up period. In conclusion, there are a number of sources of meaningful evidence of the health outcomes arising from different therapeutic approaches that should be of value to healthcare payers making decisions on resource allocation.
A modified social force model for crowd dynamics
NASA Astrophysics Data System (ADS)
Hassan, Ummi Nurmasyitah; Zainuddin, Zarita; Abu-Sulyman, Ibtesam M.
2017-08-01
The Social Force Model (SFM) is one of the most successful models in microscopic pedestrian studies that is used to study the movement of pedestrians. Many modifications have been made to improve the SFM by earlier researchers, such as the incorporation of a constant respect factor into the self-stopping mechanism. Before that mechanism was introduced, researchers found that a pedestrian would immediately come to a halt if other pedestrians were near him, which is an unrealistic behavior. Therefore, researchers introduced a self-slowing mechanism to gradually stop a pedestrian when he is approaching other pedestrians. Subsequently, a dynamic respect factor was introduced into the self-slowing mechanism, based on the density of the pedestrians, to make the model even more realistic. In real-life situations, the respect factor of the pedestrians should take dynamic values instead of a constant value. However, when we reproduced the simulation with the dynamic respect factor, we found that the movement of the pedestrians was unrealistic because each pedestrian lacked perception of the pedestrians in front of him. In this paper, we adopt both a dynamic respect factor and a dynamic angular parameter, called the modified dynamic respect factor, which is dependent on the density of the pedestrians. Simulations are performed in a normal unidirectional walkway to compare the simulated pedestrians' movements produced by both models. The results obtained show that the modified dynamic respect factor produces more realistic movement of the pedestrians, conforming to the real situation. Moreover, we also found that the simulations endow the pedestrian with a self-slowing mechanism and a perception of other pedestrians in front of him.
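The density-dependent slowing with a forward perception cone can be sketched as follows. The functional form of the respect factor and all parameter values here are illustrative assumptions, not the paper's equations; only the structure (density sensed inside an angular window, speed reduced with density) follows the description:

```python
import math

# Sketch of a self-slowing rule with an angular perception parameter:
# desired speed is scaled down as the density of neighbours inside a
# forward cone of half-angle `view_angle` grows. Illustrative only.

def local_density(pos, heading, neighbours, radius=2.0, view_angle=math.pi / 2):
    """Count neighbours inside the forward cone, per unit area."""
    count = 0
    for q in neighbours:
        dx, dy = q[0] - pos[0], q[1] - pos[1]
        dist = math.hypot(dx, dy)
        if 0 < dist <= radius:
            angle = abs(math.atan2(dy, dx) - heading)
            angle = min(angle, 2 * math.pi - angle)  # wrap to [0, pi]
            if angle <= view_angle:
                count += 1
    cone_area = view_angle * radius ** 2  # sector spanned by +/- view_angle
    return count / cone_area

def slowed_speed(v0, density, alpha=0.5):
    """Dynamic respect factor: speed decays with perceived crowd density."""
    return v0 / (1.0 + alpha * density)
```

A pedestrian directly behind the walker falls outside the cone and does not slow him, which is the perceptual asymmetry the dynamic angular parameter is meant to capture.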
Biologically-Inspired Concepts for Self-Management of Complexity
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Hinchey, G.
2006-01-01
Inherent complexity in large-scale applications may be impossible to eliminate or even ameliorate despite a number of promising advances. In such cases, the complexity must be tolerated and managed. Such management may be beyond the abilities of humans, or require such overhead as to make management by humans unrealistic. A number of initiatives inspired by concepts in biology have arisen for self-management of complex systems. We present some ideas and techniques we have been experimenting with, inspired by lesser-known concepts in biology that show promise in protecting complex systems and represent a step towards self-management of complexity.
Nonlinear reflection of shock shear waves in soft elastic media.
Pinton, Gianmarco; Coulouvrat, François; Gennisson, Jean-Luc; Tanter, Mickaël
2010-02-01
For fluids, the theoretical investigation of shock wave reflection shows good agreement with experiments when the incident shock Mach number is large. But when it is small, theory predicts that Mach reflections are physically unrealistic, which contradicts experimental evidence. This von Neumann paradox is investigated for shear shock waves in soft elastic solids with theory and simulations. The nonlinear elastic wave equation is approximated by a paraxial wave equation with a cubic nonlinear term. This equation is solved numerically with finite differences and the Godunov scheme. Three reflection regimes are observed. Theory is developed for shock propagation by applying the Rankine-Hugoniot relations and entropic constraints. A characteristic parameter relating diffraction and nonlinearity is introduced and its theoretical values are shown to match numerical observations. The numerical solution is then applied to von Neumann reflection, where curved reflected and Mach shocks are observed. Finally, the case of weak von Neumann reflection, where there is no reflected shock, is examined. The smooth but non-monotonic transition between these three reflection regimes, from the linear Snell-Descartes regime to the perfect grazing case, provides a solution to the acoustical von Neumann paradox for the shear wave equation. This transition is similar to that produced by the quadratic nonlinearity in fluids.
Comparison of two trajectory based models for locating particle sources for two rural New York sites
NASA Astrophysics Data System (ADS)
Zhou, Liming; Hopke, Philip K.; Liu, Wei
Two back-trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capabilities of identifying likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. Quantitative transport bias analysis attempts to take into account the distribution of concentrations around the directions of the back trajectories. In the full QTBA approach, deposition processes (wet and dry) are also considered; simplified QTBA omits the consideration of deposition. It is best used with multiple-site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained by the previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six common sources for the two sites, sulfate, soil, zinc smelter, nitrate, wood smoke and copper smelter, were analyzed. The results of the two methods are consistent and locate large and clearly defined sources well. The RTWC approach can find more minor sources but may also give unrealistic estimations of the source locations.
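The residence-time weighting at the heart of RTWC can be illustrated with a minimal gridded sketch. This is my own schematic, using hourly trajectory endpoints as a proxy for residence time: each grid cell receives the residence-time-weighted average of the receptor concentrations of all back trajectories passing through it.

```python
from collections import defaultdict

# Schematic RTWC field: cell value = sum(c_k * tau_k) / sum(tau_k) over
# trajectories k crossing the cell, with each endpoint counted as one
# unit of residence time. Illustrative, not the full published method.

def rtwc(trajectories, concentrations, cell=1.0):
    num = defaultdict(float)
    den = defaultdict(float)
    for pts, c in zip(trajectories, concentrations):
        for x, y in pts:                       # hourly trajectory endpoints
            key = (int(x // cell), int(y // cell))
            num[key] += c                      # concentration x unit residence time
            den[key] += 1.0
    return {k: num[k] / den[k] for k in num}
```

Cells visited mainly by high-concentration trajectories stand out as likely source regions, which is how the method points to source locations.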
Shepperd, James A; Lipsey, Nikolette P; Pachur, Thorsten; Waters, Erika A
2018-07-01
Medical decisions made on behalf of another person-particularly those made by adult caregivers for their minor children-are often informed by the decision maker's beliefs about the treatment's risks and benefits. However, we know little about the cognitive and affective mechanisms influencing such "proxy" risk perceptions and about how proxy risk perceptions are related to prominent judgment phenomena. Adult caregivers of minor children with asthma ( N = 132) completed an online, cross-sectional survey assessing 1) cognitions and affects that form the basis of the availability, representativeness, and affect heuristics; 2) endorsement of the absent-exempt and the better-than-average effect; and 3) proxy perceived risk and unrealistic comparative optimism of an asthma exacerbation. We used the Pediatric Asthma Control and Communication Instrument (PACCI) to assess asthma severity. Respondents with higher scores on availability, representativeness, and negative affect indicated higher proxy risk perceptions and (for representativeness only) lower unrealistic optimism, irrespective of asthma severity. Conversely, respondents who showed a stronger display of the better-than-average effect indicated lower proxy risk perceptions but did not differ in unrealistic optimism. The absent-exempt effect was unrelated to proxy risk perceptions and unrealistic optimism. Heuristic judgment processes appear to contribute to caregivers' proxy risk perceptions of their child's asthma exacerbation risk. Moreover, the display of other, possibly erroneous, judgment phenomena is associated with lower caregiver risk perceptions. Designing interventions that target these mechanisms may help caregivers work with their children to reduce exacerbation risk.
Maximum likelihood resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, J.; Jenkins, C.
2005-12-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdfs): (1) the data pdf, with mean and variance determined by the data value and squared uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness, and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm.
The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) sidescan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise.
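The core resampling step described in this abstract, taking the maximum of the likelihood defined by the product of the data pdf and the geostatistical pdf, can be sketched in a few lines. This is an illustrative simplification with assumed function and parameter names: for two Gaussians, the product peaks at the inverse-variance weighted mean, whereas the full algorithm also involves kriging of proximal values and Monte Carlo sampling of the data probability space.

```python
def resample_value(d, sigma_d, k_mean, sigma_k):
    """Maximum-likelihood resampled value from the product of two Gaussian
    pdfs: the data pdf N(d, sigma_d^2) and the geostatistical (kriging) pdf
    N(k_mean, sigma_k^2).  The product of two Gaussians is itself Gaussian;
    its peak is the inverse-variance weighted mean of the two means."""
    w_d = 1.0 / sigma_d**2          # precision of the data pdf
    w_k = 1.0 / sigma_k**2          # precision of the geostatistical pdf
    best = (w_d * d + w_k * k_mean) / (w_d + w_k)
    var = 1.0 / (w_d + w_k)         # variance of the product pdf
    return best, var
```

A highly uncertain datum (large sigma_d) is thus pulled strongly toward the kriging estimate from its neighbors, while a precise datum is left nearly unchanged, which is the behavior the abstract describes.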
Applying Dispersive Changes to Lagrangian Particles in Groundwater Transport Models
Konikow, Leonard F.
2010-01-01
Method-of-characteristics groundwater transport models require that changes in concentrations computed within an Eulerian framework to account for dispersion be transferred to moving particles used to simulate advective transport. A new algorithm was developed to accomplish this transfer between nodal values and advecting particles more precisely and realistically compared to currently used methods. The new method scales the changes and adjustments of particle concentrations relative to limiting bounds of concentration values determined from the population of adjacent nodal values. The method precludes unrealistic undershoot or overshoot for concentrations of individual particles. In the new method, if dispersion causes cell concentrations to decrease during a time step, those particles in the cell having the highest concentration will decrease the most, and those with the lowest concentration will decrease the least. The converse is true if dispersion is causing concentrations to increase. Furthermore, if the initial concentration on a particle is outside the range of the adjacent nodal values, it will automatically be adjusted in the direction of the acceptable range of values. The new method is inherently mass conservative. © US Government 2010.
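The qualitative behavior this abstract describes (particles farthest from the limiting bound change the most, and no particle under- or overshoots the nodal range) can be sketched as follows. The linear scaling formula and function name here are our assumptions for illustration, not Konikow's published algorithm, and this toy version does not reproduce its exact mass-conservation property.

```python
def adjust_particles(conc, delta, c_min, c_max):
    """Hedged sketch of bounded scaling: distribute a nodal concentration
    change `delta` over particle concentrations `conc` so that particles
    farther from the limiting bound change the most, and no particle
    leaves the interval [c_min, c_max] set by adjacent nodal values."""
    total = c_max - c_min
    out = []
    for c in conc:
        # particles starting outside the nodal range are pulled back toward it
        c = min(max(c, c_min), c_max)
        if delta < 0:
            span = c - c_min      # room to decrease: highest conc moves most
        else:
            span = c_max - c      # room to increase: lowest conc moves most
        out.append(c + delta * (span / total if total else 0.0))
    return out
```

For a decreasing change, the particle at the top of the range moves the most and the particle at c_min does not move at all, matching the behavior stated in the abstract.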
Mazzaro, Laura J.; Munoz-Esparza, Domingo; Lundquist, Julie K.; ...
2017-07-06
Multiscale atmospheric simulations can be computationally prohibitive, as they require large domains and fine spatiotemporal resolutions. Grid-nesting can alleviate this by bridging mesoscales and microscales, but one turbulence scheme must run at resolutions within a range of scales known as the terra incognita (TI). TI grid-cell sizes can violate both mesoscale and microscale subgrid-scale parametrization assumptions, resulting in unrealistic flow structures. Herein we assess the impact of unrealistic lateral boundary conditions from parent mesoscale simulations at TI resolutions on nested large eddy simulations (LES), to determine whether parent domains bias the nested LES. We present a series of idealized nested mesoscale-to-LES runs of a dry convective boundary layer (CBL) with different parent resolutions in the TI. We compare the nested LES with a stand-alone LES with periodic boundary conditions. The nested LES domains develop ~20% smaller convective structures, while potential temperature profiles are nearly identical for both the mesoscale and LES simulations. The horizontal wind speed and surface wind shear in the nested simulations closely resemble the reference LES. Heat fluxes are overestimated by up to ~0.01 K m s–1 in the top half of the PBL for all nested simulations. Overestimates of turbulent kinetic energy (TKE) and Reynolds stress in the nested domains are proportional to the parent domain's grid-cell size, and are almost eliminated for the simulation with the finest parent grid-cell size. Furthermore, based on these results, we recommend that LES of the CBL be forced by mesoscale simulations with the finest practical resolution.
Boer, S.; Elias, E.; Aarninkhof, S.; Roelvink, D.; Vellinga, T.
2007-01-01
Morphological model computations based on uniform (non-graded) sediment revealed an unrealistically strong scour of the sea floor in the immediate vicinity to the west of Maasvlakte 2. By means of a state-of-the-art graded sediment transport model the effect of natural armouring and sorting of bed material on the scour process has been examined. Sensitivity computations confirm that the development of the scour hole is strongly reduced due to the incorporation of armouring processes, suggesting an approximately 30% decrease in terms of erosion area below the -20m depth contour. © 2007 ASCE.
Fine-particle pH for Beijing winter haze as inferred from different thermodynamic equilibrium models
NASA Astrophysics Data System (ADS)
Song, Shaojie; Gao, Meng; Xu, Weiqi; Shao, Jingyuan; Shi, Guoliang; Wang, Shuxiao; Wang, Yuxuan; Sun, Yele; McElroy, Michael B.
2018-05-01
pH is an important property of aerosol particles but is difficult to measure directly. Several studies have estimated the pH values for fine particles in northern China winter haze using thermodynamic models (i.e., E-AIM and ISORROPIA) and ambient measurements. The reported pH values differ widely, ranging from close to 0 (highly acidic) to as high as 7 (neutral). In order to understand the reason for this discrepancy, we calculated pH values using these models with different assumptions with regard to model inputs and particle phase states. We find that the large discrepancy is due primarily to differences in the model assumptions adopted in previous studies. Calculations using only aerosol-phase composition as inputs (i.e., reverse mode) are sensitive to the measurement errors of ionic species, and inferred pH values exhibit a bimodal distribution, with peaks between -2 and 2 and between 7 and 10, depending on whether anions or cations are in excess. Calculations using total (gas plus aerosol phase) measurements as inputs (i.e., forward mode) are affected much less by these measurement errors. In future studies, the reverse mode should be avoided whereas the forward mode should be used. Forward-mode calculations in this and previous studies collectively indicate a moderately acidic condition (pH from about 4 to about 5) for fine particles in northern China winter haze, indicating further that ammonia plays an important role in determining this property. The assumed particle phase state, either stable (solid plus liquid) or metastable (only liquid), does not significantly impact pH predictions. The unrealistic pH values of about 7 in a few previous studies (using the standard ISORROPIA model and stable state assumption) resulted from coding errors in the model, which have been identified and fixed in this study.
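The abstract's central point, that reverse-mode calculations are hypersensitive to ionic measurement errors, can be illustrated with a deliberately simplified toy: if excess anion charge is attributed to H+ and divided by the aerosol liquid water content (ALWC), the inferred H+ is a small difference of two large numbers, so a few percent of measurement error flips the result between strongly acidic and apparently alkaline. This sketch is purely illustrative and is not E-AIM or ISORROPIA, which solve full thermodynamic equilibria; the function name, units, and inputs are our assumptions.

```python
import math

def reverse_mode_ph(anions_ueq, cations_ueq, alwc_ug_m3):
    """Toy charge-balance (reverse-mode-style) pH estimate: excess anion
    charge (ueq m^-3) is attributed to H+ and divided by the aerosol
    liquid water content to get a crude H+ concentration."""
    h_ueq = anions_ueq - cations_ueq      # ueq m^-3 attributed to H+
    if h_ueq <= 0:
        return float('inf')               # cation excess -> "alkaline" branch
    water_L = alwc_ug_m3 * 1e-9           # ug of water -> litres (rho ~ 1 g/mL)
    h_mol_per_L = h_ueq * 1e-6 / water_L  # ueq -> eq, then per litre of water
    return -math.log10(h_mol_per_L)
```

With anions of 1.02 ueq/m3 against cations of 1.00, the toy gives a strongly acidic pH; shrink the anions by just a few percent and the sign of the balance flips, landing on the opposite mode of the bimodal distribution described above. Forward-mode calculations avoid this because gas-aerosol partitioning, not the ion-balance residual, constrains H+.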
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1992-01-01
This study presents a method for obtaining the true rms peculiar flow in the universe on scales up to 100-120/h Mpc using APM data as an input assuming only that peculiar motions are caused by peculiar gravity. The comparison to the local (Great Attractor) flow is expected to give clear information on the density parameter, Omega, and the local bias parameter, b. The observed peculiar flows in the Great Attractor region are found to be in better agreement with the open (Omega = 0.1) universe in which light traces mass (b = 1) than with a flat (Omega = 1) universe unless the bias parameter is unrealistically large (b ≥ 4). Constraints on Omega from a comparison of the APM and PV samples are discussed.
Turbo test rig with hydroinertia air bearings for a palmtop gas turbine
NASA Astrophysics Data System (ADS)
Tanaka, Shuji; Isomura, Kousuke; Togo, Shin-ichi; Esashi, Masayoshi
2004-11-01
This paper describes a turbo test rig to test the compressor of a palmtop gas turbine generator at low temperature (<100 °C). Impellers are 10 mm in diameter and have three-dimensional blades machined using a five-axis NC milling machine. Hydroinertia bearings are employed in both radial and axial directions. The performance of the compressor was measured at 50% (435 000 rpm) and 60% (530 000 rpm) of the rated rotational speed (870 000 rpm) by driving a turbine using compressed air at room temperature. The measured pressure ratio is lower than the predicted value. This could be mainly because impeller tip clearance was larger than the designed value. The measured adiabatic efficiency is unrealistically high due to heat dissipation from compressed air. During acceleration toward the rated rotational speed, the shaft crashed into the bearing at 566 000 rpm due to whirl. At that time, the whirl ratio was 8.
Sandiford, P
1993-09-01
In recent years Lot quality assurance sampling (LQAS), a method derived from production-line industry, has been advocated as an efficient means to evaluate the coverage rates achieved by child immunization programmes. This paper examines the assumptions on which LQAS is based and the effect that these assumptions have on its utility as a management tool. It shows that the attractively low sample sizes used in LQAS are achieved at the expense of specificity unless unrealistic assumptions are made about the distribution of coverage rates amongst the immunization programmes to which the method is applied. Although it is a very sensitive test and its negative predictive value is probably high in most settings, its specificity and positive predictive value are likely to be low. The implications of these strengths and weaknesses with regard to management decision-making are discussed.
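The sensitivity-specificity trade-off this paper examines follows directly from the binomial operating characteristics of an LQAS decision rule. The sketch below is illustrative, not taken from the paper: the classic design of sampling n = 19 children and accepting the programme if at most d = 3 are unvaccinated is a commonly cited example, and the "good" and "bad" coverage levels are assumed for demonstration.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_operating_characteristics(n, d, q_good, q_bad):
    """Accept a programme if at most d of n sampled children are
    unvaccinated.  Returns (sensitivity, specificity) of this rule for
    distinguishing a 'bad' programme (unvaccinated fraction q_bad) from
    a 'good' one (unvaccinated fraction q_good)."""
    specificity = binom_cdf(d, n, q_good)       # accept when truly good
    sensitivity = 1.0 - binom_cdf(d, n, q_bad)  # reject when truly bad
    return sensitivity, specificity
```

For n = 19, d = 3, with a good programme at 90% coverage and a bad one at 50%, sensitivity is near 1 while specificity is only about 0.88, illustrating the paper's point that small LQAS samples buy sensitivity at the expense of specificity.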
Hooper, I R; Philbin, T G
2013-12-30
We describe a design methodology for modifying the refractive index profile of graded-index optical instruments that incorporate singularities or zeros in their refractive index. The process maintains the device performance whilst resulting in graded profiles that are all-dielectric, do not require materials with unrealistic values, and that are impedance matched to the bounding medium. This is achieved by transmuting the singularities (or zeros) using the formalism of transformation optics, but with an additional boundary condition requiring the gradient of the co-ordinate transformation be continuous. This additional boundary condition ensures that the device is impedance matched to the bounding medium when the spatially varying permittivity and permeability profiles are scaled to realizable values. We demonstrate the method in some detail for an Eaton lens, before describing the profiles for an "invisible disc" and "multipole" lenses.
Finite element elastic-plastic-creep and cyclic life analysis of a cowl lip
NASA Technical Reports Server (NTRS)
Arya, Vinod K.; Melis, Matthew E.; Halford, Gary R.
1990-01-01
Results are presented of elastic, elastic-plastic, and elastic-plastic-creep analyses of a test-rig component of an actively cooled cowl lip. A cowl lip is part of the leading edge of an engine inlet of proposed hypersonic aircraft and is subject to severe thermal loadings and gradients during flight. Values of stresses calculated by elastic analysis are well above the yield strength of the cowl lip material. Such values are highly unrealistic, and thus elastic stress analyses are inappropriate. The inelastic (elastic-plastic and elastic-plastic-creep) analyses produce more reasonable and acceptable stress and strain distributions in the component. Finally, using the results from these analyses, predictions are made for the cyclic crack initiation life of a cowl lip. A comparison of predicted cyclic lives shows the cyclic life prediction from the elastic-plastic-creep analysis to be the lowest and, hence, most realistic.
SAS program for quantitative stratigraphic correlation by principal components
Hohn, M.E.
1985-01-01
A SAS program is presented which constructs a composite section of stratigraphic events through principal components analysis. The variables in the analysis are stratigraphic sections and the observational units are range limits of taxa. The program standardizes data in each section, extracts eigenvectors, estimates missing range limits, and computes the composite section from scores of events on the first principal component. Several types of diagnostic plots are provided as options; these help one to identify conservative range limits or unrealistic estimates of missing values. Inspection of the graphs and eigenvalues allows one to evaluate goodness of fit between the composite and measured data. The program is extended easily to the creation of a rank-order composite. © 1985.
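The numerical core of the approach (standardize each section, extract eigenvectors, score events on the first principal component) can be sketched outside SAS as well. This is a minimal Python/NumPy rendering under the stated structure, with our own function name; the SAS program's missing-value estimation and diagnostic plots are omitted.

```python
import numpy as np

def composite_section(events):
    """Composite stratigraphic section by principal components: rows are
    range limits of taxa (events), columns are measured sections.  Each
    section is standardized, the first principal component is extracted
    via SVD, and event scores on PC1 give the composite ordering."""
    X = np.asarray(events, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize each section
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[0]             # event scores on the first principal component
    if scores[-1] < scores[0]:     # orient so the composite increases upsection
        scores = -scores
    return scores
```

When the event order agrees across sections, the PC1 scores reproduce that shared order; disagreements between sections show up as events whose scores deviate from their position in any single section.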
Quinn, Kevin
This commentary analyzes the patient-centered medical home (PCMH) model within a framework of the 8 basic payment methods in health care. PCMHs are firmly within the fee-for-service tradition. Changes to the process and structure of the Resource Based Relative Value Scale, which underlies almost all physician fee schedules, could make PCMHs more financially viable. Of the alternative payment methods being considered, shared savings models are unlikely to transform medical practice whereas capitation models place unrealistic expectations on providers to accept epidemiological risk. Episode payment may strike a feasible balance for PCMHs, with newly available episode definitions presenting opportunities not previously available.
Team reasoning and collective rationality: piercing the veil of obviousness.
Colman, Andrew M; Pulford, Briony D; Rose, Jo
2008-06-01
The experiments reported in our target article provide strong evidence of collective utility maximization, and the findings suggest that team reasoning should now be included among the social value orientations used in cognitive and social psychology. Evidential decision theory offers a possible alternative explanation for our results but fails to predict intuitively compelling strategy choices in simple games with asymmetric team-reasoning outcomes. Although many of our experimental participants evidently used team reasoning, some appear to have ignored the other players' expected strategy choices and used lower-level, nonstrategic forms of reasoning. Standard payoff transformations cannot explain the experimental findings, nor team reasoning in general, without an unrealistic assumption that players invariably reason nonstrategically.
The atmospheric boundary layer in the CSIRO global climate model: simulations versus observations
NASA Astrophysics Data System (ADS)
Garratt, J. R.; Rotstayn, L. D.; Krummel, P. B.
2002-07-01
A 5-year simulation of the atmospheric boundary layer in the CSIRO global climate model (GCM) is compared with detailed boundary-layer observations at six locations, two over the ocean and four over land. Field observations, in the form of surface fluxes and vertical profiles of wind, temperature and humidity, are generally available for each hour over periods of one month or more in a single year. GCM simulations are for specific months corresponding to the field observations, for each of five years. At three of the four land sites (two in Australia, one in south-eastern France), modelled rainfall was close to the observed climatological values, but was significantly in deficit at the fourth (Kansas, USA). Observed rainfall during the field expeditions was close to climatology at all four sites. At the Kansas site, modelled screen temperatures (Tsc), diurnal temperature amplitude and sensible heat flux (H) were significantly higher than observed, with modelled evaporation (E) much lower. At the other three land sites, there is excellent correspondence between the diurnal amplitude and phase and absolute values of each variable (Tsc, H, E). Mean monthly vertical profiles for specific times of the day show strong similarities: over land and ocean in vertical shape and absolute values of variables, and in the mixed-layer and nocturnal-inversion depths (over land) and the height of the elevated inversion or height of the cloud layer (over the sea). Of special interest is the presence climatologically of early morning humidity inversions related to dewfall and of nocturnal low-level jets; such features are found in the GCM simulations. The observed day-to-day variability in vertical structure is captured well in the model for most sites, including, over a whole month, the temperature range at all levels in the boundary layer, and the mix of shallow and deep mixed layers. 
Weaknesses or unrealistic structure include the following: (a) unrealistic model mixed-layer temperature profiles over land in clear skies, related to the use of a simple local first-order turbulence closure, and (b) a tendency to overpredict cloud liquid water near the surface.
Quasistationary solutions of scalar fields around accreting black holes
NASA Astrophysics Data System (ADS)
Sanchis-Gual, Nicolas; Degollado, Juan Carlos; Izquierdo, Paula; Font, José A.; Montero, Pedro J.
2016-08-01
Massive scalar fields can form long-lived configurations around black holes. These configurations, dubbed quasibound states, have been studied both in the linear and nonlinear regimes. In this paper, we show that quasibound states can form in a dynamical scenario in which the mass of the black hole grows significantly due to the capture of infalling matter. We solve the Klein-Gordon equation numerically in spherical symmetry, mimicking the evolution of the spacetime through a sequence of analytic Schwarzschild black hole solutions of increasing mass. It is found that the frequency of oscillation of the quasibound states decreases as the mass of the black hole increases. In addition, accretion leads to an increase of the exponential decay of the scalar field energy. We compare the black hole mass growth rates used in our study with estimates from observational surveys and extrapolate our results to values of the scalar field masses consistent with models that propose scalar fields as dark matter in the universe. We show that, even for unrealistically large mass accretion rates, quasibound states around accreting black holes can survive for cosmological time scales. Our results provide further support to the intriguing possibility of the existence of dark matter halos based on (ultralight) scalar fields surrounding supermassive black holes in galactic centers.
Mastroiacovo, Pierpaolo
2018-01-01
Preventing neural tube defects (NTDs) easily qualifies as a high‐value opportunity to improve childhood survival and health: the unmet need is significant (major preventable burden), the intervention is transformative (providing sufficient folic acid), and delivery strategies (e.g., fortification) are effective in low‐resource countries. Yet, NTD prevention is lagging. Can public health surveillance help fix this problem? Critics contend that surveillance is largely unnecessary, that limited resources are best spent on interventions, and that surveillance is unrealistic in developing countries. The counterargument is twofold: (1) in the absence of surveillance, interventions will provide fewer benefits and cost more and (2) effective surveillance is likely possible nearly everywhere, with appropriate strategies. As a base strategy, we propose “triple surveillance:” integrating surveillance of cause (folate insufficiency), of disease occurrence (NTD prevalence), and of health outcomes (morbidity, mortality, and disability). For better sustainability and usefulness, it is crucial to refocus and streamline surveillance activities (no recreational data collection), weave surveillance into clinical care (integrate in clinical workflow), and, later, work on including additional risk factors and pediatric outcomes (increase benefits at low marginal cost). By doing so, surveillance becomes not a roadblock but a preferential path to prevention and better care. PMID:29532515
Physical properties of the Saturn's rings with the opposition effect.
NASA Astrophysics Data System (ADS)
Deau, E.
2012-04-01
We use the Cassini/ISS images from the early prime mission to build lit-side phase curves from 0.01 degrees to 155 degrees at a solar elevation of 23-20 degrees. All the main rings exhibit on their phase curves a prominent surge at small phase angles. We use various opposition effect models to explain the opposition surge of the rings, including the coherent backscattering, the shadow hiding and a combination of the two (Kawata & Irvine 1974 In: Exploration of the planetary system Book p441; Shkuratov et al. 1999, Icarus, 141, p132; Poulet et al. 2002 Icarus, 158, p224; Hapke et al. 2002 Icarus, 157, p523). Our results show that either the coherent backscattering alone or a combination of the shadow hiding and the coherent backscattering can explain the observations while yielding physical properties (albedo, filling factor, grain size) consistent with other previous studies. However, they disagree with the most recent work of Degiorgio et al. 2011 (EPSC-DPS Abstract #732). We think that their attempt to use the shadow hiding alone leads to unrealistic values of the filling factor of the ring particle layer. For example, they found 10^-3 in one of the thickest regions of the C ring (a plateau at R=88439 km with an optical depth tau=0.22). We totally disagree with their conclusion that these values are consistent for the C ring plateaux, and we did not find any references consistent with theirs, as they claimed. We believe that their unrealistic values originated from the assumptions of the models they used (Kawata & Irvine and Hapke), which basically assume a uniform size distribution. Any model using a uniform size distribution forces the medium to be very dilute to reproduce the opposition surge. Our modeling, which uses a power-law size distribution, provides realistic values.
All these results have already been published (http://adsabs.harvard.edu/abs/2007PhDT........25D) and are summarized in a manuscript submitted for publication, so we recommend that Degiorgio et al. either cite our work properly or at least produce original work. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. Copyright 2012 California Institute of Technology. Government sponsorship is acknowledged.
Klaver, Jacqueline M; Palo, Amanda D; DiLalla, Lisabeth F
2014-01-01
The authors examined problem behaviors in preschool children as a function of perceived competence. Prior research has demonstrated a link between inaccuracy of self-perceptions and teacher-reported externalizing behaviors in preschool-aged boys. This study extended past research by adding data collected from observed behaviors in a laboratory setting, as well as parent reports of internalizing and externalizing behaviors. Five-year-old children completed the Pictorial Scale of Perceived Competence and Social Acceptance for Young Children (PSPCSA) in the lab, participated in a 10-min puzzle interaction task with their cotwin and mother, and completed a short task assessing cognitive abilities. Children were grouped into 3 self-esteem categories (unrealistically low, realistic, and unrealistically high) based on comparisons of self-reported (PSPCSA) versus actual competencies for maternal acceptance, peer acceptance, and cognitive competence. Results showed that children who overreported their maternal acceptance and peer acceptance had significantly more parent-reported externalizing problems as well as internalizing problems. There were no significant differences in accuracy for cognitive competence. The findings from this study underscore the negative impact of unrealistically high self-appraisal on problem behaviors in young children.
Learning Supervised Topic Models for Classification and Regression from Crowds.
Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete; Pereira, Francisco C
2017-12-01
The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, makes learning under a single-annotator assumption unrealistic or impractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed model over state-of-the-art approaches.
A Scheme for Targeting Optical SETI Observations
NASA Astrophysics Data System (ADS)
Shostak, Seth
2004-06-01
In optical SETI (OSETI) experiments, it is generally assumed that signals will be deliberate, narrowly targeted beacons sent by extraterrestrial societies to large numbers of candidate star systems. If this is so, then it may be unrealistic to expect a high duty cycle for the received signal. Ergo, an advantage accrues to any OSETI scheme that realistically suggests where and when to search. In this paper, we elaborate a proposal (Castellano, Doyle, & McIntosh 2000) for selecting regions of sky for intensive optical SETI monitoring based on characteristics of our solar system that would be visible at great distance. This can enormously lessen the amount of sky that needs to be searched. In addition, this is an attractive approach for the transmitting society because it both increases the chances of reception and provides a large reduction in energy required. With good astrometric information, the transmitter need be no more powerful than an automobile tail light.
Choice, social interaction and addiction: the social roots of addictive preferences.
Skog, Ole-Jørgen
2005-01-01
It is argued that addicts, as people in general, are forward-looking and that they try to make the best of what they have got. However, this does not imply that they are fully rational. Cognitive defects, instabilities in preferences, and irrationalities in the form of wishful thinking and dynamical inconsistency play an important role in addictive behaviours. These "imperfections" in people's rationality may not have very large consequences in the case of ordinary goods, but their effect can be dramatic in relation to addictive goods. In the first part of the paper, the rational addiction theory and the empirical evidence that have been presented in support of the theory are reviewed. Regarding the conventional tests of the theory by econometric methods, it is argued that the tests are misguided, both theoretically and methodologically. Furthermore, it is claimed that the definition of addiction implicit in the rational addiction theory is unrealistic, and that the theory makes unrealistic assumptions about human nature. Some empirical evidence for these claims is reviewed. It is concluded that although the theory has its virtues, it faces serious problems and must be rejected in its original form. Secondly, the socio-cultural embeddedness of addictive behaviours, and the social roots of individual preferences, are discussed. These issues are more or less ignored in rational addiction theory. It is argued that we cannot expect to obtain a proper understanding of many addictive phenomena, unless they are seen in their proper socio-cultural context.
Explaining the effect of event valence on unrealistic optimism.
Gold, Ron S; Brown, Mark G
2009-05-01
People typically exhibit 'unrealistic optimism' (UO): they believe they have a lower chance of experiencing negative events and a higher chance of experiencing positive events than does the average person. UO has been found to be greater for negative than positive events. This 'valence effect' has been explained in terms of motivational processes. An alternative explanation is provided by the 'numerosity model', which views the valence effect simply as a by-product of a tendency for likelihood estimates pertaining to the average member of a group to increase with the size of the group. Predictions made by the numerosity model were tested in two studies. In each, UO for a single event was assessed. In Study 1 (n = 115 students), valence was manipulated by framing the event either negatively or positively, and participants estimated their own likelihood and that of the average student at their university. In Study 2 (n = 139 students), valence was again manipulated and participants again estimated their own likelihood; additionally, group size was manipulated by having participants estimate the likelihood of the average student in a small, medium-sized, or large group. In each study, the valence effect was found, but was due to an effect on estimates of own likelihood, not the average person's likelihood. In Study 2, valence did not interact with group size. The findings contradict the numerosity model, but are in accord with the motivational explanation. Implications for health education are discussed.
NASA Astrophysics Data System (ADS)
Heuzé, C.; Vivier, F.; Le Sommer, J.; Molines, J.-M.; Penduff, T.
2015-12-01
With the advent of Argo floats, it now seems feasible to study the interannual variations of upper ocean hydrographic properties of the historically undersampled Southern Ocean. To do so, scattered hydrographic profiles often first need to be mapped. To investigate biases and errors associated both with the limited space-time distribution of the profiles and with the mapping methods, we colocate the mixed-layer depth (MLD) output from a state-of-the-art 1/12° DRAKKAR simulation onto the latitude, longitude, and date of actual in situ profiles from 2005 to 2014. We compare the results obtained after remapping using a nearest neighbor (NN) interpolation and an objective analysis (OA) with different spatiotemporal grid resolutions and decorrelation scales. NN is improved with a coarser resolution. OA performs best with low decorrelation scales, avoiding too strong a smoothing, but returns values over larger areas with large decorrelation scales and low temporal resolution, as more points are available. For all resolutions, OA represents the annual extreme values better than NN. Both methods underestimate the seasonal cycle in MLD. MLD biases are lower than 10 m on average but can exceed 250 m locally in winter. We argue that current Argo data should not be mapped to infer decadal trends in MLD, as all methods are unable to reproduce existing trends without creating unrealistic extra ones. We also show that regions of the subtropical Atlantic, Indian, and Pacific Oceans, and the whole ice-covered Southern Ocean, still cannot be mapped even by the best method because of the lack of observational data.
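The two mapping methods compared in this study can be illustrated in one dimension. The sketch below is purely illustrative, not the study's configuration: it assumes a Gaussian covariance for the objective analysis, a synthetic "MLD" field, and made-up decorrelation scale and noise level.

```python
import numpy as np

rng = np.random.default_rng(0)
x_obs = np.sort(rng.uniform(0, 10, 25))            # scattered profile positions
truth = lambda x: 100 + 50 * np.sin(x)             # synthetic "MLD" field (m)
y_obs = truth(x_obs) + rng.normal(0, 5, x_obs.size)  # noisy observations

x_grid = np.linspace(0, 10, 101)

# Nearest-neighbour (NN) mapping: copy the closest observation
nn = y_obs[np.abs(x_grid[:, None] - x_obs[None, :]).argmin(axis=1)]

# Objective analysis (optimal interpolation) with a Gaussian covariance;
# L (decorrelation scale) and noise are illustrative tuning parameters
def objective_analysis(x_grid, x_obs, y_obs, L=1.0, noise=0.1):
    C = np.exp(-((x_obs[:, None] - x_obs[None, :]) ** 2) / L ** 2)
    c = np.exp(-((x_grid[:, None] - x_obs[None, :]) ** 2) / L ** 2)
    w = c @ np.linalg.inv(C + noise * np.eye(x_obs.size))
    mean = y_obs.mean()
    return mean + w @ (y_obs - mean)

oa = objective_analysis(x_grid, x_obs, y_obs)
```

Shrinking `L` reduces smoothing (as the abstract notes, OA performs best with low decorrelation scales), while larger `L` spreads each observation's influence over a wider area.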
Introducing the Global Fire WEather Database (GFWED)
NASA Astrophysics Data System (ADS)
Field, R. D.
2015-12-01
The Canadian Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations beginning in 1980 called the Global Fire WEather Database (GFWED) gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code calculations from the gridded datasets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between gridded and station-based calculations tended to be weakest at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding DC = 1500 during the dry season but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation and reinforce the need to consider alternative sources of precipitation data. GFWED is being used by researchers around the world for analyzing historical relationships between fire weather and fire activity at large scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models. These applications will be discussed. More information on GFWED can be found at http://data.giss.nasa.gov/impacts/gfwed/
NASA Technical Reports Server (NTRS)
Duberg, John E; Wilder, Thomas W , III
1952-01-01
The significant findings of a theoretical study of column behavior in the plastic stress range are presented. When the behavior of a straight column is regarded as the limiting behavior of an imperfect column as the initial imperfection (lack of straightness) approaches zero, the departure from the straight configuration occurs at the tangent-modulus load. Without such a concept of the behavior of a straight column, one is led to the unrealistic conclusion that lateral deflection of the column can begin at any load between the tangent-modulus value and the Euler load, based on the original elastic modulus. A family of curves showing load against lateral deflection is presented for idealized H-section columns of various lengths and of various materials that have a systematic variation of their stress-strain curves.
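The bracket described above, between the tangent-modulus load and the Euler load, follows from the same pinned-pinned column formula evaluated with two different moduli. A minimal numerical sketch, with material and section values that are illustrative only and not taken from the report:

```python
import math

def euler_load(E, I, L):
    """Elastic (Euler) buckling load of a pinned-pinned column: pi^2*E*I/L^2."""
    return math.pi ** 2 * E * I / L ** 2

def tangent_modulus_load(E_t, I, L):
    """Same formula with the tangent modulus E_t (local slope of the
    stress-strain curve). Since E_t <= E in the plastic range, this is
    the lower end of the bracket, where lateral deflection begins."""
    return math.pi ** 2 * E_t * I / L ** 2

# Hypothetical values: E = 70 GPa, tangent modulus E_t = 20 GPa in the
# plastic range, I = 1e-7 m^4, L = 1 m
p_euler = euler_load(70e9, 1e-7, 1.0)
p_tangent = tangent_modulus_load(20e9, 1e-7, 1.0)
```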
Unrealistic Optimism: East and West?
Joshi, Mary Sissons; Carter, Wakefield
2013-01-01
Following Weinstein’s (1980) pioneering work many studies established that people have an optimistic bias concerning future life events. At first, the bulk of research was conducted using populations in North America and Northern Europe, the optimistic bias was thought of as universal, and little attention was paid to cultural context. However, construing unrealistic optimism as a form of self-enhancement, some researchers noted that it was far less common in East Asian cultures. The current study extends enquiry to a different non-Western culture. Two hundred and eighty seven middle aged and middle income participants (200 in India, 87 in England) rated 11 positive and 11 negative events in terms of the chances of each event occurring in “their own life,” and the chances of each event occurring in the lives of “people like them.” Comparative optimism was shown for bad events, with Indian participants showing higher levels of optimism than English participants. The position regarding comparative optimism for good events was more complex. In India those of higher socioeconomic status (SES) were optimistic, while those of lower SES were on average pessimistic. Overall, English participants showed neither optimism nor pessimism for good events. The results, whose clinical relevance is discussed, suggest that the expression of unrealistic optimism is shaped by an interplay of culture and socioeconomic circumstance. PMID:23407689
Debunking the Myth of Value-Neutral Virginity: Toward Truth in Scientific Advertising
Mandel, David R.; Tetlock, Philip E.
2016-01-01
The scientific community often portrays science as a value-neutral enterprise that crisply demarcates facts from personal value judgments. We argue that this depiction is unrealistic and important to correct because science serves an important knowledge generation function in all modern societies. Policymakers often turn to scientists for sound advice, and it is important for the wellbeing of societies that science delivers. Nevertheless, scientists are human beings and human beings find it difficult to separate the epistemic functions of their judgments (accuracy) from the social-economic functions (from career advancement to promoting moral-political causes that “feel self-evidently right”). Drawing on a pluralistic social functionalist framework that identifies five functionalist mindsets—people as intuitive scientists, economists, politicians, prosecutors, and theologians—we consider how these mindsets are likely to be expressed in the conduct of scientists. We also explore how the context of policymaker advising is likely to activate or de-activate scientists’ social functionalist mindsets. For instance, opportunities to advise policymakers can tempt scientists to promote their ideological beliefs and values, even if advising also brings with it additional accountability pressures. We end prescriptively with an appeal to scientists to be more circumspect in characterizing their objectivity and honesty and to reject idealized representations of scientific behavior that inaccurately portray scientists as value-neutral virgins. PMID:27064318
NASA Technical Reports Server (NTRS)
Schmidt, R. C.; Patankar, S. V.
1991-01-01
The capability of two k-epsilon low-Reynolds number (LRN) turbulence models, those of Jones and Launder (1972) and Lam and Bremhorst (1981), to predict transition in external boundary-layer flows subject to free-stream turbulence is analyzed. Both models correctly predict the basic qualitative aspects of boundary-layer transition with free stream turbulence, but for calculations started at low values of certain defined Reynolds numbers, the transition is generally predicted at unrealistically early locations. Also, the methods predict transition lengths significantly shorter than those found experimentally. An approach to overcoming these deficiencies without abandoning the basic LRN k-epsilon framework is developed. This approach limits the production term in the turbulent kinetic energy equation and is based on a simple stability criterion. It is correlated to the free-stream turbulence value. The modification is shown to improve the qualitative and quantitative characteristics of the transition predictions.
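The production-limiting approach can be sketched in a few lines. The abstract does not give the authors' actual stability criterion, so the dissipation-based cap and the constant `c_lim` below are generic placeholders, not the values from Schmidt and Patankar:

```python
def limited_production(mu_t, S2, rho, eps, c_lim=10.0):
    """Limit turbulent kinetic energy production P_k = mu_t * S^2 to a
    multiple of the dissipation rate, a common remedy for spuriously
    early transition in low-Reynolds-number k-epsilon models.

    mu_t  -- eddy viscosity
    S2    -- squared mean strain rate
    rho   -- density
    eps   -- turbulent dissipation rate
    c_lim -- illustrative limiter constant (hypothetical, tunable)
    """
    p_raw = mu_t * S2
    return min(p_raw, c_lim * rho * eps)
```

In a real solver this cap would replace the raw production term in the k-equation, with `c_lim` tied to the free-stream turbulence level as the abstract describes.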
Regenerative life support systems--why do we need them?
Barta, D J; Henninger, D L
1994-11-01
Human exploration of the solar system will include missions lasting years at a time. Such missions mandate extensive regeneration of life support consumables with efficient utilization of local planetary resources. As mission durations extend beyond one or two years, regenerable human life support systems which supply food and recycle air, water, and wastes become feasible; resupply of large volumes and masses of food, water, and atmospheric gases becomes unrealistic. Additionally, reduced dependency on resupply, or self-sufficiency, can be an added benefit to human crews in hostile environments far from the security of Earth. Comparisons of resupply and regeneration will be discussed along with possible scenarios for developing and implementing human life support systems on the Moon and Mars.
Gent, Peter R
2016-01-01
Observations show that the Southern Hemisphere zonal wind stress maximum has increased significantly over the past 30 years. Eddy-resolving ocean models show that the resulting increase in the Southern Ocean mean flow meridional overturning circulation (MOC) is partially compensated by an increase in the eddy MOC. This effect can be reproduced in the non-eddy-resolving ocean component of a climate model, providing the eddy parameterization coefficient is variable and not a constant. If the coefficient is a constant, then the Southern Ocean mean MOC change is balanced by an unrealistically large change in the Atlantic Ocean MOC. Southern Ocean eddy compensation means that Southern Hemisphere winds cannot be the dominant mechanism driving midlatitude North Atlantic MOC variability.
Controlling percolation with limited resources.
Schröder, Malte; Araújo, Nuno A M; Sornette, Didier; Nagler, Jan
2017-12-01
Connectivity, or the lack thereof, is crucial for the function of many man-made systems, from financial and economic networks over epidemic spreading in social networks to technical infrastructure. Often, connections are deliberately established or removed to induce, maintain, or destroy global connectivity. Thus, there has been a great interest in understanding how to control percolation, the transition to large-scale connectivity. Previous work, however, studied control strategies assuming unlimited resources. Here, we depart from this unrealistic assumption and consider the effect of limited resources on the effectiveness of control. We show that, even for scarce resources, percolation can be controlled with an efficient intervention strategy. We derive such an efficient strategy and study its implications, revealing a discontinuous transition as an unintended side effect of optimal control.
Coulomb effects in low-energy nuclear fragmentation
NASA Technical Reports Server (NTRS)
Wilson, John W.; Chun, Sang Y.; Badavi, Francis F.; John, Sarah
1993-01-01
Early versions of the Langley nuclear fragmentation code NUCFRAG (and a publicly released version called HZEFRG1) assumed straight-line trajectories throughout the interaction. As a consequence, NUCFRAG and HZEFRG1 give unrealistic cross sections for large mass removal from the projectile and target at low energies. A correction for the distortion of the trajectory by the nuclear Coulomb fields is used to derive fragmentation cross sections. A simple energy-loss term is applied to estimate the energy downshifts that greatly alter the Coulomb trajectory at low energy. The results, which are far more realistic than prior versions of the code, should provide the data base for future transport calculations. The systematic behavior of charge-removal cross sections compares favorably with results from low-energy experiments.
Severe wind gust thresholds for Meteoalarm derived from uniform return periods in ECA&D
NASA Astrophysics Data System (ADS)
Stepek, A.; Wijnant, I. L.; van der Schrier, G.; van den Besselaar, E. J. M.; Klein Tank, A. M. G.
2012-06-01
In this study we present an alternative wind gust warning guideline for Meteoalarm, the severe weather warning website for Europe. There are unrealistically large differences in levels and issuing frequencies of all warning levels currently in use between neighbouring Meteoalarm countries. This study provides a guide for the Meteoalarm community to review their wind gust warning thresholds. A more uniform warning system is achieved by using one pan-European return period per warning level. The associated return values will be different throughout Europe because they depend on local climate conditions, but they will not change abruptly at country borders as is currently the case for the thresholds. As return values are a measure of the possible danger of an event and its impact on society, they form an ideal basis for a warning system. Validated wind gust measurements from the European Climate Assessment and Dataset (ECA&D, http://www.ecad.eu) were used to calculate return values of the annual maximum wind gust. The current thresholds are compared with return values for 3 different return periods: 10 times a year return periods for yellow warnings, 2 yr periods for orange and 5 yr periods for red warnings. So far 10 countries provide wind gust data to ECA&D. Due to the ECA&D completeness requirements and the fact that some countries provided too few stations to be representative for that country, medians of the return values of annual maximum wind gust could be calculated for 6 of the 10 countries. Alternative guideline thresholds are presented for Norway, Ireland, The Netherlands, Germany, the Czech Republic and Spain, and the need to distinguish between coastal, inland and mountainous regions is demonstrated. The new thresholds based on uniform return periods differ significantly from the current ones, particularly for coastal and mountainous areas. We are aware of other, sometimes binding factors (e.g., laws) that prevent participating countries from implementing this climatology-based warning system.
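One standard way to turn station records of annual maximum gusts into the 2-yr (orange) and 5-yr (red) return values described above is an extreme-value fit. The Gumbel method-of-moments sketch below is illustrative only (the gust data are made up, and ECA&D's actual fitting procedure is not specified in the abstract):

```python
import math

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel fit to a series of annual maximum gusts (m/s).
    Returns location mu and scale beta."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def return_level(mu, beta, T):
    """Gust speed exceeded on average once every T years (T > 1)."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

# Hypothetical 10-year record of annual maximum gusts (m/s)
gusts = [24.1, 27.3, 22.8, 30.5, 25.6, 28.9, 23.4, 26.7, 29.2, 24.8]
mu, beta = gumbel_fit(gusts)
orange = return_level(mu, beta, 2)   # 2-yr return period
red = return_level(mu, beta, 5)      # 5-yr return period
```

Because the fitted distribution reflects the local climate, the same pan-European return period yields different thresholds at different stations, which is exactly the behaviour the guideline aims for.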
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Evan
Large numbers of commercial buildings have sought to improve their energy and environmental performance, with half of all leasable U.S. offices now designated at some level of “green”. All properties fall somewhere on the green/high-performance spectrum (above and below average) whether or not they bear a formal label or rating. Variations in the level of performance can either positively or negatively influence value. This component of value can be shaped by many factors, from utility costs to tenant/owner preferences that translate into income (rent levels, vacancy rates, lease-up times, etc.). Occupant perceptions of indoor environmental quality are another potential influence on value. While there has been little uptake of this thinking by practicing appraisers, the increased prevalence of green/HP practices combined with concerns about appraiser competency are compelling the industry to adapt their traditional techniques to this new driver of value. However, the overly narrow focus of policymakers on appraisal of labeled or rated exemplary buildings (e.g., LEED or ENERGY STAR Certified) represents a significant missed opportunity. Any level of green or energy performance can in fact influence value, including below-average performance (a.k.a. “brown discount”), irrespective of whether or not the building has been formally rated. Another surmountable challenge is the limitations to non-appraisers’ understanding of the appraisal process (and constraints therein). A crucial byproduct of this is unrealistic expectations of what appraisers can and will do in the marketplace. This report identifies opportunities for catalyzing improvement of the green/HP appraisal process, which apply to all involved actors—from owner, report-ordering client, the appraiser, and the appraisal reviewer—and fostering more demand for appraisals that recognize green/HP property attributes.
The intended audience is primarily the public policy community and other stakeholders outside the formal appraisal community who can contribute to the broader effort to advance professional practices. The discussion begins with a description of the appraisal process and the points at which green/HP considerations can enter the analysis. A series of major barriers to better practices is identified along with approaches to reducing them.
In modelling effects of global warming, invalid assumptions lead to unrealistic projections.
Lefevre, Sjannie; McKenzie, David J; Nilsson, Göran E
2018-02-01
In their recent Opinion, Pauly and Cheung provide new projections of future maximum fish weight (W∞). Based on criticism by Lefevre et al. (2017) they changed the scaling exponent for anabolism, dG. Here we find that changing both dG and the scaling exponent for catabolism, b, leads to the projection that fish may even become 98% smaller with a 1°C increase in temperature. This unrealistic outcome indicates that the current W∞ is unlikely to be explained by the Gill-Oxygen Limitation Theory (GOLT) and, therefore, GOLT cannot be used as a mechanistic basis for model projections about fish size in a warmer world. © 2017 John Wiley & Sons Ltd.
The 'picky eater': The toddler or preschooler who does not eat.
Leung, Alexander Kc; Marchand, Valérie; Sauve, Reginald S
2012-10-01
The majority of children between one and five years of age who are brought in by their parents for refusing to eat are healthy and have an appetite that is appropriate for their age and growth rate. Unrealistic parental expectations may result in unnecessary concern, and inappropriate threats or punishments may aggravate a child's refusal to eat. A detailed history and general physical examination are necessary to rule out acute and chronic illnesses. A food diary and assessment of parental expectations about eating behaviour should be completed. Where the child's 'refusal' to eat is found to be related to unrealistic expectations, parents should be reassured and counselled about the normal growth and development of children at this age.
Botto, Lorenzo D; Mastroiacovo, Pierpaolo
2018-02-01
Preventing neural tube defects (NTDs) easily qualifies as a high-value opportunity to improve childhood survival and health: the unmet need is significant (major preventable burden), the intervention is transformative (providing sufficient folic acid), and delivery strategies (e.g., fortification) are effective in low-resource countries. Yet, NTD prevention is lagging. Can public health surveillance help fix this problem? Critics contend that surveillance is largely unnecessary, that limited resources are best spent on interventions, and that surveillance is unrealistic in developing countries. The counterargument is twofold: (1) in the absence of surveillance, interventions will provide fewer benefits and cost more and (2) effective surveillance is likely possible nearly everywhere, with appropriate strategies. As a base strategy, we propose "triple surveillance:" integrating surveillance of cause (folate insufficiency), of disease occurrence (NTD prevalence), and of health outcomes (morbidity, mortality, and disability). For better sustainability and usefulness, it is crucial to refocus and streamline surveillance activities (no recreational data collection), weave surveillance into clinical care (integrate in clinical workflow), and, later, work on including additional risk factors and pediatric outcomes (increase benefits at low marginal cost). By doing so, surveillance becomes not a roadblock but a preferential path to prevention and better care. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
Study of tissue oxygen supply rate in a macroscopic photodynamic therapy singlet oxygen model
NASA Astrophysics Data System (ADS)
Zhu, Timothy C.; Liu, Baochang; Penjweini, Rozhin
2015-03-01
An appropriate expression for the oxygen supply rate (Γs) is required for the macroscopic modeling of the complex mechanisms of photodynamic therapy (PDT). It is unrealistic to model the actual heterogeneous tumor microvascular networks coupled with the PDT processes because of the large computational requirement. In this study, a theoretical microscopic model based on uniformly distributed Krogh cylinders is used to calculate Γs = g(1 − [3O2]/[3O2]0) that can replace the complex modeling of blood vasculature while maintaining a reasonable resemblance to reality; g is the maximum oxygen supply rate and [3O2]/[3O2]0 is the volume-average tissue oxygen concentration normalized to its value prior to PDT. The model incorporates kinetic equations of oxygen diffusion and convection within capillaries and oxygen saturation from oxyhemoglobin. Oxygen supply to the tissue is via diffusion from the uniformly distributed blood vessels. Oxygen can also diffuse along the radius and the longitudinal axis of the cylinder within tissue. The relations of Γs to [3O2]/[3O2]0 are examined for a biologically reasonable range of the physiological parameters for the microvasculature and several light fluence rates (ϕ). The results show a linear relationship between Γs and [3O2]/[3O2]0, independent of ϕ and photochemical parameters; the obtained g ranges from 0.4 to 1390 μM/s.
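The fitted supply term is a one-line expression; the transcription below uses variable names of my choosing, and the sample value of g is simply taken from inside the 0.4–1390 μM/s range reported in the abstract:

```python
def oxygen_supply_rate(o2, o2_0, g):
    """Macroscopic PDT oxygen supply term: Gamma_s = g * (1 - [3O2]/[3O2]_0).

    o2   -- current volume-average tissue oxygen concentration (uM)
    o2_0 -- tissue oxygen concentration prior to PDT (uM)
    g    -- maximum oxygen supply rate (uM/s)

    Supply vanishes when tissue oxygen is at its pre-PDT value and
    approaches g as oxygen is depleted during treatment.
    """
    return g * (1.0 - o2 / o2_0)

# e.g. with g = 100 uM/s, tissue at half its pre-PDT oxygen level
rate = oxygen_supply_rate(o2=5.0, o2_0=10.0, g=100.0)
```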
Introducing GFWED: The Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED) gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between gridded and station-based calculations tended to be weakest at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding DC = 1500 during the dry season but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models.
Blandford's argument: The strongest continuous gravitational wave signal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knispel, Benjamin; Allen, Bruce
2008-08-15
For a uniform population of neutron stars whose spin-down is dominated by the emission of gravitational radiation, an old argument of Blandford states that the expected gravitational-wave amplitude of the nearest source is independent of the deformation and rotation frequency of the objects. Recent work has improved and extended this argument to set upper limits on the expected amplitude from neutron stars that also emit electromagnetic radiation. We restate these arguments in a more general framework, and simulate the evolution of such a population of stars in the gravitational potential of our galaxy. The simulations allow us to test the assumptions of Blandford's argument on a realistic model of our galaxy. We show that the two key assumptions of the argument (two dimensionality of the spatial distribution and a steady-state frequency distribution) are in general not fulfilled. The effective scaling dimension D of the spatial distribution of neutron stars is significantly larger than two, and for frequencies detectable by terrestrial instruments the frequency distribution is not in a steady state unless the ellipticity is unrealistically large. Thus, in the cases of most interest, the maximum expected gravitational-wave amplitude does have a strong dependence on the deformation and rotation frequency of the population. The results strengthen the previous upper limits on the expected gravitational-wave amplitude from neutron stars by a factor of 6 for realistic values of ellipticity.
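The scaling at the heart of Blandford's argument can be written out under its two key assumptions (a two-dimensional, steady-state population spun down purely by gravitational-wave emission). The proportionalities below are a sketch of the standard argument, not the paper's full treatment:

```latex
% epsilon = ellipticity, f = GW frequency, \dot N = birthrate
\begin{align*}
  h_0 &\propto \frac{\varepsilon f^2}{r}
    && \text{(amplitude at distance } r\text{)} \\
  \tau &\equiv \frac{f}{|\dot f|} \propto \frac{1}{\varepsilon^2 f^4}
    && \text{(pure GW spin-down timescale)} \\
  n({>}f) &\propto \dot N\,\tau
    && \text{(steady-state surface density of sources above } f\text{)} \\
  r_{\min} &\propto n^{-1/2} \propto \varepsilon f^2\,\dot N^{-1/2}
    && \text{(distance to the nearest such source)} \\
  h_0(r_{\min}) &\propto \frac{\varepsilon f^2}{\varepsilon f^2}\,\dot N^{1/2}
    = \dot N^{1/2}
    && \text{(independent of } \varepsilon \text{ and } f\text{)}
\end{align*}
```

The paper's point is that both cancellations fail when the spatial distribution has effective dimension D > 2 and the frequency distribution is not in a steady state.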
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
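A scheme is linear precisely when it satisfies superposition, L(u + v) = L(u) + L(v). The sketch below demonstrates the failure of superposition for a limited scheme, using a minmod limiter as an illustrative stand-in for the PPM limiters actually tested; first-order upwind, being linear, passes exactly:

```python
import numpy as np

def upwind_step(u, c=0.5):
    """One first-order upwind advection step (linear in u), periodic domain,
    Courant number c, positive velocity."""
    return u - c * (u - np.roll(u, 1))

def minmod(a, b):
    """Minmod slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(u, c=0.5):
    """One step of a second-order MUSCL-type scheme with a minmod limiter
    (nonlinear in u because the limiter switches branches)."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    face = u + 0.5 * (1 - c) * du       # reconstructed upwind face value
    return u - c * (face - np.roll(face, 1))

u = np.array([0., 2., 3., 3., 0., 0., 0., 0.])
v = np.array([0., 0., 1., 3., 0., 0., 0., 0.])

lin_err = np.abs(upwind_step(u + v) - (upwind_step(u) + upwind_step(v))).max()
nl_err = np.abs(limited_step(u + v) - (limited_step(u) + limited_step(v))).max()
```

A nonzero `nl_err` is exactly the property that makes limited schemes unsuitable inside a tangent linear or adjoint model: the linearisation of the limiter depends on the trajectory and can support spurious perturbation growth near shocks.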
Women who abuse their children: implications for pediatric practice.
Rosen, B; Stein, M T
1980-10-01
Parents who abuse their children may not accept traditional therapy but may be influenced by the child's primary care physician. A comparative study of abusive and nonabusive mothers showed abusers to have lower self-concept and higher self-concept incongruence and inconsistency than nonabusers. They were also found to value authority over others more, and conformity and benevolence less, than nonabusers. Practically applied, the data lead the pediatrician to an educative and supportive role in which he or she may enhance self-esteem and lower unrealistic expectations in the course of treating the child. In addition, there seems to be a need to develop access to support groups, day care, and other avenues for the mother's personal growth. This may be done either within a pediatric practice or through liaison with community resources.
Home culture, science, school and science learning: is reconciliation possible?
NASA Astrophysics Data System (ADS)
Tan, Aik-Ling
2011-09-01
In response to Meyer and Crawford's article on how nature of science and authentic science inquiry strategies can be used to support the learning of science for underrepresented students, I explore the possibility of reconciliation among the cultures of school, science, school science and home. Such reconciliation is only possible when science teachers are cognizant of the factors affecting the cultural values and belief systems of underrepresented students. Using my experience as an Asian learner of Western modern science (WMS), I suggest that open and honest dialogues in science classrooms will allow for greater clarity about both the ideals that WMS professes and the cultural beliefs of underrepresented students. This in-depth understanding will eliminate guesswork and unrealistic expectations and in the process promote tolerance and acceptance of diversity in ways of knowing.
NASA Astrophysics Data System (ADS)
Hamahashi, Mari; Screaton, Elizabeth; Tanikawa, Wataru; Hashimoto, Yoshitaka; Martin, Kylara; Saito, Saneatsu; Kimura, Gaku
2017-07-01
Subduction of the buoyant Cocos Ridge offshore the Osa Peninsula, Costa Rica, substantially affects the upper plate structure through a variety of processes, including outer forearc uplift, erosion, and focused fluid flow. To investigate the nature of a major seismic reflector (MSR) developed between slope sediments (late Pliocene-late Pleistocene silty clay) and underlying higher velocity upper plate materials (late Pliocene-early Pleistocene clayey siltstone), we infer possible mechanisms of sediment removal by examining the consolidation state, microstructure, and zeolite assemblages of sediments recovered from Integrated Ocean Drilling Program Expedition 344 Site U1380. Formation of Ca-type zeolites, laumontite and heulandite, inferred to form in the presence of Ca-rich fluids, has caused porosity reduction. We adjust measured porosity values for these pore-filling zeolites and evaluate the new porosity profile to estimate how much material was removed at the MSR. Based on the composite porosity-depth curve, we infer the past burial depth of the sediments directly below the MSR. The corrected and uncorrected porosity-depth curves yield values of 800 ± 70 m and 900 ± 70 m, respectively. We argue that deposition and removal of this entire estimated thickness in 0.49 Ma would require unrealistically large sedimentation rates and suggest that normal faulting at the MSR must contribute. The porosity offset could be explained with a maximum of 250 ± 70 m of normal fault throw, or 350 ± 70 m if the porosity were not corrected. The porosity correction significantly reduces the amount of sediment removal needed for the combination of mass movement and normal faulting that characterizes the slope in this margin.
NASA Astrophysics Data System (ADS)
Dixit, V. K.; Porwal, S.; Singh, S. D.; Sharma, T. K.; Ghosh, Sandip; Oak, S. M.
2014-02-01
Temperature dependence of the photoluminescence (PL) peak energy of bulk and quantum well (QW) structures is studied by using a new phenomenological model that includes the effect of localized states. In general, an anomalous S-shaped temperature dependence of the PL peak energy is observed for many materials and is usually associated with the localization of excitons in band-tail states that are formed due to potential fluctuations. Under such conditions, the conventional models of Varshni, Viña and Passler fail to replicate the S-shaped temperature dependence of the PL peak energy and provide inconsistent and unrealistic values of the fitting parameters. The proposed formalism persuasively reproduces the S-shaped temperature dependence of the PL peak energy and provides an accurate determination of the exciton localization energy in bulk and QW structures along with the appropriate values of material parameters. An example of a strained InAs0.38P0.62/InP QW is presented by performing detailed temperature and excitation intensity dependent PL measurements and subsequent in-depth analysis using the proposed model. Versatility of the new formalism is tested on a few other semiconductor materials, e.g. GaN, nanotextured GaN, AlGaN and InGaN, which are known to have a significant contribution from the localized states. A quantitative evaluation of the fractional contribution of the localized states is essential for understanding the temperature dependence of the PL peak energy of bulk and QW structures having a large contribution from band-tail states.
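The abstract does not spell out the new model, but the S-shape it describes can be illustrated with a well-known alternative: adding an Eliseev-type σ²/(kBT) band-tail term to the Varshni expression. This is not the authors' formalism, and the parameter values below are purely illustrative:

```python
KB = 8.617e-5  # Boltzmann constant, eV/K

def pl_peak(T, E0=1.5, alpha=5e-4, beta=300.0, sigma=0.010):
    """Varshni gap shrinkage minus a sigma^2/(kB*T) localization term
    (Eliseev-type band-tail model; illustrative parameters, not fits)."""
    return E0 - alpha * T**2 / (T + beta) - sigma**2 / (KB * T)

# Non-monotonic ("S-shaped") behaviour: blueshift at low T as carriers
# delocalize from band-tail states, then the usual Varshni redshift.
e20, e60, e100, e300 = (pl_peak(T) for T in (20.0, 60.0, 100.0, 300.0))
```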
NASA Astrophysics Data System (ADS)
Heuzé, Céline; Vivier, Frédéric; Le Sommer, Julien; Molines, Jean-Marc; Penduff, Thierry
2016-04-01
With the advent of Argo floats, it now seems feasible to study the interannual variations of upper ocean hydrographic properties of the historically undersampled Southern Ocean. To do so, scattered hydrographic profiles often first need to be mapped. To investigate biases and errors associated both with the limited space-time distribution of the profiles and with the mapping methods, we colocate the mixed layer depth (MLD) output from a state-of-the-art 1/12° DRAKKAR simulation onto the latitude, longitude and date of actual in-situ profiles from 2005 to 2014. We compare the results obtained after remapping using a nearest-neighbor (NN) interpolation and an objective analysis (OA) with different spatio-temporal grid resolutions and decorrelation scales. NN is improved with a coarser resolution. OA performs best with low decorrelation scales, avoiding excessive smoothing, but returns values over larger areas with large decorrelation scales and low temporal resolution, as more points are available. For all resolutions, OA better represents the annual extreme values than NN. Both methods underestimate the seasonal cycle in MLD. MLD biases are lower than 10 m on average but can exceed 250 m locally in winter. We argue that current Argo data should not be mapped to infer decadal trends in MLD, as all methods are unable to reproduce existing trends without creating unrealistic extra ones. We also show that regions of the subtropical Atlantic, Indian and Pacific Oceans, and the whole ice-covered Southern Ocean, still cannot be mapped even by the best method because of the lack of observational data.
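A 1-D toy version of the mapping comparison (synthetic field and hypothetical decorrelation scales, not the DRAKKAR/Argo configuration) shows why large decorrelation scales smooth away the extremes:

```python
import numpy as np

# Synthetic "MLD" section with a sharp winter-like maximum (arbitrary units)
x_true = np.linspace(0.0, 10.0, 1001)
f_true = 50.0 + 200.0 * np.exp(-((x_true - 5.0) / 0.3) ** 2)

xs = np.linspace(0.1, 9.9, 40)          # sparse "profile" locations
fs = np.interp(xs, x_true, f_true)      # colocated samples
grid = np.linspace(0.0, 10.0, 51)

# Nearest-neighbour mapping: each grid point takes the closest profile value
nn = fs[np.abs(xs[None, :] - grid[:, None]).argmin(axis=1)]

def oa(L):
    """Crude objective analysis: Gaussian-weighted mean with decorrelation scale L."""
    w = np.exp(-((xs[None, :] - grid[:, None]) / L) ** 2)
    return (w * fs).sum(axis=1) / w.sum(axis=1)
```

With a small scale (here L = 0.5) the analysis retains most of the peak; with a large one (L = 2.0) the maximum is heavily damped, mirroring the underestimated extremes discussed above.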
NASA Astrophysics Data System (ADS)
Dodson, Jason B.
Deep convective clouds (DCCs) play an important role in regulating global climate through vertical mass flux, vertical water transport, and radiation. For general circulation models (GCMs) to simulate the global climate realistically, they must simulate DCCs realistically. GCMs have traditionally used cumulus parameterizations (CPs). Much recent research has shown that multiple persistent unrealistic behaviors in GCMs are related to limitations of CPs. Two alternatives to CPs exist: the global cloud-resolving model (GCRM), and the multiscale modeling framework (MMF). Both can directly simulate the coarser features of DCCs because of their multi-kilometer horizontal resolutions, and can simulate large-scale meteorological processes more realistically than GCMs. However, the question of realistic behavior of simulated DCCs remains. How closely do simulated DCCs resemble observed DCCs? In this study I examine the behavior of DCCs in the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) and Superparameterized Community Atmospheric Model (SP-CAM), the latter with both single-moment and double-moment microphysics. I place particular emphasis on the relationship between cloud vertical structure and convective environment. I also emphasize the transition between shallow clouds and mature DCCs. The spatial domains used are the tropical oceans and the contiguous United States (CONUS), the latter of which produces frequent vigorous convection during the summer. CloudSat is used to observe DCCs, and A-Train and reanalysis data are used to represent the large-scale environment in which the clouds form. The CloudSat cloud mask and radar reflectivity profiles for CONUS cumuliform clouds (defined as clouds with a base within the planetary boundary layer) during boreal summer are first averaged and compared. Both NICAM and SP-CAM greatly underestimate the vertical growth of cumuliform clouds. 
Then they are sorted by three large-scale environmental variables: total precipitable water (TPW), surface air temperature (SAT), and 500 hPa vertical velocity (W500), representing the dynamical and thermodynamical environment in which the clouds form. The sorted CloudSat profiles are then compared with NICAM and SP-CAM profiles simulated with the Quickbeam CloudSat simulator. Both models have considerable difficulty representing the relationship of SAT and clouds over CONUS. For TPW and W500, shallow clouds transition to DCCs at higher values than observed. This may be an indication of the models' inability to represent the formation of DCCs in marginal convective environments. NICAM develops tall DCCs in highly favorable environments, but SP-CAM appears to be incapable of developing tall DCCs in almost any environment. The use of double-moment microphysics in SP-CAM improves the frequency of deep clouds and their relationship with TPW, but not SAT. Both models underpredict radar reflectivity in the upper cloud of mature DCCs. SP-CAM with single-moment microphysics has a particularly unrealistic DCC reflectivity profile, but with double-moment microphysics it improves substantially. SP-CAM with double-moment microphysics unexpectedly appears to weaken DCC updraft strength as TPW increases, but otherwise both NICAM and SP-CAM represent the environment-versus-DCC relationships fairly realistically.
Effects of the seasonal cycle on superrotation in planetary atmospheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Jonathan L.; Vallis, Geoffrey K.; Potter, Samuel F.
2014-05-20
The dynamics of dry atmospheric general circulation model simulations forced by seasonally varying Newtonian relaxation are explored over a wide range of two control parameters and are compared with the large-scale circulation of Earth, Mars, and Titan in their relevant parameter regimes. Of the parameters that govern the behavior of the system, the thermal Rossby number (Ro) has previously been found to be important in governing the spontaneous transition from an Earth-like climatology of winds to a superrotating one with prograde equatorial winds, in the absence of a seasonal cycle. This case is somewhat unrealistic as it applies only if the planet has zero obliquity or if surface thermal inertia is very large. While Venus has nearly vanishing obliquity, Earth, Mars, and Titan (Saturn) all have obliquities of ∼25° and varying degrees of seasonality due to their differing thermal inertias and orbital periods. Motivated by this, we introduce a time-dependent Newtonian cooling to drive a seasonal cycle using idealized model forcing, and we define a second control parameter that mimics non-dimensional thermal inertia of planetary surfaces. We then perform and analyze simulations across the parameter range bracketed by Earth-like and Titan-like regimes, assess the impact on the spontaneous transition to superrotation, and compare Earth, Mars, and Titan to the model simulations in the relevant parameter regime. We find that a large seasonal cycle (small thermal inertia) prevents model atmospheres with large thermal Rossby numbers from developing superrotation by the influences of (1) cross-equatorial momentum advection by the Hadley circulation and (2) hemispherically asymmetric zonal-mean zonal winds that suppress instabilities leading to equatorial momentum convergence. We also demonstrate that baroclinic instabilities must be sufficiently weak to allow superrotation to develop.
In the relevant parameter regimes, our seasonal model simulations compare favorably to large-scale, seasonal phenomena observed on Earth and Mars. In the Titan-like regime the seasonal cycle in our model acts to prevent superrotation from developing, and it is necessary to increase the value of a third parameter—the atmospheric Newtonian cooling time—to achieve a superrotating climatology.
Johnston, Sandra; Parker, Christina N; Fox, Amanda
2017-09-01
Use of high fidelity simulation has become increasingly popular in nursing education, to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introduction of the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and impact on their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine if viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing student perceptions of the learning experience. A quasi-experimental post-test design was utilised with a convenience sample of final year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information. Two-tailed, independent group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to the viewing of an audio-visual narrative. Statistically significant results (t=2.38, p<0.02) were evident in the subscale of transferability of learning from simulation to clinical practice. The subgroups of age and gender, although not significant, indicated some interesting results. High satisfaction with simulation was indicated by all students in relation to value and realism.
There was a significant finding in relation to transferability of knowledge, which is vital to quality educational outcomes. Copyright © 2017. Published by Elsevier Ltd.
Wind assistance: A requirement for migration of shorebirds?
Butler, Robert W.; Williams, Tony D.; Warnock, Nils; Bishop, Mary Anne
1997-01-01
We investigated the importance of wind-assisted flight for northward (spring) migration by Western Sandpipers (Calidris mauri) along the Pacific Coast of North America. Using current models of energy costs of flight and recent data on the phenology of migration, we estimated the energy (fat) requirements for migration in calm winds and with wind-assisted flight for different rates of fat deposition: (1) a variable rate, assuming that birds deposit the minimum amount of fat required to reach the next stopover site; (2) a constant maximum rate of 1.0 g/day; and (3) a lower constant rate of 0.4 g/day. We tested these models by comparing conservative estimates of predicted body mass along the migration route with empirical data on body mass of Western Sandpipers at different stopover sites and upon arrival at the breeding grounds. In calm conditions, birds would have to deposit unrealistically high amounts of fat (up to 330% of observed values) to maintain body mass above absolute lean mass values. Fat-deposition rates of 1.0 g/day and 0.4 g/day, in calm conditions, resulted in a steady decline in body mass along the migration route, with predicted body masses on arrival in Alaska of only 60% (13.6 g) and 26% (5.9 g) of average lean mass (22.7 g). Conversely, birds migrating with wind assistance would be able to complete migration with fat-deposition rates as low as 0.4 g/day, similar to values reported for this size bird from field studies. Our results extend the conclusion of the importance of winds for large, long-distance migrants to a small, short-distance migrant. We suggest that the migratory decisions of birds are more strongly influenced by the frequency and duration of winds aloft, i.e. by events during the flight phase, than by events during the stopover phase of migration, such as fat-deposition rate, that have been the focus of much recent migration theory.
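The fat-budget logic can be sketched as a toy calculation. All leg lengths, stopover durations, starting fat load, and flight costs below are invented for illustration; only the 0.4 g/day deposition rate echoes the abstract:

```python
# Hypothetical migration itinerary (not the Western Sandpiper data)
legs_km = [600.0, 500.0, 550.0, 450.0]   # flight legs between stopovers
stop_days = [5, 4, 4]                    # stopover durations between legs
dep_rate = 0.4                           # g fat deposited per stopover day

def final_fat(start_fat_g, fat_cost_per_km):
    """Fat remaining on arrival; negative means reserves were exhausted en route."""
    fat = start_fat_g
    for i, leg in enumerate(legs_km):
        fat -= leg * fat_cost_per_km               # burn fat in flight
        if i < len(stop_days):
            fat += stop_days[i] * dep_rate         # refuel at the stopover
    return fat

calm = final_fat(8.0, 0.010)    # assumed flight cost in calm air
windy = final_fat(8.0, 0.005)   # assumed cost halved by tailwind assistance
```

With these made-up numbers the calm-air bird exhausts its fat before arrival while the wind-assisted bird arrives with reserves, the qualitative pattern the models above quantify.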
Huang, Xinru; Roth, Connie B
2016-06-21
Recent studies have measured or predicted thickness-dependent shifts in density or specific volume of polymer films as a possible means of understanding changes in the glass transition temperature Tg(h) with decreasing film thickness, with some experimental works claiming unrealistically large (25%-30%) increases in film density with decreasing thickness. Here we use ellipsometry to measure the temperature-dependent index of refraction of polystyrene (PS) films supported on silicon and investigate the validity of the commonly used Lorentz-Lorenz equation for inferring changes in density or specific volume from very thin films. We find that the density (specific volume) of these supported PS films does not vary by more than ±0.4% of the bulk value for film thicknesses above 30 nm, and that the small variations we do observe are uncorrelated with any free volume explanation for the Tg(h) decrease exhibited by these films. We conclude that the derivation of the Lorentz-Lorenz equation becomes invalid for very thin films as the film thickness approaches ∼20 nm, and that reports of large density changes greater than ±1% of bulk for films thinner than this likely suffer from breakdown in the validity of this equation or from the difficulties associated with accurately measuring the index of refraction of such thin films. For larger film thicknesses, we do observe small variations in the effective specific volume of the films of 0.4 ± 0.2%, outside our experimental error. These shifts occur simultaneously and uniformly in both the liquid and glassy regimes starting at film thicknesses less than ∼120 nm but appear to be uncorrelated with the Tg(h) decreases; possible causes for these variations are discussed.
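The inference the authors test rests on the Lorentz-Lorenz relation, under which (n² − 1)/(n² + 2) is proportional to mass density at fixed molecular polarizability. A minimal sketch, using hypothetical film and bulk indices near the PS value:

```python
def lorentz_lorenz(n):
    """(n^2 - 1) / (n^2 + 2): proportional to mass density when the
    molecular polarizability is unchanged."""
    return (n**2 - 1.0) / (n**2 + 2.0)

def density_ratio(n_film, n_bulk):
    """Film-to-bulk density ratio inferred under the Lorentz-Lorenz assumption."""
    return lorentz_lorenz(n_film) / lorentz_lorenz(n_bulk)

# Hypothetical indices: bulk PS near 1.59, film index shifted by 0.01
r = density_ratio(1.60, 1.59)
```

A 0.01 shift in index already maps to an apparent density change of roughly 1.4%, so the 25%-30% density increases claimed elsewhere would require implausibly large index changes.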
Modeling landslide runout dynamics and hazards: crucial effects of initial conditions
NASA Astrophysics Data System (ADS)
Iverson, R. M.; George, D. L.
2016-12-01
Physically based numerical models can provide useful tools for forecasting landslide runout and associated hazards, but only if the models employ initial conditions and parameter values that faithfully represent the states of geological materials on slopes. Many models assume that a landslide begins from a heap of granular material poised on a slope and held in check by an imaginary dam. A computer instruction instantaneously removes the dam, unleashing a modeled landslide that accelerates under the influence of a large force imbalance. Thus, an unrealistically large initial acceleration influences all subsequent modeled motion. By contrast, most natural landslides are triggered by small perturbations of statically balanced effective stress states, which are commonly caused by rainfall, snowmelt, or earthquakes. Landslide motion begins with an infinitesimal force imbalance and commensurately small acceleration. However, a small initial force imbalance can evolve into a much larger imbalance if feedback causes a reduction in resisting forces. A well-documented source of such feedback involves dilatancy coupled to pore-pressure evolution, which may either increase or decrease effective Coulomb friction—contingent on initial conditions. Landslide dynamics models that account for this feedback include our D-Claw model (Proc. Roy. Soc. Lon., Ser. A, 2014, doi: 10.1098/rspa.2013.0819 and doi:10.1098/rspa.2013.0820) and a similar model presented by Bouchut et al. (J. Fluid Mech., 2016, doi:10.1017/jfm.2016.417). We illustrate the crucial effects of initial conditions and dilatancy coupled to pore-pressure feedback by using D-Claw to perform simple test calculations and also by computing alternative behaviors of the well-documented Oso, Washington, and West Salt Creek, Colorado, landslides of 2014. We conclude that realistic initial conditions and feedbacks are essential elements in numerical models used to forecast landslide runout dynamics and hazards.
NASA Astrophysics Data System (ADS)
Kunstmann, H.; Lorenz, C.
2012-12-01
The three state-of-the-art global atmospheric reanalysis models—namely, ECMWF Interim Re-Analysis (ERA-Interim), Modern-Era Retrospective Analysis for Research and Applications (MERRA; NASA), and Climate Forecast System Reanalysis (CFSR; NCEP)—are analyzed and compared with independent observations (GPCC; GPCP; CRU; CPC; DEL; HOAPS) in the period between 1989 and 2006. Comparison of precipitation and temperature estimates from the three models with gridded observations reveals large differences between the reanalyses and also among the observation datasets. A major source of uncertainty in the observations is the spatial distribution and change of the number of gauges over time. In South America, for example, active measuring stations were reduced from 4267 to 390. The quality of precipitation estimates from the reanalyses strongly depends on the geographic location, as there are significant differences especially in tropical regions. The closure of the water cycle in the three reanalyses is analyzed by estimating long-term mean values for precipitation, evapotranspiration, surface runoff, and moisture flux divergence. Major shortcomings in the moisture budgets of the datasets are mainly due to inconsistencies in the estimates of net precipitation minus evaporation (P-E) over the oceans and of net precipitation minus evapotranspiration over the landmasses. This imbalance largely originates from the assimilation of radiance sounding data from the NOAA-15 satellite, which results in an unrealistic increase of oceanic P-E in the MERRA and CFSR budgets. Overall, ERA-Interim shows both a comparatively reasonable closure of the terrestrial and atmospheric water balance and a reasonable agreement with the observation datasets. The limited performance of the three state-of-the-art reanalyses in reproducing the hydrological cycle, however, puts the use of these models for climate trend analyses and long-term water budget studies into question.
NASA Astrophysics Data System (ADS)
Kokkinaki, A.; Sleep, B. E.; Chambers, J. E.; Cirpka, O. A.; Nowak, W.
2010-12-01
Electrical Resistance Tomography (ERT) is a popular method for investigating subsurface heterogeneity. The method relies on measuring electrical potential differences and obtaining, through inverse modeling, the underlying electrical conductivity field, which can be related to hydraulic conductivities. The quality of site characterization strongly depends on the utilized inversion technique. Standard ERT inversion methods, though highly computationally efficient, do not consider spatial correlation of soil properties; as a result, they often underestimate the spatial variability observed in earth materials, thereby producing unrealistic subsurface models. Also, these methods do not quantify the uncertainty of the estimated properties, thus limiting their use in subsequent investigations. Geostatistical inverse methods can be used to overcome both these limitations; however, they are computationally expensive, which has hindered their wide use in practice. In this work, we compare a standard Gauss-Newton smoothness constrained least squares inversion method against the quasi-linear geostatistical approach using the three-dimensional ERT dataset of the SABRe (Source Area Bioremediation) project. The two methods are evaluated for their ability to: a) produce physically realistic electrical conductivity fields that agree with the wide range of data available for the SABRe site while being computationally efficient, and b) provide information on the spatial statistics of other parameters of interest, such as hydraulic conductivity. To explore the trade-off between inversion quality and computational efficiency, we also employ a 2.5-D forward model with corrections for boundary conditions and source singularities. The 2.5-D model accelerates the 3-D geostatistical inversion method. New adjoint equations are developed for the 2.5-D forward model for the efficient calculation of sensitivities. 
Our work shows that spatial statistics can be incorporated in large-scale ERT inversions to improve the inversion results without making them computationally prohibitive.
Unrealistic phylogenetic trees may improve phylogenetic footprinting.
Nettling, Martin; Treutler, Hendrik; Cerquides, Jesus; Grosse, Ivo
2017-06-01
The computational investigation of DNA binding motifs from binding sites is one of the classic tasks in bioinformatics and a prerequisite for understanding gene regulation as a whole. Due to the development of sequencing technologies and the increasing number of available genomes, approaches based on phylogenetic footprinting become increasingly attractive. Phylogenetic footprinting requires phylogenetic trees with attached substitution probabilities for quantifying the evolution of binding sites, but these trees and substitution probabilities are typically not known and cannot be estimated easily. Here, we investigate the influence of phylogenetic trees with different substitution probabilities on the classification performance of phylogenetic footprinting using synthetic and real data. For synthetic data we find that the classification performance is highest when the substitution probability used for phylogenetic footprinting is similar to that used for data generation. For real data, however, we typically find that the classification performance of phylogenetic footprinting surprisingly increases with increasing substitution probabilities and is often highest for unrealistically high substitution probabilities close to one. This finding suggests that choosing realistic model assumptions might not always yield optimal predictions and that choosing unrealistically high substitution probabilities might actually improve the classification performance of phylogenetic footprinting. The proposed phylogenetic footprinting approach is implemented in Java and can be downloaded from https://github.com/mgledi/PhyFoo. Contact: martin.nettling@informatik.uni-halle.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
Magneto-thermal reconnection of significance to space and astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coppi, B., E-mail: coppi@psfc.mit.edu
Magnetic reconnection processes that can be excited in collisionless plasma regimes are of interest to space and astrophysics to the extent that the layers in which reconnection takes place are not rendered unrealistically small by their unfavorable dependence on relevant macroscopic distances. The equations describing new modes producing magnetic reconnection over relatively small but significant distances, unlike tearing types of mode, even when dealing with large macroscopic scale lengths, are given. The considered modes are associated with a finite electron temperature gradient and have a phase velocity in the direction of the electron diamagnetic velocity that can reverse to the opposite direction as relevant parameters are varied over a relatively wide range. The electron temperature perturbation has a primary role in the relevant theory. In particular, when referring to regimes in which the longitudinal (to the magnetic field) electron thermal conductivity is relatively large, the electron temperature perturbation becomes singular if the ratio of the transverse to the longitudinal electron thermal conductivity becomes negligible.
Assessing the value of customized birth weight percentiles.
Hutcheon, Jennifer A; Walker, Mark; Platt, Robert W
2011-02-15
Customized birth weight percentiles are weight-for-gestational-age percentiles that account for the influence of maternal characteristics on fetal growth. Although intuitively appealing, the incremental value they provide in the identification of intrauterine growth restriction (IUGR) over conventional birth weight percentiles is controversial. The objective of this study was to assess the value of customized birth weight percentiles in a simulated cohort of 100,000 infants delivered at 37 weeks of gestation whose IUGR status was known. A cohort of infants with a range of healthy birth weights was first simulated on the basis of the distributions of maternal/fetal characteristics observed in births at the Royal Victoria Hospital in Montreal, Canada, between 2000 and 2006. The occurrence of IUGR was re-created by reducing the observed birth weights of a small percentage of these infants. The value of customized percentiles was assessed by calculating true and false positive rates. Customizing birth weight percentiles for maternal characteristics added very little information to the identification of IUGR beyond that obtained from conventional weight-for-gestational-age percentiles (true positive rates of 61.8% and 61.1%, respectively, and false positive rates of 7.9% and 8.5%, respectively). For the process of customization to be worthwhile, maternal characteristics in the customization model were shown through simulation to require an unrealistically strong association with birth weight.
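A stripped-down version of such a simulation (the weight distribution, IUGR fraction, and growth-restriction factor below are assumptions, not the Royal Victoria Hospital values) shows how true and false positive rates follow once IUGR status is known by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
w = rng.normal(3500.0, 450.0, n)      # hypothetical healthy birth weights (g)
iugr = rng.random(n) < 0.03           # assume 3% are growth restricted
w[iugr] *= 0.75                       # re-create IUGR by reducing weight

# Flag infants below the 10th percentile of an independent healthy sample
cut = np.quantile(rng.normal(3500.0, 450.0, n), 0.10)
flagged = w < cut

tpr = flagged[iugr].mean()    # sensitivity against the known IUGR status
fpr = flagged[~iugr].mean()   # healthy infants wrongly flagged (~10% by design)
```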
Strategic targeting of advance care planning interventions: the Goldilocks phenomenon.
Billings, J Andrew; Bernacki, Rachelle
2014-04-01
Strategically selecting patients for discussions and documentation about limiting life-sustaining treatments, that is, choosing the right time along the end-of-life trajectory for such an intervention and identifying patients at high risk of facing end-of-life decisions, can have a profound impact on the value of advance care planning (ACP) efforts. Timing is important because completing an advance directive (AD) too far from or too close to the time of death can lead to end-of-life decisions that do not optimally reflect the patient's values, goals, and preferences. A poorly chosen target population that is unlikely to need an AD in the near future may lead patients to make unrealistic, hypothetical choices, while assessing preferences in the emergency department or hospital in the face of a calamity is notoriously inadequate. Because much of the ACP effort studied to date has led to a disappointingly small proportion of patients eventually benefiting from an AD, careful targeting of the intervention should also improve the efficacy of such projects. A key to optimal timing and strategic selection of target patients for an ACP program is prognostication, and we briefly highlight prognostication tools and studies that may point us toward high-value AD interventions.
Stable estimate of primary OC/EC ratios in the EC tracer method
NASA Astrophysics Data System (ADS)
Chu, Shao-Hang
In fine particulate matter studies, the primary OC/EC ratio plays an important role in estimating the secondary organic aerosol contribution to PM2.5 concentrations using the EC tracer method. In this study, numerical experiments are carried out to test and compare various statistical techniques in the estimation of primary OC/EC ratios. The influence of random measurement errors in both primary OC and EC measurements on the estimation of the expected primary OC/EC ratios is examined. It is found that random measurement errors in EC generally create an underestimation of the slope and an overestimation of the intercept of the ordinary least-squares regression line. The Deming regression analysis performs much better than the ordinary regression, but it tends to overcorrect the problem by slightly overestimating the slope and underestimating the intercept. Averaging the ratios directly is usually undesirable because the average is strongly influenced by unrealistically high values of OC/EC ratios resulting from random measurement errors at low EC concentrations. The errors generally result in a skewed distribution of the OC/EC ratios even if the parent distributions of OC and EC are close to normal. When measured OC contains a significant amount of non-combustion OC, Deming regression is a much better tool and should be used to estimate both the primary OC/EC ratio and the non-combustion OC. However, if the non-combustion OC is negligibly small, the best and most robust estimator of the OC/EC ratio turns out to be the simple ratio of the OC and EC averages. It not only reduces random errors by averaging individual variables separately but also acts as a weighted average of ratios to minimize the influence of unrealistically high OC/EC ratios created by measurement errors at low EC concentrations. The median of OC/EC ratios ranks a close second, and the geometric mean of ratios ranks third.
This is because their estimations are insensitive to questionable extreme values. A real world example is given using the ambient data collected from an Atlanta STN site during the winter of 2001-2002.
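The estimator comparison described above can be sketched numerically. The snippet below is a minimal illustration (my own construction, not the study's actual experimental design): it generates synthetic OC and EC data with a known primary ratio of 2.0 and additive measurement error, then compares the OLS slope, the mean of individual ratios, and the ratio of the averages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a known primary OC/EC ratio of 2.0, no
# non-combustion OC, and additive measurement error on both species.
n = 2000
true_ec = rng.uniform(0.5, 5.0, n)           # "true" EC concentrations
ec = true_ec + rng.normal(0, 0.3, n)         # measured EC (with error)
oc = 2.0 * true_ec + rng.normal(0, 0.3, n)   # measured OC (with error)

# OLS slope: biased low because the predictor (EC) carries error.
ols_slope = np.cov(ec, oc)[0, 1] / np.var(ec)

# Mean of individual ratios: unstable, dominated by ratios at low EC
# where the measurement error blows up.
mean_of_ratios = np.mean(oc / ec)

# Ratio of the averages: the robust estimator the study recommends
# when non-combustion OC is negligible.
ratio_of_means = np.mean(oc) / np.mean(ec)

print(f"OLS slope:      {ols_slope:.3f}")
print(f"mean of ratios: {mean_of_ratios:.3f}")
print(f"ratio of means: {ratio_of_means:.3f}")
```

With error on EC, the OLS slope is attenuated below the true ratio, while the ratio of the averages stays close to it; the mean of individual ratios can swing wildly whenever a measured EC value falls near zero.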
Anisotropic parton escape is the dominant source of azimuthal anisotropy in transport models
He, Liang; Edmonds, Terrence; Lin, Zi-Wei; ...
2015-12-22
We trace the development of azimuthal anisotropy (v_n, n = 2, 3) via parton-parton collision history in two transport models. The parton v_n is studied as a function of the number of collisions of each parton in Au+Au and d+Au collisions at √s_NN = 200 GeV. Findings show that the majority of v_n comes from the anisotropic escape probability of partons, with no fundamental difference at low and high transverse momenta. The contribution to v_n from hydrodynamic-type collective flow is found to be small. Only when the parton-parton cross-section is set unrealistically large does this contribution start to take over. Our findings challenge the current paradigm that emerged from hydrodynamic comparisons to anisotropy data.
Spread of epidemic disease on networks
NASA Astrophysics Data System (ADS)
Newman, M. E.
2002-07-01
The study of social networks, and in particular the spread of disease on networks, has attracted considerable recent attention in the physics community. In this paper, we show that a large class of standard epidemiological models, the so-called susceptible/infective/removed (SIR) models, can be solved exactly on a wide variety of networks. In addition to the standard but unrealistic case of fixed infectiveness time and fixed and uncorrelated probability of transmission between all pairs of individuals, we solve cases in which times and probabilities are nonuniform and correlated. We also consider one simple case of an epidemic in a structured population, that of a sexually transmitted disease in a population divided into men and women. We confirm the correctness of our exact solutions with numerical simulations of SIR epidemics on networks.
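A minimal simulation can make the threshold behaviour concrete. The sketch below is not the paper's exact generating-function solution; it is a toy discrete-time SIR run on an Erdős-Rényi graph with a fixed, uniform transmission probability (the "standard but unrealistic" case), showing small outbreaks below the epidemic threshold and large ones above it.

```python
import random

random.seed(1)

# Build an Erdos-Renyi random graph: n nodes, mean degree ~ 6.
n, mean_deg = 2000, 6.0
p_edge = mean_deg / (n - 1)
adj = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p_edge:
            adj[i].append(j)
            adj[j].append(i)

def sir_final_size(transmissibility, n_seeds=5):
    """Discrete SIR: each infective gets one chance to transmit to each
    susceptible neighbour with fixed probability T, then recovers."""
    status = ["S"] * n                       # S, I, or R
    infective = random.sample(range(n), n_seeds)
    for i in infective:
        status[i] = "I"
    while infective:
        nxt = []
        for i in infective:
            for j in adj[i]:
                if status[j] == "S" and random.random() < transmissibility:
                    status[j] = "I"
                    nxt.append(j)
            status[i] = "R"
        infective = nxt
    return status.count("R") / n

# Below the epidemic threshold outbreaks stay small; above it, a
# finite fraction of the network is infected.
small = sir_final_size(transmissibility=0.05)
large = sir_final_size(transmissibility=0.50)
print(f"final size, T=0.05: {small:.3f}")
print(f"final size, T=0.50: {large:.3f}")
```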
78 FR 5444 - Submission for OMB Review; Payments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... challenging that the agency's methodology for calculating it is insufficient and inadequate and does not... hours of burden per response is unrealistically low given the level of documentation required to support...
Talking to Your Child's Doctor
... doctor is unrealistic expectations or an unwillingness to trust a doctor's diagnosis or treatment of a minor ... communication by letting the doctor know that you trust him or her to care for your child. ...
Nuijens, Louise; Medeiros, Brian; Sandu, Irina; ...
2015-11-06
We present patterns of covariability between low-level cloudiness and the trade-wind boundary layer structure using long-term measurements at a site representative of dynamical regimes with moderate subsidence or weak ascent. We compare these with ECMWF's Integrated Forecast System and 10 CMIP5 models. By using single-time-step output at a single location, we find that models can produce a fairly realistic trade-wind layer structure in long-term means, but with unrealistic variability at shorter time scales. The unrealistic variability in modeled cloudiness near the lifting condensation level (LCL) is due to stronger-than-observed relationships with mixed-layer relative humidity (RH) and temperature stratification at the mixed-layer top. Those relationships are weak in observations, or even of opposite sign, which can be explained by a negative feedback of convection on cloudiness. Cloudiness near cumulus tops at the trade-wind inversion instead varies more markedly in observations on monthly time scales, whereby larger cloudiness relates to larger surface winds and stronger trade-wind inversions. However, these parameters appear to be a prerequisite, rather than strong controlling factors on cloudiness, because they do not explain submonthly variations in cloudiness. Models underestimate the strength of these relationships and diverge in particular in their responses to large-scale vertical motion. No model stands out by reproducing the observed behavior in all respects. These findings suggest that climate models do not realistically represent the physical processes that underlie the coupling between trade-wind clouds and their environment in present-day climate, which is relevant for how we interpret modeled cloud feedbacks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marital Expectations in Strong African American Marriages.
Vaterlaus, J Mitchell; Skogrand, Linda; Chaney, Cassandra; Gahagan, Kassandra
2017-12-01
The current exploratory study utilized a family strengths framework to identify marital expectations in 39 strong African American heterosexual marriages. Couples reflected on their marital expectations over their 10 or more years of marriage. Three themes emerged through qualitative analysis and the participants' own words were used in the presentation of the themes. African Americans indicated that there was growth in marital expectations over time, with marital expectations often beginning with unrealistic expectations that grew into more realistic expectations as their marriages progressed. Participants also indicated that core expectations in strong African American marriages included open communication, congruent values, and positive treatment of spouse. Finally, participants explained there is an "I" in marriage as they discussed the importance of autonomy within their marital relationships. Results are discussed in association with existing research and theory. © 2016 Family Process Institute.
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-09-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model, deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the vegetation model LPJ-GUESS, and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model that uses 100 replicate patches, but simulation time was reduced by approximately a factor of 10.
Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results. It opens the opportunity for future studies over large spatial domains, and allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and the results of other forest models.
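The core postprocessing idea can be sketched in a few lines. Everything below is illustrative rather than LPJ-GUESS code: a logistic regrowth curve stands in for a deterministic undisturbed model run, and the landscape expectation is taken over the geometric patch-age distribution implied by an annual stand-replacing disturbance probability p.

```python
import math

# Hypothetical deterministic, undisturbed model output: biomass as a
# function of patch age (a logistic regrowth curve stands in for a
# cohort-model run without disturbance).
def undisturbed_biomass(age):
    return 200.0 / (1.0 + math.exp(-(age - 60) / 15.0))

# GAPPARD-style postprocessing: with an annual stand-replacing
# disturbance probability p, patch age is geometrically distributed,
# P(age = a) = p * (1 - p)**a.  The landscape expectation of any
# output variable is its undisturbed value averaged over that
# distribution -- no replicate-patch simulation needed.
def expected_value(f, p, max_age=1000):
    total, norm = 0.0, 0.0
    for a in range(max_age):
        w = p * (1.0 - p) ** a
        total += w * f(a)
        norm += w
    return total / norm

mean_biomass = expected_value(undisturbed_biomass, p=0.01)
old_growth = undisturbed_biomass(500)
print(f"expected landscape biomass:     {mean_biomass:.1f}")
print(f"undisturbed old-growth biomass: {old_growth:.1f}")
```

Because young, recently disturbed patches carry little biomass, the disturbance-weighted expectation sits well below the undisturbed old-growth value, which is the effect replicate-patch simulations reproduce at far greater cost.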
ERIC Educational Resources Information Center
Bakunas, Boris
2001-01-01
Procrastinators harbor unrealistic attitudes that promote needless delay and engender obstructive feelings such as performance anxiety, low frustration tolerance, and resentment. Common procrastination pitfalls include escapism, overpreparation, overwork, overcommitment, poor work conditions, and rationalization. Planning and prioritizing helps…
Black hole formation from the gravitational collapse of a nonspherical network of structures
NASA Astrophysics Data System (ADS)
Delgado Gaspar, Ismael; Hidalgo, Juan Carlos; Sussman, Roberto A.; Quiros, Israel
2018-05-01
We examine the gravitational collapse and black hole formation of multiple nonspherical configurations constructed from Szekeres dust models with positive spatial curvature that smoothly match to a Schwarzschild exterior. These configurations consist of an almost spherical central core region surrounded by a network of "pancake-like" overdensities and voids whose spatial positions are prescribed through standard initial conditions. We show that a full collapse into a focusing singularity, without shell crossings appearing before the formation of an apparent horizon, is not possible unless the full configuration becomes exactly or almost spherical. To obtain black hole formation, we demand that shell crossings be covered by the apparent horizon. This requires very special fine-tuned initial conditions that impose very strong and unrealistic constraints on the total black hole mass and full collapse time. As a consequence, nonspherical nonrotating dust sources cannot furnish even minimally realistic toy models of black hole formation at astrophysical scales: demanding realistic collapse time scales yields huge unrealistic black hole masses, while simulations of typical astrophysical black hole masses collapse in unrealistically short times. We note, however, that the resulting time-mass constraint is compatible with early-Universe models of primordial black hole formation, suitable in early dust-like environments. Finally, we argue that the shell crossings appearing when nonspherical dust structures collapse are an indicator that such structures do not form galactic-mass black holes but virialize into stable stationary objects.
Evaluating and improving the representation of heteroscedastic errors in hydrological models
NASA Astrophysics Data System (ADS)
McInerney, D. J.; Thyer, M. A.; Kavetski, D.; Kuczera, G. A.
2013-12-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic predictions. In particular, the residual errors of hydrological models are often heteroscedastic, with large errors associated with high rainfall and runoff events. Recent studies have shown that a weighted least squares (WLS) approach, in which the magnitude of the residuals is assumed to be linearly proportional to the magnitude of the flow, captures some of this heteroscedasticity. In this study we explore a range of Bayesian approaches for improving the representation of heteroscedasticity in residual errors. We compare several improved formulations of the WLS approach, the well-known Box-Cox transformation and the more recent log-sinh transformation. Our results confirm that these approaches are able to stabilize the residual error variance, and that it is possible to improve the representation of heteroscedasticity compared with the linear WLS approach. We also find generally good performance of the Box-Cox and log-sinh transformations, although, as indicated in earlier publications, the Box-Cox transform sometimes produces unrealistically large prediction limits. Our work explores the trade-offs between these different uncertainty characterization approaches, investigates how their performance varies across diverse catchments and models, and recommends practical approaches suitable for large-scale applications.
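As a toy illustration of why such transformations help, the sketch below (my own construction, not from the paper) applies a Box-Cox transform to synthetic flows whose error standard deviation grows with the flow, and compares residual spread at low and high flows before and after transformation.

```python
import random

random.seed(42)

# Synthetic flows with heteroscedastic error: residual standard
# deviation proportional to the simulated flow.
sims = [random.uniform(1.0, 100.0) for _ in range(5000)]
obs = [q + random.gauss(0.0, 0.1 * q) for q in sims]

def box_cox(q, lam=0.2):
    """Box-Cox transform for lam != 0 (lam chosen for illustration)."""
    return (q ** lam - 1.0) / lam

def binned_sd(transform):
    """Residual spread among low flows (< 20) vs high flows (> 80)."""
    lo = [transform(o) - transform(s) for o, s in zip(obs, sims) if s < 20]
    hi = [transform(o) - transform(s) for o, s in zip(obs, sims) if s > 80]
    rms = lambda r: (sum(x * x for x in r) / len(r)) ** 0.5
    return rms(lo), rms(hi)

raw_lo, raw_hi = binned_sd(lambda q: q)
bc_lo, bc_hi = binned_sd(box_cox)
print(f"raw residual sd     low/high flows: {raw_lo:.2f} / {raw_hi:.2f}")
print(f"Box-Cox residual sd low/high flows: {bc_lo:.2f} / {bc_hi:.2f}")
```

In the untransformed space the high-flow residuals are several times larger than the low-flow ones; after the Box-Cox transform the two spreads are far closer, which is the variance stabilization the study evaluates (the log-sinh transform plays an analogous role with suitable parameters).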
Psychological support for orthognathic patients -- what do orthodontists want?
Juggins, K J; Feinmann, C; Shute, J; Cunningham, S J
2006-06-01
(1) To evaluate consultant orthodontist opinion on referral of orthognathic patients to a liaison psychiatrist or psychologist and (2) to investigate the value of training orthodontic specialists in the recognition of patients with psychological profiles that might affect orthognathic outcome. Questionnaire-based study. A structured questionnaire was distributed to all consultant orthodontists in the UK. Approximately 40% of consultants thought that up to 10% of their orthognathic patients would benefit from psychological assessment by appropriately trained personnel. Twenty per cent of consultants were not certain what proportion of their patients would benefit from referral, and over half the respondents said they do not refer any orthognathic patients for assessment. The most common reasons for referral were past/current psychiatric history (36%), unrealistic expectations (32%), 'gut instinct' (14%), and no significant clinical problem (13%). Reasons not to refer were: nobody to refer to (30.5%), fear of the patient reacting badly (15.8%), not being sure who to refer to (14.7%), response from the mental health team not useful (12.4%), and waiting list too long (9.6%). The majority of clinicians felt they would benefit from training in this field (84.7%), as over 80% reported no teaching or training in psychological assessment/management. Although we have no evidence to prove that interdisciplinary care is better for patients, clinical experience and reports from clinicians working in large centres tell us there are probable advantages. The development of a training programme for both orthodontists and mental health teams would seem to be beneficial for both clinicians and patients.
Development of a Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA), together with two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between gridded and station-based calculations tended to differ most at low latitudes for strictly MERRA-based calculations. Strong biases could be seen in either direction: the MERRA DC over the Mato Grosso in Brazil reached unrealistically high values, exceeding 1500 during the dry season, but was too low over Southeast Asia during its dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models.
The once and future application of cost-effectiveness analysis.
Berger, M L
1999-09-01
Cost-effectiveness analysis (CEA) is used by payers to make coverage decisions, by providers to make formulary decisions, and by large purchasers/employers and policymakers to choose health care performance measures. However, it continues to be poorly utilized in the marketplace because of overriding financial imperatives to control costs and a low apparent willingness to pay for quality. There is no obvious relationship between the cost-effectiveness of life-saving interventions and their application. Health care decision makers consider financial impact, safety, and effectiveness before cost-effectiveness. WHY IS CEA NOT MORE WIDELY APPLIED? Most health care providers have a short-term parochial financial perspective, whereas CEA takes a long-term view that captures all costs, benefits, and hazards, regardless of to whom they accrue. In addition, a history of poor standardization of methods, unrealistic expectations that CEA could answer fundamental ethical and political issues, and society's failure to accept the need for allocating scarce resources more judiciously, have contributed to relatively little use of the method by decision makers. HOW WILL CEA FIND GREATER UTILITY IN THE FUTURE? As decision makers take a longer-term view and understand that CEA can provide a quantitative perspective on important resource allocation decisions, including the distributional consequences of alternative choices, CEA is likely to find greater use. However, it must be embedded within a framework that promotes confidence in the social justice of health care decision making through ongoing dialogue about how the value of health and health care are defined.
On the performance of satellite precipitation products in riverine flood modeling: A review
NASA Astrophysics Data System (ADS)
Maggioni, Viviana; Massari, Christian
2018-03-01
This work is meant to summarize lessons learned on using satellite precipitation products for riverine flood modeling and to propose future directions in this field of research. Firstly, the most common satellite precipitation products (SPPs) of the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Mission (GPM) eras are reviewed. Secondly, we discuss the main error and uncertainty sources in these datasets that have the potential to affect streamflow and runoff model simulations. Thirdly, past studies that focused on using SPPs for predicting streamflow and runoff are analyzed. As the impact of floods depends not only on the characteristics of the flood itself, but also on the characteristics of the region (population density, land use, geophysical and climatic factors), a regional analysis is required to assess the performance of hydrologic models in monitoring and predicting floods. The performance of SPP-forced hydrological models was shown to depend largely on several factors, including precipitation type, seasonality, hydrological model formulation, and topography. Across several basins around the world, the bias in SPPs was recognized as a major issue, and bias correction methods of different complexity were shown to significantly reduce streamflow errors. Model re-calibration was also raised as a viable option to improve SPP-forced streamflow simulations, but caution is necessary when recalibrating models with SPPs, which may result in unrealistic parameter values. From a general standpoint, there is significant potential for using satellite observations in flood forecasting, but the performance of SPPs in hydrological modeling is still inadequate for operational purposes.
Ethics in goal planning for rehabilitation: a utilitarian perspective.
Levack, William M M
2009-04-01
Past debate on ethics in goal planning for rehabilitation has tended to focus on tensions that can arise between ethical principles, in particular the principles of autonomy and beneficence. When setting goals, clinicians tend to prioritize the wishes of patients, justifying this from the perspective of maximizing patient autonomy. This is tempered by consideration of what is 'realistic' and what the effect of pursuing 'unrealistic goals' might be on patient well-being. In this paper it is argued that clinicians also have an ethical obligation to take into account the resource implications of goal planning. Utilitarianism provides one perspective on addressing such issues. A utilitarian approach to goal planning would necessitate a focus on maximizing the benefits of rehabilitation to the whole community served when negotiating goals with individual patients. Clinicians may, however, have a number of concerns about utilitarianism. One assumption is that the quality of life of people with severe disability will be judged as being intrinsically low, and therefore valued less from a utilitarian perspective. A second assumption is that for people with severe disability the large effort expended in rehabilitation to achieve small gains cannot possibly repay itself in a utilitarian equation, specifically in financial terms. Evidence from the literature, however, has demonstrated that both of these assumptions are probably false. Rehabilitation professionals should not be hesitant to consider utilitarianism as an ethical framework for rehabilitation. In fact, rehabilitation may well gain if people were to use this approach.
NASA Astrophysics Data System (ADS)
Wang, Zhen; Yu, Chao; Cui, Guang-Hai; Li, Ya-Peng; Li, Ming-Chu
2016-02-01
The spatial Iterated Prisoner's Dilemma game has been widely studied in order to explain the evolution of cooperation. Considering the large size of the strategy space and the infinite number of interactions, it is unrealistic to adopt the common imitate-best updating rule, which assumes that human players have a much stronger ability to recognize their neighbors' strategies than they do in the one-shot game. In this paper, a novel localized extremal dynamic system is proposed, in which each player only needs to recognize the payoffs of his neighbors, and changes his strategy randomly when he receives the lowest payoff in his neighborhood. The evolution of cooperation is explored under this updating rule for neighborhoods of different sizes, characterized by their corresponding radii r. The results show that when r = 1, the system is trapped in a checkerboard-like state, where half of the players consistently use AllD-like strategies and the other half constantly change their strategies. When r = 2, the system first enters an AllD-like state, from which it escapes, and finally evolves to a TFT-like state. When r is larger, the system locks into a situation with low average fitness similar to that of r = 1. The number of active players and the ability to form clusters jointly distinguish the evolutionary processes for different values of r. These findings provide further insight into the evolution of cooperation and collective behavior in biological and social systems.
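A stripped-down version of this updating rule is easy to sketch. The toy below (my own construction) replaces the iterated game with a one-shot Prisoner's Dilemma on a ring, keeping the essential mechanic: no imitation, only random strategy changes by whichever players earn the lowest payoff within radius r of themselves.

```python
import random

random.seed(7)

# One-shot Prisoner's Dilemma on a ring (the paper uses iterated-game
# strategies; payoffs follow the standard T > R > P > S ordering).
N, R_NEIGH, ROUNDS = 200, 2, 500
T, R, P, S = 5.0, 3.0, 1.0, 0.0
strat = [random.choice("CD") for _ in range(N)]

def payoff(i):
    """Total payoff of player i against its two ring neighbours."""
    total = 0.0
    for j in (i - 1, i + 1):
        a, b = strat[i], strat[j % N]
        total += {"CC": R, "CD": S, "DC": T, "DD": P}[a + b]
    return total

for _ in range(ROUNDS):
    pay = [payoff(i) for i in range(N)]
    # Localized extremal update: each player whose payoff equals the
    # minimum within radius r of itself picks a new strategy at random.
    worst = [i for i in range(N)
             if pay[i] <= min(pay[(i + d) % N]
                              for d in range(-R_NEIGH, R_NEIGH + 1))]
    for i in worst:
        strat[i] = random.choice("CD")

coop_fraction = strat.count("C") / N
print(f"cooperator fraction after {ROUNDS} rounds: {coop_fraction:.2f}")
```

The names and parameter values here are invented for illustration; reproducing the paper's checkerboard and TFT-like regimes would require its full iterated-strategy space.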
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design, often beginning at the conceptual stage. As the system design develops and more information about the components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding those limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each.
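The weighting-factor category can be illustrated with two textbook schemes. All numbers below are invented for the sketch: equal apportionment gives every series component the same share of the reliability target, while an ARINC-style allocation splits the allowed "failure budget" in proportion to predicted failure rates, so less reliable components absorb more of it.

```python
import math

# Invented example: a series system with a reliability target.
R_SYS = 0.95          # required system reliability over the mission
N = 4                 # number of series components

# 1. Equal apportionment: every component gets the same share,
#    r_i = R_sys ** (1/N), so the product of the r_i equals R_sys.
r_equal = R_SYS ** (1.0 / N)

# 2. ARINC-style weighting: allocate the allowed log-unreliability
#    in proportion to predicted component failure rates (assumed).
lam = [2.0, 1.0, 0.5, 0.5]            # predicted failure rates
w = [x / sum(lam) for x in lam]       # normalized weights
lam_sys = -math.log(R_SYS)            # system "failure budget"
r_arinc = [math.exp(-wi * lam_sys) for wi in w]

print(f"equal apportionment: {r_equal:.4f} per component")
print(f"ARINC allocation:    {[round(r, 4) for r in r_arinc]}")
# In both schemes the product of the allocated component
# reliabilities recovers the system target R_SYS.
```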
NASA Technical Reports Server (NTRS)
Mielonen, T.; Levy, R. C.; Aaltonen, V.; Komppula, M.; de Leeuw, G.; Huttunen, J.; Lihavainen, H.; Kolmonen, P.; Lehtinen, K. E. J.; Arola, A.
2011-01-01
Aerosol Optical Depth (AOD) and Angstrom exponent (AE) values derived with the MODIS retrieval algorithm over land (Collection 5) are compared with ground-based sun photometer measurements at eleven sites spanning the globe. Although, in general, total AOD compares well at these sites (R² values generally over 0.8), there are cases (from 2 to 67% of the measurements, depending on the site) where MODIS clearly retrieves the wrong spectral dependence and, hence, an unrealistic AE value. Some of these poor AE retrievals are due to the aerosol signal being too small (total AOD < 0.3), but in other cases the AOD should have been high enough to derive an accurate AE. However, in these cases, MODIS indicates AE values close to 0.6 and zero fine model weighting (FMW), i.e., the dust model provides the best fit to the MODIS-observed reflectance. Yet, according to evidence from the collocated sun photometer measurements and back-trajectory analyses, there should be no dust present. This indicates that the assumptions about the aerosol model and surface properties made by the MODIS algorithm may have been incorrect. Here we focus on problems related to the parameterization of land-surface optical properties in the algorithm, in particular the relationship between the surface reflectance at 660 and 2130 nm.
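For reference, the Angstrom exponent is computed from AOD at two wavelengths. The sketch below uses a 470/660 nm band pair with invented AOD values to show why fine-mode aerosol and coarse dust separate so clearly in AE.

```python
import math

# Angstrom exponent from AOD at two wavelengths:
# AE = -ln(AOD1 / AOD2) / ln(lambda1 / lambda2).
def angstrom_exponent(aod1, aod2, wl1_nm, wl2_nm):
    return -math.log(aod1 / aod2) / math.log(wl1_nm / wl2_nm)

# Fine-mode aerosol (e.g. smoke): strong spectral dependence, AE well
# above 1; coarse dust: weak dependence, AE near 0-0.6.  The AOD
# values below are illustrative, not measurements.
ae_fine = angstrom_exponent(0.40, 0.20, 470, 660)
ae_dust = angstrom_exponent(0.40, 0.37, 470, 660)
print(f"fine-mode AE: {ae_fine:.2f}")
print(f"dust-like AE: {ae_dust:.2f}")
```

A retrieval that reports AE near 0.6 where collocated sun photometers see values near 2 has, in effect, mistaken fine-mode aerosol for dust, which is the failure mode the abstract describes.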
Kemer, Gülşah; Çetinkaya Yıldız, Evrim; Bulgan, Gökçe
2016-11-02
In this study, we examined married individuals' relationship satisfaction in relation to their emotional dependency and dysfunctional relationship beliefs. Our participants consisted of 203 female and 181 male married individuals (384 in total) from urban cities in Turkey. Controlling for the effects of gender and length of marriage, we performed a hierarchical regression analysis. Results revealed that married Turkish individuals' relationship satisfaction was significantly explained by their emotional dependency (sr² = .300, p < .001), perceptions of interpersonal rejection (sr² = .075, p < .001), and unrealistic relationship expectations (sr² = .028, p < .05). When compared to perceptions of interpersonal rejection and unrealistic relationship expectations, emotional dependency had the largest role in explaining participants' satisfaction with their marriages. We discuss the results in light of the current literature as well as cultural relevance. We also provide implications for future research and mental health practices.
The relationship between interpersonal problems and occupational stress in physicians.
Falkum, Erik; Vaglum, Per
2005-01-01
This article examined the associations between occupational stress and interpersonal problems in physicians. A nationwide representative sample of Norwegian physicians received the 64-item version of the Inventory of Interpersonal Problems (IIP-64) (N=862, response rate=70%) and six instruments measuring occupational stress. Comparison of means, correlation and reliability statistics and multiple regression analyses were applied. The IIP-64 total score had a significant impact on job satisfaction, perceived unrealistic expectancies, communication with colleagues and nurses and on stress from interaction with patients. Being overly subassertive was related to low job satisfaction. Being overly expressive was linked to the experience of unrealistic expectancies from others and lack of positive feedback, whereas overly competitive physicians tended to have poorer relationships with both colleagues and nurses. Addressing interpersonal problems in medical school and postgraduate training may be a valuable measure to prevent job stress and promote quality of care.
Rejecting the equilibrium-point hypothesis.
Gottlieb, G L
1998-01-01
The lambda version of the equilibrium-point (EP) hypothesis as developed by Feldman and colleagues has been widely used and cited with insufficient critical understanding. This article offers a small antidote to that lack. First, the hypothesis implicitly, unrealistically assumes identical transformations of lambda into muscle tension for antagonist muscles. Without that assumption, its definitions of command variables R, C, and lambda are incompatible and an EP is not defined exclusively by R nor is it unaffected by C. Second, the model assumes unrealistic and unphysiological parameters for the damping properties of the muscles and reflexes. Finally, the theory lacks rules for two of its three command variables. A theory of movement should offer insight into why we make movements the way we do and why we activate muscles in particular patterns. The EP hypothesis offers no unique ideas that are helpful in addressing either of these questions.
The use of vignettes to empower effective responses to attempted sexual assault.
Allen, Kaylie T; Meadows, Elizabeth A
2017-01-01
Women assertively resisting sexual aggression have the best chances of avoiding completed rape. Especially with acquaintances, there are significant social and psychological barriers to resistance. Novel vignettes depicting acquaintance rape were designed to enhance self-efficacy, reduce unrealistic optimism, and empower assertive resistance. The data were collected using a Web-based survey of 449 female college students from multiple universities in August-October 2014. Between-subjects mixed-methods design. Participants were randomly assigned to read one of four vignettes and complete self-report measures of personal vulnerability, self-efficacy, and beliefs and intention about resistance. Although vignettes did not impact self-efficacy, one vignette enhanced perceived controllability and decreased unrealistic optimism. Women who read about completed acquaintance rape described intention to use physically assertive responses at double the rate of women reading about successful resistance. As low-cost, easily disseminated materials, vignettes about sexual assault may enhance campus prevention efforts.
NASA Astrophysics Data System (ADS)
Kwang, Jeffrey S.; Parker, Gary
2017-12-01
Landscape evolution models often utilize the stream power incision model to simulate river incision: E = K A^m S^n, where E is the vertical incision rate, K is the erodibility constant, A is the upstream drainage area, S is the channel gradient, and m and n are exponents. This simple but useful law has been employed with an imposed rock uplift rate to gain insight into steady-state landscapes. The most common choice of exponents satisfies m/n = 0.5. Yet all models have limitations. Here, we show that when hillslope diffusion (which operates only on small scales) is neglected, the choice m/n = 0.5 yields a curiously unrealistic result: the predicted landscape is invariant to horizontal stretching. That is, the steady-state landscape for a 10 km^2 horizontal domain can be stretched so that it is identical to the corresponding landscape for a 1000 km^2 domain.
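The stretching invariance can be illustrated with a one-dimensional sketch: at steady state E equals the uplift rate U, so the slope is S = (U/K)^(1/n) A^(-m/n); with m/n = 0.5, stretching x by a factor lambda scales A by lambda^2 and S by 1/lambda, leaving the integrated relief unchanged. A minimal numerical check, in which the Hack's-law-style area closure A = ka * x^h and all parameter values are illustrative assumptions, not values from the paper:

```python
import numpy as np

def steady_relief(L, n_pts=10_000, K=1e-6, U=1e-4, m=0.5, n=1.0, ka=1.0, h=2.0):
    """Total steady-state relief of a 1D channel of length L under the stream
    power law E = K * A**m * S**n with uplift U, so S = (U/K)**(1/n) * A**(-m/n).
    Drainage area uses an assumed Hack's-law closure A = ka * x**h."""
    x = np.linspace(L / n_pts, L, n_pts)            # start off the divide to avoid A = 0
    A = ka * x**h
    S = (U / K) ** (1.0 / n) * A ** (-m / n)
    # integrate slope from outlet to channel head (trapezoid rule)
    return float(np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(x)))

# With m/n = 0.5 and A ~ x**2, relief is unchanged when the domain is stretched:
r_small = steady_relief(L=10_000.0)     # 10 km channel
r_large = steady_relief(L=100_000.0)    # stretched 10x horizontally
```

Both calls return the same relief, which is the invariance the abstract describes; including a hillslope diffusion term would break it.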
Barlett, Christopher P; Rodeheffer, Christopher
2009-01-01
Previous research has shown that violent video game exposure can increase aggressive thoughts, aggressive feelings, and physiological arousal. This study compared the effects that playing a realistic violent, unrealistic violent, or nonviolent video game for 45 min has on these variables. For the purpose of this study, realism was defined as the probability of seeing an event in real life. Participants (N=74; 39 male, 35 female) played either a realistic violent, unrealistic violent, or nonviolent video game for 45 min. Aggressive thoughts and aggressive feelings were measured four times (every 15 min), whereas arousal was measured continuously. The results showed that, though playing any violent game stimulated aggressive thoughts, playing a more realistic violent game stimulated significantly more aggressive feelings and arousal over the course of play. Copyright 2009 Wiley-Liss, Inc.
Diedrichs, Phillippa C; Lee, Christina
2010-06-01
Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.
Large Eddy Simulation of a Film Cooling Technique with a Plenum
NASA Astrophysics Data System (ADS)
Dharmarathne, Suranga; Sridhar, Narendran; Araya, Guillermo; Castillo, Luciano; Parameswaran, Sivapathasund
2012-11-01
Factors that affect film cooling performance have been categorized into three main groups: (i) coolant and mainstream conditions, (ii) hole geometry and configuration, and (iii) airfoil geometry (Bogard et al. 2006). The present study focuses on the second group, namely, the modeling of the coolant hole and the plenum. Simulating the correct physics of the problem is required to achieve realistic numerical results; in this regard, modeling of the cooling jet hole and the plenum chamber is highly important (Iourokina et al. 2006). Substituting artificial boundary conditions for a correct plenum design would yield unrealistic results (Iourokina et al. 2006). This study attempts to model a film cooling technique with a plenum using Large Eddy Simulation. An incompressible coolant jet is ejected onto the surface of the plate at an angle of 30°, where it meets a compressible turbulent boundary layer that simulates the turbine inflow conditions. The dynamic multi-scale approach of Araya (2011) is introduced to prescribe turbulent inflow conditions. Simulations are carried out for two different blowing ratios, and film cooling effectiveness is calculated for both cases. Results obtained from the LES will be compared with experimental results.
NASA Astrophysics Data System (ADS)
Yoeli, Erez
At least since Veblen (1899), economists have proposed that people do good because they desire "social approval" and want to look good in front of others. Evidence from the laboratory supports this claim, but is difficult to generalize due to the unrealistic degree of scrutiny in a laboratory environment. I administer a field experiment to test the potency of social approval in a realistic and policy relevant setting. In the experiment I solicit 7893 customers of a large electric utility for a program that helps prevent blackouts. I vary whether their decision to participate in the program is revealed to their neighbors. Customers whose decision is revealed are 1.5% more likely to sign up than those whose decision is anonymous when their decision is framed as a contribution to a public good. Social approval increases participation more than offering subjects a $25 incentive, and its effect is large relative to the mean sign-up rate of 4.1%. I explore whether social approval contributes to crowding out and conditionally cooperative behavior, but the evidence is inconclusive.
Universality, Limits and Predictability of Gold-Medal Performances at the Olympic Games
Radicchi, Filippo
2012-01-01
Inspired by the Games held in ancient Greece, the modern Olympics represent the world's largest pageant of athletic skill and competitive spirit. Performances of athletes at the Olympic Games have mirrored, since 1896, human potential in sports, and thus provide an optimal source of information for studying the evolution of sport achievements and predicting the limits that athletes can reach. Unfortunately, the models introduced so far for the description of athlete performances at the Olympics are either sophisticated or unrealistic, and more importantly, do not provide a unified theory for sport performances. Here, we address this issue by showing that relative performance improvements of medal winners at the Olympics are normally distributed, implying that the evolution of performance values can be described, to a good approximation, as an exponential approach to an a priori unknown limiting performance value. This law holds for all specialties in athletics (including running, jumping, and throwing) and swimming. We present a self-consistent method, based on normality hypothesis testing, able to predict limiting performance values in all specialties. We further quantify the most likely years in which athletes will breach challenging performance walls in running, jumping, throwing, and swimming events, as well as the probability that new world records will be established at the next edition of the Olympic Games. PMID:22808137
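The exponential-approach law described above can be written as p(t) = L + (p0 - L) * exp(-r (t - t0)), where L is the a priori unknown limiting performance. A hedged sketch with invented parameter values, not the paper's fitted ones:

```python
import math

def performance(year, limit, scale, rate, t0=1896):
    """Exponential approach to a limiting value: p(t) = limit + scale * exp(-rate*(t - t0)).
    limit, scale, and rate here are illustrative assumptions, not fitted values."""
    return limit + scale * math.exp(-rate * (year - t0))

# A hypothetical sprint event: 10.80 at t0 = 1896, approaching an assumed 9.40 limit
history = [performance(yr, limit=9.40, scale=1.40, rate=0.02)
           for yr in (1896, 1968, 2012, 2100)]
```

The predicted values decrease monotonically toward, but never cross, the limiting value, which is the behavior the abstract's law implies.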
NASA Astrophysics Data System (ADS)
Dimopoulos, Thomas; Moulas, Alexandros
2016-01-01
Property tax in Greece has been levied since 1985 not on market values but on the "objective value" of properties as defined by the Ministry of Economics. This forms an inflexible system, with market-irrelevant and unrealistic values, inducing land-policy distortions and potential political cost at each periodic update. Furthermore, instead of adjusting taxation levels to the current economic reality, the real estate market is experiencing further burdening through approximately 40 different property taxes and levies, leading to further shrinking and depreciation. The authors believe that a fairer taxation system could significantly assist the property sector in Greece. Thus, through this paper, and by studying and analyzing best practices from other countries, they propose models that can be applied with the use of existing data in Greece. This work aims to identify the critical parameters that affect property values in Thessaloniki, to create a market value forecasting tool for a fairer taxation system, to highlight the importance of a GIS for this purpose, and to compare the results of MRA using SPSS with those of GWR in an ArcGIS environment. The Municipality of Thessaloniki was chosen for this study because of its well-organized portal with substantial geographical data, and because the authors managed to access some property valuation data from the Central Bank of Greece.
NASA Astrophysics Data System (ADS)
Dimopoulos, Thomas; Moulas, Alexandros
2017-01-01
Property tax in Greece has been levied since 1985 not on market values but on the "objective value" of properties as defined by the Ministry of Economics. This forms an inflexible system, with market-irrelevant and unrealistic values, inducing land-policy distortions and potential political cost at each periodic update. Furthermore, instead of adjusting taxation levels to the current economic reality, the real estate market is experiencing further burdening through approximately 40 different property taxes and levies, leading to further shrinking and depreciation. The authors believe that a fairer taxation system could significantly assist the property sector in Greece. Thus, through this paper, and by studying and analyzing best practices from other countries, they propose models that can be applied with the use of existing data in Greece. This work aims to identify the critical parameters that affect property values in Thessaloniki, to create a market value forecasting tool for a fairer taxation system, to highlight the importance of a GIS for this purpose, and to compare the results of MRA using SPSS with those of GWR in an ArcGIS environment. The Municipality of Thessaloniki was chosen for this study because of its well-organized portal with substantial geographical data, and because the authors managed to access some property valuation data from the Central Bank of Greece.
Expectations and patients’ experiences of obesity prior to bariatric surgery: a qualitative study
Homer, Catherine Verity; Thompson, Andrew R; Allmark, Peter; Goyder, Elizabeth
2016-01-01
Objectives This study aimed to understand the experiences and expectations of people seeking bariatric surgery in England and identify implications for behavioural and self-management interventions. Design A qualitative study using modified photovoice methods, triangulating photography with semistructured in-depth interviews analysed using framework techniques. Setting Areas served by two bariatric surgery multidisciplinary teams in the north of England. Participants 18 adults (14 women and 4 men) who had been accepted for bariatric surgery and were aged between 30 and 61 years. Participants were recruited through hospital-based tier 4 bariatric surgery multidisciplinary teams. Results The experiences of participants indicate the nature and extent of the burden of obesity. Problems included stigmatisation, shame, poor health, impaired physical function and reliance on medications. Participants expected surgery to result in major physical and psychological improvement. They described how this expectation was rooted in their experiences of stigma and shame. These feelings were reinforced by previous unsuccessful weight loss attempts. Participants expected extreme and sometimes unrealistic levels of sustained weight loss, as well as improvements to physical and mental health. The overall desire and expectation of bariatric surgery was of ‘normality’. Participants had received previous support from clinicians and in weight management services. However, they reported that their expectations of surgery had not been reviewed by services, and expectations appeared to be unrealistic. Likewise, their experience of stigmatisation had not been addressed. Conclusions The unrealistic expectations identified here may negatively affect postoperative outcomes. The findings indicate the importance of services addressing feelings of shame and stigmatisation, and modifying patients' expectations and goals for the postoperative period. PMID:26857104
Computation of Asteroid Proper Elements: Recent Advances
NASA Astrophysics Data System (ADS)
Knežević, Z.
2017-12-01
The recent advances in the computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in the computation and stability assessment of proper elements, these advances can still be considered important improvements, offering solutions to some practical problems encountered in the past. The problem of obtaining unrealistic values of the perihelion frequency for very-low-eccentricity orbits is solved by computing frequencies with the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of the Astraea asteroid family. A preliminary assessment of the stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and argues in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of a more comprehensive and reliable direct estimate of their individual and sample-average deviations from constancy.
Thermal maturity patterns of Cretaceous and Tertiary rocks, San Juan Basin, Colorado and New Mexico
Law, B.E.
1992-01-01
Horizontal and vertical thermal maturity patterns and time-temperature modeling indicate that the high levels of thermal maturity in the northern part of the basin are due either to: 1) convective heat transfer associated with a deeply buried heat source located directly below the northern part of the basin, or 2) the circulation of relatively hot fluids into the basin from a heat source north of the basin, located near the San Juan Mountains. Time-temperature and kinetic modeling of nonlinear Rm profiles indicates that present-day heat flow is insufficient to account for the measured levels of thermal maturity. Furthermore, in order to match nonlinear Rm profiles, it is necessary to assign artificially high thermal-conductivity values to some of the stratigraphic units. These unrealistically high thermal conductivities are interpreted as evidence of convective heat transfer. -from Author
Enthalpy of Formation of N 2 H 4 (Hydrazine) Revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feller, David; Bross, David H.; Ruscic, Branko
2017-08-02
In order to address the accuracy of the long-standing experimental enthalpy of formation of gas-phase hydrazine, fully confirmed in earlier versions of Active Thermochemical Tables (ATcT), the provenance of that value is re-examined in light of new high-end calculations of the Feller-Peterson-Dixon (FPD) variety. An overly optimistic determination of the vaporization enthalpy of hydrazine, which created an unrealistically strong connection between the gas-phase thermochemistry and the calorimetric results defining the thermochemistry of liquid hydrazine, was identified as the probable culprit. The new enthalpy of formation of gas-phase hydrazine, based on balancing all available knowledge, was determined to be 111.57 ± 0.47 kJ/mol at 0 K (97.41 kJ/mol at 298.15 K). Close agreement was found between the ATcT (even excluding the latest theoretical result) and FPD enthalpies.
Enthalpy of Formation of N2H4 (Hydrazine) Revisited.
Feller, David; Bross, David H; Ruscic, Branko
2017-08-17
In order to address the accuracy of the long-standing experimental enthalpy of formation of gas-phase hydrazine, fully confirmed in earlier versions of Active Thermochemical Tables (ATcT), the provenance of that value is re-examined in light of new high-end calculations of the Feller-Peterson-Dixon (FPD) variety. An overly optimistic determination of the vaporization enthalpy of hydrazine, which created an unrealistically strong connection between the gas phase thermochemistry and the calorimetric results defining the thermochemistry of liquid hydrazine, was identified as the probable culprit. The new enthalpy of formation of gas-phase hydrazine, based on balancing all available knowledge, was determined to be 111.57 ± 0.47 kJ/mol at 0 K (97.42 ± 0.47 kJ/mol at 298.15 K). Close agreement was found between the ATcT (even excluding the latest theoretical result) and the FPD enthalpy.
Children: Special Consumers but Not Special Treatment.
ERIC Educational Resources Information Center
Doran, Lee Anne; Dolan, Elizabeth M.
1989-01-01
Reviews several child-related consumer protection issues subject to industry short-sightedness and regulatory manipulation. Indicates that although it may be unrealistic to believe that child-related consumer protection should receive priority, children do merit special consideration as consumers. (JOW)
NASA Astrophysics Data System (ADS)
Dehghani, H.; Ataee-Pour, M.
2012-12-01
The block economic value (EV) is one of the most important parameters in mine evaluation. This parameter can affect significant factors such as the mining sequence, final pit limit and net present value. Nowadays, the aim of open pit mine planning is to define optimum pit limits and an optimum life-of-mine production schedule that maximizes the pit value under technical and operational constraints. Therefore, it is necessary to calculate the block economic value correctly at the first stage of the mine planning process. Unrealistic block economic value estimation may cause mining project managers to make wrong decisions and thus impose irreparable losses on the project. Effective parameters such as metal price, operating cost and grade are always assumed certain in the conventional methods of EV calculation, although these parameters are obviously uncertain in nature. Therefore, the results of the conventional methods are usually far from reality. To solve this problem, a new technique based on a binomial tree developed in this research is used. This method can calculate the EV and project PV under economic uncertainty. In this paper, the EV and project PV were initially determined using the Whittle formula based on certain economic parameters, and then using a multivariate binomial tree based on economic uncertainties such as metal price and cost uncertainties; finally, the results were compared. It is concluded that incorporating the metal price and cost uncertainties makes the calculated block economic value and net present value more realistic than under the assumption of certainty.
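The idea of valuing a block on a price lattice rather than at a single certain price can be sketched with a one-factor recombining binomial tree; this is a simplified stand-in for the paper's multivariate tree, and both the reduced Whittle-style EV formula and all parameter values below are illustrative assumptions:

```python
from math import comb

def block_ev(price, grade, tonnage, recovery, mining_cost):
    """Simplified Whittle-style block economic value: metal revenue minus
    mining cost. Real formulas also include processing and selling costs."""
    return grade * tonnage * recovery * price - mining_cost * tonnage

def expected_block_ev(p0, u, d, q, steps, r, **block):
    """Expected discounted block EV over a recombining binomial price lattice:
    price moves up by factor u (probability q) or down by factor d each step."""
    ev = 0.0
    for j in range(steps + 1):
        price = p0 * u**j * d**(steps - j)                 # terminal metal price
        prob = comb(steps, j) * q**j * (1 - q)**(steps - j)
        ev += prob * block_ev(price, **block)
    return ev / (1 + r)**steps                             # discount to present

params = dict(grade=0.01, tonnage=1000.0, recovery=0.9, mining_cost=5.0)
ev_uncertain = expected_block_ev(p0=1000.0, u=1.1, d=0.9, q=0.5,
                                 steps=3, r=0.05, **params)
```

Because this reduced EV is linear in price, the expected EV equals the EV evaluated at the expected terminal price; the tree's value shows up once nonlinearities such as a cutoff grade or an abandonment option are added, which is where uncertainty genuinely changes the answer.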
Mentoring Emotionally Sensitive Individuals.
ERIC Educational Resources Information Center
Shaughnessy, Michael F.; Self, Elizabeth
Mentoring individuals who are gifted, talented, and creative, but somewhat emotionally sensitive is a challenging and provocative arena. Several reasons individuals experience heightened sensitivity include: lack of nurturing, abuse, alcoholism in the family, low self-esteem, unrealistic parental expectations, and parental pressure to achieve.…
Is it possible to identify a trend in problem/failure data
NASA Technical Reports Server (NTRS)
Church, Curtis K.
1990-01-01
One of the major obstacles in identifying and interpreting a trend is the small number of data points. Future trending reports will begin with 1983 data. As the problem/failure data are aggregated by year, there are just seven observations (1983 to 1989) for the 1990 reports. Any statistical inference from such a small amount of data will have a large degree of uncertainty; consequently, a regression-based approach to identifying a trend is of limited use. Though trend determination by failure mode may be unrealistic, the data may be explored for consistency or stability, and the failure rate investigated. Various alternative data analysis procedures are briefly discussed, and techniques that could be used to explore problem/failure data by failure mode are addressed. The data used are taken from Section One, Space Shuttle Main Engine, of the Calspan Quarterly Report dated April 2, 1990.
Comments on 'Extinct radioactivities: Trapped residuals of presolar grains'
NASA Technical Reports Server (NTRS)
Trivedi, B. M. P.
1977-01-01
It has recently been suggested that extinct I-129 and Pu-244 were trapped in primitive-solar-nebula ('presolar') grains and decayed into radiogenic Xe-129 and fission Xe before the grains were incorporated into meteorite bodies. This idea is reconsidered in light of the thermal and metamorphic history of meteorites. The criteria that parent and daughter species should never separate and that minerals or grains containing the anomalous xenon should not be subjected to temperatures exceeding 500 C are applied to iron meteorites, achondrites, and chondrites to determine whether presolar grains could be the carriers of rare-gas anomalies to meteorites. The results strongly indicate that the xenon anomaly could not have originated in presolar grains. Other difficulties with the presolar-grain model are discussed, including insufficiently small grain sizes, large variations in Xe-129/I-127 ratios in various meteorites, and apparently unrealistic meteorite formation times and locations.
Enabling Genomic-Phenomic Association Discovery without Sacrificing Anonymity
Heatherly, Raymond D.; Loukides, Grigorios; Denny, Joshua C.; Haines, Jonathan L.; Roden, Dan M.; Malin, Bradley A.
2013-01-01
Health information technologies facilitate the collection of massive quantities of patient-level data. A growing body of research demonstrates that such information can support novel, large-scale biomedical investigations at a fraction of the cost of traditional prospective studies. While healthcare organizations are being encouraged to share these data in a de-identified form, there is hesitation over concerns that it will allow corresponding patients to be re-identified. Currently proposed technologies to anonymize clinical data may make unrealistic assumptions with respect to the capabilities of a recipient to ascertain a patient's identity. We show that more pragmatic assumptions enable the design of anonymization algorithms that permit the dissemination of detailed clinical profiles with provable guarantees of protection. We demonstrate this strategy with a dataset of over one million medical records and show that 192 genotype-phenotype associations can be discovered with fidelity equivalent to non-anonymized clinical data. PMID:23405076
Scene recognition based on integrating active learning with dictionary learning
NASA Astrophysics Data System (ADS)
Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen
2018-04-01
Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large number of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to obtain satisfactory recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as its classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness in order to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
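The abstract does not specify how IALDL combines the two criteria; a generic sketch of uncertainty-plus-representativeness sampling (entropy of predicted class probabilities for uncertainty, mean cosine similarity to the pool for representativeness, combined multiplicatively; all of these choices are assumptions for illustration):

```python
import numpy as np

def select_samples(probs, X_unlabeled, k):
    """Rank unlabeled samples by uncertainty * representativeness and return
    the indices of the top k. probs: (n, classes) predicted probabilities;
    X_unlabeled: (n, d) feature vectors."""
    eps = 1e-12
    # uncertainty: entropy of the class-probability vector
    uncertainty = -np.sum(probs * np.log(probs + eps), axis=1)
    # representativeness: mean cosine similarity to the rest of the pool
    norms = np.linalg.norm(X_unlabeled, axis=1, keepdims=True)
    cosine = (X_unlabeled @ X_unlabeled.T) / (norms @ norms.T + eps)
    representativeness = cosine.mean(axis=1)
    score = uncertainty * representativeness
    return np.argsort(score)[::-1][:k]          # highest-scoring indices first
```

A sample the classifier is unsure about, and that is typical of the pool, is queried first; an outlier with high entropy but low similarity scores lower, which is the intuition behind combining the two terms.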
Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors
NASA Technical Reports Server (NTRS)
Palumbo, D. L.; Padula, S. L.
1997-01-01
Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of from 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as a basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization. Unconstrained optimizations selected actuators which require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.
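Tabu search over fixed-size actuator subsets can be sketched generically as follows; the cost function here is a placeholder (in the paper, the cost is a noise-reduction prediction derived from the measured transfer functions, and a force penalty would be added to the cost to implement the constraint discussed above):

```python
import random

def tabu_search(n, k, cost, iters=200, tenure=8, seed=0):
    """Tabu search for a size-k subset of n actuators minimizing cost(subset).
    Generic sketch: the neighborhood is all single swaps; a removed actuator
    is tabu to re-enter for `tenure` iterations, with an aspiration override
    when a tabu move would beat the best subset found so far."""
    rng = random.Random(seed)
    current = set(rng.sample(range(n), k))
    best, best_cost = set(current), cost(current)
    tabu = {}  # actuator -> iteration index until which re-adding it is tabu
    for it in range(iters):
        candidates = []
        for out in current:
            for into in set(range(n)) - current:
                cand = (current - {out}) | {into}
                c = cost(cand)
                if tabu.get(into, -1) >= it and c >= best_cost:
                    continue                      # tabu move, no aspiration
                candidates.append((c, out, cand))
        if not candidates:
            continue
        c, out, cand = min(candidates, key=lambda t: t[0])  # best admissible move
        current = cand
        tabu[out] = it + tenure                   # forbid re-adding `out` for a while
        if c < best_cost:
            best, best_cost = set(cand), c
    return sorted(best), best_cost
```

On a toy separable cost such as `lambda s: sum(s)` the search converges to the globally optimal subset; the tabu list is what lets it escape local minima on the non-separable costs that arise from coupled transfer functions.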
Gene transfer into the kidney: current status and limitations.
Moullier, P; Salvetti, A; Champion-Arnaud, P; Ronco, P M
1997-01-01
Gene therapy is obviously a controversial issue, and a wave of suspicion has dampened the initial enthusiasm raised by this new therapeutic approach. It has now become fashionable to downplay the potential for gene therapy in most fields, including kidney-related diseases. In our opinion, this is an unfair and unrealistic view of the future. In fact, gene therapy of well-selected kidney diseases will certainly become feasible, but a large database on vectors and transfer methods, both in the normal kidney and in disease models, first has to be collected. Any significant progress in the biology of the vectors, in the cellular interactions of the newly introduced DNA, and in the regulation and persistence of the transgene should be rapidly translated to the kidney in relevant experimental models. Herein, we present the use and current limitations of gene transfer to the kidney and the potential therapeutic perspectives.
Large-scale transportation network congestion evolution prediction using deep learning theory.
Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai
2015-01-01
Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners seeking to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data have become more and more ubiquitous. This has triggered a series of data-driven research efforts investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques for tackling massive high-dimensional data. This study attempts to extend deep learning theory to large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation.
Large-Scale Transportation Network Congestion Evolution Prediction Using Deep Learning Theory
Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai
2015-01-01
Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners seeking to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data have become more and more ubiquitous. This has triggered a series of data-driven research efforts investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques for tackling massive high-dimensional data. This study attempts to extend deep learning theory to large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation. PMID:25780910
Emergency Notification Strategy
ERIC Educational Resources Information Center
Katsouros, Mark
2014-01-01
In higher education, the IT department is often the service provider for the institution's emergency notification system (ENS). For many institutions, the complexity of providing emergency notification to students, faculty, and staff makes using a local, on-premise solution unrealistic. But finding the right commercially hosted technical solution…
Teaching the Modern Presidency.
ERIC Educational Resources Information Center
Rimmerman, Craig A.
1989-01-01
Stresses the need to challenge students' unrealistic ideas and heightened expectations of U.S. presidential performance. Maintains that college courses about the presidency must stress the role of the president within the context of democratic accountability, Madisonian principles, and the separation of powers doctrine. Discusses presidential…
The Futile Search for a Theory of Learning Disabilities.
ERIC Educational Resources Information Center
Blachman, Benita A.
1988-01-01
In response to a previous article, the paper suggests it is unrealistic to expect one theory or even multiple theories within one paradigm to explain learning disabilities. The emphasis on reaching a consensus regarding theory or paradigm is seen as unproductive. (DB)
Plagiarism and Responsibility.
ERIC Educational Resources Information Center
Martin, Brian
1984-01-01
There are several kinds of plagiarism, and its significance varies with its circumstances. College administrations seem to avoid responsibility for examining allegations of academic plagiarism, and few procedures exist for addressing them. Until standard and open procedures are established and accepted, rigid and unrealistic attitudes will prevail…
Future Asian Education: The Challenge of Numbers.
ERIC Educational Resources Information Center
Adiseshiah, Malcolm S.
1980-01-01
Probes educational problems and needs in the developing nations of Asia. The major problems are overpopulation, poor educational facilities in rural areas, insufficient financial resources, inappropriate educational models and objectives, and unrealistically high expectations. More recent educational models stress equalizing educational access and…
ERIC Educational Resources Information Center
Rank, Hugh
Too many people become disillusioned by political language because they start with illusions about it--erroneous ideas and unrealistic expectations. It is better to start with realistic attitudes, practical information, and not-so-great expectations. When dealing with political language, expect (1) conflict, arguments and disagreements, and…
2006-03-08
Holodomor – a mass famine artificially created by imposing unrealistically high quotas on grain production that was turned over to the central Soviet...15 Ibid., 9. 16 The Ukrainian Congress Committee of America, Inc., The Voice of the Ukrainian American Community, “Holodomor - the Famine Genocide in
Preventing Marketing Efforts That Bomb.
ERIC Educational Resources Information Center
Sevier, Robert A.
2000-01-01
In a marketplace overwhelmed with messages, too many institutions waste money on ineffective marketing. Highlights five common marketing errors: limited definition of marketing; unwillingness to address strategic issues; no supporting data; fuzzy goals and directions; and unrealistic expectations, time lines, and budgets. Though trustees are not…
Ooeda, Hiroki; Terashima, Ichiro; Taneda, Haruhiko
2017-02-01
Two hypotheses have been proposed to explain the mechanism preventing the refilling vessel water from being drained to the neighboring functional vessels under negative pressure. The pit membrane osmosis hypothesis proposes that the xylem parenchyma cells release polysaccharides that are impermeable to the intervessel pit membranes into the refilling vessel; this osmotically counteracts the negative pressure, thereby allowing the vessel to refill. The pit valve hypothesis proposes that gas trapped within intervessel bordered pits isolates the refilling vessel water from the surrounding functional vessels. Here, using the single-vessel method, we assessed these hypotheses in shoots of mulberry (Morus australis Poir.). First, we confirmed the occurrence of xylem refilling under negative pressure in the potted mulberry saplings. To examine the pit membrane osmosis hypothesis, we estimated the semi-permeability of pit membranes for molecules of various sizes and found that the pit membranes were not semi-permeable to polyethylene glycol of molecular mass <20,000. For the pit valve hypothesis, we formed pit valves in the intervessel pits in short stem segments and measured the maximum liquid pressure up to which gases in bordered pits were retained. The threshold pressure ranged from 0.025 to 0.10 MPa. These values matched the theoretical value calculated from the geometry of the pit chamber (0.0692-0.101 MPa). Our results suggest that gas in the pits is retained by surface tension, even under positive pressure substantial enough to dissolve gases in the refilling vessel, whereas the molecule size required for the pit membrane osmosis mechanism in mulberry would be unrealistically large.
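The pit valve threshold can be cross-checked against the Young-Laplace relation: a gas-filled pit resists liquid intrusion up to roughly ΔP = 2γ/r for an effective meniscus radius r. The sketch below is illustrative only; the aperture radii are assumed values chosen to bracket the 0.069-0.101 MPa range quoted above, not geometry taken from the study.

```python
# Young-Laplace estimate of the pressure a gas-filled bordered pit can
# withstand before its meniscus collapses: dP = 2*gamma/r.  The radii
# below are illustrative assumptions, not measurements from the study.
GAMMA = 0.0728  # surface tension of water at ~20 C, N/m

def laplace_threshold_mpa(radius_um):
    """Collapse pressure (MPa) for a meniscus of the given radius (um)."""
    return 2 * GAMMA / (radius_um * 1e-6) / 1e6  # Pa -> MPa

# Effective radii around 1.5-2 um give thresholds near the reported
# 0.069-0.10 MPa range for mulberry pit chambers.
for r in (1.5, 2.0):
    print(f"r = {r} um -> {laplace_threshold_mpa(r):.3f} MPa")
```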
Suwanabol, Pasithorn A; Reichstein, Ari C; Suzer-Gurtekin, Z Tuba; Forman, Jane; Silveira, Maria J; Mody, Lona; Morris, Arden M
2018-06-01
Nearly 20% of colorectal cancer (CRC) patients present with potentially incurable (Stage IV) disease, yet their physicians do not integrate cancer treatment with palliative care. Compared with patients treated by primary providers, surgical patients with terminal diseases are significantly less likely to receive palliative or end-of-life care. To describe surgeon perspectives on palliative and end-of-life care for patients with Stage IV CRCs. This is a convergent mixed methods study using a validated survey instrument from the Critical Care Peer Workgroup of the Robert Wood Johnson Foundation's Promoting Excellence in End-of-Life Care Project with additional qualitative questions. Participants were all current, nonretired members of the American Society of Colon and Rectal Surgeons. Surgeon-perceived barriers to palliative and end-of-life care for patients with Stage IV CRCs were identified. Among 131 Internet survey respondents (response rate 16.5%), 76.1% reported no formal education in palliative care, and specifically noted inadequate training in techniques to forgo life-sustaining measures (37.9%) and communication (42.7%). Over half (61.8%) of surgeons cited unrealistic expectations among patients and families as a barrier to care, which also limited discussion of palliation. At the system level, absence of documentation, appropriate processes, and culture hindered the initiation of palliative care. Thematic analysis of open-ended questions confirmed and extended these findings through the following major barriers to palliative and end-of-life care: (1) surgeon knowledge and training; (2) communication challenges; (3) difficulty with prognostication; (4) patient and family factors encompassing unrealistic expectations and discordant preferences; and (5) systemic issues including culture and lack of documentation and appropriate resources. Generalizability is limited by the small sample size inherent to Internet surveys, which may contribute to selection bias. 
Surgeons valued palliative and end-of-life care but reported multilevel barriers to its provision. These data will inform strategies to reduce these perceived barriers.
Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns
Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain
2015-01-01
Models of emergent phenomena are designed to explain global-scale phenomena in terms of local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
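A minimal sketch of the Novelty Search idea described above: candidate parameterizations are scored not by fitness but by how far their produced pattern lies from patterns already archived, so the search spreads across the model's output space. Everything here (the toy pattern function, the distance threshold) is an invented illustration, not the authors' Pattern Space Exploration implementation.

```python
import random

def pattern(params):
    # Toy stand-in for running the model: parameters -> observed pattern.
    x, y = params
    return (x * y, x - y)

def novelty(p, archive, k=3):
    # Mean Euclidean distance to the k nearest patterns already archived.
    dists = sorted(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in archive)
    return sum(dists[:k]) / min(k, len(dists))

random.seed(0)
archive = [pattern((random.random(), random.random())) for _ in range(5)]
for _ in range(200):
    candidate = (random.random() * 2, random.random() * 2)
    p = pattern(candidate)
    if novelty(p, archive) > 0.3:      # archive only sufficiently novel patterns
        archive.append(p)
print(len(archive))   # the archive accumulates the diverse patterns found
```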
Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models
Grün, Sonja; Helias, Moritz
2017-01-01
Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, that experimentally would be equivalent to 90% of the neuron population active within time-windows of few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition. PMID:28968396
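The bimodality can be sketched at the mean-field level: for a homogeneous pairwise model, the population-averaged activity m must satisfy the self-consistency m = σ(h + JNm), and for suitable uniform couplings this equation has two stable solutions, one near silence and one near saturation; Glauber dynamics would then hop between the corresponding modes. The parameters below are illustrative choices that produce coexisting solutions, not values fitted to the recordings.

```python
import math

# Mean-field sketch of the bimodality: in a homogeneous pairwise model the
# population-averaged activity m satisfies m = sigmoid(h + J*N*m).  The
# parameters are illustrative (not fitted to data), chosen so that two
# stable solutions coexist.
N, J, h = 100, 0.1, -5.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fixed_point(m0, iters=200):
    # Fixed-point iteration of the self-consistency from a starting activity.
    m = m0
    for _ in range(iters):
        m = sigmoid(h + J * N * m)
    return m

low, high = fixed_point(0.0), fixed_point(1.0)
print(low, high)   # one solution near silence, one near saturation
```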
Unrealistic optimism and 'nosognosia': illness recognition in the healthy brain.
McKay, Ryan; Buchmann, Andreas; Germann, Nicole; Yu, Shancong; Brugger, Peter
2014-12-01
At the centenary of research on anosognosia, the time seems ripe to supplement work in anosognosic patients with empirical studies on nosognosia in healthy participants. To this end, we adopted a signal detection framework to investigate the lateralized recognition of illness words--an operational measure of nosognosia--in healthy participants. As positively biased reports about one's current health status (anosognosia) and future health status (unrealistic optimism) have both been associated with deficient right hemispheric functioning, and conversely with undisturbed left hemispheric functioning, we hypothesised that more optimistic participants would adopt a more conservative response criterion, and/or display less sensitivity, when identifying illnesses in our nosognosia task; especially harmful illnesses presented to the left hemisphere via the right visual field. Thirty-two healthy right-handed men estimated their own relative risk of contracting a series of illnesses in the future, and then completed a novel computer task assessing their recognition of illness names presented to the left or right visual field. To check that effects were specific to the recognition of illness (rather than reflecting recognition of lexical items per se), we also administered a standard lateralized lexical decision task. Highly optimistic participants tended to be more conservative in detecting illnesses, especially harmful illnesses presented to the right visual field. Contrary to expectation, they were also more sensitive to illness names in this half-field. We suggest that, in evolutionary terms, unrealistic optimism may be an adaptive trait that combines a high perceptual sensitivity to threat with a high threshold for acknowledging its presence. The signal detection approach to nosognosia developed here may open up new avenues for the understanding of anosognosia in neurological patients.
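The sensitivity and response-criterion measures used in such a signal detection framework are the standard equal-variance indices d' and c, computable from hit and false-alarm rates; a positive c corresponds to the conservative bias described above. The rates below are invented for illustration, not the study's data.

```python
from statistics import NormalDist

# Standard equal-variance signal detection indices from hit and
# false-alarm rates.  Rates here are illustrative, not the study's data.
z = NormalDist().inv_cdf   # probit transform

def dprime(hit_rate, fa_rate):
    """Sensitivity: separation of signal and noise distributions."""
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response criterion c: positive = conservative (fewer 'yes' responses)."""
    return -0.5 * (z(hit_rate) + z(fa_rate))

print(dprime(0.80, 0.20), criterion(0.60, 0.10))
```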
NASA Astrophysics Data System (ADS)
Peng, L.; Sheffield, J.; Verbist, K. M. J.
2016-12-01
Hydrological predictions at regional-to-global scales are often hampered by the lack of meteorological forcing data. The use of large-scale gridded meteorological data can overcome this limitation, but these data are subject to regional biases and unrealistic values at the local scale. This is especially challenging in regions such as Chile, where climate exhibits high spatial heterogeneity as a result of a long latitudinal span and dramatic elevation changes. However, regional station-based observational datasets are not fully exploited and have the potential to constrain biases and spatial patterns. This study aims at adjusting precipitation and temperature estimates from the Princeton University global meteorological forcing (PGF) gridded dataset to improve hydrological simulations over Chile, by assimilating 982 gauges from the Dirección General de Aguas (DGA). To merge station data with the gridded dataset, we use a state-space estimation method to produce optimal gridded estimates, considering both the error of the station measurements and that of the gridded PGF product. The PGF daily precipitation and maximum and minimum temperature at 0.25° spatial resolution are adjusted for the period 1979-2010. Precipitation and temperature gauges with long and continuous records (>70% temporal coverage) are selected, while the remaining stations are used for validation. Leave-one-out cross validation verifies the robustness of this data assimilation approach. The merged dataset is then used to force the Variable Infiltration Capacity (VIC) hydrological model over Chile at a daily time step, and the simulations are compared to streamflow observations. Our initial results show that the station-merged PGF precipitation effectively captures drizzle and the spatial pattern of storms. Overall, the merged dataset shows significant improvements over the original PGF, with reduced biases and stronger inter-annual variability.
The invariant spatial pattern of errors between the station data and the gridded product opens up the possibility of merging real-time satellite and intermittent gauge observations to produce more accurate real-time hydrological predictions.
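In scalar form, the state-space merge described above reduces to inverse-variance (Kalman-style) weighting of the gridded background against the station observation. The rainfall values and error variances below are invented for illustration; the actual PGF/DGA merge operates on full gridded fields.

```python
# Scalar (single-cell) sketch of the state-space merge: inverse-variance
# weighting of a gridded background against a station observation.
# Values and error variances are invented for illustration.
def merge(background, station, var_bg, var_st):
    """Minimum-variance combination of a gridded and a station value."""
    gain = var_bg / (var_bg + var_st)         # Kalman-style gain
    estimate = background + gain * (station - background)
    variance = (1.0 - gain) * var_bg          # reduced error after merging
    return estimate, variance

# A grid cell reporting 12 mm/day vs a trusted nearby gauge reporting
# 20 mm/day: the estimate moves most of the way toward the gauge.
est, var = merge(12.0, 20.0, var_bg=9.0, var_st=1.0)
print(est, var)
```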
Petrologic Constraints on Magma Plumbing Systems Beneath Hawaiian Volcanoes
NASA Astrophysics Data System (ADS)
Li, Y.; Peterman, K. J.; Scott, J. L.; Barton, M.
2016-12-01
We have calculated the pressures of partial crystallization of basaltic magmas from Hawaii using a petrological method. A total of 1576 major oxide analyses of glasses from four volcanoes (Kilauea and the Puna Ridge, Loihi, Mauna Loa, and Mauna Kea, on the Big Island) were compiled and used as input data. Glasses represent quenched liquid compositions and are ideal for calculation of pressures of partial crystallization. The results were filtered to exclude samples that yielded unrealistically high errors in the calculated pressure or negative pressures, and to exclude samples with non-basaltic compositions. Calculated pressures were converted to depths of partial crystallization. The majority (68.2%) of pressures for the shield-stage subaerial volcanoes Kilauea, Mauna Loa, and Mauna Kea fall in the range 0-140 MPa, corresponding to depths of 0-5 km. Glasses from the Puna Ridge yield pressures ranging from 18 to 126 MPa, virtually identical to pressures determined from glasses from Kilauea (0 to 129 MPa). These results are consistent with the presence of magma reservoirs at depths of 0-5 km beneath the large shield volcanoes. The inferred depth of the magma reservoir beneath the summit of Kilauea (average = 1.8 km, maximum = 5 km) agrees extremely well with depths (∼2-6 km) estimated from seismic studies. The results for Kilauea and Mauna Kea indicate that significant partial crystallization also occurs beneath the summit reservoirs at depths of up to 11 km. These results are consistent with seismic evidence for a magma reservoir at 8-11 km beneath Kilauea, at the base of the volcanic pile. The results for Loihi indicate crystallization at higher average pressures (100-400 MPa) and depths (3-14 km) than the large shield volcanoes, suggesting that the plumbing system is not yet fully developed and that Hawaiian volcanic plumbing systems evolve over time.
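The pressure-to-depth conversion can be sketched with a lithostatic assumption, z = P / (ρg). The density below is an assumed mean edifice density chosen so that 140 MPa maps to roughly 5 km, consistent with the correspondence quoted above; it is not a value taken from the study.

```python
# Converting crystallization pressure to depth assuming lithostatic
# pressure P = rho * g * z.  The density is an assumed mean value for the
# volcanic edifice, chosen to reproduce the 140 MPa ~ 5 km correspondence.
RHO = 2800.0   # kg/m^3, assumed mean crustal/edifice density
G = 9.8        # m/s^2

def depth_km(pressure_mpa):
    """Lithostatic depth (km) corresponding to a pressure in MPa."""
    return pressure_mpa * 1e6 / (RHO * G) / 1000.0

for p in (140, 400):
    print(f"{p} MPa -> {depth_km(p):.1f} km")
```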
NASA Astrophysics Data System (ADS)
Marjani, A.; Allahdadi, M.
2016-02-01
Sitka, AK is included in Region X of FEMA Flood Hazard Mapping. The scoped shoreline lies east of Sitka Sound, which connects Sitka to Pacific waters through a semi-narrow continental shelf. Wave hindcasting is a fundamental component of the Coastal Flood Risk Study Process. The SWAN model on an unstructured mesh was used to determine wave characteristics along the Sitka shoreline. This area is substantially affected by a combination of offshore waves (swells) and waves generated by severe local winds. The bathymetry inside Sitka Sound and in the nearshore areas along the Sitka coastline is very complex, with many abrupt deepenings resulting from geological features or large tidal currents. The present study provides a brief review of the steps and challenges of reliable wave modeling over this area. The requirement to run the model in non-stationary mode, combined with the complexities above, triggered instabilities associated with intense refraction that produced unrealistically large values of the peak period and wave height. Refining the computational mesh over areas with large depth gradients, increasing the spectral grid resolution, and decreasing time steps did not satisfactorily resolve this issue. Choosing appropriate CFL limiters on the spectral propagation velocities in the SWAN setup (not part of the default settings) properly treated the instability (see attached Figure). The model's offshore boundary was prescribed using wave data from WIS buoys, while wind forcing was derived from a combination of Sitka airport and offshore buoy wind data. Model performance in transforming swells from the open boundary was evaluated against data from two additional offshore WIS buoys. A 1D model transferred the extracted wave data from SWAN to the surf zone along each selected transect for each storm event. The final product was runup with different recurrence periods along the shoreline.
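The instability fix amounts to capping how fast wave energy can turn through direction space relative to the spectral grid, i.e. a CFL condition on the refraction velocity c_θ. The sketch below is a generic illustration of such a limiter, not SWAN's internal implementation, and the bin width, time step, and velocities are assumed numbers.

```python
import math

# Generic sketch of a CFL limiter on the directional propagation velocity
# c_theta (refraction-induced turning): cap it so that energy cannot cross
# more than a fraction of one directional bin per time step.  Not SWAN's
# internal implementation; all numbers are assumed.
def limited_ctheta(c_theta, dtheta, dt, cfl_max=0.9):
    limit = cfl_max * dtheta / dt          # fastest turning allowed per step
    return max(-limit, min(limit, c_theta))

dtheta = math.radians(10)   # directional bin width, rad
dt = 60.0                   # non-stationary time step, s
print(limited_ctheta(0.01, dtheta, dt))   # strong refraction gets capped
print(limited_ctheta(0.001, dtheta, dt))  # mild refraction passes unchanged
```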
Smith, D.E.; Aagaard, Brad T.; Heaton, T.H.
2005-01-01
We investigate whether a shallow-dipping thrust fault is prone to wave-slip interactions via surface-reflected waves affecting the dynamic slip. If so, can these interactions create faults that are opaque to radiated energy? Furthermore, in this case of a shallow-dipping thrust fault, can incorrectly assuming a transparent fault while using dislocation theory lead to underestimates of seismic moment? Slip time histories are generated in three-dimensional dynamic rupture simulations while allowing for varying degrees of wave-slip interaction controlled by fault-friction models. Based on the slip time histories, P and SH seismograms are calculated for stations at teleseismic distances. The overburdening pressure caused by gravity eliminates mode I opening except at the tip of the fault near the surface; hence, mode I opening has no effect on the teleseismic signal. Normalizing by a Haskell-like traditional kinematic rupture, we find teleseismic peak-to-peak displacement amplitudes are approximately 1.0 for both P and SH waves, except for the unrealistic case of zero sliding friction. Zero sliding friction has peak-to-peak amplitudes of 1.6 for P and 2.0 for SH waves; the fault slip oscillates about its equilibrium value, resulting in a large nonzero (0.08 Hz) spectral peak not seen in other ruptures. These results indicate wave-slip interactions associated with surface-reflected phases in real earthquakes should have little to no effect on teleseismic motions. Thus, Haskell-like kinematic dislocation theory (transparent fault conditions) can be safely used to simulate teleseismic waveforms in the Earth.
Free-tropospheric BrO investigations based on GOME
NASA Astrophysics Data System (ADS)
Post, P.; van Roozendael, M.; Backman, L.; Damski, J.; Thölix, L.; Fayt, C.; Taalas, P.
2003-04-01
Bromine compounds contribute significantly to stratospheric ozone depletion. However, measurements of most bromine compounds are sparse or non-existent, and experimental studies essentially rely on BrO observations. The differences between balloon and ground-based measurements of stratospheric BrO columns and satellite total column measurements are too large to be explained by measurement uncertainties. Therefore, it has been assumed that there is a concentration of BrO in the free troposphere of about 1-3 ppt. In a previous work, we calculated the tropospheric BrO abundance as the difference between total BrO and stratospheric BrO columns. The total vertical column densities of BrO are extracted from GOME measurements using IASB-BIRA algorithms. The stratospheric amount has been calculated using chemical transport models (CTMs). Results from SLIMCAT and FinROSE simulations are used for this purpose. SLIMCAT is a widely used 3D CTM that has been tested against balloon measurements. FinROSE is a 3D CTM developed at FMI. We have tried several different tropospheric BrO profiles. Our results show that a profile with high BrO concentrations in the boundary layer usually gives unrealistically high tropospheric column values over areas of low albedo (such as oceans). This suggests that the tropospheric BrO is predominantly distributed in the free troposphere. In this work, attempts are made to identify the signature of a free-tropospheric BrO content by comparing cloudy and non-cloudy scenes. The possible impact of orography on measured BrO columns is also investigated.
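The residual method described above is arithmetically simple: subtract the modeled stratospheric column from the GOME total column, then convert the remainder to a mean mixing ratio using the tropospheric air column. The column values below are illustrative of typical magnitudes, not results from this work.

```python
# Residual tropospheric BrO: subtract the CTM stratospheric column from the
# satellite total column, then express the remainder as a mean tropospheric
# mixing ratio.  All column values are illustrative magnitudes (assumed),
# not GOME/CTM output from this study.
TOTAL_VCD = 5.0e13   # total BrO column, molecules/cm^2
STRAT_VCD = 3.0e13   # modeled stratospheric BrO column, molecules/cm^2
AIR_TROP = 1.7e25    # tropospheric air column, molecules/cm^2 (assumed)

trop_vcd = TOTAL_VCD - STRAT_VCD
mixing_ratio_ppt = trop_vcd / AIR_TROP * 1e12
print(f"{trop_vcd:.1e} molecules/cm^2 ~ {mixing_ratio_ppt:.1f} ppt")
```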
Specific Non-Local Interactions Are Not Necessary for Recovering Native Protein Dynamics
Dasgupta, Bhaskar; Kasahara, Kota; Kamiya, Narutoshi; Nakamura, Haruki; Kinjo, Akira R.
2014-01-01
The elastic network model (ENM) is a widely used method to study native protein dynamics by normal mode analysis (NMA). In ENM we need information about all pairwise distances, and the distance between contacting atoms is restrained to the native value. Therefore ENM requires O(N²) information to realize its dynamics for a protein consisting of N amino acid residues. To see if (or to what extent) such a large amount of specific structural information is required to realize native protein dynamics, here we introduce a novel model based on only O(N) restraints. This model, named the ‘contact number diffusion’ model (CND), includes specific distance restraints for only local (along the amino acid sequence) atom pairs, and semi-specific non-local restraints imposed on each atom, rather than atom pairs. The semi-specific non-local restraints are defined in terms of the non-local contact numbers of atoms. The CND model exhibits dynamic characteristics comparable to ENM and more correlated with the explicit-solvent molecular dynamics simulation than ENM. Moreover, unrealistic surface fluctuations often observed in ENM were suppressed in CND. On the other hand, in some ligand-bound structures CND showed larger fluctuations of buried protein atoms interacting with the ligand compared to ENM. In addition, fluctuations from CND and ENM show comparable correlations with the experimental B-factor. Although there are some indications of the importance of some specific non-local interactions, the semi-specific non-local interactions are mostly sufficient for reproducing the native protein dynamics. PMID:24625758
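To make the O(N²)-information point concrete, a Gaussian-network-style calculation takes the full contact map as input and derives residue fluctuations from the Kirchhoff (connectivity) matrix. The sketch below uses a toy random chain rather than a real structure, and the isotropic GNM form rather than the anisotropic ENM variants discussed in the paper.

```python
import numpy as np

# Gaussian-network-style sketch of why an ENM needs O(N^2) information:
# the input is the full contact map, and fluctuations follow from the
# pseudo-inverse of the Kirchhoff matrix.  Coordinates are a toy random
# chain, not a real protein.
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(0.0, 1.5, size=(20, 3)), axis=0)  # toy C-alpha trace
n = len(coords)

d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
contact = (d < 4.0) & ~np.eye(n, dtype=bool)       # distance-cutoff contacts
idx = np.arange(n)
contact |= np.abs(idx[:, None] - idx[None, :]) == 1  # keep the chain connected
kirchhoff = np.diag(contact.sum(axis=1)) - contact.astype(float)

# Drop the single zero mode; mean-square fluctuations per residue come
# from the diagonal of the pseudo-inverse: a B-factor-like profile.
vals, vecs = np.linalg.eigh(kirchhoff)
flucts = (vecs[:, 1:] ** 2 / vals[1:]).sum(axis=1)
print(flucts.round(2))
```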
Robust isotropic super-resolution by maximizing a Laplace posterior for MRI volumes
NASA Astrophysics Data System (ADS)
Han, Xian-Hua; Iwamoto, Yutaro; Shiino, Akihiko; Chen, Yen-Wei
2014-03-01
Magnetic resonance imaging can only acquire volume data with finite resolution due to various factors. In particular, the resolution in one direction (such as the slice direction) is much lower than in others (such as the in-plane directions), yielding unrealistic visualizations. This study explores reconstructing isotropic-resolution MRI volumes from three orthogonal scans. The proposed super-resolution reconstruction is formulated as a maximum a posteriori (MAP) problem, which relies on the generative model of the acquired scans from the unknown high-resolution (HR) volume. Generally, the ensemble of deviations of the reconstructed HR volume from the available low-resolution (LR) scans is represented in the MAP as a Gaussian distribution, which usually results in some noise and artifacts in the reconstructed HR volume. Therefore, this paper investigates a robust super-resolution approach that formulates the deviation set as a Laplace distribution, which assumes sparsity in the deviation ensemble based on the insight that large values appear only around some unexpected regions. In addition, in order to achieve a reliable HR MRI volume, we integrate priors such as bilateral total variation (BTV) and non-local means (NLM) into the proposed MAP framework for suppressing artifacts and enriching visual detail. We validate the proposed robust SR strategy using mouse MRI data with high resolution in two directions and low resolution in one direction, imaged in three orthogonal scans: axial, coronal, and sagittal planes. Experiments verify that the proposed strategy achieves much better HR MRI volumes than the conventional MAP method, even with a very high magnification factor of 10.
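The practical difference between the Gaussian and Laplace deviation models can be seen in one dimension: for repeated observations of the same voxel, the Gaussian (L2) MAP estimate is the mean, while the Laplace (L1) MAP estimate is the median, which is insensitive to a sparse large deviation such as an artifact in one of the three scans. The numbers below are invented for illustration.

```python
import statistics

# Why a Laplace deviation model is robust: with a Gaussian likelihood the
# MAP estimate of a voxel from repeated observations is the mean, while a
# Laplace likelihood gives the median, which ignores sparse large
# deviations (e.g. an artifact in one of three orthogonal scans).
obs = [10.1, 9.9, 10.0, 25.0]        # three consistent scans + one artifact
gaussian_map = statistics.mean(obs)  # dragged toward the artifact
laplace_map = statistics.median(obs) # stays with the consistent scans
print(gaussian_map, laplace_map)
```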
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roots, R; Okada, S
1975-11-01
We have used a mammalian tissue culture system to calculate the lifetimes and diffusion distances, in DNA scissions as well as cell killing, for the three main products of water radiolysis: OH, H, and e⁻(aq). Using various alcohols as radical scavengers, the average lifetime of OH in DNA single-strand breaks was calculated to be about 4 × 10⁻⁹ sec. Using the same data and published rate constants, the apparent lifetime of H atoms was calculated to vary from about 2 × 10⁻⁷ to 4 × 10⁻⁶ sec; similarly, the calculated lifetime of the hydrated electron was found to vary more than was the case for OH. From these lifetimes, the radical diffusion distances were estimated to be approximately 60 Å for OH, which is reasonable, but the values for both H and e⁻(aq) were unrealistically large, i.e., 880 to 4040 Å for H and 9590 to 19,810 Å for e⁻(aq). In cell killing, the OH radical lifetime was estimated to be about 8.7 × 10⁻⁹ sec, which gives an average diffusion distance for this radical of about 93 Å. Our data support the idea that OH is the radical species primarily responsible for the indirect effect in radiation injury measured as DNA single-strand breaks or cell killing, and that H and e⁻(aq) are not significantly involved.
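The lifetime-to-distance step uses the three-dimensional diffusion relation d = √(6Dτ). The diffusion coefficient below is a textbook-order aqueous value assumed for illustration; with an OH lifetime of about 4 × 10⁻⁹ s it reproduces the tens-of-ångströms scale reported above.

```python
import math

# Diffusion distance from radical lifetime, d = sqrt(6 * D * tau).
# The diffusion coefficient is a textbook-order aqueous value assumed
# for illustration, not a number from the study.
def diffusion_distance_angstrom(d_m2s, tau_s):
    """RMS 3D diffusion distance (angstrom) for coefficient D and lifetime tau."""
    return math.sqrt(6 * d_m2s * tau_s) * 1e10

D_OH = 2.3e-9    # m^2/s, assumed aqueous OH diffusion coefficient
# ~4e-9 s lifetime gives tens of angstroms, the same order as the ~60 A reported.
print(diffusion_distance_angstrom(D_OH, 4e-9))
```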
To the Question about the Quality of Economic Education
ERIC Educational Resources Information Center
Dyshaeva, Lyudmila
2015-01-01
The article discusses the shortcomings of the methodology of neoclassical theory as a basic theory determining the content of contemporary economic theory course at Russian educational institutions namely unrealistic conditions of perfect competition, rationality of economic behavior of business entities, completeness and authenticity of…
A Study of Children's Interests in Words.
ERIC Educational Resources Information Center
Frymier, Jack
1987-01-01
Examines the Macmillan "Spelling" books in terms of the concept of "hyperrationalization": the imposing of unproven techniques and/or the setting of unrealistic expectations. Explicates the assumptions and beliefs implicit in the "Spelling" books and studies children's interests in the words and concepts in the…
Sociological Factors in the Development of Eating Disorders.
ERIC Educational Resources Information Center
Nagel, K. L.; Jones, Karen H.
1992-01-01
Reviews sociocultural, socioeconomic, and sex-related factors which contribute to development of eating disorders. Recommends that professionals help adolescents resist societal pressure to conform to unrealistic standards of appearance and provide guidance on nutrition, realistic body ideals, and achievement of self-esteem, self-efficacy,…
Pupils Today: Greater Stress and Frustrations.
ERIC Educational Resources Information Center
Neuber, Manfred, Ed.
1988-01-01
This article reports on a survey of West German elementary and secondary school students undertaken to discover their opinions of the West German educational system. Their primary criticism was directed toward: (1) scholastic performance pressure; (2) unrealistic learning objectives; and (3) unsympathetic teachers. Older students were more…
Marketing: Educators New Buzz Word.
ERIC Educational Resources Information Center
Cotoia, Anthony M.
Many concerned academic administrators are turning to marketing as the cure for shrinking enrollments. These administrators often have unrealistic expectations of what marketing techniques can achieve. Marketing cannot cover up for programs of poor quality, create customers in an over-harvested market, or overcome high attrition when students…
Sport without Frontiers: Just an Unrealistic Wish?
ERIC Educational Resources Information Center
Koch, Mathias
1990-01-01
Describes and recommends the project, Sports without Frontiers, as a useful tool in the integration of foreign immigrants. Suggests that employment of foreigners in West Germany's job market means the needs of foreigners must be addressed. Argues this program promotes social integration through sports. (NL)
While laboratory toxicology tests are generally easy to perform, cost-effective, and readily interpreted, they have been criticized for being unrealistic. In contrast, field tests are considered realistic but are expensive and produce results that are difficult to interpret. To ...
Will Your Walls Come Crumbling Down?
ERIC Educational Resources Information Center
Dunn, John A., Jr.
1990-01-01
The magnitude of funds needed to rectify the college deferred maintenance crisis make a quick fix unrealistic. However, by increasing funding levels and shifting spending patterns over time, institutions can address the problem without jeopardizing other goals and projects. A consortium of higher education associations recommends a…
Personality, Self-Regulated Learning, and Academic Entitlement
ERIC Educational Resources Information Center
McLellan, Chelsea K.; Jackson, Dennis L.
2017-01-01
The current study explored the relation between the Big-Five personality domains, self-regulated learning, and academic entitlement. Academic entitlement is defined as the tendency to possess expectations of unearned academic success, unearned/undeserved academic services, and/or the expectation of unrealistic accommodation (Chowning and Campbell…
Humanizing Blindness through Public Education.
ERIC Educational Resources Information Center
Augusto, C. R.; McGraw, J. M.
1990-01-01
Public attitudes toward blindness are shaped by limited contacts with visually impaired people and unrealistic portrayals of blind people in the media. Proactive efforts including national and local public education programs are needed to change stereotyped thinking, humanize blindness, and lead to greater opportunities for fuller participation in…
Preschoolers can infer general rules governing fantastical events in fiction.
Van de Vondervoort, Julia W; Friedman, Ori
2014-05-01
Young children are frequently exposed to fantastic fiction. How do they make sense of the unrealistic and impossible events that occur in such fiction? Although children could view such events as isolated episodes, the present experiments suggest that children use such events to infer general fantasy rules. In 2 experiments, 2- to 4-year-olds were shown scenarios in which 2 animals behaved unrealistically (N = 78 in Experiment 1, N = 94 in Experiment 2). When asked to predict how other animals in the fiction would behave, children predicted novel behaviors consistent with the nature of the fiction. These findings suggest that preschoolers can infer the general rules that govern the events and entities in fantastic fiction and can use these rules to predict what events will happen in the fiction. The findings also provide evidence that children may infer fantasy rules at a more superordinate level than the basic level. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Li, Ziguang; Lin, Xiaopei; Cai, Wenju
2017-07-10
El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD) tend to exert an offsetting impact on Indian summer monsoon rainfall (ISMR), with an El Niño event tending to lower ISMR and a positive IOD tending to increase it. Simulation of these relationships in Phase Five of the Coupled Model Intercomparison Project has not been fully assessed, nor has their impact on the response of ISMR to greenhouse warming. Here we show that the majority of models simulate an unrealistic present-day IOD-ISMR correlation due to an overly strong control by ENSO. As such, a positive IOD is associated with an ISMR reduction in the simulated present-day climate. This unrealistic present-day correlation is relevant to future ISMR projection, inducing an underestimation in the projected ISMR increase. Thus uncertainties in ISMR projection can in part be induced by present-day simulation of ENSO, the IOD, their relationship and their rainfall correlations.
Accident frequency and unrealistic optimism: Children's assessment of risk.
Joshi, Mary Sissons; Maclean, Morag; Stevens, Claire
2018-02-01
Accidental injury is a major cause of mortality and morbidity among children, warranting research on their risk perceptions. Three hundred and seven children aged 10-11 years assessed the frequency, danger and personal risk likelihood of 8 accidents. Two social-cognitive biases were manifested. The frequency of rare accidents (e.g. drowning) was overestimated, and the frequency of common accidents (e.g. bike accidents) underestimated; and the majority of children showed unrealistic optimism, tending to see themselves as less likely to suffer these accidents in comparison to their peers, offering superior skills or parental control of the environment as an explanation. In the case of pedestrian accidents, children recognised their seriousness, underestimated the frequency of this risk and regarded their own road crossing skill as protection. These findings highlight the challenging task facing safety educators who, when teaching conventional safety knowledge and routines, also need to alert children to the danger of over-confidence without disabling them through fear. Copyright © 2017 Elsevier Ltd. All rights reserved.
The role of elasticity in simulating long-term tectonic extension
NASA Astrophysics Data System (ADS)
Olive, Jean-Arthur; Behn, Mark D.; Mittelstaedt, Eric; Ito, Garrett; Klein, Benjamin Z.
2016-05-01
While elasticity is a defining characteristic of the Earth's lithosphere, it is often ignored in numerical models of long-term tectonic processes in favour of a simpler viscoplastic description. Here we assess the consequences of this assumption on a well-studied geodynamic problem: the growth of normal faults at an extensional plate boundary. We conduct 2-D numerical simulations of extension in elastoplastic and viscoplastic layers using a finite difference, particle-in-cell numerical approach. Our models simulate a range of faulted layer thicknesses and extension rates, allowing us to quantify the role of elasticity on three key observables: fault-induced topography, fault rotation, and fault life span. In agreement with earlier studies, simulations carried out in elastoplastic layers produce rate-independent lithospheric flexure accompanied by rapid fault rotation and an inverse relationship between fault life span and faulted layer thickness. By contrast, models carried out with a viscoplastic lithosphere produce results that may qualitatively resemble the elastoplastic case, but depend strongly on the product of extension rate and layer viscosity U × ηL. When this product is high, fault growth initially generates little deformation of the footwall and hanging wall blocks, resulting in an unrealistic rigid-block offset in topography across the fault. This configuration progressively transitions into a regime where topographic decay associated with flexure is fully accommodated within the numerical domain. In addition, high U × ηL favours the sequential growth of multiple short-offset faults as opposed to a large-offset detachment. We interpret these results by comparing them to an analytical model for the fault-induced flexure of a thin viscous plate.
The key to understanding the viscoplastic model results lies in the rate-dependence of the flexural wavelength of a viscous plate, and the strain rate dependence of the force increase associated with footwall and hanging wall bending. This behaviour produces unrealistic deformation patterns that can hinder the geological relevance of long-term rifting models that assume a viscoplastic rheology.
Climate Model Ensemble Methodology: Rationale and Challenges
NASA Astrophysics Data System (ADS)
Vezer, M. A.; Myrvold, W.
2012-12-01
A tractable model of the Earth's atmosphere, or, indeed, any large, complex system, is inevitably unrealistic in a variety of ways. This will have an effect on the model's output. Nonetheless, we want to be able to rely on certain features of the model's output in studies aiming to detect, attribute, and project climate change. For this, we need assurance that these features reflect the target system, and are not artifacts of the unrealistic assumptions that go into the model. One technique for overcoming these limitations is to study ensembles of models which employ different simplifying assumptions and different methods of modelling. One then either takes as reliable certain outputs on which models in the ensemble agree, or takes the average of these outputs as the best estimate. Since the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4) modellers have aimed to improve ensemble analysis by developing techniques to account for dependencies among models, and to ascribe unequal weights to models according to their performance. The goal of this paper is to present as clearly and cogently as possible the rationale for climate model ensemble methodology, the motivation of modellers to account for model dependencies, and their efforts to ascribe unequal weights to models. The method of our analysis is as follows. We will consider a simpler, well-understood case of taking the mean of a number of measurements of some quantity. Contrary to what is sometimes said, it is not a requirement of this practice that the errors of the component measurements be independent; one must, however, compensate for any lack of independence. We will also extend the usual accounts to include cases of unknown systematic error. We draw parallels between this simpler illustration and the more complex example of climate model ensembles, detailing how ensembles can provide more useful information than any of their constituent models. 
This account emphasizes the epistemic importance of considering degrees of model dependence, and the practice of ascribing unequal weights to models of unequal skill.
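The compensation for dependent measurement errors described above can be sketched with the standard minimum-variance (best linear unbiased) weighted mean, in which correlated measurements are down-weighted via the inverse error covariance. The covariance matrix below is an assumed illustration, not data from the paper.

```python
import numpy as np

# Best linear unbiased weights for averaging correlated measurements:
# w = C^{-1} 1 / (1' C^{-1} 1), where C is the error covariance.
def blue_weights(cov):
    """Weights of the minimum-variance unbiased weighted mean."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Three "models": the last two share an error source (covariance 0.9),
# so together they earn little more weight than one of them alone.
cov = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.9],
                [0.0, 0.9, 1.0]])
w = blue_weights(cov)
print(w)  # the independent first model receives the largest weight
```

This mirrors the paper's point: averaging dependent models as if they were independent overstates the effective sample size, so dependence must be compensated for in the weights.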
Determination techniques of Archie’s parameters: a, m and n in heterogeneous reservoirs
NASA Astrophysics Data System (ADS)
Mohamad, A. M.; Hamada, G. M.
2017-12-01
The determination of water saturation in a heterogeneous reservoir is becoming more challenging, as Archie's equation is only suitable for clean, homogeneous formations and Archie's parameters depend strongly on the properties of the rock. This study focuses on the measurement of Archie's parameters in carbonate and sandstone core samples from Malaysian heterogeneous carbonate and sandstone reservoirs. Three techniques for determining Archie's parameters a, m and n are implemented: the conventional technique, core Archie parameter estimation (CAPE) and the three-dimensional regression (3D) technique. Using the results of the three techniques, water saturation curves were produced to show how differences in Archie's parameters propagate into water saturation values. The spread in water saturation values primarily reflects the uncertainty in Archie's parameters for the carbonate and sandstone rock samples. The accuracy of Archie's parameters has a profound impact on the calculated water saturation values in carbonate and sandstone reservoirs, because regions of high stress reduce electrical conduction and raise the electrical heterogeneity of the heterogeneous carbonate core samples. Given the unrealistic assumptions involved in the conventional method, it is better to use either the CAPE or the 3D method to accurately determine Archie's parameters in heterogeneous as well as homogeneous reservoirs.
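As a hedged illustration of why the choice of a, m and n matters, the sketch below evaluates Archie's water-saturation equation, S_w = ((a·R_w)/(φ^m·R_t))^(1/n), for the same log readings under two assumed parameter sets. All numerical values are hypothetical and not taken from this study.

```python
# Archie's equation: water saturation from resistivity and porosity.
def water_saturation(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """S_w = ((a * R_w) / (phi**m * R_t)) ** (1/n)."""
    return ((a * rw) / (phi**m * rt)) ** (1.0 / n)

# Same measurements, two assumed parameter sets (e.g. conventional vs CAPE):
rw, rt, phi = 0.03, 20.0, 0.18   # ohm-m, ohm-m, fraction (assumed values)
sw_conventional = water_saturation(rw, rt, phi, a=1.0, m=2.0, n=2.0)
sw_cape         = water_saturation(rw, rt, phi, a=0.8, m=1.9, n=2.1)
print(sw_conventional, sw_cape)  # small parameter shifts change S_w noticeably
```

Even modest shifts in m and n move the computed saturation by several saturation units, which is the uncertainty the CAPE and 3D techniques aim to reduce.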
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-02-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, and to explore patterns of spatial scaling in forests, we developed a new method for simulating stand-replacing disturbances that is both accurate and 10-50x faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing, e.g., as a result of climate change, GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the forest models LPJ-GUESS and TreeM-LPJ, and evaluated these in a series of simulations along an altitudinal transect of an inner-alpine valley. 
With GAPPARD applied to LPJ-GUESS, results were not significantly different from the output of the original LPJ-GUESS model using 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited to rapidly approximating LPJ-GUESS results. It opens the way for future studies over large spatial domains and allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and forest models.
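The postprocessing step described above can be sketched as follows. The geometric patch-age distribution, the function names and the toy biomass trajectory are assumptions for illustration, not code from the study.

```python
import numpy as np

# GAPPARD-style idea: given the output x(age) of a single undisturbed
# deterministic run and an annual stand-replacing disturbance probability p,
# the expected landscape-scale value is the average of x over patch ages.
def patch_age_distribution(p, max_age):
    """P(age = k) = p * (1 - p)**k, truncated at max_age and renormalised."""
    ages = np.arange(max_age)
    w = p * (1.0 - p) ** ages
    return w / w.sum()

def expected_output(undisturbed, p):
    """Expected value of a model output over the patch-age distribution."""
    w = patch_age_distribution(p, len(undisturbed))
    return float(np.dot(w, undisturbed))

# Toy undisturbed biomass trajectory saturating with stand age:
ages = np.arange(300)
biomass = 200.0 * (1.0 - np.exp(-ages / 60.0))
print(expected_output(biomass, p=0.01))  # landscape mean below old-growth value
```

One undisturbed run thus replaces the many stochastic replicate patches, which is where the order-of-magnitude speedup comes from.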
The Real World of the Beginning Teacher.
ERIC Educational Resources Information Center
National Education Association, Washington, DC. National Commission on Teacher Education and Professional Standards.
Problems and goals of beginning teachers are the subject of these speeches presented by both experienced and beginning teachers at the 1965 national conference of the National Commission on Teacher Education and Professional Standards. The problems include the differences between teacher expectations and encounters, unrealistic teaching and…
Cognitive Restructuring and a Collaborative Set in Couples' Work.
ERIC Educational Resources Information Center
Huber, Charles H.; Milstein, Barbara
1985-01-01
Investigated effects of cognitive restructuring efforts to modify unrealistic beliefs of marital partners in 17 couples. Treatment program sought to impact proactively upon positive therapeutic expectations and relationship goals and enhanced base level of marital satisfaction. On all outcome measures, treatment group (N=9 couples) showed…
Evaluation of air-liquid interface exposure systems for in vitro assessment of airborne pollutants
Exposure of cells to airborne pollutants at the air-liquid interface (ALI) is a more realistic approach than exposures of submerged cells. The published literature, however, describes irreproducible and/or unrealistic experimental conditions using ALI systems. We have compared fi...
Interpreting the New Sexual Harassment Guidelines.
ERIC Educational Resources Information Center
Hoyman, Michele; Robinson, Ronda
1980-01-01
Discusses sexual harassment guidelines which legally define the term as sex discrimination. While this is good for those facing harassment, it unrealistically places a socially based problem on the shoulders of personnel managers. Points out the long-term benefits of a workplace free from harassment and intimidation. (JOW)
The Celebrity Family: A Clinical Perspective.
ERIC Educational Resources Information Center
Mitchell, Gary; Cronson, Harold
1987-01-01
Identifies characteristics of the celebrity family, examining issues of confidentiality and trust, family boundaries, parenting roles, unrealistic expectations of and for the children, and family isolation. Describes the family as entering into a tacit contract enabling the celebrity to pursue his/her career by relinquishing the parental role to…
The Use of Vignettes to Empower Effective Responses to Attempted Sexual Assault
ERIC Educational Resources Information Center
Allen, Kaylie T.; Meadows, Elizabeth A.
2017-01-01
Objective: Women assertively resisting sexual aggression have the best chances of avoiding completed rape. Especially with acquaintances, there are significant social and psychological barriers to resistance. Novel vignettes depicting acquaintance rape were designed to enhance self-efficacy, reduce unrealistic optimism, and empower assertive…
Andronis, Lazaros; Barton, Pelham M
2016-04-01
Value of information (VoI) calculations give the expected benefits of decision making under perfect information (EVPI) or sample information (EVSI), typically on the premise that any treatment recommendations made in light of this information will be implemented instantly and fully. This assumption is unlikely to hold in health care; evidence shows that obtaining further information typically leads to "improved" rather than "perfect" implementation. This study presents a method of calculating the expected value of further research that accounts for the reality of improved implementation. This work extends an existing conceptual framework by introducing additional states of the world regarding information (sample information, in addition to current and perfect information) and implementation (improved implementation, in addition to current and optimal implementation). The extension allows calculating the "implementation-adjusted" EVSI (IA-EVSI), a measure that accounts for different degrees of implementation. Calculations of implementation-adjusted estimates are illustrated under different scenarios through a stylized case study in non-small cell lung cancer. In the particular case study, the population values for EVSI and IA-EVSI were £25 million and £8 million, respectively; thus, a decision assuming perfect implementation would have overestimated the expected value of research by about £17 million. IA-EVSI was driven by the assumed time horizon and, importantly, the specified rate of change in implementation: the higher the rate, the greater the IA-EVSI and the lower the difference between IA-EVSI and EVSI. Traditionally calculated measures of population VoI rely on unrealistic assumptions about implementation. This article provides a simple framework that accounts for improved, rather than perfect, implementation and offers more realistic estimates of the expected value of research. © The Author(s) 2015.
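The qualitative relationship above (IA-EVSI below EVSI, and increasing with the uptake rate) can be illustrated with a toy calculation in which the yearly per-person value of research is scaled by the fraction of practice that has actually adopted the new evidence. The exponential uptake curve and all numbers are our own illustrative assumptions, not the paper's model.

```python
import math

# Toy population value of research under gradually improving implementation:
# in year t, only a fraction (1 - exp(-rate * t)) of practice has switched.
def population_value(per_person_value, pop_per_year, years, uptake_rate):
    """Total value accrued when implementation improves over time."""
    return sum(per_person_value * pop_per_year * (1.0 - math.exp(-uptake_rate * t))
               for t in range(years))

evsi = 100.0 * 10_000 * 10  # perfect, instant implementation from year 0
ia_low = population_value(100.0, 10_000, 10, uptake_rate=0.2)
ia_high = population_value(100.0, 10_000, 10, uptake_rate=0.8)
# ia_low < ia_high < evsi: slower uptake forfeits more of the research value
```

Assuming instant, full implementation would thus overstate the value of research by the gap between `evsi` and the implementation-adjusted figure, mirroring the £17 million overestimate in the case study.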
NASA Astrophysics Data System (ADS)
Terray, P.; Sooraj, K. P.; Masson, S.; Krishna, R. P. M.; Samson, G.; Prajeesh, A. G.
2017-07-01
State-of-the-art global coupled models used in seasonal prediction systems and climate projections still have important deficiencies in representing the boreal summer tropical rainfall climatology. These errors include prominently a severe dry bias over all the Northern Hemisphere monsoon regions, excessive rainfall over the ocean and an unrealistic double inter-tropical convergence zone (ITCZ) structure in the tropical Pacific. While these systematic errors can be partly reduced by increasing the horizontal atmospheric resolution of the models, they also illustrate our incomplete understanding of the key mechanisms controlling the position of the ITCZ during boreal summer. Using a large collection of coupled models and dedicated coupled experiments, we show that these tropical rainfall errors are partly associated with insufficient surface thermal forcing and incorrect representation of the surface albedo over the Northern Hemisphere continents. Improving the parameterization of the land albedo in two global coupled models leads to a large reduction of these systematic errors and further demonstrates that the Northern Hemisphere subtropical deserts play a seminal role in these improvements through a heat low mechanism.
Static Analysis of Large-Scale Multibody System Using Joint Coordinates and Spatial Algebra Operator
Omar, Mohamed A.
2014-01-01
Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load prediction, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium of large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation utilizes the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. The system connectivity matrix is then derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations. PMID:25045732
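The idea of reaching static equilibrium by draining energy from a dynamic simulation can be shown with a minimal single-joint example. The damped, torque-driven pendulum below is our own stand-in for the paper's joint-coordinate formulation, and all parameter values are assumed.

```python
import math

# Energy-drainage idea in miniature: integrate damped dynamics until the
# joint velocity and acceleration vanish; the resting state is the static
# equilibrium (here, where the applied torque balances gravity).
def settle_pendulum(m=1.0, L=1.0, g=9.81, tau=3.0, c=5.0, dt=1e-3):
    """Damped single-joint pendulum driven by a constant torque tau."""
    I = m * L * L                 # point-mass inertia about the pivot
    theta, omega = 0.0, 0.0
    for _ in range(200_000):
        alpha = (tau - m * g * L * math.sin(theta) - c * omega) / I
        omega += alpha * dt       # semi-implicit Euler: update velocity first
        theta += omega * dt
        if abs(omega) < 1e-10 and abs(alpha) < 1e-10:
            break                 # kinetic energy drained: at equilibrium
    return theta

theta_eq = settle_pendulum()
print(theta_eq, math.asin(3.0 / 9.81))  # settled angle vs analytic equilibrium
```

Unlike potential-energy minimization, this dynamic-settling view extends naturally to formulations, like the paper's, that already have recursive equations of motion in joint coordinates.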
Modelling of loading, stress relaxation and stress recovery in a shape memory polymer.
Sweeney, J; Bonner, M; Ward, I M
2014-09-01
A multi-element constitutive model for a lactide-based shape memory polymer has been developed that represents loading to large tensile deformations, stress relaxation and stress recovery at 60, 65 and 70°C. The model consists of parallel Maxwell arms each comprising neo-Hookean and Eyring elements. Guiu-Pratt analysis of the stress relaxation curves yields Eyring parameters. When these parameters are used to define the Eyring process in a single Maxwell arm, the resulting model yields at too low a stress, but gives good predictions for longer times. Stress dip tests show a very stiff response on unloading by a small strain decrement. This would create an unrealistically high stress on loading to large strain if it were modelled by an elastic element. Instead it is modelled by an Eyring process operating via a flow rule that introduces strain hardening after yield. When this process is incorporated into a second parallel Maxwell arm, there results a model that fully represents both stress relaxation and stress dip tests at 60°C. At higher temperatures a third arm is required for valid predictions. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Communicating to Build Trust and Confidence
ERIC Educational Resources Information Center
Olefson, Jeff; Arum, Ed
2012-01-01
Working effectively with stakeholders is a common challenge for school business officials (SBOs) and noninstructional school leaders. SBOs are often frustrated by unrealistic expectations and a lack of appreciation for their efforts. What's more, they are often blamed for things over which they have no control. One might blame lack of…
Cultural Representations in Foreign Language Teaching: A Critique of Four BBC Courses.
ERIC Educational Resources Information Center
Mar-Molinero, Clare
1992-01-01
Examines four well-known BBC language courses: "Deutsch Direkt!", "A Vous la France!", "Viva España," and "Buongiorno Italia!" It is suggested that these courses focus on tourism, stereotyping both learners and the target culture, and that the presentation of the cultural materials is unrealistic. (VWL)
MMPI Profiles of Adolescents Charged with Homicide.
ERIC Educational Resources Information Center
Cornell, Dewey G.; And Others
Violent individuals are a heterogeneous group, making it unrealistic to think that a single psychological profile can classify them. Adolescents (N=72) at the Michigan Center for Forensic Psychiatry who had committed homicides were studied in an effort to distinguish clinically meaningful subtypes based on the motives and circumstance of their…
Group Treatment of Eating Disorders in a University Counseling Center.
ERIC Educational Resources Information Center
Snodgrass, Gregory; And Others
Sociocultural pressures to pursue an unrealistic ideal of thinness have contributed to an increasing number of students seeking help at a university counseling center for the eating disorders of anorexia nervosa and bulimia. To help these students, a group treatment technique was developed using a cognitive-behavioral approach. Treatment…
Using Webcams to Show Change and Movement in the Physical Environment
ERIC Educational Resources Information Center
Sawyer, Carol F.; Butler, David R.; Curtis, Mary
2010-01-01
Environmental change is ideally taught through field observations; however, leaving the classroom is often unrealistic due to financial and logistical constraints. The Internet offers several feasible alternatives using webcams that instructors can use to illustrate a variety of geographic examples and exercises for students. This article explores…
The "You Owe Me!" Mentality: A Student Entitlement Perception Paradox
ERIC Educational Resources Information Center
Schaefer, Thomas; Barta, Marguerite; Whitley, William; Stogsdill, Margie
2013-01-01
College and University faculty members routinely share stories and anecdotes about students who appear to have an unrealistic expectation of entitlement when it comes to following the requirements and dictates of classroom and collegiate rigor (Gill, 2009; Lippman, Bulanda, Wagenaar, 2009; Roosevelt, 2009). Faculty stories and discussions include…
The Consumption Aspirations of Adolescents: Determinants and Implications.
ERIC Educational Resources Information Center
Freedman, Deborah S.; Thornton, Arland
1990-01-01
Examines the determinants of the consumption aspirations of adolescents, with a major emphasis on the influence of the family. Finds that the ability of adolescents to purchase substantial consumer durable goods with their own earnings while being supported in the parental household may lead to unrealistic future consumption goals. (FMW)
Developing Sustainable Leadership
ERIC Educational Resources Information Center
Davies, Brent
2007-01-01
The emphasis on short-term accountability measures has often prioritised the use of short-term management strategies to meet test and Ofsted measures. Moving away from short-termism to more fundamental consideration of sustainable leadership is the focus of this paper. While it would be naive or unrealistic to suggest that school leaders do not…
ERIC Educational Resources Information Center
Yan, Duanli; Lewis, Charles; Stocking, Martha
It is unrealistic to suppose that standard item response theory (IRT) models will be appropriate for all new and currently considered computer-based tests. In addition to developing new models, researchers will need to give some attention to the possibility of constructing and analyzing new tests without the aid of strong models. Computerized…
ERIC Educational Resources Information Center
Hinchliffe, Geoff
2006-01-01
The current dominant concept of lifelong learning has arisen from the pressures of globalisation, economic change and the needs of the "knowledge economy". Its importance is not disputed in this paper. However, its proponents often advocate it in a form which places unrealistic demands on the individual without at the same time addressing their…
Can We Cross the Street in Time?
ERIC Educational Resources Information Center
Greenes, Carole E.; Cavanagh, Mary C.; Tsankova, Jenny K.; Glanfield, Florence A.
2013-01-01
In the authors' examination of various instructional programs, they observed that most provide all the necessary data to solve proportion problems, employ compatible numbers that are usually unrealistic, present numbers (data) in the order in which they are to be manipulated, discuss contexts that cannot be easily replicated, and present…
The Effect of Desegregation on the Minority Child.
ERIC Educational Resources Information Center
Siggers, Kathleen
Measurements of self worth show that children in segregated schools, both white and black, have unrealistically high aspirations. Mexican-Americans measure lower than other major ethnic groups in feelings of self worth. There is evidence from social investigations, however, that segregation produces feelings of "imposed inferiority" among minority…
Investigation of "Skyscraper's" Feat
ERIC Educational Resources Information Center
Efthimiou, Costas
2018-01-01
"Skyscraper" is a Hollywood action film ("Skyscraper" official film site: www.skyscrapermovie.com) directed and written by Thurber scheduled to be released on 13 July 2018. We present an analysis of the feat shown in the recently released teaser poster and trailer of the film. Although the feat appears to be unrealistic at…
Water yields from forests: an agnostic view
Robert R. Ziemer
1987-01-01
Abstract - Although experimental watershed studies have consistently shown that water yield can be increased by removing trees and shrubs, programs to increase water yield on an operational scale have consistently failed. Failure has been related to overstated goals and benefits, unrealistic assumptions, political naivete, and the emergence of new interest groups....
ERIC Educational Resources Information Center
Roehling, Patricia Vincent; Robin, Arthur L.
1986-01-01
Evaluated the criterion-related validity of the Family Beliefs Inventory, a new self-report measure of unreasonable beliefs regarding parent-adolescent relationships. Distressed fathers displayed more unreasonable beliefs concerning ruination, obedience, perfectionism, and malicious intent than nondistressed fathers. Distressed adolescents…
If You're So Smart, Why Aren't You Rich?
ERIC Educational Resources Information Center
Woloshin, Phyllis
1986-01-01
Argues that salary differentials among community college faculty are unrealistic and based on the inaccurate assumptions that people are motivated primarily by money; that high-demand disciplines will remain constant over time; that attitudes toward other disciplines and faculty members are unaffected by salary differentials; and that teaching is…
Engine non-containment: UK risk assessment methods
NASA Technical Reports Server (NTRS)
Wallin, J. C.
1977-01-01
More realistic guideline data must be developed for use in aircraft design in order to comply with recent changes in British civil airworthiness requirements. Unrealistically pessimistic results were obtained when the methodology developed during the Concorde SST certification program was extended to assess catastrophic risks resulting from uncontained engine rotors.
Literature for Children: An Engagement with Life.
ERIC Educational Resources Information Center
Allen, Arthur T.
1967-01-01
The two complementary questions--"What does literature do to young readers?" and "Can literature be taught?"--are not easily answered. Youth should not employ literature as an exclusive guide to life since they will encounter numerous unrealistic situations. Instead, literature should entice them to deal vicariously with vivid, new experiences and…
What Does NCATE Have to Say to Future History Teachers?
ERIC Educational Resources Information Center
Wineburg, Sam
2005-01-01
According to current NCATE standards, social studies teachers must be well versed in economics, history, sociology, political science, psychology, anthropology, science and technology, and the arts. In short, they must know everything. In this article, the author dismisses these standards as totally unrealistic and argues that prospective social…
Improve Math Teaching with Incremental Improvements
ERIC Educational Resources Information Center
Star, Jon R.
2016-01-01
Past educational reforms have failed because they didn't meet teachers where they were. They expected major changes in practices that may have been unrealistic for many teachers even under ideal professional learning conditions. Instead of promoting broad scale changes, improvement may be more likely if they are composed of small yet powerful…
Education and Primitive Accumulation in Sub-Saharan Africa.
ERIC Educational Resources Information Center
Emoungu, Paul-Albert N.
1992-01-01
The 1988 World Bank report on education in sub-Saharan Africa overstates the regional "crisis" in educational quality and recommends unrealistic strategies, ignoring the fact that basic human needs such as education are unmet because political elites corruptly privatize much of the wealth generated by their nations' economies. (SV)
ERIC Educational Resources Information Center
Foeman, Anita
Too often African American students in communication courses are confronted with communication principles which to them seem inappropriate, unrealistic, and simply false. Current conceptualizations of organizational communication suggest that the organization consists of one culture (often depicted as predominantly white and predominantly male).…
Students' Perceptions toward Academic Competencies: The Case of German First-Year Students
ERIC Educational Resources Information Center
Mah, Dana-Kristin; Ifenthaler, Dirk
2018-01-01
Students often enter higher education academically unprepared and with unrealistic perceptions and expectations regarding academic competencies for their studies. However, preparedness and realistic perceptions are important factors for student retention. With regard to a proposed model of five academic competencies (time management, learning…
Improving High School Transition with CAT Camp
ERIC Educational Resources Information Center
Geltner, Jill; Law, Brian; Forehand, Amanda; Miles, Dinah Amber
2011-01-01
Transition to a new school for adolescent students can be challenging. Students who have difficulty navigating the transition to ninth grade are at an increased disadvantage academically and personally. Even students who approach the move with excitement often have unrealistic expectations of what is necessary for success. Overall, those who do…
NASA Astrophysics Data System (ADS)
Zamuriano, Marcelo; Brönnimann, Stefan
2017-04-01
It is known that extremes such as heavy rainfall, flood events, heatwaves and droughts depend largely on the atmospheric circulation and on local features. Bolivia is no exception: while the large-scale dynamics over the Amazon have been extensively investigated, the local features driven by the Andes Cordillera and the Altiplano are still poorly documented. New insights into the regional atmospheric dynamics preceding heavy precipitation and flood events over the complex topography of the Andes-Amazon interface are added through numerical investigations of several case events: flash flood episodes over La Paz city and the extreme 2014 flood in the south-western Amazon basin. Large-scale atmospheric water transport is dynamically downscaled in order to take into account the complex topographic forcing and the local features that modulate these events. For this purpose, a series of high-resolution numerical experiments with the WRF-ARW model is conducted using various global datasets and parameterizations. While several mechanisms have been suggested to explain the dynamics of these episodes, they had not previously been tested through numerical modelling experiments. The simulations realistically capture the local water transport and the terrain's influence on the atmospheric circulation, even though the precipitation intensity is in general unrealistic. Nevertheless, the results show that dynamical downscaling over the tropical Andes' complex terrain provides useful meteorological data for a variety of studies and contributes to a better understanding of the physical processes involved in the configuration of these events.
Further Discussion: Parametric Study of Wind Generated Supermicron Particle Effects in Large Fires
NASA Technical Reports Server (NTRS)
Toon, O. B.; Ackerman, T. P.
1987-01-01
In their reply (Porch et al., 1987) to our comments (Turco et al., 1987) on their smoke-scavenging-by-dust paper, Porch et al. attempt to justify a number of parameter assumptions in their original article, again revealing the extreme nature of those assumptions, particularly in the situation where all are taken simultaneously. In critiquing Porch et al.'s calculations, we have not applied "opinion," but rather physical reality and common sense expressed through basic experimental results and logical physical bounds. A few examples of the unrealistic conditions required by the Porch et al. scavenging scheme, as described in their paper and comments, should suffice here. (1) Porch et al. have fabricated a "fetch" region for dust particles in large fire plumes that logically must extend over an area up to 50 times greater than the fire area itself. Alternatively, they have invoked significant "necking down" of the fire plume, so that its cross-sectional area is at most a few percent of the fire area. Such severe constriction is seen only in very small fires with strong, organized vorticity, and then only over a limited plume rise region. No "fetch" has ever been noted in any large-scale fires we have observed, or for which accounts are available. Indeed, as we deduced in our original comments, complete dust scavenging even within the fire zone would probably occur less than 10% of the time for large urban fires.
Layers of protection analysis in the framework of possibility theory.
Ouazraoui, N; Nait-Said, R; Bourareche, M; Sellami, I
2013-11-15
An important issue faced by risk analysts is how to deal with uncertainties associated with accident scenarios. In industry, one often uses single values derived from historical data or literature to estimate event probabilities or frequencies. However, both the dynamic environments of systems and the need to consider rare component failures may make this kind of data unrealistic. In this paper, uncertainty encountered in Layers Of Protection Analysis (LOPA) is considered in the framework of possibility theory. Data provided by reliability databases and/or expert judgments are represented by fuzzy quantities (possibilities). The fuzzy outcome frequency is calculated by extended multiplication using the α-cuts method. The fuzzy outcome is compared to a scenario risk tolerance criterion, and the required risk reduction is obtained by solving a possibilistic decision-making problem under a necessity constraint. In order to validate the proposed model, a case study concerning the protection layers of an operational heater is carried out. Copyright © 2013 Elsevier B.V. All rights reserved.
Potential acidification impacts on zooplankton in CCS leakage scenarios.
Halsband, Claudia; Kurihara, Haruko
2013-08-30
Carbon capture and storage (CCS) technologies involve localized acidification of significant volumes of seawater, inhabited mainly by planktonic species. Knowledge of the potential impacts of these techniques on the survival and physiology of zooplankton, and the subsequent consequences for ecosystem health in targeted areas, is scarce. The recent literature focuses on anthropogenic greenhouse gas emissions into the atmosphere, which lead to enhanced absorption of CO2 by the oceans and a lowered seawater pH, termed ocean acidification. These studies explore the effects of changes in seawater chemistry, as predicted by climate models for the end of this century, on marine biota. Early studies used unrealistically severe CO2/pH values in this context, but such values are relevant for CCS leakage scenarios. Little-studied meso- and bathypelagic species of the deep sea may be especially vulnerable, as may vertically migrating zooplankton, which require significant residence times at great depths as part of their life cycle. Copyright © 2013 Elsevier Ltd. All rights reserved.
Treating Patients as Persons: A Capabilities Approach to Support Delivery of Person-Centered Care
Entwistle, Vikki A.; Watt, Ian S.
2013-01-01
Health services internationally struggle to ensure health care is “person-centered” (or similar). In part, this is because there are many interpretations of “person-centered care” (and near synonyms), some of which seem unrealistic for some patients or situations and obscure the intrinsic value of patients’ experiences of health care delivery. The general concern behind calls for person-centered care is an ethical one: Patients should be “treated as persons.” We made novel use of insights from the capabilities approach to characterize person-centered care as care that recognizes and cultivates the capabilities associated with the concept of persons. This characterization unifies key features from previous characterizations and can render person-centered care applicable to diverse patients and situations. By tying person-centered care to intrinsically valuable capability outcomes, it incorporates a requirement for responsiveness to individuals and explains why person-centered care is required independently of any contribution it may make to health gain. PMID:23862598
Temperature and initial curvature effects in low-density panel flutter
NASA Technical Reports Server (NTRS)
Resende, Hugo B.
1992-01-01
The panel flutter phenomenon is studied assuming free-molecule flow. This kind of analysis is relevant in the case of hypersonic flight vehicles traveling at high altitudes, especially in the leeward portion of the vehicle. In these conditions the aerodynamic shear can be expected to be considerably larger than the pressure at a given point, so that the effects of such a loading are incorporated into the structural model. Both the pressure and shear loadings are functions of the panel temperature, which can lead to great variations on the location of the stability boundaries for parametric studies. Different locations can, however, be 'collapsed' onto one another by using as ordinate an appropriately normalized dynamic pressure parameter. This procedure works better for higher values of the panel temperature for a fixed undisturbed flow temperature. Finally, the behavior of the system is studied when the panel has some initial curvature. This leads to the conclusion that it may be unrealistic to try to distinguish between a parabolic or sinusoidal initial shape.
On the Significance of Fuzzification of the N and M in Cancer Staging
Yones, Sara A; Moussa, Ahmed S; Hassan, Hesham; Alieldin, Nelly H
2014-01-01
The tumor, node, metastasis (TNM) staging system has been regarded as one of the most widely used staging systems for solid cancer. The “T” is assigned a value according to the primary tumor size, whereas the “N” and “M” depend on the number of regional lymph nodes involved and the presence of distant metastasis, respectively. The current TNM model classifies stages into five crisp classes. This is unrealistic, since a drastic modification in treatment triggered by a change of class may rest on only a slight shift around the class boundary. Moreover, the system considers any tumor that has distant metastasis as stage 4, disregarding the concentration and size of the metastatic lesions. We addressed the problem of T staging in previous studies using fuzzy logic. In this study, we focus on the fuzzification of N and M staging for more accurate and realistic modeling, which may, in turn, lead to better treatment and medical decisions. PMID:25089089
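The fuzzification of the N stage that this abstract proposes can be illustrated with a toy membership model. The class boundaries and triangular membership functions below are hypothetical, chosen only to show the idea; they are not the paper's actual membership functions.

```python
def tri(x, a, m, b):
    """Triangular membership function on [a, b], peaking at m."""
    if x <= a or x >= b:
        return 0.0
    return (x - a) / (m - a) if x < m else (b - x) / (b - m)

# Hypothetical fuzzy N classes defined over the count of positive regional
# lymph nodes (illustrative boundaries only).
N_CLASSES = {
    "N1": lambda n: tri(n, 0, 2, 5),
    "N2": lambda n: tri(n, 3, 6, 10),
}

def memberships(n_nodes):
    """Degree of membership of a node count in each fuzzy N class."""
    return {label: round(f(n_nodes), 2) for label, f in N_CLASSES.items()}

# A count near the crisp N1/N2 boundary belongs partially to both classes,
# instead of flipping the stage (and hence the treatment) on a one-node shift.
print(memberships(4))
```

A downstream fuzzy staging rule would combine such memberships with the fuzzified T and M values rather than assigning a single crisp class.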
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
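The paper's own feedback-based rule is not reproduced here, but the behavior it targets, a Hebbian neuron that extracts the principal component without unbounded weight growth, can be sketched with Oja's rule, a closely related and well-known formulation. Note that Oja's rule stabilizes via an explicit normalizing term, which is exactly the kind of added term the paper's network structure is designed to avoid, so this is a point of comparison rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D input stream; its leading principal component lies along (1, 1).
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=20000)

w = rng.normal(size=2)   # synaptic weight vector
eta = 0.005              # learning rate

for x in X:
    y = w @ x            # postsynaptic activity
    # Oja's rule: the plain Hebbian term y*x plus a normalizing term -y^2 * w
    # that keeps the weights from growing without bound.
    w += eta * y * (x - y * w)

# w converges (up to sign) toward the unit principal eigenvector (1, 1)/sqrt(2).
print(w)
```

Running this, the weight vector stays bounded near unit norm while aligning with the dominant eigenvector of the input covariance, which is the same end state the paper's self-supervised rule is shown to reach through its feedback connections.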
A mass weighted chemical elastic network model elucidates closed form domain motions in proteins
Kim, Min Hyeok; Seo, Sangjae; Jeong, Jay Il; Kim, Bum Joon; Liu, Wing Kam; Lim, Byeong Soo; Choi, Jae Boong; Kim, Moon Ki
2013-01-01
An elastic network model (ENM), usually a Cα coarse-grained one, has been widely used to study protein dynamics as an alternative to classical molecular dynamics simulation. This simple approach dramatically reduces the computational cost, but sometimes fails to describe a feasible conformational change due to unrealistically excessive spring connections. To overcome this limitation, we propose a mass-weighted chemical elastic network model (MWCENM) in which the total mass of each residue is assumed to be concentrated on the representative alpha carbon atom and various stiffness values are precisely assigned according to the types of chemical interactions. We test MWCENM on several well-known proteins for which both closed and open conformations are available, as well as on three α-helix-rich proteins. Their normal mode analysis reveals that MWCENM not only generates more plausible conformational changes, especially for closed forms of proteins, but also preserves protein secondary structures, thus distinguishing MWCENM from traditional ENMs. In addition, MWCENM reduces the computational burden by using a sparser stiffness matrix. PMID:23456820
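The mass weighting this abstract describes amounts to scaling the network Hessian by M^(-1/2) on both sides before normal mode analysis. A minimal anisotropic-network sketch might look like the following, where uniform masses and a constant spring function stand in for the residue masses and chemistry-specific stiffness values the paper assigns.

```python
import numpy as np

def enm_modes(coords, masses, stiffness, cutoff=8.0):
    """Normal modes of a mass-weighted elastic network model.

    coords: (n, 3) Calpha positions; masses: (n,) residue masses;
    stiffness: function (i, j) -> spring constant (a stand-in for the
    interaction-type-specific values described in the paper).
    """
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            k = stiffness(i, j)
            # Standard anisotropic-network off-diagonal super-element.
            block = -k * np.outer(d, d) / r2
            H[3*i:3*i+3, 3*j:3*j+3] = block
            H[3*j:3*j+3, 3*i:3*i+3] = block
            H[3*i:3*i+3, 3*i:3*i+3] -= block
            H[3*j:3*j+3, 3*j:3*j+3] -= block
    # Mass weighting: H' = M^(-1/2) H M^(-1/2) for the diagonal mass matrix M.
    m = np.repeat(masses, 3)
    Hw = H / np.sqrt(np.outer(m, m))
    evals, evecs = np.linalg.eigh(Hw)
    return evals, evecs

# Toy 4-residue "protein" (non-degenerate tetrahedron, fully connected within
# the cutoff) with uniform masses and unit spring constants.
coords = np.array([[0.0, 0.0, 0.0],
                   [3.8, 0.0, 0.0],
                   [1.9, 3.3, 0.0],
                   [1.9, 1.1, 3.1]])
vals, vecs = enm_modes(coords, np.full(4, 110.0), lambda i, j: 1.0)
# The six near-zero eigenvalues are the rigid-body translations and rotations;
# the remaining modes describe internal motions.
print(vals)
```

In the full MWCENM, `stiffness(i, j)` would return different constants for backbone bonds, hydrogen bonds, salt bridges and so on, and `masses` would carry the actual residue masses, which is what biases the low-frequency modes toward chemically plausible motions.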
Reconsidering Fairness: A Matter of Social and Ethical Priorities.
ERIC Educational Resources Information Center
Gottfredson, Linda S.
1988-01-01
Argues on basis of research on importance of "g" (intelligence) factor and racial differences in "g" that many valid, unbiased tests can be expected to produce high levels of adverse impact when used in race-neutral manner, especially in high-level jobs. Argues that unrealistic expectation regarding racial parity often leads employers to adopt…
Open top chambers (OTC) have been used to study effects of air pollutants on plants for over 25 years. Many of those studies have used potted plants, producing environmental and ecological rooting conditions that are unrealistic. Others have placed OTC over existing vegetation, ...
Entitlement Attitudes Predict Students' Poor Performance in Challenging Academic Conditions
ERIC Educational Resources Information Center
Anderson, Donna; Halberstadt, Jamin; Aitken, Robert
2013-01-01
Excessive entitlement--an exaggerated or unrealistic belief about what one deserves--has been associated with a variety of maladaptive behaviors, including a decline in motivation and effort. In the context of tertiary education, we reasoned that if students expend less effort to obtain positive outcomes to which they feel entitled, this should…
ERIC Educational Resources Information Center
Blake-Beard, Stacy D.
2001-01-01
Comparison of women in formal and informal mentoring relationships showed that formal mentoring often led to unrealistic expectations; unbalanced focus on proteges; difficulty managing relationships among supervisors, proteges, and mentors; and damage from gossip. Informal mentoring may provide psychosocial and career support without these…
What's the Right Answer? Team Problem-Solving in Environments of Uncertainty
ERIC Educational Resources Information Center
Jameson, Daphne A.
2009-01-01
Whether in the workplace or the classroom, many teams approach problem-solving as a search for certainty--even though certainty rarely exists in business. This search for the one right answer to a problem creates unrealistic expectations and often undermines teams' effectiveness. To help teams manage their problem-solving process and communication…
Consequences of KPIs and Performance Management in Higher Education
ERIC Educational Resources Information Center
Kairuz, Therése; Andriés, Lynn; Nickloes, Tracy; Truter, Ilse
2016-01-01
Purpose: The core business of universities is learning. Cognitive thinking is critical for learning and the development of new knowledge which are essential in higher education. Creative, reflective and critical thinking are negatively affected by unrealistic demands and stress. The purpose of this paper is to argue that key performance indicators…
What's Really Blocking School Desegregation? Equal Opportunity Review, July 1973.
ERIC Educational Resources Information Center
Sobel, Morton J.
There is little question that the primary element regarding school desegregation is the latent and overt racism pervading American society. Perhaps it is unrealistic to suggest that the school, the transmission belt of American mores from one generation to the next, is likely to intervene in the already existing pattern. Moreover, statements and…
The impact of incongruous lake temperatures is demonstrated using the Weather Research and Forecasting (WRF) Model to downscale global climate fields. Unrealistic lake temperatures prescribed by the default WRF configuration cause obvious biases near the lakes and also affect pre...
Management of U.S. Coast Guard Information Security Program Using Management by Objectives.
1979-09-01
conducted. These men described their jobs and the attendant problems with obvious complete frankness and in the most lucid way. Thirdly, the security...scenario is not an unrealistic dream but a statement of the conditions that would exist if the organization arrived at some future state successfully
ERIC Educational Resources Information Center
Coe, Betsy
2007-01-01
Recent literature debates the effectiveness of the senior year, stating that it is an unproductive and unrealistic time for many students--either because they are, by that point in their school careers, disengaged and waiting to graduate, or because they are so stressed from taking Advanced Placement (AP) exams and writing college essays that they…
The Culture-Area Concept: Does it Diffract Anthropological Light?
ERIC Educational Resources Information Center
Howard, James H.
1975-01-01
It is contended that the consistent employment of the culture area theory by several generations of anthropologists, historians, and writers on American Indian art has tended to create a highly unrealistic and rigid picture of American Indian Cultures in the eyes of both the public and the Indian people themselves. (JC)
ERIC Educational Resources Information Center
Klymkowsky, Michael W.; Rentsch, Jeremy D.; Begovic, Emina; Cooper, Melanie M.
2016-01-01
Many introductory biology courses amount to superficial surveys of disconnected topics. Often, foundational observations and the concepts derived from them and students' ability to use these ideas appropriately are overlooked, leading to unrealistic expectations and unrecognized learning obstacles. The result can be a focus on memorization at the…
ERIC Educational Resources Information Center
Uchitelle, Susan
2000-01-01
South Africa and the United States face similar problems: teachers' inadequacies in educating an increasingly diverse population; a culture of poverty undermining public support; urban decay and declining tax bases; insufficient resources; totally inadequate school facilities; and unrealistic expectations, considering allotted resources, faculty,…
Too Many Institutions Still Taking Band-Aid Approach to Minority Student Retention, Experts Say.
ERIC Educational Resources Information Center
Phillip, Mary-Christine
1993-01-01
Retention of minority college students is elusive because many institutions have not made a full commitment to it, as is apparent in faculty, curriculum, and corporate culture. Unrealistic expectations, racism, funding, college environment, faculty interaction with students, and parental role must be addressed before a solution is found. (MSE)
Throwing Caution to the Wind: Rationales for Risky Behavior.
ERIC Educational Resources Information Center
de La Rue, Denise; Ruback, R. Barry
There appears to be a tendency for people who have not been victimized by negative life events to perceive themselves as less vulnerable to victimization than others. Research has revealed this unrealistic optimism in risk perception. A study on rationales for risky behaviors was conducted to identify reasons other than this illusion of…
Perceptions of Risk Factors for School Violence: Concordance with FBI Risk Profile
ERIC Educational Resources Information Center
Wetterneck, Chad; Sass, Daniel A.; Davies, W. Hobart
2004-01-01
The Federal Bureau of Investigation (FBI, 2000) recently released a report on common background characteristics of school shooters, which also stressed the importance of evaluating the reality of threat. The present study evaluated respondents' ability to discriminate between an unrealistic and a realistic threat and between a low and high risk…
Perceptions of Risk Factors for School Violence: Concordance with FBI Risk Profile
ERIC Educational Resources Information Center
Wetterneck, Chad; Sass, Daniel A.; Davies, W. Hobart
2005-01-01
The Federal Bureau of Investigation (FBI, 2000) recently released a report on common background characteristics of school shooters, which also stressed the importance of evaluating the reality of threat. The present study evaluated respondents' ability to discriminate between an unrealistic and a realistic threat and between a low and high risk…
ERIC Educational Resources Information Center
Riddile, Mel
2010-01-01
Doing more with less is a familiar theme across the country. For many schools and districts, counting on the adoption of costly, expensive programs is unrealistic. Schools are expected to continue to raise student achievement regardless of the state of the economy. The bad news is that schools must improve with the same resources and the same…
Kellie J. Carim; Lisa A. Eby; Craig A. Barfoot; Matthew C. Boyer
2016-01-01
Fragmentation and isolation of wildlife populations has reduced genetic diversity worldwide, leaving many populations vulnerable to inbreeding depression and local extinction. Nonetheless, isolation is protecting many native aquatic species from interactions with invasive species, often making reconnection an unrealistic conservation strategy. Isolation management is...
IELTS: Global Implications of Curriculum and Materials Design.
ERIC Educational Resources Information Center
Wallace, Craig
1997-01-01
Questions the removal of a link between reading and writing tasks in the International English Language Testing System (IELTS) examinations on two grounds: that this removal is prejudicial to those students whose native cultures may not provide the appropriate schemata to effectively write; and that it is unrealistic in terms of the measurement of…
ERIC Educational Resources Information Center
Berman, Naomi; White, Alexandra
2013-01-01
The media plays a significant role in shaping cultural norms and attitudes, concomitantly reinforcing "body" and "beauty" ideals and gender stereotypes. Unrealistic, photoshopped and stereotyped images used by the media, advertising and fashion industries influence young people's body image and impact on their feelings of body…
More Than Just Style and Delivery: Recasting Public Speaking Courses for African American Students.
ERIC Educational Resources Information Center
Nance, Teresa A.
Recognizing that too often, African American students in communication courses are confronted with communication principles which to them seem inappropriate, unrealistic, and simply false, this paper analyzes the conceptual foundation of the public speaking course and suggests how it might be made more relevant for African American students. The…
Brain-Based Education in Action
ERIC Educational Resources Information Center
Jensen, Eric
2012-01-01
An essential understanding about brain-based education is that most neuroscientists don't teach and most teachers don't do research. It's unrealistic to expect neuroscientists to reveal which classroom strategies will work best. That's not appropriate for neuroscientists, and most don't do that. Many critics could cite this as a weakness, but it's…
Moral Self-Cultivation East and West: A Critique
ERIC Educational Resources Information Center
Slote, Michael
2016-01-01
Moral Self-Cultivation plays an important, even a central role, in the Confucian philosophical tradition, but philosophers in the West, most notably Aristotle and Kant, also hold that moral self-cultivation or self-shaping is possible and morally imperative. This paper argues that these traditions are psychologically unrealistic in what they say…
ERIC Educational Resources Information Center
Delamarter, Jeremy
2015-01-01
Pre-service teachers often have unrealistic expectations of teaching. They often create an inspiration/content dichotomy in which they expect relational activities to trump content delivery. Unchecked, these misaligned expectations can lead to practice shock, the disorienting and sometimes traumatic identity crisis that often occurs during the…
Working with Unrealistic or Unshared Hope in the Counselling Session
ERIC Educational Resources Information Center
Larsen, Denise Joy; Stege, Rachel; Edey, Wendy; Ewasiw, Joan
2014-01-01
Hope has long been identified as an important therapeutic factor in counselling. Further, research evidence for the importance of hope to counselling practice and outcome is abundant. However, the field is only beginning to explicitly consider how hope can be effectively and intentionally practised. One of the most challenging dilemmas encountered…
Increasing Parents' Child Development Knowledge and Use of Effective Discipline.
ERIC Educational Resources Information Center
Handforth, K. Clare
Interviews with professionals, a literature search, and a parent survey indicated that parents of toddlers had a need for knowledge about child development. This lack of knowledge was believed to be one factor in the reported use of non-effective discipline techniques, with the linking factor identified as unrealistic expectations. For this…
More Optimism About Future Events with Relative Left Hemisphere Activation.
ERIC Educational Resources Information Center
Drake, Roger A.
Unrealistic personal optimism is the perception that undesirable events are less likely and desirable events are more likely to happen to oneself than they are to happen to other similar people. Three experiments were performed to study the relationships among personal optimism, perceived control, and selective activation of the cerebral…
Bayesian Analysis of Multilevel Probit Models for Data with Friendship Dependencies
ERIC Educational Resources Information Center
Koskinen, Johan; Stenberg, Sten-Ake
2012-01-01
When studying educational aspirations of adolescents, it is unrealistic to assume that the aspirations of pupils are independent of those of their friends. Considerable attention has also been given to the study of peer influence in the educational and behavioral literature. Typically, in empirical studies, the friendship networks have either been…
Causal Models with Unmeasured Variables: An Introduction to LISREL.
ERIC Educational Resources Information Center
Wolfle, Lee M.
Whenever one uses ordinary least squares regression, one is making an implicit assumption that all of the independent variables have been measured without error. Such an assumption is obviously unrealistic for most social data. One approach for estimating such regression models is to measure implied coefficients between latent variables for which…
Reconciling (Seemingly) Discrepant Findings: Implications for Practice and Future Research
ERIC Educational Resources Information Center
Bowman, Nicholas A.; Herzog, Serge
2011-01-01
Decades of research in survey methodology and psychology have yielded important insights about how to create effective and valid survey instruments. As Porter (in press) has argued convincingly, college student surveys often fall well short of these standards by placing unrealistic demands on students' memory and by assuming that students readily…
Making a Great First Impression
ERIC Educational Resources Information Center
Evenson, Renee
2007-01-01
Managers and business owners often base hiring decisions on first impressions. That is why it is so important to teach students to make a great first impression--before they go on that first job interview. Managers do not have unrealistic expectations, they just want to hire people who they believe can develop into valuable employees. A nice…
ERIC Educational Resources Information Center
Vosgerau, Joachim
2010-01-01
People appear to be unrealistically optimistic about their future prospects, as reflected by theory and research in the fields of psychology, organizational behavior, behavioral economics, and behavioral finance. Many real-world examples (e.g., consumer behavior during economic recessions), however, suggest that people are not always overly…
How the Myth of the Dumb Jock Becomes Fact: A Developmental View for Counselors.
ERIC Educational Resources Information Center
Nelson, Eileen S.
1983-01-01
Uses Erikson's five stages of psychosocial development to examine the socialization of athletes. Because athletic ability is so heavily rewarded by adults and peers, athletes may prematurely commit themselves to unrealistic objectives and aspirations. Concerned counselors can help athletes define realistic educational and career goals. (JAC)
The New WWW: Whatever, Whenever, Wherever
ERIC Educational Resources Information Center
March, Tom
2006-01-01
We are entering an age of instant media gratification, in which a "multimedia aura" will "accompany us wherever we go," writes Tom March. The New WWW, says March, offers us whatever we want, whenever and wherever we want it. The effect on our children may be unrealistic expectations, premature disillusionment, and unhappiness. To counterbalance…
Another Crisis Can We Count on the Reserves?
1994-05-16
the "marshmallow" of volunteerism is something that is too soft of a concept to be counted upon. It only makes sense to develop a more well-defined...unrealistic. Involuntary activation has proven to be a hollow concept in practice, and planners are reluctant to count on the "marshmallow" of
Debunking a Video on Youtube as an Authentic Research Experience
ERIC Educational Resources Information Center
Davidowsky, Philip; Rogers, Michael
2015-01-01
Students are exposed to a variety of unrealistic physical experiences seen in movies, video games, and short online videos. A popular classroom activity has students examine footage to identify what aspects of physics are correctly and incorrectly represented. Some of the physical phenomena pictured might be tricks or illusions made easier to…
Elder Abuse: What's a Clinician To Do?
ERIC Educational Resources Information Center
Reis, Bruce E.
Incidence rates are critically examined in light of varying definitions of what constitutes elder abuse. It is suggested that the clinician's position of mandatory reporting is an unrealistic response in many cases of elder abuse due to the lack of adequate support services for either the abuser or the elder. Outcome studies are used to support…
A Missing Link: People, Practice and Some Precarious Research!
ERIC Educational Resources Information Center
Higdon, Carolyn Wiles; Higdon, Lawrence W.
2004-01-01
The field of augmentative and alternative communication's (AAC) missing link is the discrepancy between what the research community identifies as needs and what the clinical community, including the AAC user, believes to be the AAC user's needs. An unrealistic picture of the AAC user occurs, developing a top-down effect of limited outcomes,…
Managing Relative Decline: An Analysis of Foreign Policy Alternatives for the United States
2015-02-17
ineffective and exorbitantly expensive. Americans are more likely to be killed by a plethora of domestic threats (police, car accidents, and poverty) ...decades with no sign of correction. It is unrealistic to expect that foreign policy will ever be divorced from domestic politics; thus, if not
Unrealistic Optimism in the Pursuit of Academic Success
ERIC Educational Resources Information Center
Lewine, Rich; Sommers, Alison A.
2016-01-01
Although the ability to evaluate one's own knowledge and performance is critical to learning, the correlation between students' self-evaluation and actual performance measures is modest at best. In this study we examine the effect of offering extra credit for students' accurate prediction (self-accuracy) of their performance on four exams in two…
Doing Harm in the Name of Protection: Menstruation as a Topic for Sex Education.
ERIC Educational Resources Information Center
Diorio, Joseph A.; Munro, Jennifer A.
2000-01-01
Pubertal changes in girls and boys are treated differently in New Zealand schools. Girls learn about menstruation in a scientific, bleak manner, getting an unrealistic picture of growing up. Boys receive positive information about exciting, powerful bodily changes. By protecting girls from problems associated with menstruation, schools risk…
NASA Astrophysics Data System (ADS)
Wilson, Colin J. N.; Seward, Terry M.; Allan, Aidan S. R.; Charlier, Bruce L. A.; Bello, Léa
2012-08-01
Trace concentrations of Ti in quartz are used to indicate the pressure and temperature conditions of crystallization in the 'TitaniQ' geothermobarometer of Thomas et al. (Contrib Miner Petrol 160:743-759, 2010). It utilises the partitioning of Ti into quartz as an indicator of the pressures and/or temperatures of crystal growth. For a given value of TiO2 activity in the system, if temperatures are inferred to ±20 °C, pressure is constrained to ±1 kbar and vice versa. There are significant contrasts, however, between the conclusions from TitaniQ and those for natural quartz (as well as other mineral phases) in volcanic rocks. Application of the TitaniQ model to quartz from the 27 ka Oruanui and 760 ka Bishop high-silica rhyolites, where the values of T, P and TiO2 activity are constrained by other means (Fe-Ti oxide equilibria, melt inclusion entrapment pressures in gas-saturated melts, melt and amphibole compositions), yields inconsistent results. If realistic values are given to any two of these three parameters, then the value of the third is wholly unrealistic. The model yields growth temperatures at or below the granite solidus, pressures in the lower crust or upper mantle, or TiO2 activities inconsistent with the mineralogical and chemical compositions of the magmas. CL imagery and measurements of Ti (and other elements) in quartz are of great value in showing the growth histories and changes in conditions experienced by crystals, but direct linkages to P, T conditions during crystal growth cannot be achieved.
The link between judgments of comparative risk and own risk: further evidence.
Gold, Ron S
2007-03-01
Individuals typically believe that they are less likely than the average person to experience negative events, a phenomenon termed "unrealistic optimism". The direct method of assessing unrealistic optimism employs a question of the form, "Compared with the average person, what is the chance that X will occur to you?". However, it has been proposed that responses to such a question (direct-estimates) are based essentially just on estimates that X will occur to the self (self-estimates). If this is so, any factors that affect one of these estimates should also affect the other. This prediction was tested in two experiments. In each, direct- and self-estimates for an unfamiliar health threat - homocysteine-related heart problems - were recorded. It was found that both types of estimate were affected in the same way by varying the stated probability of having unsafe levels of homocysteine (Study 1, N=149) and varying the stated probability that unsafe levels of homocysteine will lead to heart problems (Study 2, N=111). The results are consistent with the proposal that direct-estimates are constructed just from self-estimates.
Age differences in memory for meaningful and arbitrary associations: A memory retrieval account.
Amer, Tarek; Giovanello, Kelly S; Grady, Cheryl L; Hasher, Lynn
2018-02-01
Older adults typically show poor associative memory performance relative to younger adults. This age-related effect, however, is mediated by the meaningfulness of the materials used, such that age differences are minimized with the use of information that is consistent with prior knowledge. While this effect has been interpreted as facilitative learning through schematic support, the role of memory retrieval on this effect has yet to be explored. Using an associative memory paradigm that varied the extent of controlled retrieval for previously studied meaningful or arbitrary associations, older and younger adults in the present study retrieved realistic and unrealistic grocery item prices in a speeded, or in a slow, more control-based retrieval condition. There were no age differences in memory for realistic (meaningful) prices in either condition; however, younger adults showed better memory than older adults for unrealistic prices in the controlled retrieval condition only. These results suggest that age differences in memory for arbitrary associations can, at least partly, be accounted for by age reductions in strategic, controlled retrieval. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Global Assessment of Land Surface Temperature From Geostationary Satellites and Model Estimates
NASA Technical Reports Server (NTRS)
Reichle, Rolf H.; Liu, Q.; Minnis, P.; daSilva, A. M., Jr.; Palikonda, R.; Yost, C. R.
2012-01-01
Land surface (or 'skin') temperature (LST) lies at the heart of the surface energy balance and is a key variable in weather and climate models. In this research we compare two global and independent data sets: (i) LST retrievals from five geostationary satellites generated at the NASA Langley Research Center (LaRC) and (ii) LST estimates from the quasi-operational NASA GEOS-5 global modeling and assimilation system. The objective is to thoroughly understand both data sets and their systematic differences in preparation for the assimilation of the LaRC LST retrievals into GEOS-5. As expected, mean differences (MD) and root-mean-square differences (RMSD) between modeled and retrieved LST vary tremendously by region and time of day. Typical (absolute) MD values range from 1-3 K in Northern Hemisphere mid-latitude regions to near 10 K in regions where modeled clouds are unrealistic, for example in north-eastern Argentina, Uruguay, Paraguay, and southern Brazil. Typically, model estimates of LST are higher than satellite retrievals during the night and lower during the day. RMSD values range from 1-3 K during the night to 2-5 K during the day, but are larger over the 50-120 W longitude band where the LST retrievals are derived from the FY2E platform.
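The MD and RMSD statistics quoted above are straightforward to compute from matched model-retrieval pairs. A minimal sketch with invented nighttime LST values (not LaRC or GEOS-5 data), in which the model runs warm as the abstract describes for nighttime:

```python
import math

def mean_diff(model, retrieval):
    """Mean difference (MD), model minus retrieval, in kelvin."""
    return sum(m - r for m, r in zip(model, retrieval)) / len(model)

def rmsd(model, retrieval):
    """Root-mean-square difference (RMSD) between paired LST values, in kelvin."""
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(model, retrieval)) / len(model))

# Invented nighttime pairs (K): model estimates and satellite retrievals.
model_lst = [285.0, 286.5, 284.2, 287.1]
retrieved_lst = [283.5, 285.0, 283.0, 285.6]
print(round(mean_diff(model_lst, retrieved_lst), 2))  # positive MD: model warmer
print(round(rmsd(model_lst, retrieved_lst), 2))
```

A positive MD with a similar-sized RMSD, as here, indicates a nearly uniform warm bias rather than scatter.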
Improving SWAT for simulating water and carbon fluxes of forest ecosystems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qichun; Zhang, Xuesong
2016-11-01
As a widely used watershed model for assessing impacts of anthropogenic and natural disturbances on water quantity and quality, the Soil and Water Assessment Tool (SWAT) has not been extensively tested in simulating water and carbon fluxes of forest ecosystems. Here, we examine SWAT simulations of evapotranspiration (ET), net primary productivity (NPP), net ecosystem exchange (NEE), and plant biomass at ten AmeriFlux forest sites across the U.S. We identify unrealistic radiation use efficiency (Bio_E), large leaf to biomass fraction (Bio_LEAF), and missing phosphorus supply from parent material weathering as the primary causes for the inadequate performance of the default SWAT model in simulating forest dynamics. By further revising the relevant parameters and processes, SWAT’s performance is substantially improved. Based on the comparison between the improved SWAT simulations and flux tower observations, we discuss future research directions for further enhancing model parameterization and representation of water and carbon cycling for forests.
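Radiation-use efficiency (Bio_E) enters a SWAT-style plant growth routine roughly as follows: daily biomass gain is RUE times the photosynthetically active radiation intercepted by the canopy, with interception following Beer's law on leaf area index. A simplified sketch, not SWAT's exact code; the parameter values and the extinction coefficient are illustrative:

```python
import math

def daily_biomass_gain(rue, par, lai, k=0.65):
    """Potential daily biomass gain: radiation-use efficiency (rue) times
    PAR intercepted by the canopy, where interception follows Beer's law
    on leaf area index (lai). Units and k are illustrative."""
    intercepted = par * (1.0 - math.exp(-k * lai))
    return rue * intercepted

# Biomass gain scales linearly with RUE, which is why an unrealistic
# Bio_E propagates directly into NPP and biomass errors.
print(round(daily_biomass_gain(30.0, 10.0, 3.0), 1))
```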
Leading through partnering: from bedside to community.
Crockett, Anita B
2004-01-01
Partnering as a means of leading requires a particular focus and has particular characteristics. It is unrealistic to think that every person that participates in a partnership would have honed the skills to provide guidance, strength, and support for the process. It is not likely that every partner understands the collaborative process well enough to engage all partners with tact, openness, fairness, and critical, but respectful, reflection. The characteristics depicted in the Leading Through Partnering dome reflect those leaders who have integrated partnering into a coherent framework of action. Stern (2003), in describing her grounded theory research on "attentive partnering" among colleagues, determined that conditions for partnering seem to require the presence of "determined, persuasive leaders who foster growth-enhancing collegial relationships" (pg. 271). The concept of partnering continues to take hold in many forms. Leading Through Partnering as a variant form, whether occurring on a small scale at the bedside or a large scale in the community, is likely to be more than just a passing trend.
Self-consistent perturbation theory for two dimensional twisted bilayers
NASA Astrophysics Data System (ADS)
Shirodkar, Sharmila N.; Tritsaris, Georgios A.; Kaxiras, Efthimios
Theoretical modeling and ab-initio simulations of two dimensional heterostructures with arbitrary angles of rotation between layers involve unrealistically large and expensive calculations. To overcome this shortcoming, we develop a methodology for weakly interacting heterostructures that treats the effect of one layer on the other as a perturbation and restricts the calculations to their primitive cells, thus avoiding computationally expensive supercells. We start by approximating the interaction potential between the twisted bilayers to that of a hypothetical configuration (viz. ideally stacked untwisted layers), which produces band structures in reasonable agreement with full-scale ab-initio calculations for commensurate and twisted bilayers of graphene (Gr) and Gr/hexagonal boron nitride (h-BN) heterostructures. We then self-consistently calculate the charge density and, hence, the interaction potential of the heterostructures. In this work, we test our model for bilayers of various combinations of Gr, h-BN and transition metal dichalcogenides, and discuss the advantages and shortcomings of the self-consistently calculated interaction potential. Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA.
A Low-Cost and Energy-Efficient Multiprocessor System-on-Chip for UWB MAC Layer
NASA Astrophysics Data System (ADS)
Xiao, Hao; Isshiki, Tsuyoshi; Khan, Arif Ullah; Li, Dongju; Kunieda, Hiroaki; Nakase, Yuko; Kimura, Sadahiro
Ultra-wideband (UWB) technology has attracted much attention recently due to its high data rate and low emission power. Its media access control (MAC) protocol, WiMedia MAC, promises extensive support for high-speed, high-quality wireless communication. These benefits, however, come with a large computational load that implementations based on the traditional uniprocessor architecture struggle to deliver at the required performance. The constrained cost and power budget, on the other hand, makes commercial multiprocessor solutions unrealistic. In this paper, a low-cost and energy-efficient multiprocessor system-on-chip (MPSoC), which tackles at once the aspects of system design, software migration and hardware architecture, is presented for the implementation of the UWB MAC layer. Experimental results show that the proposed MPSoC, based on four simple RISC processors and a shared-memory infrastructure, achieves up to 45% performance improvement and 65% power saving, while taking 15% less area than the uniprocessor implementation.
Groundwater connectivity of upland-embedded wetlands in the Prairie Pothole Region
Neff, Brian; Rosenberry, Donald O.
2018-01-01
Groundwater connections from upland-embedded wetlands to downstream waterbodies remain poorly understood. In principle, water from upland-embedded wetlands situated high in a landscape should flow via groundwater to waterbodies situated lower in the landscape. However, the degree of groundwater connectivity varies across systems due to factors such as geologic setting, hydrologic conditions, and topography. We use numerical models to evaluate the conditions suitable for groundwater connectivity between upland-embedded wetlands and downstream waterbodies in the prairie pothole region of North Dakota (USA). Results show groundwater connectivity between upland-embedded wetlands and other waterbodies is restricted when these wetlands are surrounded by a mounding water table. However, connectivity exists among adjacent upland-embedded wetlands where water–table mounds do not form. In addition, the presence of sand layers greatly facilitates groundwater connectivity of upland-embedded wetlands. Anisotropy can facilitate connectivity via groundwater flow, but only if it becomes unrealistically large. These findings help consolidate previously divergent views on the significance of local and regional groundwater flow in the prairie pothole region.
Unexpected but Incidental Positive Outcomes Predict Real-World Gambling.
Otto, A Ross; Fleming, Stephen M; Glimcher, Paul W
2016-03-01
Positive mood can affect a person's tendency to gamble, possibly because positive mood fosters unrealistic optimism. At the same time, unexpected positive outcomes, often called prediction errors, influence mood. However, a linkage between positive prediction errors-the difference between expected and obtained outcomes-and consequent risk taking has yet to be demonstrated. Using a large data set of New York City lottery gambling and a model inspired by computational accounts of reward learning, we found that people gamble more when incidental outcomes in the environment (e.g., local sporting events and sunshine) are better than expected. When local sports teams performed better than expected, or a sunny day followed a streak of cloudy days, residents gambled more. The observed relationship between prediction errors and gambling was ubiquitous across the city's socioeconomically diverse neighborhoods and was specific to sports and weather events occurring locally in New York City. Our results suggest that unexpected but incidental positive outcomes influence risk taking. © The Author(s) 2016.
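The prediction-error quantity at the core of such reward-learning accounts is simply the gap between obtained and expected outcome, with expectations updated incrementally by a delta rule. A toy sketch with an assumed learning rate and an invented binary sunshine series (a sunny day after a cloudy streak yields a large positive prediction error):

```python
def update_expectation(expected, outcome, alpha=0.3):
    """Delta-rule update: shift the expectation toward the outcome by a
    fraction (alpha) of the prediction error."""
    prediction_error = outcome - expected
    return expected + alpha * prediction_error, prediction_error

# Invented series: cloudy days (0) followed by a sunny day (1).
expected, errors = 0.5, []
for outcome in [0, 0, 0, 1]:
    expected, pe = update_expectation(expected, outcome)
    errors.append(pe)
print(round(errors[-1], 4))  # large positive PE on the sunny day
```

The cloudy streak drives the expectation down, so the same sunny outcome produces a bigger positive prediction error than it would have at the start.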
LIQHYSMES—size, loss and cost considerations for the SMES—a conceptual analysis
NASA Astrophysics Data System (ADS)
Sander, Michael; Neumann, Holger
2011-10-01
A new energy storage concept for variable renewable energy, LIQHYSMES, has been proposed, which combines the use of liquid hydrogen (LH2) with superconducting magnetic energy storage (SMES). LH2, with its high volumetric energy density and, compared with compressed hydrogen, increased operational safety, is the prime energy carrier for large scale stationary energy storage. But balancing load or supply fluctuations with hydrogen alone is unrealistic due to the response times of the flow control. To operate the hydrogen part more steadily, additional short-term electrical energy storage is needed. For this purpose a SMES based on coated conductors or magnesium diboride (MgB2), operated in the LH2 bath, is proposed. Different solenoidal and toroidal SMES designs for the 10 GJ range are compared in terms of size and ramping losses. Cost targets for different power levels and supply periods are addressed, taking into account current developments in competing short-term storage devices such as super-capacitors, batteries and flywheels.
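For scale, the energy stored in a SMES coil follows E = ½LI². A back-of-the-envelope sketch for the 10 GJ range; the 20 kA operating current is an illustrative assumption, not a design value from the paper:

```python
def smes_energy(L, I):
    """Magnetic energy stored in a coil: E = 0.5 * L * I**2, in joules."""
    return 0.5 * L * I * I

# Assumed operating current of 20 kA for a 10 GJ target; the required
# inductance follows from L = 2E / I**2.
E_target = 10e9                       # 10 GJ
I = 20e3                              # 20 kA
L = 2 * E_target / I ** 2
print(L)                              # required inductance in henries
print(smes_energy(L, I) == E_target)
```

The quadratic dependence on current is why the size and ramping-loss comparison between solenoidal and toroidal designs matters so much at this scale.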
Dealing with Processing Chapter 10 Files from Multiple Vendors
NASA Technical Reports Server (NTRS)
Knudtson, Kevin Mark
2011-01-01
This presentation discusses the experiences of the NASA Dryden Flight Research Center's (DFRC) Western Aeronautical Test Range (WATR) in dealing with the problems encountered while performing post-flight data processing, using the WATR's data collection/processing system, on Chapter 10 files from different Chapter 10 recorders. The transition to Chapter 10 recorders has brought with it an assortment of issues that must be addressed: the ambiguities of language in the Chapter 10 standard, the unrealistic near-term expectations of the Chapter 10 standard, the incompatibility of data products generated from Chapter 10 recorders, and the unavailability of mature Chapter 10 applications. Some of these issues properly belong to the users of Chapter 10 recorders, some to the manufacturers, and some to the flight test community at large. The goal of this presentation is to share the WATR's lessons learned in processing data products from various Chapter 10 recorder vendors. The WATR could also benefit greatly from open-forum lessons-learned discussions with other members of the flight test community.
Schoenfisch, Ashley L; Lipscomb, Hester; Sinyai, Clayton; Adams, Darrin
2017-01-01
Despite the size and breadth of OSHA's Outreach Training program for construction, information on its impact on work-related injury rates is limited. In a 9-year dynamic cohort of 17,106 union carpenters in Washington State, the effectiveness of OSHA Outreach Training on workers' compensation claims rate was explored. Injury rates were calculated by training status overall and by carpenters' demographic and work characteristics using Poisson regression. OSHA Outreach Training resulted in a 13% non-significant reduction in injury claims rates overall. The protective effect was more pronounced for carpenters in their apprenticeship years, drywall installers, and with increasing time since training. In line with these observed effects and prior research, it is unrealistic to expect OSHA Outreach Training alone to have large effects on union construction workers' injury rates. Standard construction industry practice should include hazard awareness and protection training, coupled with more efficient approaches to injury control. Am. J. Ind. Med. 60:45-57, 2017. © 2016 Wiley Periodicals, Inc.
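The rate comparison underlying such an analysis reduces to claims per person-time in each exposure group. A sketch with invented counts chosen to reproduce a 13% rate reduction; the study itself used Poisson regression with covariate adjustment, which this deliberately omits:

```python
def claims_rate(claims, person_years):
    """Injury claims per 100 worker-years."""
    return 100.0 * claims / person_years

def rate_ratio(claims_a, py_a, claims_b, py_b):
    """Rate ratio of group A vs. group B; values below 1 mean fewer claims."""
    return claims_rate(claims_a, py_a) / claims_rate(claims_b, py_b)

# Invented counts for trained vs. untrained carpenter person-years.
rr = rate_ratio(435, 5000, 600, 6000)
print(round(rr, 2))                # 0.87: rate ratio
print(round(100 * (1 - rr)))       # 13: percent reduction
```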
Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization
NASA Technical Reports Server (NTRS)
Jones, James Patton; Nitzberg, Bill
1999-01-01
The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
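The gap between naive FIFO and backfilling can be illustrated with a tiny event-driven simulation. The sketch below implements a strict-FIFO policy (the queue head blocks when it does not fit) and an EASY-style backfill variant that lets a later job start early only if it finishes before the head's reserved start time. The jobs and machine size are invented, and real schedulers handle arrivals, runtime estimates, and reservations far more carefully:

```python
def schedule(jobs, P, backfill=False):
    """Makespan of FIFO scheduling on P processors; all jobs arrive at t=0.
    jobs: (width, runtime) pairs in queue order, with width <= P.
    With backfill=True, a later job may start early if it fits in the idle
    processors AND finishes before the queue head's reserved start time."""
    queue = list(jobs)
    running = []                     # [end_time, width] per running job
    free, t = P, 0.0
    while queue or running:
        started = True
        while started and queue:
            started = False
            head_w, head_r = queue[0]
            if head_w <= free:       # head fits: start it
                free -= head_w
                running.append([t + head_r, head_w])
                queue.pop(0)
                started = True
            elif backfill and running:
                # shadow time: earliest moment enough processors free up
                need, acc, shadow = head_w - free, 0, None
                for end, w in sorted(running):
                    acc += w
                    if acc >= need:
                        shadow = end
                        break
                # backfill the first later job that fits and ends in time
                for i, (w, r) in enumerate(queue[1:], 1):
                    if w <= free and t + r <= shadow:
                        free -= w
                        running.append([t + r, w])
                        queue.pop(i)
                        started = True
                        break
        running.sort()
        t = running[0][0]            # advance to the next completion
        while running and running[0][0] <= t:
            free += running.pop(0)[1]
    return t

def utilization(jobs, P, backfill=False):
    work = sum(w * r for w, r in jobs)
    return work / (P * schedule(jobs, P, backfill))

# Invented workload on a 4-processor machine: (width, runtime) pairs.
jobs = [(2, 4), (4, 2), (1, 4), (1, 2)]
print(round(utilization(jobs, 4, backfill=False), 2))  # naive FIFO
print(round(utilization(jobs, 4, backfill=True), 2))   # with backfilling
```

On this toy workload, backfilling fills the processors left idle behind the blocked wide job, which is the mechanism behind the roughly 15-percentage-point utilization gain the abstract reports.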
The Medawar Lecture 2004 The truth about science
Lipton, Peter
2005-01-01
The attitudes of scientists towards the philosophy of science is mixed and includes considerable indifference and some hostility. This may be due in part to unrealistic expectation and to misunderstanding. Philosophy is unlikely directly to improve scientific practices, but scientists may find the attempt to explain how science works and what it achieves of considerable interest nevertheless. The present state of the philosophy of science is illustrated by recent work on the ‘truth hypothesis’, according to which, science is generating increasingly accurate representations of a mind-independent and largely unobservable world. According to Karl Popper, although truth is the aim of science, it is impossible to justify the truth hypothesis. According to Thomas Kuhn, the truth hypothesis is false, because scientists can only describe a world that is partially constituted by their own theories and hence not mind-independent. The failure of past scientific theories has been used to argue against the truth hypothesis; the success of the best current theories has been used to argue for it. Neither argument is sound. PMID:16147521
Han, Seung Seog; Park, Gyeong Hun; Lim, Woohyung; Kim, Myoung Shin; Na, Jung Im; Park, Ilwoo; Chang, Sung Eun
2018-01-01
Although there have been reports of the successful diagnosis of skin disorders using deep learning, unrealistically large clinical image datasets are required for artificial intelligence (AI) training. We created datasets of standardized nail images using a region-based convolutional neural network (R-CNN) trained to distinguish the nail from the background. We used R-CNN to generate training datasets of 49,567 images, which we then used to fine-tune the ResNet-152 and VGG-19 models. The validation datasets comprised 100 and 194 images from Inje University (B1 and B2 datasets, respectively), 125 images from Hallym University (C dataset), and 939 images from Seoul National University (D dataset). The AI (ensemble model; ResNet-152 + VGG-19 + feedforward neural networks) results showed test sensitivity/specificity/area under the curve values of (96.0 / 94.7 / 0.98), (82.7 / 96.7 / 0.95), (92.3 / 79.3 / 0.93), and (87.7 / 69.3 / 0.82) for the B1, B2, C, and D datasets, respectively. With a combination of the B1 and C datasets, the AI Youden index was significantly (p = 0.01) higher than that of 42 dermatologists doing the same assessment manually. For the B1+C and B2+D dataset combinations, almost none of the dermatologists performed as well as the AI. By training with a dataset comprising 49,567 images, we achieved a diagnostic accuracy for onychomycosis using deep learning that was superior to that of most of the dermatologists who participated in this study.
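The Youden index used to compare the AI ensemble with the dermatologists combines sensitivity and specificity into a single number. In the percent units of the abstract:

```python
def youden_index(sensitivity, specificity):
    """Youden's J in percent units: J = sensitivity + specificity - 100."""
    return sensitivity + specificity - 100.0

# Operating point reported for the B1 dataset (96.0 sensitivity,
# 94.7 specificity):
print(round(youden_index(96.0, 94.7), 1))  # 90.7
```

A classifier no better than chance scores 0; a perfect one scores 100, so the index penalizes trading one error type for the other.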
Roemelt, Michael; Krewald, Vera; Pantazis, Dimitrios A
2018-01-09
The accurate description of magnetic level energetics in oligonuclear exchange-coupled transition-metal complexes remains a formidable challenge for quantum chemistry. The density matrix renormalization group (DMRG) brings such systems for the first time easily within reach of multireference wave function methods by enabling the use of unprecedentedly large active spaces. But does this guarantee systematic improvement in predictive ability and, if so, under which conditions? We identify operational parameters in the use of DMRG using as a test system an experimentally characterized mixed-valence bis-μ-oxo/μ-acetato Mn(III,IV) dimer, a model for the oxygen-evolving complex of photosystem II. A complete active space of all metal 3d and bridge 2p orbitals proved to be the smallest meaningful starting point; this is readily accessible with DMRG and greatly improves on the unrealistic metal-only configuration interaction or complete active space self-consistent field (CASSCF) values. Orbital optimization is critical for stabilizing the antiferromagnetic state, while a state-averaged approach over all spin states involved is required to avoid artificial deviations from isotropic behavior that are associated with state-specific calculations. Selective inclusion of localized orbital subspaces enables probing the relative contributions of different ligands and distinct superexchange pathways. Overall, however, full-valence DMRG-CASSCF calculations fall short of providing a quantitative description of the exchange coupling owing to insufficient recovery of dynamic correlation. Quantitatively accurate results can be achieved through a DMRG implementation of second order N-electron valence perturbation theory (NEVPT2) in conjunction with a full-valence metal and ligand active space. Perspectives for future applications of DMRG-CASSCF/NEVPT2 to exchange coupling in oligonuclear clusters are discussed.
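The "magnetic level energetics" in question are the spin-state ladder of the exchange-coupled dimer. Under an isotropic Heisenberg Hamiltonian H = −2J S1·S2, the levels follow the Landé interval rule. A sketch for an s1 = 2, s2 = 3/2 Mn(III,IV) pair; the antiferromagnetic J used here is illustrative, not a value from the paper:

```python
def heisenberg_levels(s1, s2, J):
    """Spin-ladder energies for the isotropic Heisenberg Hamiltonian
    H = -2 J S1.S2 (Lande interval rule):
    E(S) = -J [S(S+1) - s1(s1+1) - s2(s2+1)], for S = |s1-s2| .. s1+s2."""
    levels = {}
    S = abs(s1 - s2)
    while S <= s1 + s2 + 1e-9:
        levels[S] = -J * (S * (S + 1) - s1 * (s1 + 1) - s2 * (s2 + 1))
        S += 1.0
    return levels

# Mn(III,IV) dimer: s1 = 2, s2 = 3/2; with J < 0 (antiferromagnetic in
# this sign convention) the S = 1/2 state lies lowest.
levels = heisenberg_levels(2.0, 1.5, -110.0)  # J in cm^-1, illustrative
print(min(levels, key=levels.get))
```

Fitting a computed ladder of this form to the multireference energies is how an exchange coupling constant is extracted and compared with experiment.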
A review of ADM1 extensions, applications, and analysis: 2002-2005.
Batstone, D J; Keller, J; Steyer, J P
2006-01-01
Since publication of the Scientific and Technical Report (STR) describing the ADM1, the model has been extensively used, and analysed in both academic and practical applications. Adoption of the ADM1 in popular systems analysis tools such as the new wastewater benchmark (BSM2), and its use as a virtual industrial system can stimulate modelling of anaerobic processes by researchers and practitioners outside the core expertise of anaerobic processes. It has been used as a default structural element that allows researchers to concentrate on new extensions such as sulfate reduction, and new applications such as distributed parameter modelling of biofilms. The key limitations for anaerobic modelling originally identified in the STR were: (i) regulation of products from glucose fermentation, (ii) parameter values, and variability, and (iii) specific extensions. Parameter analysis has been widespread, and some detailed extensions have been developed (e.g., sulfate reduction). A verified extension that describes regulation of products from glucose fermentation is still limited, though there are promising fundamental approaches. This is a critical issue, given the current interest in renewable hydrogen production from carbohydrate-type waste. Critical analysis of the model has mainly focused on model structure reduction, hydrogen inhibition functions, and the default parameter set recommended in the STR. This default parameter set has largely been verified as a reasonable compromise, especially for wastewater sludge digestion. One criticism of note is that the ADM1 stoichiometry focuses on catabolism rather than anabolism. This means that inorganic carbon can be used unrealistically as a carbon source during some anabolic reactions. Advances and novel applications have also been made in the present issue, which focuses on the ADM1. These papers also explore a number of novel areas not originally envisaged in this review.
A 60 yr record of atmospheric carbon monoxide reconstructed from Greenland firn air
NASA Astrophysics Data System (ADS)
Petrenko, V. V.; Martinerie, P.; Novelli, P.; Etheridge, D. M.; Levin, I.; Wang, Z.; Blunier, T.; Chappellaz, J.; Kaiser, J.; Lang, P.; Steele, L. P.; Hammer, S.; Mak, J.; Langenfelds, R. L.; Schwander, J.; Severinghaus, J. P.; Witrant, E.; Petron, G.; Battle, M. O.; Forster, G.; Sturges, W. T.; Lamarque, J.-F.; Steffen, K.; White, J. W. C.
2013-08-01
We present the first reconstruction of the Northern Hemisphere (NH) high latitude atmospheric carbon monoxide (CO) mole fraction from Greenland firn air. Firn air samples were collected at three deep ice core sites in Greenland (NGRIP in 2001, Summit in 2006 and NEEM in 2008). CO records from the three sites agree well with each other as well as with recent atmospheric measurements, indicating that CO is well preserved in the firn at these sites. CO atmospheric history was reconstructed back to the year 1950 from the measurements using a combination of two forward models of gas transport in firn and an inverse model. The reconstructed history suggests that Arctic CO in 1950 was 140-150 nmol mol-1, which is higher than today's values. CO mole fractions rose by 10-15 nmol mol-1 from 1950 to the 1970s and peaked in the 1970s or early 1980s, followed by a ≈ 30 nmol mol-1 decline to today's levels. We compare the CO history with the atmospheric histories of methane, light hydrocarbons, molecular hydrogen, CO stable isotopes and hydroxyl radicals (OH), as well as with published CO emission inventories and results of a historical run from a chemistry-transport model. We find that the reconstructed Greenland CO history cannot be reconciled with available emission inventories unless unrealistically large changes in OH are assumed. We argue that the available CO emission inventories strongly underestimate historical NH emissions, and fail to capture the emission decline starting in the late 1970s, which was most likely due to reduced emissions from road transportation in North America and Europe.
Modeling riverine nitrate export from an East-Central Illinois watershed using SWAT.
Hu, X; McIsaac, G F; David, M B; Louwers, C A L
2007-01-01
Reliable water quality models are needed to forecast the water quality consequences of different agricultural nutrient management scenarios. In this study, the Soil and Water Assessment Tool (SWAT), version 2000, was applied to simulate streamflow, riverine nitrate (NO(3)) export, crop yield, and watershed nitrogen (N) budgets in the upper Embarras River (UER) watershed in east-central Illinois, which has extensive maize-soybean cultivation, large N fertilizer input, and extensive tile drainage. During the calibration (1994-2002) and validation (1985-1993) periods, SWAT simulated monthly and annual stream flows with Nash-Sutcliffe coefficients (E) ranging from 0.67 to 0.94 and R(2) from 0.75 to 0.95. For monthly and annual NO(3) loads, E ranged from -0.16 to 0.45 and R(2) from 0.36 to 0.74. Annual maize and soybean yields were simulated with relative errors ranging from -10 to 6%. The model was then used to predict the changes in NO(3) output with N fertilizer application rates 10 to 50% lower than original application rates in UER. The calibrated SWAT predicted a 10 to 43% decrease in NO(3) export from UER and a 6 to 38% reduction in maize yield in response to the reduction in N fertilizer. The SWAT model markedly overestimated NO(3) export during major wet periods. Moreover, SWAT estimated soybean N fixation rates considerably greater than literature values, and some simulated changes in the N cycle in response to fertilizer reduction seemed to be unrealistic. Improving these aspects of SWAT could lead to more reliable predictions in the water quality outcomes of nutrient management practices in tile-drained watersheds.
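The Nash-Sutcliffe coefficient E reported for streamflow and nitrate loads measures how much better the simulation is than simply predicting the observed mean. A sketch with invented monthly loads, not data from the UER watershed:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency E: 1 is a perfect fit; E <= 0 means the
    simulation is no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Invented monthly nitrate loads (kg N/ha).
obs = [2.0, 5.0, 9.0, 4.0, 1.0]
sim = [3.0, 4.0, 6.0, 5.0, 2.0]
print(round(nash_sutcliffe(obs, sim), 3))
```

Because ss_tot is dominated by wet-period peaks, underpredicting a single large load can drive E negative, which is consistent with the low E values for NO(3) during major wet periods.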
Compaction behavior of surrogate degraded emplaced WIPP waste.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broome, Scott Thomas; Bronowski, David R.; Kuthakun, Souvanny James
The present study results are focused on laboratory testing of surrogate waste materials. The surrogate wastes correspond to a conservative estimate of degraded Waste Isolation Pilot Plant (WIPP) containers and TRU waste materials at the end of the 10,000 year regulatory period. Testing consists of hydrostatic, triaxial, and uniaxial strain tests performed on surrogate waste recipes that were previously developed by Hansen et al. (1997). These recipes can be divided into materials that simulate 50% and 100% degraded waste by weight. The percent degradation indicates the anticipated amount of iron corrosion, as well as the decomposition of cellulosics, plastics, and rubbers (CPR). Axial, lateral, and volumetric strain and axial, lateral, and pore stress measurements were made. Two unique testing techniques were developed during the course of the experimental program. The first involves the use of dilatometry to measure sample volumetric strain under a hydrostatic condition. Bulk moduli of the samples measured using this technique were consistent with those measured using more conventional methods. The second technique involved performing triaxial tests under lateral strain control. By limiting the lateral strain to zero by controlling the applied confining pressure while loading the specimen axially in compression, one can maintain a right-circular cylindrical geometry even under large deformations. This technique is preferred over standard triaxial testing methods, which result in inhomogeneous deformation, or "barreling." Manifestations of the inhomogeneous deformation included non-uniform stress states, as well as unrealistic Poisson's ratios (> 0.5) or ratios that vary significantly along the length of the specimen. Zero lateral strain controlled tests yield a more uniform stress state, and admissible and uniform values of Poisson's ratio.
NASA Astrophysics Data System (ADS)
Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly
2016-03-01
This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems and is often applied to signal-location-known cases. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low-contrast objects, 3 mm (14 HU), 5 mm (7 HU), 7 mm (5 HU) and 10 mm (3 HU), was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for the CHO (AUC) and EFROC (AFE) were plotted with respect to different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.
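Since the AFE metric is estimated non-parametrically from model-observer confidence scores, its basic building block can be illustrated with a rank-based AUC estimate over signal-present and signal-absent scores. This is a generic sketch with made-up scores, not the study's estimator or data:

```python
import numpy as np

def auc_mann_whitney(signal_scores, noise_scores):
    """Non-parametric AUC estimate: the fraction of (signal, noise)
    score pairs in which the signal-present image receives the higher
    confidence score (ties count 1/2)."""
    s = np.asarray(signal_scores, dtype=float)
    n = np.asarray(noise_scores, dtype=float)
    greater = (s[:, None] > n[None, :]).sum()
    ties = (s[:, None] == n[None, :]).sum()
    return (greater + 0.5 * ties) / (s.size * n.size)

# Illustrative confidence scores from a hypothetical model observer:
auc = auc_mann_whitney([2.0, 1.5, 3.0], [1.0, 1.5, 0.5])  # 8.5/9 ~ 0.944
```

An AUC of 0.5 corresponds to chance-level detection and 1.0 to perfect separation; location-unknown FROC-type metrics build on the same ranking of reported confidence scores.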
Van Geel, Paul J; Murray, Kathleen E
2015-12-01
Twelve instrument bundles were placed within two waste profiles as waste was placed in an operating landfill in Ste. Sophie, Quebec, Canada. The settlement data were simulated using a three-component model to account for primary or instantaneous compression, secondary compression or mechanical creep, and biodegradation-induced settlement. The regressed model parameters from the first waste layer were able to predict the settlement of the remaining four waste layers with good agreement. The model parameters were compared to values published in the literature. A municipal solid waste (MSW) landfill scenario referenced in the literature was used to illustrate how the parameter values from the different studies predicted settlement. The parameters determined in this study and other studies with total waste heights between 15 and 60 m provided similar estimates of total settlement in the long term, while the settlement rates and relative magnitudes of the three components varied. The parameters determined based on studies with total waste heights less than 15 m resulted in larger secondary compression indices and lower biodegradation-induced settlements. When these were applied to an MSW landfill scenario with a total waste height of 30 m, the settlement was overestimated and provided unrealistic values. This study concludes that more field studies are needed to measure waste settlement during the filling stage of landfill operations, and more field data are needed to assess different settlement models and their respective parameters.
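A common functional form for such a three-component settlement model combines an instantaneous primary strain, log-time mechanical creep, and first-order biodegradation-induced settlement. This sketch uses that generic form with illustrative parameter values; it is not necessarily the exact model or the regressed Ste. Sophie parameters:

```python
import math

def settlement_strain(t_days, eps_primary=0.10, c_alpha=0.03, t_ref=1.0,
                      eps_bio=0.08, k_bio=1e-3):
    """Total settlement strain at time t (days after placement):
    - eps_primary: instantaneous (primary) compression strain
    - c_alpha * log10(t/t_ref): secondary compression (mechanical creep)
    - eps_bio * (1 - exp(-k_bio * t)): biodegradation-induced settlement
    All parameter values here are illustrative, not field-regressed."""
    creep = c_alpha * math.log10(max(t_days, t_ref) / t_ref)
    bio = eps_bio * (1.0 - math.exp(-k_bio * t_days))
    return eps_primary + creep + bio
```

The long-term total is dominated by the primary term plus the full biodegradation strain, while the creep term controls the rate, which matches the abstract's point that different parameter sets can agree on total settlement yet disagree on rates and component magnitudes.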
Brumbaugh, William G.; Besser, John M.; Ingersoll, Christopher G.; May, Thomas W.; Ivey, Chris D.; Schlekat, Christian E.; Garman, Emily R.
2013-01-01
Two spiking methods were compared and nickel (Ni) partitioning was evaluated during a series of toxicity tests with 8 different freshwater sediments having a range of physicochemical characteristics. A 2-step spiking approach with immediate pH adjustment by addition of NaOH at a 2:1 molar ratio to the spiked Ni was effective in producing consistent pH and other chemical characteristics across a range of Ni spiking levels. When Ni was spiked into sediment having a high acid-volatile sulfide and organic matter content, a total equilibration period of at least 10 wk was needed to stabilize Ni partitioning. However, the highest spiking levels evidently exceeded sediment binding capacities; therefore, a 7-d equilibration in toxicity test chambers and 8 volume-additions/d of aerobic overlying water were used to avoid unrealistic Ni partitioning during toxicity testing. The 7-d pretest equilibration allowed excess spiked Ni and other ions from pH adjustment to diffuse from sediment porewater and promoted development of an environmentally relevant, 0.5- to 1-cm oxic/suboxic sediment layer in the test chambers. Among the 8 different spiked sediments, the logarithm of sediment/porewater distribution coefficient values (log Kd) for Ni during the toxicity tests ranged from 3.5 to 4.5. These Kd values closely match the range of values reported for various field Ni-contaminated sediments, indicating that testing conditions with our spiked sediments were environmentally realistic.
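The distribution coefficient quoted above is simply the ratio of the sediment concentration to the porewater concentration; a minimal sketch (the concentrations below are illustrative, not the study's measurements):

```python
import math

def log_kd(c_sediment_mg_per_kg, c_porewater_mg_per_L):
    """log10 of the sediment/porewater distribution coefficient
    Kd = C_sediment / C_porewater, in L/kg. Illustrative only."""
    return math.log10(c_sediment_mg_per_kg / c_porewater_mg_per_L)

# e.g. 10000 mg/kg in sediment vs 1 mg/L in porewater gives log Kd = 4.0,
# inside the 3.5-4.5 range reported in the abstract.
example = log_kd(10000.0, 1.0)
```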
Song, Yoon S; Koontz, John L; Juskelis, Rima O; Zhao, Yang
2013-01-01
The migration of low molecular weight organic compounds through polyethylene terephthalate (PET) films was determined by using a custom permeation cell assembly. Fatty food simulant (Miglyol 812) was added to the receptor chamber, while the donor chamber was filled with 1% and 10% (v/v) migrant compounds spiked in simulant. The permeation cell was maintained at 40°C, 66°C, 100°C or 121°C for up to 25 days of polymer film exposure time. Migrants in Miglyol were directly quantified without a liquid-liquid extraction step by headspace-GC-MS analysis. Experimental diffusion coefficients (DP) of toluene, benzyl alcohol, ethyl butyrate and methyl salicylate through PET film were determined. Results from Limm's diffusion model showed that the predicted DP values for PET were all greater than the experimental values. DP values predicted by Piringer's diffusion model were also greater than those determined experimentally at 66°C, 100°C and 121°C. However, Piringer's model led to the underestimation of benzyl alcohol (A'P = 3.7) and methyl salicylate (A'P = 4.0) diffusion at 40°C with its revised "upper-bound" A'P value of 3.1 at temperatures below the glass transition temperature (Tg) of PET (<70°C). This implies that input parameters of Piringer's model may need to be revised to ensure a margin of safety for consumers. On the other hand, at temperatures greater than the Tg, both models appear too conservative and unrealistic. The highest estimated A'P value from Piringer's model was 2.6 for methyl salicylate, which was much lower than the "upper-bound" A'P value of 6.4 for PET. Therefore, it may be necessary to further refine "upper-bound" A'P values for PET so that Piringer's model does not significantly underestimate or overestimate the migration of organic compounds depending on the temperature condition of the food contact material.
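For reference, Piringer's model estimates the polymer diffusion coefficient from the migrant's relative molecular mass and the temperature. The commonly quoted form is given below; the numerical constants are those usually published for this model and should be treated as indicative rather than authoritative here:

```latex
D_P \;\approx\; 10^{4}\,
\exp\!\left( A_P' \;-\; 0.1351\,M_r^{2/3} \;+\; 0.003\,M_r \;-\; \frac{10454}{T} \right)
\ \mathrm{cm^2\,s^{-1}},
\qquad
A_P' \;=\; A_P \;-\; \frac{\tau}{T},
```

where $M_r$ is the migrant's relative molecular mass, $T$ the absolute temperature, and $A_P'$ the polymer-specific "upper-bound" parameter discussed in the abstract, with $\tau$ capturing a polymer-specific activation-energy contribution. Larger $A_P'$ yields faster predicted diffusion, which is why revising the "upper-bound" $A_P'$ directly shifts the model between under- and overestimation.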
ERIC Educational Resources Information Center
Jacobs, James N., Ed.; Felix, Joseph L., Ed.
1966-01-01
This evaluation of Cincinnati's Title I projects for disadvantaged public school students notes that definitive statements about measurable results are unrealistic because the projects were evaluated after only 5 months in operation. However, the evaluation establishes baseline data. Information about the 13 Title I projects was gathered from…
The Market for Optical Disk Products: A Review of Published Forecasts, 1980-1990.
ERIC Educational Resources Information Center
Saffady, William
1991-01-01
Examines the expectations and reality of optical disk market forecasts published between 1980 and 1990. A historical survey and review of these studies is presented, with general forecasts of all optical disk types included. It is concluded that unrealistic predictions may have contributed to a sluggish market for optical disks. (70 notes)…
Preschoolers Can Infer General Rules Governing Fantastical Events in Fiction
ERIC Educational Resources Information Center
Van de Vondervoort, Julia W.; Friedman, Ori
2014-01-01
Young children are frequently exposed to fantastic fiction. How do they make sense of the unrealistic and impossible events that occur in such fiction? Although children could view such events as isolated episodes, the present experiments suggest that children use such events to infer general fantasy rules. In 2 experiments, 2- to 4-year-olds were…
Uniting Technology and Pedagogy: The Evolution of an Online Teaching Certification Course
ERIC Educational Resources Information Center
Riedinger, Bonnie; Rosenberg, Paul
2006-01-01
Like all learners, new online instructors need hands-on experience, feedback, and ongoing support to become comfortable and proficient in the virtual classroom. It is unrealistic to expect even the most self-motivated, creatively pedagogical, and technically inclined instructor to fly solo after just a few hours of training. With the authors'…
ERIC Educational Resources Information Center
King, Michael A.
2009-01-01
Business intelligence derived from data warehousing and data mining has become one of the most strategic management tools today, providing organizations with long-term competitive advantages. Business school curriculums and popular database textbooks cover data warehousing, but the examples and problem sets typically are small and unrealistic. The…
ERIC Educational Resources Information Center
Niemiec, Ryan M.; Schulenberg, Stefan E.
2011-01-01
The portrayal of death is one of the most common themes in movies and is often unrealistic, promoting misconceptions to the public. However, there are also many films that portray death acceptance in an instructive way. Such films depict the development of character strengths useful in embracing life and lessening death anxiety, namely zest,…
Education and the Free Will Problem: A Spinozist Contribution
ERIC Educational Resources Information Center
Dahlbeck, Johan
2017-01-01
In this Spinozist defence of the educational promotion of students' autonomy I argue for a deterministic position where freedom of will is deemed unrealistic in the metaphysical sense, but important in the sense that it is an undeniable psychological fact. The paper is structured in three parts. The first part investigates the concept of autonomy…
The BreakAway Company: A Complete Career Readiness Program.
ERIC Educational Resources Information Center
Campbell, Don; And Others
The program was designed for 13- to 17-year-old at-risk adolescents, those individuals who are experiencing difficulties at school, home, and/or work. Their main characteristics are a lack of self-control and self-confidence, aggressive behavior, and either little or unrealistic thought about their future. The primary outcomes or aims of the…
Can Anyone Have It All? Gendered Views on Parenting and Academic Careers
ERIC Educational Resources Information Center
Sallee, Margaret; Ward, Kelly; Wolf-Wendel, Lisa
2016-01-01
This article is based on data from two qualitative studies that examined the experiences of 93 tenure-line faculty members who are also mothers and fathers. Using gender schemas and ideal worker norms as a guide, we examined the pressures that professors experience amid unrealistic expectations in their work and home lives. Women participants…
ERIC Educational Resources Information Center
Krstic, Marina; Kevereski, Ljupco
2015-01-01
Various pressures and influences from family, society, the media, and other agents of socialization, combined with individuals' own pressures associated with setting unrealistic goals and requirements, lead to a life filled with worry, frustration, and guilt (Ferbezer, 2002). Perfectionism emerges especially as a negative trend in the behaviour of a…
ERIC Educational Resources Information Center
Bagheridoust, Esmaeil; Husseini, Zahra
2011-01-01
Writing, as an important skill in language proficiency, demands validity; hence, high schools are real settings in which valid results are needed for high-stakes decisions. Unrealistic and non-viable tests result in improper and invalid interpretation and use. Illustrations without any written research have proved their effectiveness in whatsoever…
ERIC Educational Resources Information Center
Milligan, Tony
2007-01-01
In analytic moral philosophy it is standard to use unrealistic puzzles to set up moral dilemmas of a sort that I will call Lockean Puzzles. This paper will try to pinpoint just what is and what is not problematic about their use as a teaching tool or component part of philosophical arguments. I will try to flesh out the claim that what may be lost…
The mirror system in human and nonhuman primates.
Orban, Guy A
2014-04-01
The description of the mirror neuron system provided by Cook et al. is incomplete for the macaque and incorrect for humans. This is relevant to exaptation versus associative learning as the underlying mechanism generating mirror neurons, and to sensorimotor learning as evidence for the authors' viewpoint. The proposed additional testing of the mirror system in rodents is unrealistic.
ERIC Educational Resources Information Center
Stinson, Terrye A.; Zhao, Xiaofeng
2008-01-01
Past studies indicate that students are frequently poor judges of their likely academic performance in the classroom. The difficulty a student faces in accurately predicting performance on a classroom exam may be due to unrealistic optimism or may be due to an inability to self-evaluate academic performance, but the resulting disconnect between…
Thoughts on Teaching: Sometimes Apologies Are Not Enough
ERIC Educational Resources Information Center
Starnes, Bobby Ann
2004-01-01
This author believes the NCLB is a masterpiece of language manipulation. She feels that she can almost live with NCLB's flawed funding and unrealistic expectations. What she can't live with is its blatant failure and the hubris of those who willingly trade personal and political gain for our children's futures, regardless of skin color, accent,…
ERIC Educational Resources Information Center
Nabi, Robin L.
2009-01-01
The recent proliferation of reality-based television programs highlighting cosmetic surgery has raised concerns that such programming promotes unrealistic expectations of plastic surgery and increases the desire of viewers to undergo such procedures. In Study 1, a survey of 170 young adults indicated little relationship between cosmetic surgery…
ERIC Educational Resources Information Center
de Castro, Bram Orobio; Brendgen, Mara; Van Boxtel, Herman; Vitaro, Frank; Schaepers, Linda
2007-01-01
It has been proposed that aggressive behavior may result from unrealistically positive self-evaluations that are disputed by others (Baumeister, Smart, & Boden, 1996). The present three studies tested this proposition concurrently and longitudinally for the domain of self-perceived social competence (SPSC) in 3rd- to 6th-grade children on two…
ERIC Educational Resources Information Center
Jayakumar, Uma M.; Comeaux, Eddie
2016-01-01
Using a combined grounded theory and case study methodology, Jayakumar and Comeaux examined the role of organizational culture in shaping the lives of college athletes, particularly related to negotiating dual roles as both student and athlete. Data collection involved 20 interviews with athletes and stakeholders in the affairs of intercollegiate…
ERIC Educational Resources Information Center
Morlando Zurlo, Tara
2017-01-01
The pathway for community college students to transfer vertically into four-year institutions to complete a bachelor's degree was designed nearly a century ago, yet it remains plagued by the same structural problems, such as confusing admissions processes, lack of transparent advising resources, and unrealistic time-to-degree demands without…
Young Girls Discovering Their Voice with Literacy and Readers Theater
ERIC Educational Resources Information Center
Zambo, Debby
2011-01-01
The ideal female, as portrayed in the media, has a perfect body, owns many trendy and costly possessions, and is submissive and sexy. Young girls are easily influenced by the media's portrayal of teens and women, therefore, they may begin to form unrealistic ideas about beauty and begin to judge themselves and each other based on these…
Educational Journeys of Hispanic Women in Nursing
ERIC Educational Resources Information Center
Herrera, Antoinette Navalta
2012-01-01
Hispanics continue to be the fastest growing minority population in the nation. According to the U.S. Census Bureau (2008, 2011), the Hispanic or Latino population was 16.3 percent in 2010 and is projected to be over 30 percent in 2050. However, only 3.6% of the RN population is Hispanic, indicating an unrealistic representation of today's…
ERIC Educational Resources Information Center
Landry, Brett J. L.; Koger, M. Scott
2006-01-01
Disasters happen all the time; yet despite this, many organizations are caught unprepared or make unrealistic assumptions. These factors create environments that will fail during a disaster. Most information technology (IT) curricula do not cover disaster recovery (DR) plans and strategies in depth. The unfortunate result is that most new computer…
Improving Evaluation Use in Local School Settings. Optimizing Evaluation Use: Final Report.
ERIC Educational Resources Information Center
King, Jean A.; And Others
A project for studying ways to optimize utilization of evaluation products in public schools is reported. The results indicate that the negative picture of use prevalent in recent literature stems from the unrealistic expectation that local decision-makers will behave in a classically rational manner. Such a view ignores the political settings of…
The Destruction of the Young Black Male: The Impact of Popular Culture and Organized Sports.
ERIC Educational Resources Information Center
Gaston, John C.
1986-01-01
Argues that the negative aspects of popular culture and organized sports in American society contribute to the economic, psychological, and social destruction of the Black male. The media nurtures unrealistic fantasies in young Black males, preventing them from acquiring the education and skills necessary to participate in the mainstream. (ETS)
The Continued Need for Effective Remedial Reading Programs.
ERIC Educational Resources Information Center
Jansen, Mogens
Reading disability is described in this paper as a relative state that will appear different across societies and over the years. It is noted that researchers who claim that their objective is to find methods and materials to make remedial reading unnecessary are unrealistic. An overview of the Danish educational system and society during the last…
Solving Problems in Genetics, Part III: Change in the View of the Nature of Science
ERIC Educational Resources Information Center
Ibanez-Orcajo, M. Teresa; Martinez-Aznar, M. Mercedes
2007-01-01
Numerous investigations show that most school science teaching, in Spain and elsewhere, implicitly transmits an inductivist and very stereotyped view of science and conveys an unrealistic image of scientific work. We present some results of an investigation with fourth-level Spanish secondary education students (15 year olds) who were taught…
ERIC Educational Resources Information Center
Warren, Hermine
2014-01-01
In 2011, nearly 13 million nonsurgical cosmetic procedures were performed, representing a 6% increase from the previous year. Patients often present with unrealistic treatment expectations based on beauty industry standards and misinformation. In addition, due to the lack of competency standardization in this area, providers frequently deliver…
As a Matter of Force—Systematic Biases in Idealized Turbulence Simulations
NASA Astrophysics Data System (ADS)
Grete, Philipp; O’Shea, Brian W.; Beckwith, Kris
2018-05-01
Many astrophysical systems encompass very large dynamical ranges in space and time, which are not accessible by direct numerical simulations. Thus, idealized subvolumes are often used to study small-scale effects including the dynamics of turbulence. These turbulent boxes require artificial driving in order to mimic energy injection from large-scale processes. In this Letter, we show and quantify how the autocorrelation time of the driving and its normalization systematically change the properties of an isothermal compressible magnetohydrodynamic flow in the sub- and supersonic regime and affect astrophysical observations such as Faraday rotation. For example, we find that δ-in-time forcing with a constant energy injection leads to a steeper slope in the kinetic energy spectrum and less-efficient small-scale dynamo action. In general, we show that shorter autocorrelation times require more power in the acceleration field, which results in more power in compressive modes that weaken the anticorrelation between density and magnetic field strength. Thus, derived observables, such as the line-of-sight (LOS) magnetic field from rotation measures, are systematically biased by the driving mechanism. We argue that δ-in-time forcing is unrealistic and numerically unresolved, and conclude that special care needs to be taken in interpreting observational results based on the use of idealized simulations.
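Driving with a finite autocorrelation time is commonly implemented as an Ornstein-Uhlenbeck process for the acceleration field, which interpolates between δ-in-time forcing (autocorrelation time → 0) and a frozen pattern (→ ∞). The following is a minimal one-component sketch with hypothetical parameters; real turbulent-box drivers additionally work in Fourier space and control the mix of solenoidal and compressive modes:

```python
import numpy as np

def ou_acceleration(n_steps, dt, t_corr, a_rms, n_cells=64, seed=0):
    """Ornstein-Uhlenbeck driving field on a 1D line of cells.
    Each step mixes the previous field (memory factor f) with fresh
    Gaussian noise scaled so the stationary rms stays ~ a_rms."""
    rng = np.random.default_rng(seed)
    f = np.exp(-dt / t_corr)               # memory over one time step
    drive = a_rms * np.sqrt(1.0 - f * f)   # keeps stationary variance a_rms^2
    a = np.zeros(n_cells)
    out = np.empty((n_steps, n_cells))
    for i in range(n_steps):
        a = f * a + drive * rng.standard_normal(n_cells)
        out[i] = a
    return out
```

As `t_corr` shrinks at fixed `a_rms`, `f → 0` and each step is nearly independent noise, i.e. the δ-in-time limit the Letter argues is unrealistic; holding the injected energy fixed then requires more power in the acceleration field.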
NASA Astrophysics Data System (ADS)
Walz, M. A.; Donat, M.; Leckebusch, G. C.
2017-12-01
As extreme wind speeds are responsible for large socio-economic losses in Europe, a skillful prediction would be of great benefit for disaster prevention as well as for the actuarial community. Here we evaluate patterns of large-scale atmospheric variability and the seasonal predictability of extreme wind speeds (e.g. >95th percentile) in the European domain in the dynamical seasonal forecast system ECMWF System 4, and compare to the predictability based on a statistical prediction model. The dominant patterns of atmospheric variability show distinct differences between reanalysis and ECMWF System 4, with most patterns in System 4 extended downstream in comparison to ERA-Interim. The dissimilar manifestations of the patterns within the two models lead to substantially different drivers associated with the occurrence of extreme winds in the respective model. While the ECMWF System 4 is shown to provide some predictive power over Scandinavia and the eastern Atlantic, only very few grid cells in the European domain have significant correlations for extreme wind speeds in System 4 compared to ERA-Interim. In contrast, a statistical model predicts extreme wind speeds during boreal winter in better agreement with the observations. Our results suggest that System 4 does not seem to capture the potential predictability of extreme winds that exists in the real world, and therefore fails to provide reliable seasonal predictions for lead months 2-4. This is likely related to the unrealistic representation of large-scale patterns of atmospheric variability. Hence our study points to potential improvements of dynamical prediction skill by improving the simulation of large-scale atmospheric dynamics.
NASA Astrophysics Data System (ADS)
Metzl, N.; Moore, B.; Poisson, A.
1990-10-01
For computing large-scale advective flow in the Indian Ocean (including the Indian-Antarctic sector), we use a box-model approach and a perturbed inverse method. The top 400 meters is not considered in this study, in view of the dominant seasonal dynamics. We use 1244 hydrographic stations to estimate mean values for temperature, salinity, oxygen and phosphate concentrations. Fifty perturbed inversions of the steady-state tracer conservation and thermal-wind equations are done using box-average standard deviations and a 25% perturbation on the thermal-wind coefficients. The mean solutions represent the large-scale advective flow and carbon-decomposition rates in which we are interested. Solutions with only advective processes are first considered. The broad features of the circulation in the Indian Ocean are resolved in the intermediate levels, but in deeper layers, an input from North Atlantic Deep Water (NADW) is not apparent. Inspection of oxygen and phosphate residuals reveals a biochemical signal. Therefore, we introduce into the oxygen and phosphate budgets a linear parameterization (Redfield ratios) of the organic-decomposition processes. The structure of the residuals for oxygen and phosphate is changed in that the biochemical signal vanishes. The advective solutions are nearly the same in intermediate waters; however, in deep layers the new solution shows an inflow of 11 (±8) Sv of NADW south of Africa. The calculated total organic decomposition of 0.93 (±0.25) × 10^15 g C yr^-1 is about one fifth of the estimated world-ocean amount, but the total residuals of oxygen and phosphate leave an unexplained 0.5 × 10^15 g C yr^-1 missing carbon sink. The new solution does contain unrealistic elements (e.g. a large deep flow between Indonesia and Australia). Finally, to investigate this last result, we add one advective constraint at the Indonesia-Australia boundary. This addition changes the circulation in the northeastern part of the Indian Ocean.
The circumpolar flow between 400 m and the σ0 = 27.65 surface remains unchanged at 20°E and at 80°E; however, at 130°E the constraint increases the advective flow by 20%. Total organic-carbon-decomposition rates are not affected by the additional inflow from Indonesia.
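The perturbed-inverse step described above can be sketched generically: solve the (overdetermined) box conservation equations by least squares, repeating with randomly perturbed coefficients, and average the solutions. This is a toy system standing in for the real tracer-budget matrix, not the study's 1244-station data:

```python
import numpy as np

def perturbed_inversion(A, b, n_inversions=50, rel_perturb=0.25, seed=0):
    """Perturbed inverse (sketch of the approach): solve A x = b by
    least squares n_inversions times, each with coefficients scaled
    by uniform factors in [1 - rel_perturb, 1 + rel_perturb]; report
    the mean solution and its spread across inversions."""
    rng = np.random.default_rng(seed)
    sols = []
    for _ in range(n_inversions):
        Ap = A * (1.0 + rel_perturb * rng.uniform(-1.0, 1.0, A.shape))
        x, *_ = np.linalg.lstsq(Ap, b, rcond=None)
        sols.append(x)
    sols = np.vstack(sols)
    return sols.mean(axis=0), sols.std(axis=0)
```

The spread across the fifty inversions plays the role of the uncertainty quoted on quantities such as the 11 (±8) Sv NADW inflow.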
Brunner-La Rocca, Hans Peter; Kaiser, Christoph; Bernheim, Alain; Zellweger, Michael J; Jeger, Raban; Buser, Peter T; Osswald, Stefan; Pfisterer, Matthias
2007-11-03
Our aim was to determine whether drug-eluting stents are good value for money in long-term, everyday practice. We did an 18-month cost-effectiveness analysis of the Basel Stent KostenEffektivitäts Trial (BASKET), which randomised 826 patients 2:1 to drug-eluting stents (n=545) or to bare-metal stents (n=281). We used non-parametric bootstrap techniques to determine incremental cost-effectiveness ratios (ICERs) of drug-eluting versus bare-metal stents, to compare low-risk (> or =3.0 mm stents in native vessels; n=558, 68%) and high-risk patients (<3.0 mm stents/bypass graft stenting; n=268, 32%), and to do sensitivity analyses by altering costs and event rates in the whole study sample and in predefined subgroups. Quality-adjusted life-years (QALYs) were assessed by EQ-5D questionnaire (available in 703/826 patients). Overall costs were higher for patients with drug-eluting stents than in those with bare-metal stents (11,808 euros [SD 400] per patient with drug-eluting stents and 10,450 euros [592] per patient with bare-metal stents, mean difference 1358 euros [717], p<0.0001), due to higher stent costs. We calculated an ICER of 64,732 euros to prevent one major adverse cardiac event, and of 40,467 euros per QALY gained. Stent costs, number of events, and QALYs affected ICERs most, but unrealistic alterations would have been required to achieve acceptable cost-effectiveness. In low-risk patients, the probability of drug-eluting stents achieving an arbitrary ICER of 10,000 euros or less to prevent one major adverse cardiac event was 0.016; by contrast, it was 0.874 in high-risk patients. If used in all patients, drug-eluting stents are not good value for money, even if prices were substantially reduced. Drug-eluting stents are cost effective in patients needing small vessel or bypass graft stenting, but not in those who require large native vessel stenting.
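The non-parametric bootstrap of an ICER mentioned above can be sketched as resampling patients with replacement within each arm and recomputing the ratio of mean cost difference to mean effect difference. The per-patient arrays below are synthetic, not BASKET data:

```python
import numpy as np

def bootstrap_icer(cost_a, eff_a, cost_b, eff_b, n_boot=2000, seed=0):
    """Bootstrap percentile interval for the incremental
    cost-effectiveness ratio ICER = d(cost) / d(effect), resampling
    each arm's patients with replacement."""
    rng = np.random.default_rng(seed)
    icers = []
    for _ in range(n_boot):
        ia = rng.choice(cost_a.size, cost_a.size, replace=True)
        ib = rng.choice(cost_b.size, cost_b.size, replace=True)
        d_cost = cost_a[ia].mean() - cost_b[ib].mean()
        d_eff = eff_a[ia].mean() - eff_b[ib].mean()
        if d_eff != 0:
            icers.append(d_cost / d_eff)
    return np.percentile(icers, [2.5, 50, 97.5])
```

A full analysis would also handle bootstrap replicates with negative or near-zero effect differences (where the ICER is unstable), typically via the cost-effectiveness plane or net-benefit statistics rather than the raw ratio.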
Maximum a posteriori resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, John A.; Jenkins, Chris; Calder, Brian
2006-08-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. 
The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
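When both the data-error pdf and the kriging conditional pdf are Gaussian, the maximum of their product has a closed form: the inverse-variance weighted mean. The sketch below shows only that core per-point step (the paper's full algorithm applies it sequentially over all data with randomized ordering, then averages):

```python
def map_resample(obs, obs_var, krig_mean, krig_var):
    """MAP estimate for one data point: the mode of the product of
    N(obs, obs_var) (the data-error pdf) and N(krig_mean, krig_var)
    (the conditional pdf from kriging of proximal values) is the
    inverse-variance weighted mean of the two centers."""
    w_obs = 1.0 / obs_var
    w_krig = 1.0 / krig_var
    return (w_obs * obs + w_krig * krig_mean) / (w_obs + w_krig)
```

A noisy observation (large `obs_var`) is pulled strongly toward the kriging prediction from its neighbors, while a precise one is barely moved, which is exactly the "close points cannot differ by too much" rationale above.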
Pro-development soap operas: a novel approach to development communication.
Brown, W J; Singhal, A; Rogers, E M
1989-01-01
Soap operas have their roots in 18th-century English romance novels, which evolved into serialized radio dramas. In their current form, they were developed primarily to attract large audiences in order to sell consumer products; hence the name "soap," which refers to the soap manufacturers who commonly advertise on such programs. In the world of soap operas there are two kinds: those that function primarily to entertain and sell consumer products, and those that primarily entertain but also infuse positive social messages. The former are found everywhere and are the only kind in America; the latter are found exclusively in developing countries. American soap operas have conveyed pro-social messages in the past, but they differ fundamentally from pro-development soap operas in their theoretical foundations. American soap operas are created by people who want to sell consumer goods. Development soap operas are created by people who want to convey pro-social messages that will aid their country's development. Both must be popular in order to be successful, but the former lack moral coherency, are unrealistic, erode values, and are created through a process of atheoretical development, while the latter have moral coherency, are realistic, promote values, and are created through a process of theoretical development. The first pro-development soap opera was Ven Conmigo (Come With Me), produced in Mexico between 1975-76. Its primary purpose was to increase adult literacy. During the year it ran, applicants at adult literacy centers rose by 600,000, or 63%, compared to 7% the year before and 2% the year after. The second pro-development soap opera was Acompaname (Accompany Me), and its primary purpose was to promote family planning. It ran from 1977-78, and during that time the number of family planning adopters rose by 560,000 and contraceptive sales in Mexico rose sharply.
The question of what are pro-social messages and who should control them must be answered by each country in its effort to increase development.
Schaumberg, A
2015-04-01
Simulation often relies on a case-based learning approach and is used as a teaching tool for a variety of audiences. The knowledge transfer goes beyond the mere exchange of soft skills and practical abilities and also includes practical knowledge and decision-making behavior; however, verification of knowledge or practical skills seldom takes place during simulations. Simulation-based learning seems to affect many learning domains and can therefore be considered multifactorial in nature. At present, studies examining the effects of learning environments with varying levels of realism on the cognitive long-term retention of students are lacking. The present study focused on the question of whether case scenarios with varying levels of realism produce differences in the cognitive long-term retention of students, in particular with regard to the learning dimensions of knowledge, understanding, and transfer. The study was conducted on 153 students in the first clinical semester at the Justus-Liebig University of Giessen. Students were randomly selected and then randomly assigned to two practice groups, realistic and unrealistic. In both groups the students were presented with standardized case scenarios consisting of three case studies, each accurately defined by a case report containing a detailed description of the scenario and all relevant values, so as to ensure identical conditions for both groups. The unrealistic group sat in an unfurnished practice room as a learning environment; the realistic group sat in a furnished learning environment with various background pictures and ambient noise. Students received examination questions before, immediately following, and 14 days after the practice. Examination questions were identical at each of the three time points, classified into three learning dimensions following Bloom's taxonomy, and evaluated.
Furthermore, the examination questions were supplemented by a questionnaire concerning the individual perception of realism and of one's own learning success, filled in by students immediately after the practice. Examination questions and questionnaires were anonymous but linked to each other. Even with less experienced participants, a realistic simulation design led to a significant increase in knowledge immediately after the end of the simulation. This effect, however, did not carry over to the cognitive long-term retention of students: while the realistic group showed higher initial knowledge after the simulation, this "knowledge delta" was forgotten within 14 days, putting them back on par with the unrealistic comparison group. Two weeks after the practice, comprehension questions were answered significantly better than those on pure knowledge, so it can be concluded that even vaguely realistic simulation scenarios affect the learning dimension of understanding. For simulation-based learning, the outcome depends not only on knowledge, practical skills, and motivational variables but also on the onset of negative emotions, the perception of one's own ability, and the personality profile. Simulation training alone does not appear to guarantee learning success; rather, it seems necessary to establish a simulation setting suited to the education level, needs, and personality characteristics of the students.
Modeling central metabolism and energy biosynthesis across microbial life
Edirisinghe, Janaka N.; Weisenhorn, Pamela; Conrad, Neal; ...
2016-08-08
Automatically generated bacterial metabolic models, and even some curated models, lack accuracy in predicting energy yields due to poor representation of key pathways in energy biosynthesis and the electron transport chain (ETC). Further compounding the problem, complex interlinking pathways in genome-scale metabolic models, and the need for extensive gapfilling to support complex biomass reactions, often result in unrealistic yields or unrealistic physiological flux profiles. To overcome this challenge, we developed methods and tools ( http://coremodels.mcs.anl.gov ) to build high-quality core metabolic models (CMM) representing accurate energy biosynthesis based on a well-studied, phylogenetically diverse set of model organisms. We compare these models to explore the variability of core pathways across all microbial life, and by analyzing the ability of our core models to synthesize ATP and essential biomass precursors, we evaluate the extent to which the core metabolic pathways and functional ETCs are known for all microbes. 6,600 (80 %) of our models were found to have some type of aerobic ETC, whereas 5,100 (62 %) have an anaerobic ETC, and 1,279 (15 %) do not have any ETC. Using our manually curated ETC and energy biosynthesis pathways with no gapfilling at all, we predict accurate ATP yields for nearly 5,586 (70 %) of the models under aerobic and anaerobic growth conditions. This study revealed gaps in our knowledge of the central pathways that leave 2,495 (30 %) of the CMMs unable to produce ATP under any of the tested conditions. We then established a methodology for the systematic identification and correction of inconsistent annotations using core metabolic models coupled with phylogenetic analysis. In conclusion, we predict accurate energy yields based on our improved annotations in energy biosynthesis pathways and the implementation of diverse ETC reactions across the microbial tree of life.
We highlighted missing annotations that were essential to energy biosynthesis in our models. We examine the diversity of these pathways across all microbial life and enable the scientific community to explore the analyses generated from this large-scale analysis of over 8,000 microbial genomes.
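Predicting ATP yield from a core metabolic model is, at heart, a flux balance analysis: maximize flux through an ATP-consuming reaction subject to steady-state mass balance and uptake bounds. The sketch below illustrates the technique on a deliberately toy two-metabolite network; the stoichiometry, yields, and bounds are invented for illustration and are not taken from the authors' models.

```python
import numpy as np
from scipy.optimize import linprog

# Toy core model: glucose uptake, fermentation (2 ATP/glc),
# respiration (30 ATP/glc, requires a functional ETC), ATP drain.
# Columns: [uptake, ferment, respire, atp_drain]
S = np.array([
    [1, -1, -1,  0],   # intracellular glucose balance
    [0,  2, 30, -1],   # ATP balance
])

def max_atp_yield(aerobic: bool, glc_max: float = 10.0) -> float:
    """Maximize the ATP drain flux at steady state (S v = 0)."""
    resp_ub = None if aerobic else 0.0   # block the ETC reaction anaerobically
    bounds = [(0, glc_max), (0, None), (0, resp_ub), (0, None)]
    # linprog minimizes, so negate the ATP-drain coefficient
    res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0],
                  bounds=bounds, method="highs")
    return res.x[3] / glc_max            # ATP per glucose

print(max_atp_yield(aerobic=True))   # 30 ATP/glucose with the ETC
print(max_atp_yield(aerobic=False))  # 2 ATP/glucose by fermentation only
```

Blocking the respiration column is the toy analogue of a model lacking a functional ETC: the optimum collapses to the substrate-level yield.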
NASA Astrophysics Data System (ADS)
Baranowski, D.; Waliser, D. E.; Jiang, X.
2016-12-01
One of the key challenges in subseasonal weather forecasting is the fidelity in representing the propagation of the Madden-Julian Oscillation (MJO) across the Maritime Continent (MC). In reality both propagating and non-propagating MJO events are observed, but in numerical forecasts the latter group largely dominates. For this study, comprehensive model performances are evaluated using metrics that utilize the mean precipitation pattern and the amplitude and phase of the diurnal cycle, with a particular focus on the linkage between a model's local MC variability and its fidelity in representing propagation of the MJO and equatorial Kelvin waves across the MC. Subseasonal to seasonal variability of mean precipitation and its diurnal cycle in 20-year-long climate simulations from over 20 general circulation models (GCMs) is examined to benchmark model performance. Our results show that many models struggle to represent the precipitation pattern over complex Maritime Continent terrain. Many models show negative biases in mean precipitation and in the amplitude of its diurnal cycle; these biases are often larger over land than over ocean. Furthermore, only a handful of models realistically represent the spatial variability of the phase of the diurnal cycle of precipitation. Models tend to correctly simulate the timing of the diurnal maximum of precipitation over ocean during the local solar morning, but fail to capture the influence of the land, with the timing of the maximum of precipitation there occurring, unrealistically, at the same time as over ocean. The day-to-day and seasonal variability of the mean precipitation follows observed patterns, but is often unrealistic for the diurnal cycle amplitude. The intraseasonal variability of the amplitude of the diurnal cycle of precipitation is mainly driven by a model's ability (or lack thereof) to produce an eastward-propagating MJO-like signal.
Our results show that many models tend to decrease the apparent land-sea contrast in the mean precipitation and diurnal cycle of precipitation patterns over the Maritime Continent. As a result, the complexity of those patterns is heavily smoothed, to such an extent in some models that the Maritime Continent's features and imprint are almost unrecognizable relative to the eastern Indian Ocean or western Pacific.
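Diurnal-cycle amplitude and phase metrics of the kind described above are commonly extracted by projecting the 24-hour mean precipitation cycle onto its first harmonic. A minimal sketch on synthetic data (this is a generic first-harmonic fit, not the authors' actual metric code):

```python
import numpy as np

def diurnal_harmonic(p):
    """First-harmonic amplitude and phase (local hour of maximum) of a
    24-element mean diurnal cycle of precipitation."""
    p = np.asarray(p, dtype=float)
    hours = np.arange(24)
    omega = 2.0 * np.pi / 24.0
    a = (2.0 / 24.0) * np.sum(p * np.cos(omega * hours))  # cosine projection
    b = (2.0 / 24.0) * np.sum(p * np.sin(omega * hours))  # sine projection
    amplitude = np.hypot(a, b)
    phase_hour = (np.arctan2(b, a) / omega) % 24.0        # hour of the peak
    return amplitude, phase_hour

# Synthetic ocean-like cycle: mean 5 mm/day, peak at 06 local solar time
hours = np.arange(24)
cycle = 5.0 + 2.0 * np.cos(2.0 * np.pi * (hours - 6) / 24.0)
amp, peak = diurnal_harmonic(cycle)
print(round(amp, 3), round(peak, 3))   # 2.0 6.0
```

Comparing the fitted phase over land and ocean grid points is one way to quantify the unrealistic land timing noted in the abstract.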
Analysis of uncertainties in GOSAT-inferred regional CO2 fluxes
NASA Astrophysics Data System (ADS)
Ishizawa, M.; Shirai, T.; Maksyutov, S. S.; Yoshida, Y.; Morino, I.; Inoue, M.; Nakatsuru, T.; Uchino, O.; Mabuchi, K.
2016-12-01
Satellite-based CO2 measurements have potential for improving our understanding of the global carbon cycle because of their greater spatiotemporal coverage compared with ground-based observations. Since the Greenhouse gases Observing Satellite (GOSAT) was launched in January 2009, it has been measuring the column-averaged dry-air mole fraction of CO2 (XCO2) from space. To utilize GOSAT XCO2 for better CO2 flux estimates, several challenges must be overcome. Systematic errors (biases) in XCO2 retrievals are a major factor leading to large differences among inverted CO2 fluxes. Temporally variable data coverage and density must also be taken into account when interpreting the estimated surface fluxes. In this study, we employ an atmospheric inverse model to investigate the impacts of retrieval biases and the temporally varying global distribution of GOSAT XCO2 on surface CO2 flux estimates. Inversions are performed for 2009-2013, with several subsets of the 5-year record of GOSAT XCO2 (v2.21) and its bias-corrected XCO2. GOSAT XCO2 data consist of three types: H-gain for vegetated land, M-gain for bright surfaces (desert areas), and sun-glint for the ocean surface. The results show that the global spatial distributions of estimated CO2 fluxes depend on the subset of XCO2 used. M-gain XCO2 results in unrealistically high CO2 emissions in and around the Middle East, including the neighboring ocean regions, and it causes compensating unrealistic uptakes far beyond the M-gain regions in low latitudes, also partially contributing to the summer uptake in Europe. Joint inversions with both surface measurements and GOSAT XCO2 data obtain a larger flux gradient between the northern extratropics and the tropics than the inversion with surface measurements only for the first 2 years.
In recent years, these north-south gradients appear to be gradually weakening as the tropics become a weaker source or turn into a sink, while the net emission strength in East Asia is increasing. The 5-year XCO2 record allows detailed analysis of uncertainties in GOSAT-inferred fluxes and assessment of GOSAT XCO2 biases.
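Schematically, a flux inversion of this kind updates prior fluxes with XCO2 observations through a linear Gaussian (Bayesian) estimate. The toy numbers below are entirely hypothetical, but the sketch shows the mechanics and how a uniform retrieval bias propagates into the estimated fluxes:

```python
import numpy as np

# Minimal Bayesian flux inversion sketch (hypothetical numbers):
# y = H x + noise, with prior x_a, prior covariance B, obs covariance R.
H = np.array([[1.0, 0.2],    # each XCO2 sample "sees" two flux regions
              [0.3, 1.0],
              [0.5, 0.5]])
x_a = np.array([0.0, 0.0])           # prior flux anomalies
B = np.diag([1.0, 1.0])              # prior flux covariance
R = np.diag([0.1, 0.1, 0.1])         # XCO2 retrieval error covariance

def invert(y):
    """Posterior-mean flux estimate (Kalman/optimal-estimation update)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return x_a + K @ (y - H @ x_a)

x_true = np.array([1.0, -0.5])
y_clean = H @ x_true
print(invert(y_clean))         # near x_true, shrunk slightly toward the prior
print(invert(y_clean + 0.3))   # a uniform +0.3 retrieval bias shifts the fluxes
```

Even a spatially uniform XCO2 bias maps into spatially structured flux errors through the gain matrix, which is why subset-dependent biases (e.g. M-gain) can produce compensating spurious sources and sinks.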
On the Principles of Building a Layered Intrusion
NASA Astrophysics Data System (ADS)
Marsh, B. D.
2009-12-01
An accurate and realistic understanding of all magmatic processes involves knowing the combined physical and chemical fundamentals governing the overall process. Magmatic processes involve such a vast array of sub-processes (e.g., heat and mass transfer, crystal growth, slurry transport and sorting, annealing, resorption, etc.) that rarely is there any single feature or measurement that can be safely inverted to solve the problem. Each event, such as the formation of an intrusion, must at some level be defined, for heuristic purposes, as an isolated event. This is commonly done without much forethought, as is the absolutely critical assumption of the initial conditions defining the beginning of the event. Almost without exception, it is the initial conditions that determine the outcome of the entire process in all physical and biological systems. Automobile factories produce motorized vehicles, not watermelons or chimpanzees. Nucleosynthesis of H and He always gives the same set of elements. The initial conditions of the magma giving rise to the end product for mafic layered systems are especially difficult to discern and must be bounded by observing simpler, real-time magmatic and volcanic processes. Initial conditions come from posing a series of questions: What was the style and duration of filling? What was the rate of influx and final volume of each delivery of magma? What was the compositional variation and phenocryst content of the individual magmatic deliveries? If phenocrysts were present, were they sorted prior to injection during ascent? What was the original and ongoing shape of the magmatic reservoir? A failure to appreciate or answer such basic questions leads to vastly untenable evolutionary scenarios. Unrealistic initial conditions necessarily lead to unrealistic magmatic scenarios. There are certain safe starting points. Eruptive and emplacement fluxes are limited.
The larger an intrusion is, the longer it took to build; and the longer it took to build, the more varied are the deliveries in time, volume, and constitution. Instantaneous emplacement of crystal-free magma is an unlikely initial condition for a large intrusion. The most realistic initial conditions are that intrusions are made of a combination of crystal-poor and crystal-rich inputs. Examples abound of the outcomes of systems with clearly known initial conditions. The huge Sudbury magma was produced in 5 minutes at a temperature of 1700°C. Clearly crystal-free, it produced no layering whatsoever. Sills worldwide, regardless of size, approaching these initial conditions are similarly featureless. At the other extreme are the lava outputs of large volcanic systems like Kilauea. The ensuing lava lakes, produced over months, are filled with magma containing varied amounts of phenocrysts/xenocrysts, and ultramafic layers are produced. Intrusions abound of all sizes that show the same characteristics. Ponding in crystal-laden sills forms layered systems with many of the features of large bodies. Rapid cooling preserves diagnostic textural relations that are lost to annealing in large bodies. Slow cooling promotes annealing to sharpen and accentuate the initial modal and cryptic layering. Initial conditions are fundamental to understanding the final product. Physical processes buttressed by chemistry mainly dominate magmatic systems.
Fast Open-World Person Re-Identification.
Zhu, Xiatian; Wu, Botong; Huang, Dongcheng; Zheng, Wei-Shi
2018-05-01
Existing person re-identification (re-id) methods typically assume that: 1) any probe person is guaranteed to appear in the gallery target population during deployment (i.e., closed-world) and 2) the probe set contains only a limited number of people (i.e., small search scale). Both assumptions are artificial and are breached in real-world applications, since the probe population in target people search can be extremely vast in practice due to the ambiguity of the probe search space boundary. It is therefore unrealistic to assume that every probe person is a target person, and a large-scale search over person images is inherently demanded. In this paper, we introduce a new person re-id search setting, called large-scale open-world (LSOW) re-id, characterized by a huge probe image set and an open person population in the search, and thus closer to practical deployments. Under LSOW, the understudied problem of person re-id efficiency becomes essential in addition to the commonly studied re-id accuracy. We therefore develop a novel fast person re-id method, called Cross-view Identity Correlation and vErification (X-ICE) hashing, for joint learning of cross-view identity representation binarisation and discrimination in a unified manner. Extensive comparative experiments on three large-scale benchmarks validate the superiority and advantages of the proposed X-ICE method over a wide range of state-of-the-art hashing models, person re-id methods, and their combinations.
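The speed advantage of hashing methods such as X-ICE comes from comparing compact binary codes with Hamming distance instead of floating-point features. A generic sketch of Hamming-distance ranking on packed binary codes (random codes for illustration; not the X-ICE model itself, whose codes are learned):

```python
import numpy as np

def hamming_rank(query_code, gallery_codes):
    """Rank gallery entries by Hamming distance to a binary query code.
    Codes are packed uint8 arrays; XOR exposes the mismatched bits."""
    mismatches = np.unpackbits(query_code ^ gallery_codes, axis=1).sum(axis=1)
    return np.argsort(mismatches), mismatches

rng = np.random.default_rng(0)
gallery = rng.integers(0, 256, size=(1000, 16), dtype=np.uint8)  # 128-bit codes
query = gallery[42].copy()
query[0] ^= 0b1          # flip one bit: still by far nearest to identity 42
order, dist = hamming_rank(query, gallery)
print(order[0], dist[order[0]])   # 42 1
```

Because the distance computation is bitwise, ranking millions of gallery codes costs a tiny fraction of an equivalent Euclidean search over real-valued descriptors.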
Numerical solution of the electron transport equation
NASA Astrophysics Data System (ADS)
Woods, Mark
The electron transport equation has been solved many times for a variety of reasons. The main difficulty in its numerical solution is that it is a very stiff boundary value problem. The most common numerical methods for solving boundary value problems are symmetric collocation methods and shooting methods. Both of these types of methods can only be applied to the electron transport equation if the boundary conditions are altered with unrealistic assumptions, because they require too many points to be practical. Further, they produce oscillating and negative solutions, which are physically meaningless for the problem at hand. For these reasons, all numerical methods for this problem to date have been a bit unusual, because they were designed to try to avoid the problem of extreme stiffness. This dissertation shows that there is no need to introduce spurious boundary conditions or invent other numerical methods for the electron transport equation. Rather, there already exist methods for very stiff boundary value problems within the numerical analysis literature. We demonstrate one such method, in which the fast and slow modes of the boundary value problem are essentially decoupled. This allows an upwind finite difference method to be applied to each mode as appropriate, which greatly reduces the number of points needed in the mesh, and we demonstrate how this eliminates the need to define new boundary conditions. The method is verified by showing that, under certain restrictive assumptions, the electron transport equation has an exact solution that can be written as an integral; the solution from the upwind method agrees with the quadrature evaluation of this exact solution, confirming that the upwind method properly solves the electron transport equation. Further, it is demonstrated that the output of the upwind method can be used to compute auroral light emissions.
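The essence of the approach, decoupling fast and slow modes and differencing each in its stable direction, can be illustrated on a model problem far simpler than the electron transport equation. The system below is a generic stiff two-mode example with invented parameters, not the dissertation's actual equations:

```python
import numpy as np

# Model problem: a stiff BVP whose modes have been decoupled analytically
# into a fast-decaying mode u (u' = -lam*u, data at x = 0) and a
# fast-growing mode v (v' = +lam*v, data at x = 1). Each mode is then
# differenced implicitly in its stable, "upwind" marching direction.
lam, n = 2000.0, 400
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]                 # lam*h = 5: far too coarse for a centered scheme

u = np.empty(n + 1)
u[0] = 1.0                      # march left -> right with implicit upwinding
for i in range(n):
    u[i + 1] = u[i] / (1.0 + lam * h)

v = np.empty(n + 1)
v[-1] = 1.0                     # march right -> left with implicit upwinding
for i in range(n, 0, -1):
    v[i - 1] = v[i] / (1.0 + lam * h)

# Both discrete modes stay positive and monotone: no spurious oscillation
# or negative values, even though the mesh does not resolve 1/lam.
print(u.min() >= 0.0, v.min() >= 0.0)   # True True
print(u[-1] < 1e-3, v[0] < 1e-3)        # True True
```

A centered difference on the same mesh would admit an oscillating, sign-changing numerical mode at this value of lam*h, which is exactly the physically meaningless behavior the abstract describes.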
NASA Astrophysics Data System (ADS)
Vasiliev, Iuliana; Reichart, Gert-Jan; Krijgsman, Wout
2013-01-01
The Messinian Salinity Crisis (5.96-5.33 Ma ago) was a dramatic oceanographic event, when evaporites kilometers thick precipitated in a desiccating Mediterranean basin, trapping more than 5% of the world's oceanic salt. Hydrological changes in the adjacent Black Sea and water exchange with the Mediterranean region are crucial, but poorly understood, factors influencing Messinian evaporite formation. Here, we present compound-specific hydrogen isotope (δD) data from Messinian Black Sea sedimentary rocks that show a rapid change to heavy waters at 5.8 Ma, when major glaciations occurred. At the same time, highly depleted δD values of long-chain n-alkanes derived from plant waxes indicate that fresh, river-transported water originated from colder northern latitudes. The δD values of alkenones, biosynthesized by haptophyte algae, show an unprecedented increase of 60‰ within ˜100 kyr. The corresponding rapid change to +110‰ for the δD of Black Sea waters seems unrealistic, being heavier than anywhere in the present-day oceans. Regardless of the applied relation between the δD values of the alkenones and the δD of the waters in which they were produced, the 60‰ enrichment in the δD values of alkenones indicates strongly enhanced evaporitic conditions. Still, the relative distribution of the alkenones implies in-situ growth and reproduction of haptophyte algae, requiring sustained marine conditions in the Black Sea up to 5.6 Ma. This indicates that Mediterranean-Black Sea connectivity persisted during the first MSC phase, when gypsum precipitated in the Mediterranean basin. When the Black Sea became isolated, at the peak of the MSC (˜5.6 Ma), it had a strongly negative hydrological budget and rapidly desiccated due to excess evaporation.
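Back-calculating water δD from lipid δD uses the standard permil fractionation relation α = (δD_lipid + 1000)/(δD_water + 1000). The sketch below assumes a constant α and illustrative alkenone values (neither is the study's calibration); it shows why a 60‰ lipid enrichment implies an even larger shift in the source water:

```python
# Hypothetical fractionation factor; alkenone values are illustrative only.
def water_dD(dD_alkenone, alpha=0.775):
    """Back-calculate source-water dD (permil) from alkenone dD assuming a
    constant biosynthetic fractionation factor alpha:
    alpha = (dD_lipid + 1000) / (dD_water + 1000)."""
    return (dD_alkenone + 1000.0) / alpha - 1000.0

before = water_dD(-185.0)   # pre-enrichment alkenone value (illustrative)
after = water_dD(-125.0)    # after a 60 permil enrichment
print(round(after - before, 1))   # 77.4: the water shift exceeds 60 permil
```

Because α < 1, any lipid enrichment is amplified by 1/α when mapped back to the water, which is part of why the implied +110‰ water value looked implausibly heavy.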
On the nature of absorption features toward nearby stars
NASA Astrophysics Data System (ADS)
Kohl, S.; Czesla, S.; Schmitt, J. H. M. M.
2016-06-01
Context. Diffuse interstellar absorption bands (DIBs) of largely unknown chemical origin are regularly observed, primarily in distant early-type stars. More recently, detections in nearby late-type stars have also been claimed; these stars' spectra are dominated by stellar absorption lines. Specifically, strong interstellar atomic and DIB absorption has been reported in τ Boo. Aims: We test these claims by studying the strength of interstellar absorption in high-resolution TIGRE spectra of the nearby stars τ Boo, HD 33608, and α CrB. Methods: We focus our analysis on a strong DIB located at 5780.61 Å and on the absorption of interstellar Na. First, we carry out a differential analysis by comparing the spectra of the highly similar F-stars τ Boo and HD 33608, whose light, however, samples different lines of sight. To obtain absolute values for the DIB absorption, we compare the observed spectra of τ Boo, HD 33608, and α CrB to PHOENIX models and carry out basic spectral modeling based on Voigt line profiles. Results: The intercomparison between τ Boo and HD 33608 reveals a difference in line depth of 6.85 ± 1.48 mÅ at the DIB location, which is, however, unlikely to be caused by DIB absorption. The comparison between PHOENIX models and observed spectra yields an upper limit of 34.0 ± 0.3 mÅ for any additional interstellar absorption in τ Boo; similar results are obtained for HD 33608 and α CrB. For all objects we derive unrealistically large values for the radial velocity of any presumed interstellar clouds. In τ Boo we find Na D absorption with equivalent widths of 0.65 ± 0.07 mÅ and 2.3 ± 0.1 mÅ in the D2 and D1 lines. For the other stars, Na absorption of the same magnitude could only be detected in the D2 line. Our comparisons between model and data show that the interstellar absorption toward τ Boo is not abnormally high. Conclusions: We find no significant DIB absorption in any of our target stars.
Any differences between modeled and observed spectra are attributable to inaccuracies in the stellar atmospheric modeling rather than to DIB absorption. The spectra are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/591/A20
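The equivalent widths quoted above are integrals of the fractional line depth over wavelength in a continuum-normalized spectrum. A minimal numerical sketch on a synthetic Gaussian line at the DIB wavelength (the line depth and width here are illustrative, not the τ Boo measurement):

```python
import numpy as np

def equivalent_width(wave, flux, cont=1.0):
    """Equivalent width (same units as wave) of an absorption feature,
    by trapezoidal integration of the fractional line depth."""
    d = 1.0 - np.asarray(flux) / cont
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(wave)))

# Synthetic Gaussian absorption line at 5780.61 A: depth 2%, sigma 1 A
wave = np.linspace(5775.0, 5786.0, 2000)
flux = 1.0 - 0.02 * np.exp(-0.5 * ((wave - 5780.61) / 1.0) ** 2)
ew_mA = equivalent_width(wave, flux) * 1000.0
print(round(ew_mA, 1))   # ~50.1 mA, i.e. depth * sigma * sqrt(2*pi)
```

For a Gaussian line the analytic result is depth × σ × √(2π), which makes this a convenient self-check before measuring real spectra.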
NASA Astrophysics Data System (ADS)
Winslow, M.; Akhtar-Schuster, M.; Cherlet, M.; Martius, C.; Sommer, S.; Thomas, R.; Vogt, J.
2009-12-01
The United Nations Convention to Combat Desertification (UNCCD) is a global treaty that emerged from the Rio Earth Summit and formally took force in 1996. It has now been ratified by 193 countries (known as Parties to the Convention). Yet the UNCCD has gained only modest support from donors, largely due to questions about the science base underlying its target issue (desertification), resulting in ambiguous definitions and quantification of the problem. The UNCCD recognizes the need to reform itself and commissioned a scientific conference in Buenos Aires, Argentina in September 2009 to discuss ways to improve the scientific underpinning of monitoring and assessment (M&A) of desertification, land degradation and drought (DLDD). Previous attempts by the UNCCD on M&A focused largely on a search for a common, simple, universal set of indicators that could be reported by country Parties to the Convention Secretariat, which would collate them into a global report. However, experience showed that no single set of indicators is satisfactory to all countries, because DLDD depends strongly on the local environmental and human/social context. Three preparatory Working Groups analyzed the issue of DLDD M&A and recommended the following. Parties should recognize that M&A methods must integrate human-environment parameters to capture the complexity of DLDD phenomena as defined in the Convention's text. Traditional tendencies had been to isolate biophysical from social and economic parameters, leading to unrealistic conclusions. Parties should take advantage of a much wider range of analytical techniques than just the coarse-scale indicators that had been their main focus to date. Powerful but underutilized techniques include integrated assessment models, remote sensing, geographic information systems and mapping, participatory stakeholder assessment, hierarchical aggregation of related data, knowledge management and many others.
Multiple methods could provide validation checks on each other from complementary perspectives. M&A should also collect information to support benefit/cost analysis because decision-makers require such information in weighing priorities for public investment. Such information should include non-monetary as well as monetary values. Ecosystem services should also be valued, even if they are currently available free to land users. Parties should recognize the potential utility of knowledge management (KM) methods to overcome knowledge barriers that currently inhibit M&A collaboration between institutions, scientific disciplines, scale levels, formal/informal sectors, development sectors (e.g. water, health, food, infrastructure etc.), and between land users, scientists and policy makers. Improved KM could also build human and institutional capacities, resulting in improved M&A in the future.
Rodríguez, Ariel; Burgon, James D; Lyra, Mariana; Irisarri, Iker; Baurain, Denis; Blaustein, Leon; Göçmen, Bayram; Künzel, Sven; Mable, Barbara K; Nolte, Arne W; Veith, Michael; Steinfartz, Sebastian; Elmer, Kathryn R; Philippe, Hervé; Vences, Miguel
2017-10-01
The rise of high-throughput sequencing techniques provides an unprecedented opportunity to analyse controversial phylogenetic relationships in great depth, but also introduces a risk of misinterpretation when high node support values are inflated by unevenly distributed missing data or unrealistic model assumptions. Here, we use three largely independent phylogenomic data sets to reconstruct the controversial phylogeny of true salamanders of the genus Salamandra, a group of amphibians providing an intriguing model to study the evolution of aposematism and viviparity. For all six species of the genus Salamandra, and two outgroup species from its sister genus Lyciasalamandra, we used RNA sequencing (RNAseq) and restriction site associated DNA sequencing (RADseq) to obtain data for: (1) 3070 nuclear protein-coding genes from RNAseq; (2) 7440 loci obtained by RADseq; and (3) full mitochondrial genomes. The RNAseq and RADseq data sets retrieved fully congruent topologies when each of them was analysed in a concatenation approach, with high support for: (1) S. infraimmaculata being the sister group to all other Salamandra species; (2) S. algira being sister to S. salamandra; (3) these two species being the sister group to a clade containing S. atra, S. corsica and S. lanzai; and (4) the alpine species S. atra and S. lanzai being sister taxa. The phylogeny inferred from the mitochondrial genome sequences differed from these results, most notably by strongly supporting a clade containing S. atra and S. corsica as sister taxa. A different placement of S. corsica was also retrieved when analysing the RNAseq and RADseq data under species tree approaches. Closer examination of gene trees derived from RNAseq revealed that only a low number of them supported each of the alternative placements of S. atra. Furthermore, gene jackknife support for the S. atra - S. lanzai node stabilized only with very large concatenated data sets.
The phylogeny of true salamanders thus provides a compelling example of how classical node support metrics such as bootstrap and Bayesian posterior probability can provide high confidence values in a phylogenomic topology even if the phylogenetic signal for some nodes is spurious, highlighting the importance of complementary approaches such as gene jackknifing. Yet, the general congruence among the topologies recovered from the RNAseq and RADseq data sets increases our confidence in the results, and validates the use of phylotranscriptomic approaches for reconstructing shallow relationships among closely related taxa. We hypothesize that the evolution of Salamandra has been characterized by episodes of introgressive hybridization, which would explain the difficulties of fully reconstructing their evolutionary relationships. Copyright © 2017. Published by Elsevier Inc.
Genomic tests for ovarian cancer detection and management.
Myers, Evan R; Havrilesky, Laura J; Kulasingam, Shalini L; Sanders, Gillian D; Cline, Kathryn E; Gray, Rebecca N; Berchuck, Andrew; McCrory, Douglas C
2006-10-01
To assess the evidence that the use of genomic tests for ovarian cancer screening, diagnosis, and treatment leads to improved outcomes. PubMed and reference lists of recent reviews. We evaluated tests for: (a) single gene products; (b) genetic variations affecting risk of ovarian cancer; (c) gene expression; and (d) proteomics. For tests covered in recent evidence reports (cancer antigen 125 [CA-125] and breast cancer genes 1 and 2 [BRCA1/2]), we added studies published subsequent to the reports. We sought evidence on: (a) the analytic performance of tests in clinical laboratories; (b) the sensitivity and specificity of tests in different patient populations; (c) the clinical impact of testing in asymptomatic women, women with suspected ovarian cancer, and women with diagnosed ovarian cancer; (d) the harms of genomic testing; and (e) the impact of direct-to-consumer and direct-to-physician advertising on appropriate use of tests. We also constructed a computer simulation model to test the impact of different assumptions about ovarian cancer natural history on the relative effectiveness of different strategies. There are reasonable data on the clinical laboratory performance of most radioimmunoassays, but the majority of the data on other genomic tests comes from research laboratories. Genomic test sensitivity/specificity estimates are limited by small sample sizes, spectrum bias, and unrealistically large prevalences of ovarian cancer; in particular, estimates of positive predictive values derived from most of the studies are substantially higher than would be expected in most screening or diagnostic settings. We found no evidence relevant to the question of the impact of genomic tests on health outcomes in asymptomatic women. Although there is a relatively large literature on the association of test results and various clinical outcomes, the clinical utility of changing management based on these results has not been evaluated. 
We found no evidence that genomic tests for ovarian cancer have unique harms beyond those common to other tests for genetic susceptibility or other tests used in screening, diagnosis, and management of ovarian cancer. Studies of a direct-to-consumer campaign for BRCA1/2 testing suggest increased utilization, but the effect on "appropriateness" was unclear. Model simulations suggest that annual screening, even with a highly sensitive test, will not reduce ovarian cancer mortality by more than 50 percent; frequent screening has a very low positive predictive value, even with a highly specific test. Although research remains promising, adaptation of genomic tests into clinical practice must await appropriately designed and powered studies in relevant clinical settings.
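The simulation finding that frequent screening has a very low positive predictive value follows directly from Bayes' rule at low disease prevalence. A minimal sketch with hypothetical sensitivity, specificity, and prevalence values (not taken from the report):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV = P(disease | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical values: disease prevalence of ~1 in 2,500 in the screened
# population, and a test with 90% sensitivity and 99% specificity.
ppv = positive_predictive_value(0.90, 0.99, 1 / 2500)
print(f"PPV = {ppv:.1%}")  # only ~3.5% of positives are true cases
```

Even a highly specific test flags mostly false positives when the condition is rare, which is why the model's conclusion holds regardless of the exact test chosen.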
ERIC Educational Resources Information Center
Somekh, Bridget
2004-01-01
This article suggests that it is time for sociologists to redirect their focus from critiques of policy makers' unrealistic visions for information and communication technologies (ICTs) to the more generic issues that consistently mobilise resistance to ICTs within schools and education systems. There is an extraordinary difference between young…
ERIC Educational Resources Information Center
Poyatos, Fernando
1974-01-01
Described the methodological problems in setting up a kinesic inventory. Concludes that it is highly unrealistic to study language by itself without analyzing the formal and semantic make-up of the triple basic structure of language-paralanguage-kinesics. (Text is in Spanish.) (DS)
Being professional in the social media world.
Chan, Steven D
2012-01-01
What is at stake for dentists in the world of social media? Because it is unrealistic to completely avoid the new network, dentists should master some of these skills: risk management, crisis management, and reputation management, as well as understanding that the playing field is not even. Guidelines for professional use of media are presented, along with some suggestions for effective participation.
The Impact of an Educational Intervention to Protect Women against the Influence of Media Images
ERIC Educational Resources Information Center
Ogden, Jane; Smith, Lauren; Nolan, Helen; Moroney, Rachel; Lynch, Hannah
2011-01-01
Purpose: Media images of unrealistic beauty have been identified as a determinant of women's body dissatisfaction. This experimental study aims to explore whether the negative impact of such images could be reduced by a one-time educational intervention consisting of a presentation and discussion, teaching women to be critical of media images.…
Wilma Mankiller, Chief of the Cherokee Nation. The Library of Famous Women. First Edition.
ERIC Educational Resources Information Center
Glassman, Bruce
Interspersed with the story of Wilma Mankiller's life is a brief history of the Cherokee Nation of Oklahoma and comments on unrealistic and negative stereotypes of Native Americans. In addition to recounting her life and achievements leading up to becoming the first woman chief of the Cherokee Nation, the book explores Wilma Mankiller's philosophy…
Dissatisfaction among women with "thunder thighs" undergoing closed aspirative lipoplasty.
Lewis, C M
1987-01-01
In our practice, we have uncovered a small series of female patients with "thunder thighs" who were dissatisfied with the results of closed aspirative lipoplasty. The common problem appears to be unrealistic expectations. These patients expected a change in body habitus. This article reiterates the need for careful patient selection and preoperative information about what the procedure can and cannot accomplish.
Recent Personnel Reforms of Public Universities in China and in Italy: A Comparison
ERIC Educational Resources Information Center
Ha, Sha
2018-01-01
The purpose of the present research is to investigate the most recent personnel reforms of higher education institutions in China and in Italy. A one-to-one comparison between the two realities would have been unrealistic, given the enormous differences between the two countries in size and historical development. We focused our analysis on some…
Undergraduate Student Expectations of University in the United Kingdom: What Really Matters to Them?
ERIC Educational Resources Information Center
Money, Julie; Nixon, Sarah; Tracy, Fran; Hennessy, Claire; Ball, Emma; Dinning, Track
2017-01-01
Students spend 12 to 14 years in school settings learning in what could be considered a carefully controlled and structured environment. Higher education may not offer the same landscape to students and it appears that many enter with unrealistic conceptions of what is expected of them and are faced with different approaches to aspects of…
Can we restore the fire process? What awaits us if we don't?
R. Gordon Schmidt
1996-01-01
This paper's title - "Can we restore the fire process? What awaits us if we don't?" - represents an ecologist's view of the world. I submit that this view is unrealistic. The first clause uses the term "restore" which implies reestablishing the fire process of the past. The second phrase uses the absolute term "don't"...
Caring for Each Other in a Peace Club
ERIC Educational Resources Information Center
Stomfay-Stitz, Aline; Wheeler, Edyth
2007-01-01
Each teacher has a favorite dream for the first weeks of the new school year: a calm and peaceful classroom. Yet, due to the tragic incidents of violence and school shootings, matched by an increase in bullying and harassment, such noble goals seem remote and unrealistic. The authors believe that there is a way to succeed through the concept of an…
Body Image and Self-Esteem among Adolescent Girls: Testing the Influence of Sociocultural Factors
ERIC Educational Resources Information Center
Clay, Daniel; Vignoles, Vivian L.; Dittmar, Helga
2005-01-01
In Western cultures, girls' self-esteem declines substantially during middle adolescence, with changes in body image proposed as a possible explanation. Body image develops in the context of sociocultural factors, such as unrealistic media images of female beauty. In a study of 136 U.K. girls aged 11-16, experimental exposure to either ultra-thin…
Modeling Temporal Crowd Work Quality with Limited Supervision
2015-11-11
crowdsourcing, human computation, prediction, uncertainty-aware learning, time-series modeling Introduction While crowdsourcing offers a cost...individual correctness. As discussed earlier, such a strategy is difficult to employ in a live setting because it is unrealistic to assume that all...et al. 2014). Finally, there are interesting opportunities to investigate at the intersection of live task-routing with active-learning techniques
Quality Evidence about Leadership for Organizational and Student Learning in Schools
ERIC Educational Resources Information Center
Mulford, Bill
2005-01-01
Where do those in schools start sorting the wheat from the chaff, genuine growth potions offering long-term improvement from the elixirs, short-term opportunism and/or unrealistic expectations? The current and growing emphasis on evidence informed policy and practice is as good a place as any. The purpose of this article is to take up the issues…
2016-11-09
the model does not become a full probabilistic attack graph analysis of the network, whose data requirements are currently unrealistic. The second...flow. – Untrustworthy persons may intentionally try to exfiltrate known sensitive data to external networks. People may also unintentionally leak...section will provide details on the components, procedures, data requirements, and parameters required to instantiate the network porosity model. These
ERIC Educational Resources Information Center
Erwin, Elizabeth J.
2017-01-01
The landscape of early childhood education and care has become unrecognizable in many countries, particularly in the West. There is an increasing pressure to focus on outcomes over process, prescribed curricula, standardized assessments, and unrealistic academic expectations for young learners and the adults who work on their behalf. This shift in…
Admitting Syrian Refugees: Is The Threat of Islamic State Infiltration Justified
2017-06-01
operatives through electronic media, and risking exposure during the vetting process is unnecessary when easier means of access are available. 14...refugee program are unrealistic. The Islamic State recruits Western operatives through electronic media, and risking exposure during the vetting...Treasury Enforcement Communications System UNHCR United Nations High Commissioner for Refugees USCIS United States Citizenship and Immigration
Psychosocial presentation of revisional LAGB patients: a qualitative study.
Janse Van Vuuren, M; Strodl, E; White, K M; Lockie, P
2015-10-01
This qualitative study offers insight into the experiences, expectations, perceptions and beliefs that may lead to laparoscopic adjustable gastric band patients' failure to achieve expected weight loss and seek revisional bariatric surgery. The 23 participants from two sites were interviewed and data were analysed using a grounded theory methodology in order to build a causal model. Analysis of participants' reports identified 'unrealistic expectations of the LAGB' as the core category. Additionally, the restriction of the band had a negative impact on participants' social interactions, leading to feelings of deprivation and, thus, to a desire for reward from food choices and consequently an increased consumption of high-calorie-dense foods. These foods were chosen because of their specific texture or ability to provide reward. The resulting increase in weight or failure to achieve excess weight loss led to feelings of shame and loneliness and to emotional eating, resulting in increased consumption of rewarding foods. Thus, identifying unrealistic expectations of the laparoscopic adjustable gastric band (LAGB) and emotional eating behaviours is important in those who present initially for primary or revisional bariatric surgery, as they may contribute specifically to these patients' weight regain and consequent failure to achieve excess weight loss. © 2015 World Obesity.
Expert assessment concludes negative emissions scenarios may not deliver
NASA Astrophysics Data System (ADS)
Vaughan, Naomi E.; Gough, Clair
2016-09-01
Many integrated assessment models (IAMs) rely on the availability and extensive use of biomass energy with carbon capture and storage (BECCS) to deliver emissions scenarios consistent with limiting climate change to below 2 °C average temperature rise. BECCS has the potential to remove carbon dioxide (CO2) from the atmosphere, delivering ‘negative emissions’. The deployment of BECCS at the scale assumed in IAM scenarios is highly uncertain: biomass energy is commonly used but not at such a scale, and CCS technologies have been demonstrated but not commercially established. Here we present the results of an expert elicitation process that explores the explicit and implicit assumptions underpinning the feasibility of BECCS in IAM scenarios. Our results show that the assumptions are considered realistic regarding technical aspects of CCS but unrealistic regarding the extent of bioenergy deployment, and development of adequate societal support and governance structures for BECCS. The results highlight concerns about the assumed magnitude of carbon dioxide removal achieved across a full BECCS supply chain, with the greatest uncertainty in bioenergy production. Unrealistically optimistic assumptions regarding the future availability of BECCS in IAM scenarios could lead to the overshoot of critical warming limits and have significant impacts on near-term mitigation options.
A high resolution WRF model for wind energy forecasting
NASA Astrophysics Data System (ADS)
Vincent, Claire Louise; Liu, Yubao
2010-05-01
The increasing penetration of wind energy into national electricity markets has increased the demand for accurate surface layer wind forecasts. There has recently been a focus on forecasting the wind at wind farm sites using both statistical models and numerical weather prediction (NWP) models. Recent advances in computing capacity and non-hydrostatic NWP models mean that it is possible to nest mesoscale models down to Large Eddy Simulation (LES) scales over the spatial area of a typical wind farm. For example, the WRF model (Skamarock 2008) has been run at a resolution of 123 m over a wind farm site in complex terrain in Colorado (Liu et al. 2009). Although these modelling attempts indicate great promise for applying such models to detailed wind forecasts over wind farms, one of the obvious challenges of running the model at this resolution is that while some boundary layer structures are expected to be modelled explicitly, boundary layer eddies into the inertial sub-range can only be partly captured. Therefore, the amount and nature of sub-grid-scale mixing that is required is uncertain. Analysis of the Liu et al. (2009) modelling results in comparison to wind farm observations indicates that unrealistic wind speed fluctuations with a period of around 1 hour occasionally occurred during the two-day modelling period. The problem was addressed by re-running the same modelling system with a) a modified diffusion constant and b) two-way nesting between the high resolution model and its parent domain. The model, which was run with horizontal grid spacing of 370 m, had dimensions of 505 grid points in the east-west direction and 490 points in the north-south direction. It received boundary conditions from a mesoscale model of resolution 1111 m. Both models had 37 levels in the vertical. The mesoscale model was run with a non-local-mixing planetary boundary layer scheme, while the 370 m model was run with no planetary boundary layer scheme.
It was found that increasing the diffusion constant damped the unrealistic fluctuations, but did not completely solve the problem. Using two-way nesting also mitigated the unrealistic fluctuations significantly. It can be concluded that for real-case LES modelling of wind farm circulations, care should be taken to ensure consistency between the mesoscale weather forcing and LES models to avoid exciting spurious noise along the forcing boundary. The development of algorithms that adequately model the sub-grid-scale mixing that cannot be resolved by LES models is an important area for further research. References: Liu, Y., Y.-W. Liu, W. Y. Y. Cheng, W. Wu, T. Warner and K. Parks, 2009: Simulating intra-farm wind variations with the WRF-RTFDDA-LES modeling system. 10th WRF Users' Workshop, Boulder, CO, USA, June 23-26, 2009. Skamarock, W., J. Dudhia, D. O. Gill, D. M. Barker, M. G. Duda, X.-Y. Huang, W. Wang and J. G. Powers, 2008: A Description of the Advanced Research WRF Version 3, NCAR Technical Note TN-475+STR, NCAR, Boulder, Colorado.
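The damping effect of a larger diffusion constant can be illustrated with a toy 1D explicit diffusion filter (a hypothetical sketch, not the WRF diffusion scheme): high-wavenumber fluctuations decay much faster than the resolved large-scale signal.

```python
import math

N = 64  # periodic grid points

def amplitude(u, k):
    """Fourier sine amplitude of wavenumber k on the periodic grid."""
    return (2.0 / N) * sum(u[i] * math.sin(2 * math.pi * k * i / N)
                           for i in range(N))

def diffuse(u, D, steps):
    """Explicit diffusion u += D*(u[i-1] - 2u[i] + u[i+1]), D = nu*dt/dx^2."""
    for _ in range(steps):
        u = [u[i] + D * (u[i - 1] - 2 * u[i] + u[(i + 1) % N])
             for i in range(N)]
    return u

# Signal: a resolved wave (k=1) plus grid-scale noise (k=8).
u0 = [math.sin(2 * math.pi * i / N) + math.sin(2 * math.pi * 8 * i / N)
      for i in range(N)]
u = diffuse(u0, D=0.1, steps=50)
print(amplitude(u, 1), amplitude(u, 8))  # k=8 nearly gone, k=1 barely damped
```

The per-step damping factor for wavenumber k is 1 - 4D sin^2(pi k / N), so raising D preferentially removes the poorly resolved scales while leaving well-resolved waves almost untouched, which is the behaviour the re-run experiments exploited.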
Hydroponics as a valid tool to assess arsenic availability in mine soils.
Moreno-Jiménez, E; Esteban, E; Fresno, T; de Egea, C López; Peñalosa, J M
2010-04-01
The low solubility of As in mine soils limits its phytoavailability. This makes the extrapolation of data obtained under hydroponic conditions unrealistic, because the concentration in nutrient solution frequently overexposes plants to this metalloid. This work evaluates whether As supply in hydroponics resembles, to some extent, the As phytoavailable fraction in soils, and the implications for phytoremediation. Phytotoxicity of As, in terms of biomass production, chlorophyll levels, and As concentrations in plants, was estimated and compared in both soils and hydroponics. In order for hydroponic conditions to be compared to soil conditions, plant exposure levels were measured in both cultures. Hydroponic As concentrations ranging from 2-8 microM equated to the same plant organ concentrations as soils with 700-3000 mg kg(-1). Total and extractable As fractions exceeded those values, but As concentrations in pore water were below them. According to our results, (i) hydroponics should include doses in the range 0-10 microM As to allow the extrapolation of the results to As-polluted soils, and (ii) phytoextraction of As in mining sites will be limited by low As phytoavailability.
NASA Astrophysics Data System (ADS)
Babaeian, E.; Tuller, M.; Sadeghi, M.; Franz, T.; Jones, S. B.
2017-12-01
Soil Moisture Active Passive (SMAP) soil moisture products are commonly validated based on point-scale reference measurements, despite the exorbitant spatial scale disparity. The difference between the measurement depth of point-scale sensors and the penetration depth of SMAP further complicates evaluation efforts. Cosmic-ray neutron probes (CRNP) with an approximately 500-m radius footprint provide an appealing alternative for SMAP validation. This study is focused on the validation of SMAP level-4 root zone soil moisture products with 9-km spatial resolution based on CRNP observations at twenty U.S. reference sites with climatic conditions ranging from semiarid to humid. The CRNP measurements are often biased by additional hydrogen sources such as surface water, atmospheric vapor, or mineral lattice water, which sometimes yield unrealistic moisture values in excess of the soil water storage capacity. These effects were removed during CRNP data analysis. Comparison of SMAP data with corrected CRNP observations revealed a very high correlation for most of the investigated sites, which opens new avenues for validation of current and future satellite soil moisture products.
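The kind of correction described can be sketched with the widely used Desilets-type shape function for converting neutron counts to soil moisture (the A0, A1, A2 coefficients are the published standard calibration; the N0, bulk density, and lattice-water values below are hypothetical):

```python
A0, A1, A2 = 0.0808, 0.372, 0.115  # standard CRNP calibration coefficients

def crnp_soil_moisture(N, N0, bulk_density, lattice_water=0.0, porosity=0.50):
    """Volumetric soil moisture from a corrected neutron count N.

    N0 is the count rate over dry soil; lattice_water is the gravimetric
    water equivalent of mineral lattice water, subtracted so the remaining
    signal reflects pore water only. Values above porosity are flagged as
    unphysical, mirroring the screening described in the abstract.
    """
    theta_grav = A0 / (N / N0 - A1) - A2 - lattice_water
    theta_vol = theta_grav * bulk_density
    if not 0.0 <= theta_vol <= porosity:
        raise ValueError(f"unphysical moisture {theta_vol:.3f}; "
                         "check hydrogen-source corrections")
    return theta_vol

# Plausible mid-range case (all inputs illustrative)
theta = crnp_soil_moisture(N=2000, N0=2500, bulk_density=1.4,
                           lattice_water=0.02)
print(f"theta = {theta:.3f} m3/m3")
```

A count biased low by uncorrected hydrogen sources (e.g. N=1000 here) drives N/N0 toward the A1 singularity and yields moisture far above the soil's storage capacity, which is exactly the artifact the study removed before comparing against SMAP.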
A nonradial pulsation model for the rapidly rotating Delta Scuti star Kappa(2) Bootis
NASA Technical Reports Server (NTRS)
Kennelly, E. J.; Walker, G. A. H.; Hubeny, I.
1991-01-01
A sectorial nonradial pulsation model is used to construct theoretical line profiles which mimic the variations for Kappa(2) Boo. Synthetic spectra generated with the appropriate Teff and log g are used as input. It is found that the data can be reproduced by the combination of a high-degree mode, l approximately equal to 12, with P(osc) approximately equal to 0.071 d, and a low-degree mode, l approximately equal to 0-2, with P(osc) approximately equal to 0.071-0.079 d. The projected rotational velocity (v sin i = 115 +/- 5 km/s) was determined by fitting synthetic line profiles to the observed spectra. The velocity amplitude of the high-degree oscillations is estimated to be about 3.5 km/s. It is found that the ratio of the horizontal and radial pulsation amplitudes is small (about 0.02) and consistent with p-mode oscillations. Comparisons are made with models invoking starspots, and it is impossible to fit the observations of Kappa(2) Boo by a starspot model without assuming unrealistic values of radius or equatorial velocity.
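The fitted v sin i enters the synthetic profiles through rotational broadening. A minimal sketch of the classical broadening kernel (limb darkening neglected; the 4500 A wavelength is illustrative) shows how the profile width scales linearly with v sin i:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def rotational_kernel_fwhm(lambda0_angstrom, vsini_km_s):
    """FWHM (Angstrom) of the classical rotational broadening kernel
    G(x) ~ sqrt(1 - x^2), with x = dlambda / dlambda_max and
    dlambda_max = lambda0 * vsini / c (no limb darkening)."""
    dlam_max = lambda0_angstrom * vsini_km_s / C_KM_S
    # sqrt(1 - x^2) = 1/2  =>  x = sqrt(3)/2, so FWHM = sqrt(3) * dlam_max
    return math.sqrt(3.0) * dlam_max

# At 4500 A with v sin i = 115 km/s (the value fitted for Kappa(2) Boo):
fwhm = rotational_kernel_fwhm(4500.0, 115.0)
print(f"FWHM = {fwhm:.2f} A")  # ~3 A
```

Because the width is proportional to v sin i, mismatches of a few km/s between synthetic and observed profiles are readily detectable, which is what makes the +/- 5 km/s fit possible.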
Johnston, Gloria
2016-01-01
Photovoice methodology is growing in popularity in the health, education and social sciences as a research tool based on the core values of community-based participatory research. Most photovoice projects state a claim to the third goal of photovoice: to reach policy-makers or effect policy change. This paper examines the concerns of raising false hopes or unrealistic expectations amongst the participants of photovoice projects as they are positioned to be the champions for social change in their communities. The impetus for social change seems to lie in the hands of those most affected by the issue. This drive behind collective social action forms, what could be termed, a micro-social movement or comparative interest group. Looking to the potential use of social movement theory and resource mobilisation concepts, this paper poses a series of unanswered questions about the ethics of photovoice projects. The ethical concern centres on the focus of policy change as a key initiative; yet, most projects remain vague about the implementation and outcomes of this focus.
Bende, Attila; Muntean, Cristina M
2014-03-01
The theoretical IR and Raman spectra of the guanine-cytosine DNA base pairs in Watson-Crick and Hoogsteen configurations were computed using the DFT method with the M06-2X meta-hybrid GGA exchange-correlation functional, including anharmonic corrections and solvent effects. The results for harmonic frequencies and their anharmonic corrections were compared with our previously calculated values obtained with the B3PW91 hybrid GGA functional. Significant differences were obtained for the anharmonic corrections calculated with the two different DFT functionals, especially for the stretching modes, while the corresponding harmonic frequencies did not differ considerably. For the Hoogsteen case, the H⁺ vibration between the G-C base pair can be characterized as an asymmetric Duffing oscillator, and therefore unrealistic anharmonic corrections were obtained for normal modes in which this proton vibration is involved. The spectral modifications due to the anharmonic corrections, solvent effects and the influence of the sugar-phosphate group for the Watson-Crick and Hoogsteen base pair configurations, respectively, were also discussed. For the Watson-Crick case, the influence of the stacking interaction on the theoretical IR and Raman spectra was also analyzed. Including the anharmonic correction in our normal mode analysis is essential if one wants to obtain correct assignments of the theoretical frequency values as compared with the experimental spectra.
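The failure mode described arises because standard vibrational perturbation theory assumes a nearly quadratic potential. For the shared proton, the effective one-dimensional potential has the asymmetric Duffing form (a schematic model, not the fitted surface):

```latex
V(q) \;=\; \tfrac{1}{2}\,\mu\omega^{2}q^{2} \;+\; \tfrac{1}{3}\,\alpha q^{3} \;+\; \tfrac{1}{4}\,\beta q^{4}
```

When the cubic asymmetry term $\alpha q^{3}$ is comparable to the harmonic term over the vibrational amplitude, the perturbative anharmonic correction is no longer a small perturbation and can diverge, producing the unrealistic corrections noted for modes involving this proton vibration.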
Medical Ethics in Plastic Surgery: A Mini Review
Nejadsarvari, Nasrin; Ebrahimi, Ali; Ebrahimi, Azin; Hashem-Zade, Haleh
2016-01-01
Currently, cosmetic surgery has spread around the world. Several factors are involved in this rapidly evolving field, such as socio-economic development, changes in cultural norms, globalization and the effects of Western culture, advertising, media, and mental disorders. Nowadays cosmetic surgery is becoming a profitable business, which deals exclusively with human appearance and less from the perspective of beauty based on physical protests, considering factors such as sex, age, and race. The plastic surgery subspecialty has faced many moral dilemmas in the past few years. The role of the patient, regardless of his or her unrealistic dreams, has a questionable ethical dimension. The problem is the loss of human values and their replacement with false values, of pride and glory attached to a charismatic person of higher status, which may underlie some of the posed ethical dilemmas. Cosmetic surgery differs greatly from the general principle of legal liability in professional orientation, because the objective of cosmetic surgeries is different from common therapeutic purposes. To maintain excellence in the medical profession, we should always keep in mind that these service providers, often acting as a therapist (healer), must maintain a commitment to and priority for patient safety, and prior to any action, a real indication for the service should be present. Also, patient-physician confidentiality is the cornerstone of medical ethics. In this review, we study the issues addressed and the ways in which they can be resolved. PMID:27853683
Tax, Casper; Govaert, Paulien H M; Stommel, Martijn W J; Besselink, Marc G H; Gooszen, Hein G; Rovers, Maroeska M
2017-11-02
To illustrate how decision modeling may identify relevant uncertainty and can preclude or identify areas of future research in surgery. To optimize the use of research resources, a tool is needed that assists in identifying relevant uncertainties and the added value of reducing these uncertainties. The clinical pathway for laparoscopic distal pancreatectomy (LDP) versus open distal pancreatectomy (ODP) for nonmalignant lesions was modeled in a decision tree. Cost-effectiveness based on complications, hospital stay, costs, quality of life, and survival was analyzed. The effect of existing uncertainty on the cost-effectiveness was addressed, as well as the expected value of eliminating uncertainties. Based on 29 nonrandomized studies (3,701 patients), the model shows that LDP is more cost-effective compared with ODP. Scenarios in which LDP does not outperform ODP for cost-effectiveness seem unrealistic, e.g., a 30-day mortality rate 1.79 times higher after LDP as compared with ODP, conversion in 62.2%, surgical repair of incisional hernias in 21% after LDP, or an average 2.3 days longer hospital stay after LDP than after ODP. Taking all uncertainty into account, LDP remained more cost-effective. Minimizing these uncertainties did not change the outcome. The results show how decision analytical modeling can help to identify relevant uncertainty and guide decisions for future research in surgery. Based on the currently available evidence, a randomized clinical trial on complications, hospital stay, costs, quality of life, and survival is highly unlikely to change the conclusion that LDP is more cost-effective than ODP.
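The core of such a decision-analytic comparison is an expected-value calculation over the tree's branches. A stylized sketch with invented probabilities, costs, and QALYs (not the study's inputs):

```python
def expected_outcomes(branches):
    """branches: list of (probability, cost, qalys); probabilities sum to 1."""
    assert abs(sum(p for p, _, _ in branches) - 1.0) < 1e-9
    cost = sum(p * c for p, c, _ in branches)
    qaly = sum(p * q for p, _, q in branches)
    return cost, qaly

# Hypothetical two-branch trees: (no complication, complication)
ldp = expected_outcomes([(0.80, 9000, 0.90), (0.20, 16000, 0.75)])
odp = expected_outcomes([(0.70, 10000, 0.88), (0.30, 17000, 0.72)])

d_cost = ldp[0] - odp[0]
d_qaly = ldp[1] - odp[1]
if d_cost <= 0 and d_qaly >= 0:
    print("LDP dominates ODP under these assumptions")
else:
    print(f"ICER = {d_cost / d_qaly:.0f} per QALY")
```

Sensitivity analysis then varies each input over its uncertainty range; only if some plausible combination of inputs flips the decision is further (e.g. randomized) research worth its cost, which is the logic behind the expected-value-of-information framing in the abstract.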
ERIC Educational Resources Information Center
Lavallee, Kristen L.; Parker, Jeffrey G.
2009-01-01
Two focal social cognitive processes were evaluated in a structural model for their direct and indirect roles in early adolescents' jealousy surrounding their closest friend in a sample of 325 early adolescents (169 girls and 156 boys) ages 11-14 years. Individuals who are rigid and unrealistic about meeting their friendship needs were more…
How Much Curriculum Change Is Appropriate? Defining a Zone of Feasible Innovation
ERIC Educational Resources Information Center
Rogan, John M.
2007-01-01
The article grapples with the question of how much curriculum change is appropriate in a given context and in a given time frame. How can a balance be struck between stagnation, on the one hand, and the promotion of unrealistic innovation on the other? In answer to this dilemma, the concept of a zone of feasible innovation (ZFI) is proposed and…
ERIC Educational Resources Information Center
Archer, Robert P.; Handel, Richard W.; Couvadelli, Barbara
2004-01-01
The MMPI-2 Superlative (S) scale was developed by Butcher and Han (1995) to assess individuals' tendencies to present themselves in an unrealistically positive light. The current study examined the performance of the L, K, and S scales in accurately distinguishing the MMPI-2 profiles of 379 psychiatric inpatients who produced one or more elevations…
ERIC Educational Resources Information Center
Ojala, Maria
2015-01-01
Is hope concerning climate change related to environmental engagement, or is it rather associated with unrealistic optimism and inactivity? This study on Swedish high school students identified two kinds of hope: constructive hope and hope based on denial. Constructive hope was positively associated with engagement and a perception that teachers…
ACOSS Six (Active Control of Space Structures)
1981-10-01
modes, specially useful simpler conditions for ensuring closed-loop asymptotic stability are also derived. In addition, conditions for robustness of...in this initial study of FOCL stability and robustness. Such a condition is strong but not unreasonable nor unrealistic. Many useful simple insights...smallest possible feedback gains) and many interesting numerical results on closed-loop stability and robustness of the modal-dashpot designs. The
Stem cell terminology: practical, theological and ethical implications.
Shanner, Laura
2002-01-01
Stem cell policy discussions frequently confuse embryonic and fetal sources of stem cells, and label untested, non-reproductive cloning as "therapeutic." Such misnomers distract attention from significant practical and ethical implications: accelerated research agendas tend to be supported at the expense of physical risks to women, theological implications in a multi-faith community, informed consent for participation in research, and treatment decisions altered by unrealistic expectations.
ERIC Educational Resources Information Center
Letawsky Shultz, Nicole
2017-01-01
The responsibilities of being a Division I student-athlete often leave little time for experiences outside of sport that are critical for their future careers. Many student-athletes have unrealistic expectations of competing in their sport after college, while others expend little effort exploring potential careers. This study examines how career…
Steven R. Martin; Kristen Pope
2012-01-01
As devices like personal locator beacons become more readily available, more visitors may bring them into wilderness, use them to request rescues, and develop unrealistic expectations of rescue. In an exploratory study in 2009, 235 overnight visitors to the King Range Wilderness in California completed a written survey. Of the respondents, 40 percent considered...
Prevalence and correlates of poor sleep quality and daytime sleepiness in Belgian truck drivers.
Braeckman, Lutgart; Verpraet, Rini; Van Risseghem, Marleen; Pevernagie, Dirk; De Bacquer, Dirk
2011-03-01
Sleepiness and sleep complaints are common among professional drivers. Sleepiness is a considerable problem not only because it affects the drivers' well-being, but also because of the consequences for performance and safety. Assessment of the (self-reported) prevalence and research into the risk factors are thus an important health issue and are also indispensable to prevent productivity loss and work-related accidents and injuries. Therefore, the aim of this study was to describe sleeping, driving, and health characteristics of Belgian truck drivers and to determine occupational and individual factors associated with poor sleep quality and daytime sleepiness. Cross-sectional data were collected using a self-administered questionnaire that included the Pittsburgh Sleep Quality Index (PSQI), Epworth Sleepiness Scale (ESS), and Berlin Questionnaire (BQ). The mean (SD) age of the 476 studied truck drivers was 42.7 (10.2) yrs and the mean (SD) body mass index was 27.3 (5.1) kg/m². Approximately 47% declared that they drove >50 h/wk and found their work schedule unrealistic. The mean (SD) PSQI score was 4.45 (2.7); poor quality of sleep (PSQI >5) was found in 27.2%. The mean (SD) ESS score was 6.79 (4.17); 18% had a score >10. The BQ indicated that 21.5% had a higher risk of obstructive sleep apnea. In multiple logistic regression analysis, low educational level (odds ratio [OR] 1.86), current smoking (OR 1.75), unrealistic work schedule (OR 1.75), and risk for obstructive sleep apnea (OR 2.97) were found to be independent correlates of daytime sleepiness. Poor sleep quality was significantly associated with poor self-perceived health (OR 1.95), unrealistic work schedule (OR 2.85), low job satisfaction (OR 1.91), and less driving experience (OR 1.73). These results show that poor sleep quality and daytime sleepiness were prevalent in Belgian truck drivers.
Taking into account that several significant correlates with respect to these sleep problems were identified both at the individual and the occupational level, comprehensive countermeasures to improve working conditions and organization are needed, as well as health promotion interventions, to ensure the safety and well-being of truck drivers.
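The odds ratios reported above come from multiple logistic regression, so they are adjusted for the other covariates; the crude (unadjusted) version of such an association can be computed directly from a 2×2 table. A minimal sketch, using hypothetical counts rather than the study's data:

```python
import math

# Hypothetical 2x2 table (illustrative counts only, NOT the study's data):
# exposure = unrealistic work schedule, outcome = daytime sleepiness.
a, b = 60, 164   # exposed:   with outcome, without outcome
c, d = 26, 226   # unexposed: with outcome, without outcome

# Crude odds ratio: (odds of outcome when exposed) / (odds when unexposed)
odds_ratio = (a * d) / (b * c)

# 95% confidence interval via the standard error of the log odds ratio
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2))
```

The adjusted ORs in the abstract additionally control for confounders such as education and smoking, which the crude calculation cannot do.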
Application of the CERES Flux-by-Cloud Type Simulator to GCM Output
NASA Technical Reports Server (NTRS)
Eitzen, Zachary; Su, Wenying; Xu, Kuan-Man; Loeb, Norman G.; Sun, Moguo; Doelling, David R.; Bodas-Salcedo, Alejandro
2016-01-01
The CERES Flux-By-Cloud-Type data product produces CERES top-of-atmosphere (TOA) fluxes by region and cloud type. Here, the cloud types are defined by cloud optical depth (τ) and cloud top pressure (pc), with bins similar to those used by ISCCP (the International Satellite Cloud Climatology Project). This data product has the potential to be a powerful tool for evaluating the clouds produced by climate models by helping to identify which physical parameterizations have problems (e.g., boundary-layer parameterizations, convective clouds, processes that affect surface albedo). Also, when the flux-by-cloud-type and the frequency of cloud types are used together to evaluate a model, the results can determine whether an unrealistically large or small occurrence of a given cloud type has an important radiative impact for a given region. A simulator of the flux-by-cloud-type product has been applied to three-hourly data from the year 2008 from the UK Met Office HadGEM2-A model, using the Langley Fu-Liou radiative transfer model to obtain TOA SW and LW fluxes.
NASA Technical Reports Server (NTRS)
Kharecha, Pushker A.; Hansen, James
2013-01-01
The critique by Rabilloud, whose only listed professional affiliation is an antinuclear activist group, is grossly biased and contains numerous misleading, hyperbolic, and erroneous claims about our paper and about nuclear energy in general. The nature of his comments bears a striking resemblance to the fallacious reasoning commonly employed by climate change deniers to try to undermine public concern about the climate crisis. Specifically, he resorts to cherry-picking of information and diversionary (red herring) arguments, demands unrealistic exactness, and cites untrustworthy sources. None of his claims undermine any of the key results of our paper, most notably our conclusion that nuclear energy has prevented, and can continue to prevent, a very high number of fatalities and very large greenhouse gas emissions due to fossil fuel burning. It follows that, as uncomfortable as it is for many well-intentioned environmentalists to admit, efforts to undermine nuclear energy also undermine mitigation of climate change and air pollution, with a heavy cost in human lives and potentially disastrous future climate change.
Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong
2016-12-01
Coronary artery disease has become one of the most dangerous diseases threatening human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods struggle with the complex vascular texture that arises from the projective nature of conventional coronary angiography. Given the large amount of data and complex vascular shapes, manual annotation has become increasingly unrealistic, so a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method based on reliable boundaries via multi-domain remapping and robust discrepancy correction via distance balance and quantile regression for automatic coronary artery segmentation of angiography images. The proposed method can not only segment overlapping vascular structures robustly, but also achieve good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels compared with the existing methods. The overall segmentation performance measures si, fnvf, fvpf, and tpvf were 95.135%, 3.733%, 6.113%, and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
Professional nursing burnout and irrational thinking.
Balevre, P
2001-01-01
This article examines professional, job-related burnout in nurses (N = 192) in relation to a developed index of irrational thinking patterns in a large, urban hospital setting. Based on the constructs of Rational Emotive Behavior Therapy (REBT), the study examines maladaptive thinking patterns related to nursing burnout and provides insight into possible educational and staff interventions for the syndrome. Low mean scores on all but two subscales indicate overall strength and stability among this sample. The demonstration that both burnout thoughts (r = 0.451, p < .01) and burnout behaviors (r = 0.350, p < .01) are significantly correlated with the perfection and control pattern supports the study's assumptions. Nurses who demand perfection and control in themselves and others create unrealistic demands and expectations that cannot be met in the real world of nursing. The investigator believes that a regular stress management program, using the concepts of REBT, can foster professional growth and development, decrease workplace conflict and stress, and provide nurses (and other employees) with strategies and tools to disarm the irrational beliefs that build maladaptive cognitive patterns leading to professional burnout.
Dependence of radiation belt simulations to assumed radial diffusion rates
NASA Astrophysics Data System (ADS)
Drozdov, A.; Shprits, Y.; Aseev, N.; Kellerman, A. C.; Reeves, G. D.
2017-12-01
Radial diffusion is one of the dominant physical mechanisms that drive acceleration and loss of radiation belt electrons through wave-particle interaction with ultra-low frequency (ULF) waves, which makes it very important for radiation belt modeling and forecasting. We investigate the sensitivity of long-term radiation belt modeling to several parameterizations of radial diffusion, including those of Brautigam and Albert [2000], Ozeke et al. [2014], and Ali et al. [2016], using the Versatile Electron Radiation Belt (VERB) code. Following previous studies, we first perform 1-D radial diffusion simulations. To take into account the effects of local acceleration and loss, we perform additional 3-D simulations, including pitch-angle, energy, and mixed diffusion. The results demonstrate that the inclusion of local acceleration and pitch-angle diffusion can provide a negative feedback effect, such that the outcome is largely indistinguishable between simulations conducted with different radial diffusion parameterizations. We also perform a number of sensitivity tests by multiplying radial diffusion rates by constant factors and show that such an approach leads to unrealistic predictions of radiation belt dynamics.
Kohut, Taylor; Fisher, William A; Campbell, Lorne
2017-02-01
The current study adopted a participant-informed, "bottom-up," qualitative approach to identifying perceived effects of pornography on the couple relationship. A large sample (N = 430) of men and women in heterosexual relationships in which pornography was used by at least one partner was recruited through online (e.g., Facebook, Twitter, etc.) and offline (e.g., newspapers, radio, etc.) sources. Participants responded to open-ended questions regarding perceived consequences of pornography use for each couple member and for their relationship in the context of an online survey. In the current sample of respondents, "no negative effects" was the most commonly reported impact of pornography use. Among remaining responses, positive perceived effects of pornography use on couple members and their relationship (e.g., improved sexual communication, more sexual experimentation, enhanced sexual comfort) were reported frequently; negative perceived effects of pornography (e.g., unrealistic expectations, decreased sexual interest in partner, increased insecurity) were also reported, albeit with considerably less frequency. The results of this work suggest new research directions that require more systematic attention.
Very large eddy simulation of the Red Sea overflow
NASA Astrophysics Data System (ADS)
Ilıcak, Mehmet; Özgökmen, Tamay M.; Peters, Hartmut; Baumert, Helmut Z.; Iskandarani, Mohamed
Mixing between overflows and ambient water masses is a critical problem of deep-water mass formation in the downwelling branch of the meridional overturning circulation of the ocean. Modeling approaches that have been tested so far rely either on algebraic parameterizations in hydrostatic ocean circulation models, or on large eddy simulations that resolve most of the mixing using nonhydrostatic models. In this study, we examine the performance of a set of turbulence closures that have not previously been tested against observational data for overflows. We employ the so-called very large eddy simulation (VLES) technique, which allows the use of k-ɛ models in nonhydrostatic models. This is done by applying a dynamic spatial filtering to the k-ɛ equations. To our knowledge, this is the first time that the VLES approach has been adopted for an ocean modeling problem. The performance of the k-ɛ and VLES models is evaluated by conducting numerical simulations of the Red Sea overflow and comparing them to observations from the Red Sea Outflow Experiment (REDSOX). The computations are constrained to one of the main channels transporting the overflow, which is narrow enough to permit the use of a two-dimensional (and nonhydrostatic) model. A large set of experiments is conducted using different closure models, Reynolds numbers, and spatial resolutions. It is found that, when no turbulence closure is used, the basic structure of the overflow, consisting of a well-mixed bottom layer (BL) and entraining interfacial layer (IL), cannot be reproduced. The k-ɛ model leads to unrealistic thicknesses for both BL and IL, while VLES results in the most realistic reproduction of the REDSOX observations.
DeForest, David K; Gilron, Guy; Armstrong, Sarah A; Robertson, Erin L
2012-01-01
A freshwater Se guideline was developed for consideration based on concentrations in fish eggs or ovaries, with a focus on Canadian species, following the Canadian Council of Ministers of the Environment protocol for developing guideline values. When sufficient toxicity data are available, the protocol recommends deriving guidelines as the 5th percentile of the species sensitivity distribution (SSD). When toxicity data are limited, the protocol recommends a lowest value approach, where the lowest toxicity threshold is divided by a safety factor (e.g., 10). On the basis of a comprehensive review of the current literature and an assessment of the data therein, there are sufficient egg and ovary Se data available for freshwater fish to develop an SSD. For most fish species, Se EC10 values (10% effect concentrations) could be derived, but for some species, only no-observed-effect concentrations and/or lowest-observed-effect concentrations could be identified. The 5th percentile egg and ovary Se concentrations from the SSD were consistently 20 µg/g dry weight (dw) for the best-fitting distributions. In contrast, the lowest value approach using a safety factor of 10 would result in a Se egg and ovary guideline of 2 µg/g dw, which is unrealistically conservative, as this falls within the range of egg and ovary Se concentrations in laboratory control fish and fish collected from reference sites. An egg and ovary Se guideline of 20 µg/g dw should be considered a conservative, broadly applicable guideline, as no species mean toxicity thresholds lower than this value have been identified to date. When concentrations exceed this guideline, site-specific studies with local fish species, conducted using a risk-based approach, may result in higher egg and ovary Se toxicity thresholds. Copyright © 2011 SETAC.
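The two derivation approaches contrasted above (5th percentile of a species sensitivity distribution vs. lowest value divided by a safety factor) can be illustrated numerically. A minimal sketch, assuming a log-normal SSD and using hypothetical species-mean EC10 values, not the review's actual data:

```python
import math
import statistics

# Hypothetical species-mean EC10 values for egg/ovary Se (µg/g dw);
# illustrative only, not taken from the reviewed literature.
ec10 = [24.0, 27.0, 30.0, 33.0, 38.0, 45.0, 52.0, 60.0]

# Fit a log-normal SSD: mean and sample SD of the log-concentrations
logs = [math.log(x) for x in ec10]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

# HC5: 5th percentile of the fitted distribution (z at 0.05 is about -1.645)
hc5 = math.exp(mu - 1.645 * sigma)

# Lowest-value approach: lowest threshold divided by a safety factor of 10
lowest_over_10 = min(ec10) / 10.0

print(round(hc5, 1), round(lowest_over_10, 1))
```

With numbers in this range the SSD percentile lands near the abstract's 20 µg/g dw, while the safety-factor approach yields a value an order of magnitude lower, below background levels in control fish, which is the over-conservatism the authors caution against.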
Gallardo-Moreno, Amparo M; Vadillo-Rodríguez, Virginia; Perera-Núñez, Julia; Bruque, José M; González-Martín, M Luisa
2012-07-21
The electrical characterization of surfaces in terms of the zeta potential (ζ), i.e., the electric potential contributing to the interaction potential energy, is of major importance in a wide variety of industrial, environmental and biomedical applications in which the integration of any material with the surrounding media is initially mediated by the physico-chemical properties of its outer surface layer. Among the different existing electrokinetic techniques for obtaining ζ, streaming potential (V(str)) and streaming current (I(str)) are important when dealing with flat-extended samples. Mostly dielectric materials have been subjected to this type of analysis and only a few papers can be found in the literature regarding the electrokinetic characterization of conducting materials. Nevertheless, a standardized procedure is typically followed to calculate ζ from the measured data and, importantly, it is shown in this paper that such a procedure leads to incorrect zeta potential values when conductors are investigated. In any case, assessment of a reliable numerical value of ζ requires careful consideration of the origin of the input data and the characteristics of the experimental setup. In particular, it is shown that the cell resistance (R) typically obtained through a.c. signals (R(a.c.)), and needed for the calculations of ζ, always underestimates the zeta potential values obtained from streaming potential measurements. The consideration of R(EK), derived from the V(str)/I(str) ratio, leads to reliable values of ζ when dielectrics are investigated. For metals, the contribution of conductivity of the sample to the cell resistance provokes an underestimation of R(EK), which leads to unrealistic values of ζ. For the electrical characterization of conducting samples I(str) measurements constitute a better choice. 
In general, the findings gathered in this manuscript establish a measurement protocol for obtaining reliable zeta potentials of dielectrics and conductors based on the intrinsic electrokinetic behavior of both types of samples.
NASA Astrophysics Data System (ADS)
Katavouta, Anna; Thompson, Keith
2017-04-01
A high resolution regional model (1/36 degree) of the Gulf of Maine, Scotian Shelf and adjacent deep ocean (GoMSS) is developed to downscale ocean conditions from an existing global operational system. First, predictions from the regional GoMSS model in a one-way nesting setup are evaluated using observations from multiple sources including satellite-borne sensors of surface temperature and sea level, CTDs, Argo floats and moored current meters. It is shown that on the shelf, the regional model predicts more realistic fields than the global system because it has higher resolution and includes tides that are absent from the global system. However, in deep water the regional model misplaces deep ocean eddies and meanders associated with the Gulf Stream. This is because of unrealistic internally generated variability (associated with the one-way nesting setup) that leads to decoupling of the regional model from the global system in the deep water. To overcome this problem, the large scales (length scales > 90 km) of the regional model are spectrally nudged towards the global system fields. This leads to more realistic predictions off the shelf. Wavenumber spectra show that even though spectral nudging constrains the large scales, it does not suppress the variability on small scales; on the contrary, it favours the formation of eddies with length scales below the cut-off wavelength of the spectral nudging.
NASA Astrophysics Data System (ADS)
Zhao, Chongbin; Hobbs, B. E.; Ord, A.
2018-04-01
Reaction-infiltration instability, in which chemical reactions can dissolve minerals and therefore create preferential pore-fluid flow channels in fluid-saturated rocks, may play an important role in controlling groundwater quality in groundwater hydrology. Although this topic has been studied for many years, a recent debate claims that the use of large-density asymptotics in the previous studies is invalid. However, there is a crucial conceptual mistake in this debate, which leads to results and conclusions that are inconsistent with the fundamental laws of physics. It is well known that of distance, time and velocity, only two are independent variables; yet the debate treats them as three independent variables, a procedure that is the main source of its physically unrealistic results and conclusions. In this paper, we discuss the results and conclusions related to the debate, with emphasis on the issues leading to the corresponding errors. In particular, we demonstrate that there is an unappreciated constraint condition between the dimensional/dimensionless distance, time and velocity in the debate. By using this constraint condition, it can be confirmed that as the ratio of the reactant concentration in the incoming fluid stream to the mineral concentration approaches zero, the dimensionless transport parameter, H, automatically approaches infinity. Therefore, it is further confirmed that the previous work conducted by Chadam and others remains valid.
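The constraint the abstract appeals to can be made explicit. A minimal sketch in nondimensional form, with generic symbols (characteristic length L and velocity scale v_0 are assumptions of this illustration, not notation from the debated papers):

```latex
% Dimensionless variables built from characteristic scales L and v_0:
\hat{x} = \frac{x}{L}, \qquad
\hat{t} = \frac{t\,v_0}{L}, \qquad
\hat{v} = \frac{v}{v_0}.
% For a front moving at velocity v, position and time are linked by x = v t,
% so the dimensionless variables inherit the same constraint:
\hat{x} = \hat{v}\,\hat{t}.
% Only two of the three quantities are independent; treating all three as
% independent is the kind of inconsistency the authors identify.
```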
Assessing the role of slab rheology in coupled plate-mantle convection models
NASA Astrophysics Data System (ADS)
Bello, Léa; Coltice, Nicolas; Tackley, Paul J.; Dietmar Müller, R.; Cannon, John
2015-11-01
Reconstructing the 3D structure of the Earth's mantle has been a challenge for geodynamicists for about 40 yr. Although numerical models and computational capabilities have substantially progressed, parameterizations used for modeling convection forced by plate motions are far from being Earth-like. Among the set of parameters, rheology is fundamental because it defines in a non-linear way the dynamics of slabs and plumes, and the organization of lithosphere deformation. In this study, we evaluate the role of the temperature dependence of viscosity (variations up to 6 orders of magnitude) and the importance of pseudo-plasticity on reconstructing slab evolution in 3D spherical models of convection driven by plate history models. Pseudo-plasticity, which produces plate-like behavior in convection models, allows a consistent coupling between imposed plate motions and global convection, which is not possible with temperature-dependent viscosity alone. Using test case models, we show that increasing temperature dependence of viscosity enhances vertical and lateral coherence of slabs, but leads to unrealistic slab morphologies for large viscosity contrasts. Introducing pseudo-plasticity partially solves this issue, producing thin laterally and vertically more continuous slabs, and flat subduction where trench retreat is fast. We evaluate the differences between convection reconstructions employing different viscosity laws to be very large, and similar to the differences between two models with the same rheology but using two different plate histories or initial conditions.
Coding Without Your Crystal Ball: Unanticipated Object-Oriented Reuse
2009-12-01
Abstract: In many ways, existing languages place unrealistic expectations on library and framework designers, allowing some varieties of client reuse only... if it is explicitly (sometimes manually) supported. Instead, we should aim for the ideal: a language design that reduces the amount of prognostication... that is required on the part of the original designers. In particular, I show that languages can and should support a combination of structural and
Measured Visual Motion Sensitivity at Fixed Contrast in the Periphery and Far Periphery
2017-08-01
group Soldier performance. Soldier performance depends on visual detection of enemy personnel and materiel. Vision modeling in IWARS is neither... a highly time-critical and order-dependent activity, these unrealistic characterizations of target detection time and order severely limit the... recognize that MVTs should depend on target contrast, so we selected a target design different from that used in the Monaco et al. (2007) study. Based
Farmer, E; Chipperfield, C
1996-04-01
Key elements: Problems of One-to-One midwifery: developing a caseload; relationships with other staff (perceived 'elitism'); boundaries of care for high-risk mothers; long hours; isolation; women's unrealistic expectations of the midwife, including overdependence. Solutions: maintain dialogue with other professionals; hospital midwives invited to join OTO midwives on community visits; clarify roles and responsibilities; direct referral to on-call registrar; partnerships with teams; peer support; weekly group meeting; empowering women, reducing overdependence.
Fuel Tank Non-Nuclear Vulnerability Test Program
1975-02-01
configurations and structures, for all the threat velocities and obliquities, and for all the different fuel tank conditions. This is very unrealistic and can... of operational aircraft. It is, of course, impractical to simulate all the potential conditions, threat variables, structural materials, and... simulate the structural members of the aircraft to which the aircraft skin and fuel tank walls are attached. The effect that paint, on the aircraft
JPRS Report. Near East & South Asia.
1990-11-02
Discussed [YEDI'OT AHARONOT 17 Sep] 7; Economic Forecasts for New Year Given [MA'ARIV (Business Supplement) 18 Sep] 9; SOUTH ASIA; INDIA; Report... November Polls [THE TIMES OF INDIA 20 Sep] 18; Reservation Policy Seen Creating Discord, Divisions [ANANDA BAZAR PATRIKA 18 Aug] 18; Congress-BJP... 'Unrealistic' [THE HINDU 20 Sep] 28; Officials Predict Record Foodgrain Production [THE TIMES OF INDIA 14 Sep] 29; Commentary Questions State
Air Force IT System Security Compliance with Law and Policy
2016-04-01
production /1/saf_cio_a6/publication/afpd33-2/afpd33-2.pdf 21 AFI33-210, Air Force Certification and Accreditation Program (AFCAP), October 2014: http... cyber systems for support and operation. Today's system certification and compliancy tracking methods are very costly, time intensive, unrealistic... and often lag behind operational and test requirements. However, with changes to policy and implementation requirements, the IT system certification
Space Particle Hazard Measurement and Modeling
2007-11-30
the spacecraft and perturbations of the environment generated by the spacecraft. Koons et al. (1999) compiled and studied all spacecraft anomalies... unrealistic for D12 than for Dα0p). However, unlike the stability problems associated with the original cross diffusion terms, they are quite manageable... E), to mono-energetic beams of charged particles of known energies which enables one, in principle, to unfold the space environment spectrum, j(E
Questionable Assumptions and Unrealistic Expectations: Reassessing the U.S.- India Relationship
2010-06-16
space. It must face complicated regional dynamics shaped by thousands of years of contact between diverse populations and the legacies of colonialism... between the public and private sectors, in particular to further economic cooperation as an important strategic means for achieving U.S. goals, will... also be required. This monograph seeks to highlight the asymmetry between Washington's foreign policy assumptions and approach toward India, on
JPRS Report, Soviet Union, World Economy & International Relations, No. 10, October 1988
1989-02-10
essence of the capitalist mode of production and via the decomposition of the Ricardian school, which became, according to Marx, the "vulgar apologists"... the framework of the accepted limitations) if it is seen as a model reflecting quantitatively the participation or role of individual production... From the viewpoint of a characterization of modern capitalist production, the models of general balance proceeded from unrealistic premises: they
US Air Force 1989 Research Initiation Program. Volume 2.
1992-06-25
University of Minnesota-Duluth; Specialty: Inorganic Chemistry; Specialty: Mechanics; Dr. Satish Chandra, Kansas State University; Mr. Asad Yousuf, Savannah... the Study: Van der Waals forces in capillary tubes have previously been calculated by Philip [1977b]. His study was based on the Hamaker theory, which... important in condensed media, are not taken into account by the Hamaker theory. Calculations using the Hamaker theory are often based on an unrealistic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Eylen, V.; Lindholm Nielsen, M.; Hinrup, B.
2013-09-10
With years of Kepler data currently available, the measurement of variations in planetary transit depths over time can now be attempted. To do so, it is of primary importance to understand which systematic effects may affect the measurement of transits. We aim to measure the stability of Kepler measurements over years of observations. We present a study of the depth of about 500 transit events of the Hot Jupiter HAT-P-7b, using 14 quarters (Q0-Q13) of data from the Kepler satellite. We find a systematic variation in the depth of the primary transit, related to quarters of data and recurring yearly. These seasonal variations are about 1%. Within seasons, we find no evidence for trends. We speculate that the cause of the seasonal variations could be unknown field crowding or instrumental artifacts. Our results show that care must be taken when combining transits throughout different quarters of Kepler data. Measuring the relative planetary radius of HAT-P-7b without taking these systematic effects into account leads to unrealistically low error estimates. This effect could be present in all Kepler targets. If so, relative radius measurements of all Hot Jupiters to a precision much better than 1% are unrealistic.
A calibration hierarchy for risk models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
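The "moderate calibration" definition above (an event rate of R% among patients with predicted risk R%) suggests a simple empirical check: bin the validation set by predicted risk and compare observed event rates with mean predicted risks. A minimal sketch on simulated data, in the spirit of the paper's simulation-based results (the bin width and sample size are arbitrary choices of this illustration):

```python
import random

random.seed(0)

# Simulated validation set in which the model is moderately calibrated by
# construction: each outcome occurs with exactly the predicted probability.
preds = [random.uniform(0.05, 0.95) for _ in range(20000)]
events = [1 if random.random() < p else 0 for p in preds]

# Group patients into deciles of predicted risk
bins = {}
for p, y in zip(preds, events):
    bins.setdefault(int(p * 10), []).append((p, y))

# Moderate calibration check: observed event rate per bin should be close
# to the mean predicted risk in that bin
for b, rows in sorted(bins.items()):
    mean_pred = sum(p for p, _ in rows) / len(rows)
    obs_rate = sum(y for _, y in rows) / len(rows)
    print(f"bin {b}: predicted {mean_pred:.2f}, observed {obs_rate:.2f}")
```

A strongly calibrated model would additionally have to match the event rate for every covariate pattern, not just within risk bins, which is the requirement the authors argue is unrealistic in practice.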
Compactified cosmological simulations of the infinite universe
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-06-01
We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range, is balanced in following a comparable number of high and low k modes, and has a fundamental geometry and topology that match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically projected cosmological simulations. It uses stereographic projection for space compactification and a naive O(N^2) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N^2) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.
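The naive O(N^2) pairwise force calculation the abstract refers to is the direct-summation scheme: every particle feels the softened gravitational pull of every other particle. A minimal sketch (units, softening length, and function names are illustrative, not from the StePS code):

```python
import math

def accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct-summation gravitational accelerations: O(N^2) pair loop.

    pos  -- list of [x, y, z] positions
    mass -- list of particle masses
    eps  -- Plummer softening length to avoid the 1/r^2 singularity
    """
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps  # softened distance^2
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc

# Two equal masses a unit distance apart attract each other symmetrically
a = accelerations([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [1.0, 1.0])
```

Because every pair is visited independently, the double loop maps directly onto GPU threads, which is why the abstract notes the scheme is easy to adapt to graphics cards; tree and P3M codes trade this simplicity for better asymptotic scaling.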
Short-term depression and transient memory in sensory cortex.
Gillary, Grant; von der Heydt, Rüdiger; Niebur, Ernst
2017-12-01
Persistent neuronal activity is usually studied in the context of short-term memory localized in central cortical areas. Recent studies show that early sensory areas also can have persistent representations of stimuli which emerge quickly (over tens of milliseconds) and decay slowly (over seconds). Traditional positive feedback models cannot explain sensory persistence for at least two reasons: (i) They show attractor dynamics, with transient perturbations resulting in a quasi-permanent change of system state, whereas sensory systems return to the original state after a transient. (ii) As we show, those positive feedback models which decay to baseline lose their persistence when their recurrent connections are subject to short-term depression, a common property of excitatory connections in early sensory areas. Dual time constant network behavior has also been implemented by nonlinear afferents producing a large transient input followed by much smaller steady state input. We show that such networks require unphysiologically large onset transients to produce the rise and decay observed in sensory areas. Our study explores how memory and persistence can be implemented in another model class, derivative feedback networks. We show that these networks can operate with two vastly different time courses, changing their state quickly when new information is coming in but retaining it for a long time, and that these capabilities are robust to short-term depression. Specifically, derivative feedback networks with short-term depression that acts differentially on positive and negative feedback projections are capable of dynamically changing their time constant, thus allowing fast onset and slow decay of responses without requiring unrealistically large input transients.
Hsu, Yi-Fang; Vincent, Romain; Waszak, Florian
2015-08-27
Recent research has suggested a link between the prediction mechanism and depressive symptoms. While healthy people tend to maintain unrealistic optimism in the face of reality challenging their beliefs, depressed people show systematic pessimism. However, it remains unclear at which stage these individual differences in optimism/pessimism arise in the brain. In the current study we designed a simple gambling task with two difficulty levels, the easy game and the hard game. Participants were required to press one of four keys to gain a bonus signalled by a sinusoidal tone. For three of the four keys, the probability of getting a large bonus was 80% in the easy game and 8% in the hard game. In both games, the fourth key, randomly determined in each trial, yielded a large bonus with a probability of 100%. This arrangement allowed us to observe less/more depressed participants' optimistic/pessimistic expectations about hitting the key that guarantees a large bonus. The opposite expectation patterns of less/more depressed participants were reflected in the N1 amplitude. Meanwhile, all participants were well aware of the true probability of obtaining a certain bonus in each game, as reflected in the P3 amplitude. The results suggest that the subjective system (tracking subjective beliefs) and the objective system (tracking objective evidence) are dissociable in the human brain, with the former feeding information into sensory areas and the latter representing prediction errors on a higher level. Moreover, individual differences arise from variability in the former rather than the latter. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dodd, J. P.; Abbott, T.; Scherer, R. P.
2016-12-01
Oxygen isotope (δ18O) values of biogenic silica have enormous potential as paleoenvironmental proxies in marine and non-marine environments. Isotopic exchange and phase changes (opal-A to opal-CT) can overprint the initial δ18O values, but these diagenetic processes can also provide additional paleoenvironmental information. The timing and magnitude of isotopic exchange reactions are difficult to constrain in natural environments; however, experimental results of diatom aging indicate that changes in the δ18O values of the biogenic silica occur coincident with dehydroxylation of the silica prior to opal-CT formation. Diagenetic alteration of biogenic silica dramatically changes our interpretation of silica isotope data from sedimentary records. For example, diatom δ18O values from a Pliocene (4.68 to 3.44 Ma) marine sediment core (AND-1B) from the Ross Sea, Antarctica, range from +28 to +36‰. Applying the silica-water fractionation relationship for marine diatoms of Juillet-Leclerc and Labeyrie (1987) with a standard marine δ18O water value of 0‰ yields unrealistic sea surface temperatures of >20°C. Conversely, if water temperatures of 0 to 10°C are used, the resulting water δ18O values range from -8 to -16‰. A plausible alternate scenario is that the diatom δ18O values are recording sediment pore-water conditions. Pore waters in the AND-2A core had δ18O values of -11‰, possibly as a result of cryogenic brine formation (Frank et al., 2010). These low δ18O values are in close agreement with our calculated δ18O water values. Despite diagenetic overprinting of the diatom δ18O values, there is still good agreement between the measured diatom δ18O values and the stacked benthic δ18O record (Lisiecki and Raymo, 2005); biogenic silica δ18O values in the AND-1B core likely record the composition of shallow sediment pore water and cryogenic brine formation.
The agreement between our δ18O record and the benthic stack δ18O values suggests that brine formation in the early Pliocene Ross Sea may be driven by regional and global scale climate processes. Sedimentary diatoms most likely represent a combination of growth and diagenetic environments, and the δ18O value of diagenetic water needs to be addressed when reconstructing paleoceanographic and paleoenvironmental conditions.
Keyvanloo, A; Burke, B; Warkentin, B; Tadic, T; Rathee, S; Kirkby, C; Santos, D M; Fallone, B G
2012-10-01
The magnetic fields of linac-MR systems modify the path of contaminant electrons in photon beams, which alters patient skin dose. To accurately quantify the magnitude of changes in skin dose, the authors use Monte Carlo calculations that incorporate realistic 3D magnetic field models of longitudinal and transverse linac-MR systems. The finite element method (FEM) is used to generate complete 3D magnetic field maps for 0.56 T longitudinal and transverse linac-MR magnet assemblies, as well as for representative 0.5 and 1.0 T Helmholtz MRI systems. EGSnrc simulations implementing these 3D magnetic fields are performed. The geometry for the BEAMnrc simulations incorporates the Varian 600C 6 MV linac, magnet poles, the yoke, and the magnetic shields of the linac-MRIs. The resulting phase-space files are used to calculate the central axis percent depth-doses in a water phantom and 2D skin dose distributions for 70 μm entrance and exit layers using DOSXYZnrc. For comparison, skin doses are also calculated in the absence of a magnetic field, and using a 1D magnetic field with an unrealistically large fringe field. The effects of photon field size, air gap (longitudinal configuration), and angle of obliquity (transverse configuration) are also investigated. Realistic modeling of the 3D magnetic fields shows that fringe fields decay rapidly and have a very small magnitude at the linac head. As a result, longitudinal linac-MR systems mostly confine contaminant electrons that are generated in the air gap and have an insignificant effect on electrons produced further upstream. The increase in the skin dose for the longitudinal configuration compared to the zero B-field case varies from ∼1% to ∼14% for air gaps of 5-31 cm, respectively. (All dose changes are reported as a % of D(max).) The increase is also field-size dependent, ranging from ∼3% at 20 × 20 cm(2) to ∼11% at 5 × 5 cm(2).
The small changes in skin dose are in contrast to significant increases that are calculated for the unrealistic 1D magnetic field. For the transverse configuration, the entrance skin dose is equal to or smaller than that of the zero B-field case for perpendicular beams. For a 10 × 10 cm(2) oblique beam the transverse magnetic field decreases the entrance skin dose for oblique angles less than ±20° and increases it by no more than 10% for larger angles up to ±45°. The exit skin dose is increased by 42% for a 10 × 10 cm(2) perpendicular beam, but it drops appreciably and approaches the zero B-field case for large oblique angles of incidence. For longitudinal linac-MR systems only a small increase in the entrance skin dose is predicted, due to the rapid decay of the realistic magnetic fringe fields. For transverse linac-MR systems, changes to the entrance skin dose are small for most scenarios. For the same geometry, on the exit side a fairly large increase is observed for perpendicular beams, but it drops significantly for large oblique angles of incidence. The observed effects on skin dose are not expected to limit the application of linac-MR systems in either the longitudinal or transverse configuration.
A 3-D ecosystem model in the Pacific Ocean and its simulations
NASA Astrophysics Data System (ADS)
Xu, Y.; Ba, Q.
2011-12-01
A simple 3-D ecosystem model with nutrient, phytoplankton, zooplankton and detritus is coupled into a basinwide ocean general circulation model (OGCM) of the Pacific Ocean that has been validated with passive tracers such as tritium. The model was integrated for 500 years under the forcing of climatological monthly mean fields. The model generates distribution patterns of ecosystem variables similar to the estimates based on satellite-derived chlorophyll maps by the vertically generalized production model, with low water-column NPP values in the subtropical region and high values in the subarctic and equatorial upwelling regions. However, the area and strength of the oligotrophic gyre are much larger than indicated in the observations. Compared with the observations, seasonal variations of surface chlorophyll concentrations and top 200-m average zooplankton biomass in the mid-high latitude regions are well simulated by the model. Because of the restoring term used near the northern boundary of the model, a false phytoplankton bloom can occur near 50°N during wintertime. An unrealistic maximum in the vertical profile of chlorophyll near ocean weather station Papa is also generated by our model. Through modifications of the model structure and sensitivity tests of the associated parameters, the simulated results can be substantially improved. Although dividing the nutrient into nitrate and ammonium and including DON in the model can alleviate the low-NPP problem in the subtropical region, modifying the sinking rate and decomposition rate of detritus in the model is more effective. Introducing the influence of the mixed layer on ecosystem processes and modifying the restraint of nutrients near the northern boundary can overcome, to some extent, the shortcomings in simulating both the spring bloom near 50°N and the vertical profile of chlorophyll at Papa.
Variability of the Arctic Basin Oceanographic Fields
1996-02-01
the model a very sophisticated turbulence closure scheme. 9. Imitation of the CO2 doubling We parameterized the "greenhouse" effect by changing the...of the Arctic Ocean. A more realistic model of the Arctic Ocean circulation was obtained, and an estimation of the impact of the greenhouse effect on... greenhouse effect is in freshening of the upper Arctic Basin. Although some shortcomings of the model still exist (an unrealistically high coefficient of
The POW Problem in Russia: Justification for Allied Intervention, 1918-1920
1977-06-10
Austrians not Germans. 32 Concerning Japan's reaction to Allied probes on intervention, President Wilson told his Secretary of State to inform the Japanese...and punishments would be meted out according to their reactions.37 The Germans were very concerned about Bolshevik propagandizing in the prison camps...This hope was most unrealistic based upon Lenin's reaction to the Japanese landing at Vladivostok. The investigators in Siberia provided good
NASA Astrophysics Data System (ADS)
Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.
2018-07-01
Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
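As a minimal illustration of the kind of property evaluation described above (not the framework's actual calculation scripts), the following toy model finds the equilibrium lattice constant of an fcc crystal under a nearest-neighbor-only Lennard-Jones potential by a simple energy scan; the potential, cutoff, and parameter values are all illustrative assumptions:

```python
import math

def fcc_energy_per_atom(a, epsilon=1.0, sigma=1.0):
    """Nearest-neighbor-only Lennard-Jones energy per atom for an fcc
    lattice: 12 neighbors at distance a/sqrt(2), bonds half-counted."""
    d = a / math.sqrt(2.0)
    sr6 = (sigma / d) ** 6
    return 0.5 * 12 * 4.0 * epsilon * (sr6 * sr6 - sr6)

def find_lattice_constant(lo=1.2, hi=2.2, steps=200001):
    """Brute-force grid scan for the energy-minimizing lattice constant."""
    best_a, best_e = None, float("inf")
    for i in range(steps):
        a = lo + (hi - lo) * i / (steps - 1)
        e = fcc_energy_per_atom(a)
        if e < best_e:
            best_a, best_e = a, e
    return best_a, best_e

a0, e0 = find_lattice_constant()
# Analytic optimum for this toy model: a0 = 2**(1/6) * sqrt(2) * sigma
```

Even this toy case shows the framework's point: the answer depends on method choices (here the cutoff and scan bounds), so encoding the procedure explicitly is what makes results comparable across potentials.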
Male body image in Taiwan versus the West: Yanggang Zhiqi meets the Adonis complex.
Yang, Chi-Fu Jeffrey; Gray, Peter; Pope, Harrison G
2005-02-01
Body image disorders appear to be more prevalent in Western than non-Western men. Previous studies by the authors have shown that young Western men display unrealistic body ideals and that Western advertising seems to place an increasing value on the male body. The authors hypothesized that Taiwanese men would exhibit less dissatisfaction with their bodies than Western men and that Taiwanese advertising would place less value on the male body than Western media. The authors administered a computerized test of body image to 55 heterosexual men in Taiwan and compared the results to those previously obtained in an identical study in the United States and Europe. Second, they counted the number of undressed male and female models in American versus Taiwanese women's magazine advertisements. In the body image study, the Taiwanese men exhibited significantly less body dissatisfaction than their Western counterparts. In the magazine study, American magazine advertisements portrayed undressed Western men frequently, but Taiwanese magazines portrayed undressed Asian men rarely. Taiwan appears less preoccupied with male body image than Western societies. This difference may reflect 1) Western traditions emphasizing muscularity and fitness as a measure of masculinity, 2) increasing exposure of Western men to muscular male bodies in media images, and 3) greater decline in traditional male roles in the West, leading to greater emphasis on the body as a measure of masculinity. These factors may explain why body dysmorphic disorder and anabolic steroid abuse are more serious problems in the West than in Taiwan.
Two-stage model of radon-induced malignant lung tumors in rats: effects of cell killing
NASA Technical Reports Server (NTRS)
Luebeck, E. G.; Curtis, S. B.; Cross, F. T.; Moolgavkar, S. H.
1996-01-01
A two-stage stochastic model of carcinogenesis is used to analyze lung tumor incidence in 3750 rats exposed to varying regimens of radon carried on a constant-concentration uranium ore dust aerosol. New to this analysis is the parameterization of the model such that cell killing by the alpha particles could be included. The model contains parameters characterizing the rate of the first mutation, the net proliferation rate of initiated cells, the ratio of the rates of cell loss (cell killing plus differentiation) and cell division, and the lag time between the appearance of the first malignant cell and the tumor. Data analysis was by standard maximum likelihood estimation techniques. Results indicate that the rate of the first mutation is dependent on radon and consistent with in vitro rates measured experimentally, and that the rate of the second mutation is not dependent on radon. An initial sharp rise in the net proliferation rate of initiated cells was found with increasing exposure rate (denoted model I), which leads to an unrealistically high cell-killing coefficient. A second model (model II) was studied, in which the initial rise was attributed to promotion via a step function, implying that it is due not to radon but to the uranium ore dust. This model resulted in values for the cell-killing coefficient consistent with those found for in vitro cells. An "inverse dose-rate" effect is seen, i.e. an increase in the lifetime probability of tumor with a decrease in exposure rate. This is attributed in large part to promotion of intermediate lesions. Model II is also preferable on biological grounds, since it yields a plausible cell-killing coefficient and attributes promotion to a plausible agent such as the uranium ore dust. This analysis presents evidence that a two-stage model describes the data adequately and generates hypotheses regarding the mechanism of radon-induced carcinogenesis.
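A toy Monte Carlo version of such a two-stage model can make its structure concrete: cells are initiated at some rate, initiated cells divide, die, or acquire a second mutation, and the tumor follows the first malignant cell. This sketch is not the authors' likelihood-based analysis, and every rate below is an illustrative placeholder, not a fitted value:

```python
import random

def tumor_probability(nu, alpha, beta, mu, t_max=50.0, dt=0.1,
                      trials=200, seed=1):
    """Monte Carlo estimate of P(first malignant cell appears by t_max)
    in a toy two-stage model: normal tissue produces initiated cells at
    rate nu; each initiated cell divides (alpha), dies (beta), or
    converts to a malignant cell (mu) per unit time."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        n = 0           # current number of initiated cells
        t = 0.0
        malignant = False
        while t < t_max and not malignant:
            if rng.random() < nu * dt:          # initiation (first mutation)
                n += 1
            births = deaths = 0
            for _ in range(n):
                u = rng.random()
                if u < alpha * dt:
                    births += 1
                elif u < (alpha + beta) * dt:
                    deaths += 1
                elif u < (alpha + beta + mu) * dt:
                    malignant = True            # second mutation
            n += births - deaths
            t += dt
        hits += malignant
    return hits / trials

p_base = tumor_probability(nu=0.5, alpha=0.20, beta=0.18, mu=0.002)
p_null = tumor_probability(nu=0.5, alpha=0.20, beta=0.18, mu=0.0)
```

Because the clone of initiated cells is near-critical (alpha only slightly above beta), small changes in the net proliferation rate strongly affect tumor probability, which is why the proliferation and cell-killing parameters are so influential in the fits.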
Unraveling the drivers of community dissimilarity and species extinction in fragmented landscapes.
Banks-Leite, Cristina; Ewers, Robert M; Metzger, Jean Paul
2012-12-01
Communities in fragmented landscapes are often assumed to be structured by species extinction due to habitat loss, which has led to extensive use of the species-area relationship (SAR) in fragmentation studies. However, the use of the SAR presupposes that habitat loss leads species to extinction but does not allow for extinction to be offset by colonization of disturbed-habitat specialists. Moreover, the use of the SAR assumes that species richness is a good proxy of community changes in fragmented landscapes. Here, we assessed how communities dwelling in fragmented landscapes are influenced by habitat loss at multiple scales; then we estimated the ability of models ruled by the SAR and by species turnover to successfully predict changes in community composition, and asked whether species richness is indeed an informative community metric. To address these issues, we used a data set consisting of 140 bird species sampled in 65 patches, from six landscapes with different proportions of forest cover in the Atlantic Forest of Brazil. We compared empirical patterns against simulations of over 8 million communities structured by different magnitudes of the power-law SAR and with species-specific rules to assign species to sites. Empirical results showed that, while bird community composition was strongly influenced by habitat loss at the patch and landscape scale, species richness remained largely unaffected. Modeling results revealed that the compositional changes observed in the Atlantic Forest bird metacommunity were only matched either by models with unrealistic magnitudes of the SAR or by models ruled by species turnover, akin to what would be observed along natural gradients. We show that, in the presence of such compositional turnover, species richness is poorly correlated with species extinction, and z values of the SAR strongly underestimate the effects of habitat loss.
We suggest that the observed compositional changes are driven by each species reaching its individual extinction threshold: either a threshold of forest cover for species that disappear with habitat loss, or of matrix cover for species that benefit from habitat loss.
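The power-law SAR at the heart of this comparison is easy to state concretely. The sketch below (with an illustrative z value, not one fitted in the study) shows why richness-based SAR extrapolations can understate compositional change:

```python
def sar_fraction_remaining(area_fraction, z):
    """Power-law species-area relationship S = c * A**z: the fraction
    of species predicted to persist when only `area_fraction` of the
    original habitat remains is (A_new / A_old)**z."""
    return area_fraction ** z

# With a commonly cited z = 0.25, losing 90% of habitat predicts that
# about 56% of species persist -- a modest richness loss, even though
# community composition may have turned over almost completely.
persisting = sar_fraction_remaining(0.10, 0.25)
```

This is the study's core caution in miniature: a near-flat richness curve is compatible with wholesale species turnover, so z values alone understate the impact of habitat loss.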
NOx emission estimates during the 2014 Youth Olympic Games in Nanjing
NASA Astrophysics Data System (ADS)
Ding, J.; van der A, R. J.; Mijling, B.; Levelt, P. F.; Hao, N.
2015-08-01
The Nanjing Government applied temporary environmental regulations to guarantee good air quality during the Youth Olympic Games (YOG) in 2014. We study the effect of those regulations by applying the emission estimate algorithm DECSO (Daily Emission estimates Constrained by Satellite Observations) to measurements of the Ozone Monitoring Instrument (OMI). We improved DECSO by updating the chemical transport model CHIMERE from v2006 to v2013 and by adding an Observation minus Forecast (OmF) criterion to filter outlying satellite retrievals due to high aerosol concentrations. The comparison of model results with both ground and satellite observations indicates that CHIMERE v2013 performs better than CHIMERE v2006. After filtering the satellite observations with high aerosol loads that were leading to large OmF values, unrealistic jumps in the emission estimates are removed. Despite the cloudy conditions during the YOG we could still see a decrease of tropospheric NO2 column concentrations of about 32 % in the OMI observations when compared to the average NO2 columns from 2005 to 2012. The results of the improved DECSO algorithm for NOx emissions show a reduction of at least 25 % during the YOG period and afterwards. This indicates that air quality regulations taken by the local government have an effect in reducing NOx emissions. The algorithm is also able to detect an emission reduction of 10 % during the Chinese Spring Festival. This study demonstrates the capacity of the DECSO algorithm to capture the change of NOx emissions on a monthly scale. We also show that the observed NO2 columns and the derived emissions show different patterns that provide complementary information. For example, the Nanjing smog episode in December 2013 led to a strong increase in NO2 concentrations without an increase in NOx emissions. Furthermore, DECSO gives us important information on the non-trivial seasonal relation between NOx emissions and NO2 concentrations on a local scale.
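The OmF screening step described above can be sketched as a simple residual filter. This is an illustrative stand-in, not the DECSO implementation, and the threshold value is hypothetical:

```python
def omf_filter(observations, forecasts, max_abs_omf):
    """Screen satellite retrievals by the observation-minus-forecast
    (OmF) residual: retrievals whose columns differ from the model
    forecast by more than a threshold (e.g. aerosol-contaminated
    scenes) are excluded before emission estimation. The threshold
    here is a hypothetical stand-in, not the value used in DECSO."""
    kept = []
    for obs, fc in zip(observations, forecasts):
        if abs(obs - fc) <= max_abs_omf:
            kept.append((obs, fc))
    return kept

# Columns in 10^15 molecules/cm^2; the 30.0 outlier mimics an
# aerosol-biased retrieval and is screened out.
obs = [5.2, 6.1, 30.0, 4.8]
fcs = [5.0, 5.5, 6.0, 5.1]
clean = omf_filter(obs, fcs, max_abs_omf=5.0)
```

Removing retrievals with anomalously large OmF residuals before assimilation is what suppresses the unrealistic jumps in the daily emission estimates.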
NASA Astrophysics Data System (ADS)
Krueger, S. K.; Zulauf, M. A.; Li, Y.; Zipser, E. J.
2005-05-01
Global satellite datasets such as those produced by ISCCP, ERBE, and CERES provide strong observational constraints on cloud radiative properties. Such observations have been widely used for model evaluation, tuning, and improvement. Cloud radiative properties depend primarily on small, non-precipitating cloud droplets and ice crystals, yet the dynamical, microphysical and radiative processes which produce these small particles often involve large, precipitating hydrometeors. There now exists a global dataset of tropical cloud system precipitation feature (PF) properties, collected by TRMM and produced by Steve Nesbitt, that provides additional observational constraints on cloud system properties. We are using the TRMM PF dataset to evaluate the precipitation microphysics of two simulations of deep, precipitating, convective cloud systems: one is a 29-day summertime, continental case (ARM Summer 1997 SCM IOP, at the Southern Great Plains site); the second is a tropical maritime case: the Kwajalein MCS of 11-12 August 1999 (part of a 52-day simulation). Both simulations employed the same bulk, three-ice category microphysical parameterization (Krueger et al. 1995). The ARM simulation was executed using the UCLA/Utah 2D CRM, while the KWAJEX simulation was produced using the 3D CSU CRM (SAM). The KWAJEX simulation described above is compared with both the actual radar data and the TRMM statistics. For the Kwajalein MCS of 11 to 12 August 1999, there are research radar data available for the lifetime of the system. This particular MCS was large in size and rained heavily, but it was weak to average in measures of convective intensity when compared against the 5-year TRMM sample of 108 MCSs. For the Kwajalein MCS simulation, the 20 dBZ contour is at 15.7 km and the 40 dBZ contour at 14.5 km! Of all 108 MCSs observed by TRMM, the highest value for the 40 dBZ contour is 8 km. Clearly, the high reflectivity cores are off scale compared with observed cloud systems in this area.
A similar conclusion can be reached by comparing the simulated microwave brightness temperatures with observed brightness temperatures at 85 GHz and 37 GHz. In each case, the simulations are more extreme than all observed MCSs in the region over the 5 year period. The situation is similar but less egregious for the southern Great Plains simulation. Inspection of the cloud microphysics output files reveals the source of the discrepancy between simulation and observations in the upper troposphere. The simulations have very large graupel concentrations between about 5-10 km, with graupel mixing ratios as high as 10 g/kg. This guarantees that there are very high radar reflectivities extending into the upper troposphere, and unrealistically low microwave brightness temperatures. We also performed a set of short (6-h) numerical simulations of the life cycle of a single convection cell to examine the sensitivity of the simulated graupel fields to the intercept parameter and the density of the graupel. The control case used the same values as the ARM and KWAJEX simulations. Reducing the intercept parameter by a factor of 100 reduced the maximum graupel mixing ratios but increased the maximum dBZ values. This suggests that the discrepancies between the simulations and the observations must involve the graupel growth rates.
"Supermodeling" by Coupling Multiple Atmospheres to a Single Ocean Simulates Single-ITCZ Climatology
NASA Astrophysics Data System (ADS)
Duane, G. S.; Shen, M. L.; Keenlyside, N. S.
2017-12-01
If the members of an ensemble of different models are allowed to interact with one another at run time, predictive skill can be improved as compared to that of any individual model or any average of individual model outputs. Inter-model connections in such an interactive ensemble can be trained, using historical data, so that the resulting "supermodel" synchronizes with reality when observations are continuously assimilated, as in weather prediction. In climate-projection mode, the supermodel, after training, reproduces the attractor of the real system. We consider a variant of full supermodeling in which the models are only connected via the fluxes at the ocean-atmosphere interface. Two ECHAM atmospheres that differ in their convection parametrization schemes are thus connected to a single, shared ocean. The atmospheres partially synchronize at lower levels in the tropics, giving more realistic SST patterns than the constituent models: Although the constituent models both exhibit double ITCZs, with cold tongues that extend too far west, the supermodel has the desired single ITCZ [Shen et al., Geophys. Res. Lett. 2016]. Here we explain the physical mechanism through which the supermodel removes even defects that are shared. One model (Nordeng) produces an unrealistically large zonal wind stress that results in upwelling of cold water and westward extension of the cold tongue. The other model (Tiedtke) produces an unrealistically small zonal wind stress that also implies a reduced wind stress curl off the equator because of Hadley-Walker coupling. The reduced wind stress curl leads to downwelling off the equator, and resultant upwelling of cold water at the equator through the tropical ocean cell. Thus the two constituent models give erroneous patterns of the same type, while the supermodel, which combines the models dynamically, avoids the error.
If the models were linear, the errors of the two models would average; the success of the supermodel depends on nonlinearities in the east-west and north-south ocean-atmosphere feedbacks. It is argued that such behavior is widespread: supermodeling near-critical behavior in the coupled Earth System can give results that depend non-monotonically on the weights attached to the constituent models, thus surpassing those models, even when they err in the same way.
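A toy version of an interactive ensemble conveys the idea: two imperfect Lorenz-63 "models" with different parameters, mutually nudged toward each other at run time, converge onto a shared trajectory that neither follows alone. The coupling constant here is an illustrative placeholder, not a trained inter-model connection:

```python
def lorenz_rhs(state, sigma, r, b):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = state
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

def coupled_step(s1, s2, dt, k, params1, params2):
    """Euler step of two Lorenz models with mutual nudging k*(other - self)
    on every component -- a minimal caricature of an interactive ensemble."""
    d1 = lorenz_rhs(s1, *params1)
    d2 = lorenz_rhs(s2, *params2)
    n1 = tuple(a + dt * (da + k * (b - a)) for a, b, da in zip(s1, s2, d1))
    n2 = tuple(b + dt * (db + k * (a - b)) for a, b, db in zip(s1, s2, d2))
    return n1, n2

# Two "imperfect models" differing in the r parameter.
p1 = (10.0, 28.0, 8.0 / 3.0)
p2 = (10.0, 32.0, 8.0 / 3.0)
s1, s2 = (1.0, 1.0, 1.0), (15.0, -10.0, 30.0)
dist0 = sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5
for _ in range(5000):
    s1, s2 = coupled_step(s1, s2, dt=0.002, k=50.0, params1=p1, params2=p2)
dist = sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5
```

With sufficiently strong coupling the inter-model distance collapses to a small residual set by the parameter mismatch, which is the synchronization behavior that trained supermodel connections exploit.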
Indian Ocean Dipolelike Variability in the CSIRO Mark 3 Coupled Climate Model.
NASA Astrophysics Data System (ADS)
Cai, Wenju; Hendon, Harry H.; Meyers, Gary
2005-05-01
Coupled ocean-atmosphere variability in the tropical Indian Ocean is explored with a multicentury integration of the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Mark 3 climate model, which runs without flux adjustment. Despite the presence of some common deficiencies in this type of coupled model, zonal dipolelike variability is produced. During July through November, the dominant mode of variability of sea surface temperature (SST) resembles the observed zonal dipole and has out-of-phase rainfall variations across the Indian Ocean basin, which are as large as those associated with the model El Niño-Southern Oscillation (ENSO). In the positive dipole phase, a cold SST anomaly and suppressed rainfall south of the equator on the Sumatra-Java coast drive an anticyclonic circulation anomaly that is consistent with the steady response (Gill model) to a heat sink displaced south of the equator. The northwest-southeast tilt of the Sumatra-Java coast results in cold SST centered south of the equator, which forces anticyclonic winds that are southeasterly along the coast, thus producing local upwelling and cool SSTs and promoting more anticyclonic winds; on the equator, the easterlies raise the thermocline to the east via upwelling Kelvin waves and deepen the off-equatorial thermocline to the west via off-equatorial downwelling Rossby waves. The model dipole mode exhibits little contemporaneous relationship with the model ENSO; however, this does not imply that it is independent of ENSO. The model dipole often (but not always) develops in the year following El Niño. It is triggered by an unrealistic transmission of the model's ENSO discharge phase through the Indonesian passages. In the model, the ENSO discharge Rossby waves arrive at the Sumatra-Java coast some 6 to 9 months after an El Niño peaks, causing the majority of model dipole events to peak in the year after an ENSO warm event.
In the observed ENSO discharge, Rossby waves arrive at the Australian northwest coast. Thus the model Indian Ocean dipolelike variability is triggered by an unrealistic mechanism. The result highlights the importance of properly representing the transmission of Pacific Rossby waves and Indonesian throughflow in the complex topography of the Indonesian region in coupled climate models.
Risk assessment in mental health care: values and costs.
Szmukler, George; Rose, Nikolas
2013-01-01
Risk assessment has assumed increasing salience in mental health care in a number of countries. The frequency of serious violent incidents perpetrated by people with a mental illness is an insufficient explanation. Understandings of mental illness and of the role of those charged with their care (or control) play a key role. "Moral outrage", associated with an implied culpability when certain types of tragedy occur, is very significant. This leads to tensions concerning the role of post-incident inquiries, and contributes to a flawed conception of what such inquiries can offer. At the same time, understanding of probability and prediction is generally very poor, among both professionals and the public. Unrealistic expectations for risk assessment and management in general psychiatric practice carry a variety of significant costs, taking a number of forms, to those with a mental illness, to mental health professionals and to services. Especially important are changes in professional practice and accountabilities that are significantly divorced from traditional practice, implications for trust in patient-clinician relationships and the organisations in which mental health professionals work, and practices that often breach the ethical principle of justice (or fairness) and heighten discrimination against people with mental illness. Copyright © 2013 John Wiley & Sons, Ltd.
Evaluation of High-Performance Space Nuclear Electric Generators for Electric Propulsion Application
NASA Technical Reports Server (NTRS)
Woodcock, Gordon; Kross, Dennis A. (Technical Monitor)
2002-01-01
Electric propulsion applications are enhanced by high power-to-mass ratios for their electric power sources. At multi-megawatt levels, we can expect thrust production systems to be less than 5 kg/kWe. Application of nuclear electric propulsion to human Mars missions becomes an attractive alternative to nuclear thermal propulsion if the propulsion system is less than about 10 kg/kWe. Recent references have projected megawatt-plus nuclear electric sources at specific mass values from less than 1 kg/kWe to about 5 kg/kWe. Various assumptions are made regarding power generation cycle (turbogenerator; MHD (magnetohydrodynamics)) and reactor heat source design. The present paper compares heat source and power generation options on the basis of a parametric model that emphasizes heat transfer design and realizable hardware concepts. Pressure drop (important!) is included in the power cycle analysis, and MHD and turbogenerator cycles are compared. Results indicate that power source specific mass less than 5 kg/kWe is attainable, even if peak temperatures achievable are limited to 1500 K. Projections of specific mass less than 1 kg/kWe are unrealistic, even at the highest peak temperatures considered.
NASA Astrophysics Data System (ADS)
Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.
2010-03-01
Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound (US) videos, where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US videos of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
Cooperation in Harsh Environments and the Emergence of Spatial Patterns.
Smaldino, Paul E
2013-11-01
This paper concerns the confluence of two important areas of research in mathematical biology: spatial pattern formation and cooperative dilemmas. Mechanisms through which social organisms form spatial patterns are not fully understood. Prior work connecting cooperation and pattern formation has often included unrealistic assumptions that cast doubt on the applicability of those models toward understanding real biological patterns. I investigated a more biologically realistic model of cooperation among social actors. The environment is harsh, so that interactions with cooperators are strictly needed to survive. Harshness is implemented via a constant energy deduction. I show that this model can generate spatial patterns similar to those seen in many naturally occurring systems. Moreover, for each payoff matrix there is an associated critical value of the energy deduction that separates two distinct dynamical processes. In low-harshness environments, the growth of cooperator clusters is impeded by defectors, but these clusters gradually expand to form dense dendritic patterns. In very harsh environments, cooperators expand rapidly but defectors can subsequently make inroads to form reticulated patterns. The resulting web-like patterns are reminiscent of transportation networks observed in slime mold colonies and other biological systems.
Mercury's capture into the 3/2 spin-orbit resonance as a result of its chaotic dynamics.
Correia, Alexandre C M; Laskar, Jacques
2004-06-24
Mercury is locked into a 3/2 spin-orbit resonance where it rotates three times on its axis for every two orbits around the sun. The stability of this equilibrium state is well established, but our understanding of how this state initially arose remains unsatisfactory. Unless one uses an unrealistic tidal model with constant torques (which cannot account for the observed damping of the libration of the planet) the computed probability of capture into 3/2 resonance is very low (about 7 per cent). This led to the proposal that core-mantle friction may have increased the capture probability, but such a process requires very specific values of the core viscosity. Here we show that the chaotic evolution of Mercury's orbit can drive its eccentricity beyond 0.325 during the planet's history, which very efficiently leads to its capture into the 3/2 resonance. In our numerical integrations of 1,000 orbits of Mercury over 4 Gyr, capture into the 3/2 spin-orbit resonant state was the most probable final outcome of the planet's evolution, occurring 55.4 per cent of the time.
The Quantum Logical Challenge: Peter Mittelstaedt's Contributions to Logic and Philosophy of Science
NASA Astrophysics Data System (ADS)
Beltrametti, E.; Dalla Chiara, M. L.; Giuntini, R.
2017-12-01
Peter Mittelstaedt's contributions to quantum logic and to the foundational problems of quantum theory have significantly realized the most authentic spirit of the International Quantum Structures Association: original research on hard technical problems, which are often "entangled" with the emergence of important changes in our general world-conceptions. At a time when both the logical and the physical community often showed a skeptical attitude towards Birkhoff and von Neumann's quantum logic, Mittelstaedt brought to light the deeply innovative features of a quantum logical thinking that allows us to overcome some strong and unrealistic assumptions of classical logical arguments. Later on, his intense research on the unsharp approach to quantum theory and to the measurement problem stimulated increasing interest in unsharp forms of quantum logic, creating a fruitful interaction between the work of quantum logicians and of many-valued logicians. Mittelstaedt's general views about quantum logic and quantum theory seem to be inspired by a conjecture that is today more and more confirmed: there is something universal in the quantum theoretic formalism that goes beyond the limits of microphysics, giving rise to interesting applications in a number of different fields.
Cahoon, D.R.; Cowan, J.H.
1988-01-01
The capabilities of a new wetland dredging technology were assessed along with associated newly developed state and federal regulatory policies to determine if policy expectations realistically match the technological achievement. Current regulatory practices require amelioration of spoil bank impacts upon abandonment of an oil/gas well, but this may not occur for many years or decades, if at all. Recently, a dredging method (high-pressure spray spoil disposal) was developed that does not create a spoil bank in the traditional sense. Its potential for reducing environmental impacts was recognized immediately by regulatory agencies for whom minimizing spoil bank impacts is a major concern. The use of high-pressure spray disposal as a suitable alternative to traditional dredging technology has been adopted as policy even though its value as a management tool has never been tested or verified. A qualitative evaluation at two spoil disposal sites in saline marsh indicates that high-pressure spray disposal may indeed have great potential to minimize impacts, but most of this potential remains unverified. Also, some aspects of current regulatory policy may be based on unrealistic expectations as to the ability of this new technology to minimize or eliminate spoil bank impacts.
NASA Astrophysics Data System (ADS)
Szolnoki, Attila; Perc, Matjaž
2013-10-01
Economic experiments reveal that humans value cooperation and fairness. Punishing unfair behavior is therefore common, and according to the theory of strong reciprocity, it is also directly related to rewarding cooperative behavior. However, empirical data fail to confirm that positive and negative reciprocity are correlated. Inspired by this disagreement, we determine whether the combined application of reward and punishment is evolutionarily advantageous. We study a spatial public goods game, where in addition to the three elementary strategies of defection, rewarding, and punishment, a fourth strategy that combines the latter two competes for space. We find rich dynamical behavior that gives rise to intricate phase diagrams where continuous and discontinuous phase transitions occur in succession. Indirect territorial competition, spontaneous emergence of cyclic dominance, as well as divergent fluctuations of oscillations that terminate in an absorbing phase are observed. Yet, despite the high complexity of solutions, the combined strategy can survive only in very narrow and unrealistic parameter regions. Elementary strategies, either in pure or mixed phases, are much more common and likely to prevail. Our results highlight the importance of patterns and structure in human cooperation, which should be considered in future experiments.
Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.
Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian
2011-01-01
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event tree analysis (ETA) and fault tree analysis (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
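To illustrate why these two assumptions matter, the sketch below propagates interval probabilities through a single AND gate, contrasting the independence assumption with distribution-free (Fréchet) bounds. This is a simplified stand-in for uncertainty handling, not the article's fuzzy-set, evidence-theory, or dependency-coefficient formulation; the event probabilities are illustrative.

```python
def and_gate_independent(pa, pb):
    """P(A AND B) assuming independence: interval endpoints multiply."""
    return (pa[0] * pb[0], pa[1] * pb[1])

def and_gate_frechet(pa, pb):
    """P(A AND B) with no independence assumption (Frechet bounds):
    the interval widens to cover every possible dependence structure."""
    lo = max(0.0, pa[0] + pb[0] - 1.0)
    hi = min(pa[1], pb[1])
    return (lo, hi)

# Interval likelihoods for two basic events (illustrative values)
p_a = (0.10, 0.20)
p_b = (0.05, 0.15)

print(and_gate_independent(p_a, p_b))  # narrow interval under independence
print(and_gate_frechet(p_a, p_b))      # wider interval when dependence is unknown
```

Comparing the two outputs makes the cost of the independence assumption explicit: the top-event interval under unknown dependence is far wider than the one crisp independence would suggest.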
A remarkably large depleted core in the Abell 2029 BCG IC 1101
NASA Astrophysics Data System (ADS)
Dullo, Bililign T.; Graham, Alister W.; Knapen, Johan H.
2017-10-01
We report the discovery of an extremely large (Rb ~2.77 arcsec ≈ 4.2 kpc) core in the brightest cluster galaxy, IC 1101, of the rich galaxy cluster Abell 2029. Luminous core-Sérsic galaxies contain depleted cores - with sizes (Rb) typically 20-500 pc - that are thought to be formed by coalescing black hole binaries. We fit a (double nucleus) + (spheroid) + (intermediate-scale component) + (stellar halo) model to the Hubble Space Telescope surface brightness profile of IC 1101, finding the largest core size measured in any galaxy to date. This core is an order of magnitude larger than those typically measured for core-Sérsic galaxies. We find that the spheroid's V-band absolute magnitude (MV) of -23.8 mag (~25 per cent of the total galaxy light, i.e. including the stellar halo) is faint for the large Rb, such that the observed core is 1.02 dex ≈ 3.4σs (rms scatter) larger than that estimated from the Rb-MV relation. The suspected scouring process has produced a large stellar mass deficit (Mdef) ~4.9 × 10^11 M⊙, i.e. a luminosity deficit ≈28 per cent of the spheroid's luminosity prior to the depletion. Using IC 1101's black hole mass (MBH) estimated from the MBH-σ, MBH-L and MBH-M* relations, we measure an excessive and unrealistically high number of 'dry' major mergers for IC 1101 (i.e. N ≳ 76) as traced by the large Mdef/MBH ratios of 38-101. The large core, high mass deficit and oversized Mdef/MBH ratio of IC 1101 suggest that the depleted core was scoured by overmassive SMBH binaries with a final coalesced mass MBH ~ (4-10) × 10^10 M⊙, i.e. ~ (1.7-3.2) × σs larger than the black hole masses estimated using the spheroid's σ, L and M*. The large core might be partly due to oscillatory core passages by a gravitational radiation-recoiled black hole.
Essential role of conformational selection in ligand binding.
Vogt, Austin D; Pozzi, Nicola; Chen, Zhiwei; Di Cera, Enrico
2014-02-01
Two competing and mutually exclusive mechanisms of ligand recognition - conformational selection and induced fit - have dominated our interpretation of ligand binding in biological macromolecules for almost six decades. Conformational selection posits the pre-existence of multiple conformations of the macromolecule from which the ligand selects the optimal one. Induced fit, on the other hand, postulates the existence of conformational rearrangements of the original conformation into an optimal one that are induced by binding of the ligand. In the former case, conformational transitions precede the binding event; in the latter, conformational changes follow the binding step. Kineticists have used a facile criterion to distinguish between the two mechanisms based on the dependence of the rate of relaxation to equilibrium, kobs, on the ligand concentration, [L]. A value of kobs decreasing hyperbolically with [L] has been seen as diagnostic of conformational selection, while a value of kobs increasing hyperbolically with [L] has been considered diagnostic of induced fit. However, this simple conclusion is only valid under the rather unrealistic assumption of conformational transitions being much slower than binding and dissociation events. In general, induced fit only produces values of kobs that increase with [L] but conformational selection is more versatile and is associated with values of kobs that increase with, decrease with or are independent of [L]. The richer repertoire of kinetic properties of conformational selection applies to kinetic mechanisms with single or multiple saturable relaxations and explains the behavior of nearly all experimental systems reported in the literature thus far. Conformational selection is always sufficient and often necessary to account for the relaxation kinetics of ligand binding to a biological macromolecule and is therefore an essential component of any binding mechanism. 
On the other hand, induced fit is never necessary and only sufficient in a few cases. Therefore, the long assumed importance and preponderance of induced fit as a mechanism of ligand binding should be reconsidered. © 2013 Elsevier B.V. All rights reserved.
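The limiting behavior described above, which the text notes holds only when binding and dissociation equilibrate much faster than the conformational transitions, can be sketched numerically. The rate-constant names (kr, krev) and values below are illustrative assumptions, not the paper's fitted parameters; the expressions are the standard rapid-equilibrium forms for the two mechanisms.

```python
import numpy as np

def kobs_induced_fit(L, kd, kr, krev):
    """kobs for induced fit (E + L <-> EL <-> E*L), assuming the binding
    step equilibrates rapidly: increases hyperbolically with [L]."""
    return krev + kr * L / (L + kd)

def kobs_conformational_selection(L, kd, kr, krev):
    """kobs for conformational selection (E* <-> E, E + L <-> EL) under the
    same rapid-equilibrium assumption: decreases hyperbolically with [L]."""
    return kr + krev / (1.0 + L / kd)

L = np.logspace(-2, 2, 50)    # ligand concentration (arbitrary units)
kd, kr, krev = 1.0, 2.0, 5.0  # illustrative equilibrium and rate constants

k_if = kobs_induced_fit(L, kd, kr, krev)
k_cs = kobs_conformational_selection(L, kd, kr, krev)
print(np.all(np.diff(k_if) > 0), np.all(np.diff(k_cs) < 0))  # True True
```

Outside this rapid-equilibrium limit, as the abstract stresses, conformational selection can also produce kobs values that increase with or are independent of [L], so the "decreasing kobs" diagnostic is sufficient but not necessary.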
Clements, William H; Cadmus, Pete; Brinkman, Stephen F
2013-07-02
Field surveys of metal-contaminated streams suggest that some aquatic insects, particularly mayflies (Ephemeroptera) and stoneflies (Plecoptera), are highly sensitive to metals. However, results of single species toxicity tests indicate these organisms are quite tolerant, with LC50 values often several orders of magnitude greater than those obtained using standard test organisms (e.g., cladocerans and fathead minnows). Reconciling these differences is a critical research need, particularly since water quality criteria for metals are based primarily on results of single species toxicity tests. In this research we provide evidence based on community-level microcosm experiments to support the hypothesis that some aquatic insects are highly sensitive to metals. We present results of three experiments that quantified effects of Cu and Zn, alone and in combination, on stream insect communities. EC50 values, defined as the metal concentration that reduced abundance of aquatic insects by 50%, were several orders of magnitude lower than previously published values obtained from single species tests. We hypothesize that the short duration of laboratory toxicity tests and the failure to evaluate effects of metals on sensitive early life stages are the primary factors responsible for unrealistically high LC50 values in the literature. We also observed that Cu alone was significantly more toxic to aquatic insects than the combination of Cu and Zn, despite the fact that exposure concentrations represented theoretically similar toxicity levels. Our results suggest that water quality criteria for Zn were protective of most aquatic insects, whereas Cu was highly toxic to some species at concentrations near water quality criteria. 
Because of the functional significance of aquatic insects in stream ecosystems and their well-established importance as indicators of water quality, reconciling differences between field and laboratory responses and understanding the mechanisms responsible for variation in sensitivity among metals and metal mixtures is of critical importance.
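An EC50 of the kind defined above (the concentration reducing abundance by 50%) can be estimated from concentration-response data by log-linear interpolation between the bracketing observations. The sketch below uses hypothetical abundance numbers, not the experiments' data.

```python
import math

def ec50_loglinear(concs, responses):
    """Estimate the concentration giving a 50% response by interpolating
    linearly in log-concentration between the two bracketing observations.
    concs: increasing concentrations; responses: fraction of control abundance."""
    pairs = list(zip(concs, responses))
    for (c0, r0), (c1, r1) in zip(pairs, pairs[1:]):
        if r0 >= 0.5 >= r1:  # the 50% effect level is bracketed here
            t = (r0 - 0.5) / (r0 - r1)
            return 10 ** (math.log10(c0) + t * (math.log10(c1) - math.log10(c0)))
    raise ValueError("50% response not bracketed by the data")

# Hypothetical mayfly abundance (fraction of control) vs Cu concentration (ug/L)
concs = [1.0, 3.2, 10.0, 32.0, 100.0]
resp = [0.95, 0.80, 0.55, 0.30, 0.05]
print(ec50_loglinear(concs, resp))
```

The same routine applied to short-exposure laboratory data versus longer community-level microcosm data would make the orders-of-magnitude gap between LC50 and EC50 values discussed above concrete.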
Statistical Approaches for Spatiotemporal Prediction of Low Flows
NASA Astrophysics Data System (ADS)
Fangmann, A.; Haberlandt, U.
2017-12-01
An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion into model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables, averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them is multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three is subject to a spatiotemporal prediction of an index value, method four to estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. 
Method three also shows a high performance in the near future period, but since it relies on a stationary distribution, its application for prediction of far future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments resulting in unrealistic future low flow values. All in all, the results promote an inclusion of simple statistical methods in climate change impact assessment.
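The panel-regression idea behind the second method can be sketched on synthetic data: one linear model fitted jointly across stations and years, with an annual meteorological index and a static catchment descriptor as predictors. All names, values, and coefficients below are hypothetical stand-ins, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gauges, n_years = 5, 20
spi = rng.normal(0, 1, (n_gauges, n_years))  # annual meteorological drought index
area = rng.uniform(50, 500, n_gauges)        # static catchment descriptor (km^2)

# Synthetic "true" low flow: depends on the annual index and the descriptor
q_low = 0.5 * spi + 0.002 * area[:, None] + rng.normal(0, 0.05, (n_gauges, n_years))

# Panel regression: a single model across all stations and years
X = np.column_stack([
    np.ones(n_gauges * n_years),  # intercept
    spi.ravel(),                  # time-varying predictor
    np.repeat(area, n_years),     # station-level predictor
])
beta, *_ = np.linalg.lstsq(X, q_low.ravel(), rcond=None)
print(beta)  # intercept, index coefficient (~0.5), descriptor coefficient (~0.002)
```

Pooling all stations into one design matrix is what lets the panel model predict jointly in time (via the index) and space (via the descriptor), rather than fitting per-station models and regionalizing afterwards.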
Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks
Moya, José M.; Vallejo, Juan Carlos; Fraga, David; Araujo, Álvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano
2009-01-01
Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios. PMID:22412345
2008-10-01
attempts to measure the long-term distribution of storage time have relied on unrealistic assumptions, but two recent studies suggest a new approach. As...sediment age. Everitt (1968) mapped the age distribution of cottonwoods along a 34 km stretch of the Little Missouri River in North Dakota...Dietrich et al. (1982) applied Eriksson's (1971) method to estimate the residence time distribution from Everitt's age distribution. Somewhat mysteriously
Are Your Patients Burning Out?
Vachon, M. L. S.
1982-01-01
The term burnout came into the literature in the 1970s. Since then it has become a popularized and misunderstood concept. In this article burnout is seen to be an interaction between idealistically high personal expectations and a willingness to sacrifice personal needs to the workplace, with unrealistic expectations within the work environment. Physical, psychological and occupational symptoms accompany this syndrome, which must be differentiated from clinical depression. Suggestions for treatment include changes at both the personal and organizational level. PMID:21286517
Self-Other Differences in Perceiving Why People Eat What They Eat
Sproesser, Gudrun; Klusmann, Verena; Schupp, Harald T.; Renner, Britta
2017-01-01
People often view themselves more favorably than others, displaying unrealistic optimism. In the present study, we investigated whether people perceive their reasons for eating as better than those of others. Furthermore, we investigated which mechanisms of inaccuracy might underlie a possible bias when perceiving why people eat what they eat. In Study 1, 117 participants rated the social desirability of eating motives. In Study 2, 772 participants provided information on their own and others’ motives for eating behavior. In Study 1, particularly desirable motives were eating because of hunger, health reasons, and liking. Particularly undesirable motives were eating to make a good impression, to comply with social norms, and to regulate negative affect. Study 2 revealed that for socially desirable motives, participants perceived their own motives to be stronger; for undesirable motives, the opposite pattern emerged, with others being attributed stronger motives. Moreover, the perception of others’ emotional and social motives varied with participants’ own healthy eating behavior. Since the perception of eating motives of others should be independent of one’s own behavior, this pattern of results indicates a relative inaccuracy in the perception of others’ eating motives. In conclusion, there is evidence for unrealistic optimism in eating motives. For social and emotional motives, this self-favoring view seems to be driven by a relatively inaccurate perception of others. PMID:28261140
The costs and benefits of positive illusions
Makridakis, Spyros; Moleskis, Andreas
2015-01-01
Positive illusions are associated with unrealistic optimism about the future and an inflated assessment of one’s abilities. They are prevalent in normal life and are considered essential for maintaining a healthy mental state, although there are disagreements about the extent to which people demonstrate these positive illusions and whether they are beneficial or not. But whatever the situation, it is hard to dismiss their existence and their positive and/or negative influence on human behavior and decision making in general. Prominent among illusions is that of control, that is “the tendency for people to overestimate their ability to control events.” This paper describes positive illusions and their potential benefits, but also quantifies their costs in five specific fields (gambling, stock and other markets, new firms and startups, preventive medicine and wars). It is organized into three parts. First, the psychological reasons giving rise to positive illusions are described and their likely harms and benefits stated. Second, their negative consequences are presented and their costs are quantified in five seriously affected areas, with emphasis on those related to the illusion of control, which seems to dominate those of unrealistic optimism. The costs involved are huge, and serious efforts must be undertaken to understand their enormity and to take steps to avoid them in the future. Finally, there is a concluding section where the challenges related to positive illusions are noted and directions for future research are presented. PMID:26175698
Kurt, Emel; Karabaş, Özer; Yorguner, Neşe; Wurz, Axel; Topçuoğlu, Volkan
2016-01-01
Panic disorder is an anxiety disorder that involves recurrent panic attacks, which emerge when a harmless stimulus is interpreted as "catastrophic". In an attempt to avoid the panic attack or prevent confrontation, the patient exhibits dysfunctional attitudes and behaviors, such as avoidance and safety-seeking behavior (SSB). Dysfunctional behavior leads to an increase in the recurrence of panic attacks and affects the patient's life in a negative way. According to the cognitive behavioral therapy model, SSB contributes to the continuation of unrealistic beliefs (e.g., regarding physical experiences) and prevents the patient from grasping new information that may potentially contradict the unrealistic cognitions. In this paper, we present a case with a primary diagnosis of panic disorder. Interestingly, this patient developed diabetes mellitus (DM) type 2 and psychogenic polydipsia (PPD) as a consequence of his SSB. PPD is a common occurrence in patients with psychiatric disorders, especially in schizophrenia. Up to now, no case of a panic disorder with either DM or PPD has been reported in the literature. While it is accepted that major depression poses a risk for DM type 2, panic disorder may also increase this risk. Treatment of the panic disorder with cognitive behavioral therapy (CBT) resulted in improvement of PPD and DM type 2. In conclusion, the role of SSB in medical disorders accompanied by psychiatric disorders should be kept in mind when treating these patients.
NASA Astrophysics Data System (ADS)
El-Madany, T.; Griessbaum, F.; Maneke, F.; Chu, H.-S.; Wu, C.-C.; Chang, S. C.; Hsia, Y.-J.; Juang, J.-Y.; Klemm, O.
2010-07-01
To estimate carbon dioxide or water vapor fluxes with the eddy covariance method, high quality data sets are necessary. Under foggy conditions this is challenging, because open path measurements are influenced by water droplets that cross the measurement path as well as deposit on the windows of the optical path. For the LI-7500, the deposition of droplets on the window results in an intensity reduction of the infrared beam. To keep the strength of the infrared beam under these conditions, the energy is increased. A measure of the increased energy is given by the AGC value (Automatic Gain Control). Up to an AGC threshold value of 70 %, the data from the LI-7500 are assumed to be of good quality (personal communication with LICOR). Due to fog deposition on the windows, the AGC value rises above 70 % and stays there until the fog disappears and the water on the windows evaporates. To obtain better data quality during foggy conditions, a blower system was developed that blows the deposited water droplets off the window. The system is triggered if the AGC value rises above 70 %. Then a pneumatic jack lifts the blower system towards the LI-7500 and the water droplets are blown off with compressed air. After the AGC value drops below 70 %, the pneumatic jack moves back to the idle position. Using this technique showed that not only do fog droplets on the window cause significant problems for the measurement, but so do fog droplets inside the measurement path. Under conditions of very dense fog, the measured values of carbon dioxide can become unrealistically high, and for water vapor, negative values can be observed even if the AGC value is below 70 %. The negative values can be explained by the scatter of the infrared beam on the fog droplets. It is assumed that different types of fog droplet spectra are causing the various error patterns observed. 
For high quality flux measurements, not only the AGC threshold value of 70 % is important, but also the fluctuation of the AGC value in a flux averaging interval. Such AGC value fluctuations can cause severe jumps in the concentration measurements that can hardly be corrected for. Results of fog effects on the LI-7500 performance and its consequences for flux measurements and budget calculations will be presented.
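A minimal screening rule implied by this discussion can be sketched as follows. The 70 % AGC threshold comes from the text; the within-interval fluctuation limit is a hypothetical illustration of the point that AGC jumps inside a flux-averaging interval also matter.

```python
import numpy as np

AGC_MAX = 70.0       # quality threshold (per cent), as stated in the text
AGC_RANGE_MAX = 5.0  # hypothetical limit on within-interval AGC fluctuation

def flag_interval(agc_samples):
    """Return True if a flux-averaging interval should be discarded:
    either the AGC exceeds the threshold, or it jumps within the interval."""
    agc = np.asarray(agc_samples, dtype=float)
    return bool(agc.max() > AGC_MAX or (agc.max() - agc.min()) > AGC_RANGE_MAX)

print(flag_interval([62, 63, 62, 64]))  # clean interval -> False
print(flag_interval([62, 75, 68, 64]))  # AGC spike above 70 % -> True
print(flag_interval([55, 62, 56, 55]))  # large fluctuation below 70 % -> True
```

The third case captures the text's point: an interval can pass the plain 70 % threshold and still be compromised by AGC fluctuations that produce jumps in the concentration time series.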
Magma Chambers, Thermal Energy, and the Unsuccessful Search for a Magma Chamber Thermostat
NASA Astrophysics Data System (ADS)
Glazner, A. F.
2015-12-01
Although the traditional concept that plutons are the frozen corpses of huge, highly liquid magma chambers ("big red blobs") is losing favor, the related notion that magma bodies can spend long periods of time (~10^6 years) in a mushy, highly crystalline state is widely accepted. However, analysis of the thermal balance of magmatic systems indicates that it is difficult to maintain a significant portion in a simmering, mushy state, whether or not the system is eutectic-like. Magma bodies cool primarily by loss of heat to the Earth's surface. The balance between cooling via energy loss to the surface and heating via magma accretion can be denoted as M = ρLa/q, where ρ is magma density, L is latent heat of crystallization, a is the vertical rate of magma accretion, and q is surface heat flux. If M > 1, then magma accretion outpaces cooling and a magma chamber forms. For reasonable values of ρ, L, and q, the rate of accretion a must be > ~15 mm/yr to form a persistent volume above the solidus. This rate is extremely high, an order of magnitude faster than estimated pluton-filling rates, and would produce a body 10 km thick in 700 ka, an order of magnitude faster than geochronology indicates. Regardless of the rate of magma supply, the proportion of crystals in the system must vary dramatically with depth at any given time owing to transfer of heat. Mechanical stirring (e.g., by convection) could serve to homogenize crystal content in a magma body, but this is unachievable in crystal-rich, locked-up magma. Without convection the lower part of the magma body becomes much hotter than the top—a process familiar to anyone who has scorched a pot of oatmeal. 
Thermal models that succeed in producing persistent, large bodies of magma rely on scenarios that are unrealistic (e.g., omitting heat loss to the planet's surface), self-fulfilling (e.g., setting unnaturally high temperatures as fixed boundary conditions), or physically unreasonable (e.g., magma intruded faster than geodetic and geophysical observations allow). Magma addition and conductive heat loss rates that are consistent with observation invariably lead to the conclusion that large, long-lived magma bodies, mushy or not, are thermally unsustainable.
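The thermal-balance ratio M = ρLa/q above can be sketched numerically. The parameter values below are illustrative assumptions only (roughly basaltic density and latent heat, and a high volcanic-area surface heat flux), not values taken from the study:

```python
# Sketch of the thermal-balance ratio M = rho * L * a / q described above.
# All numerical values are illustrative assumptions, not the paper's.

SECONDS_PER_YEAR = 3.156e7

def magma_balance_M(rho, L, a_mm_per_yr, q):
    """M > 1: accretion outpaces surface heat loss and a magma chamber forms."""
    a = a_mm_per_yr / 1000.0 / SECONDS_PER_YEAR  # mm/yr -> m/s
    return rho * L * a / q

def critical_accretion_rate_mm_yr(rho, L, q):
    """Accretion rate at which M = 1, in mm/yr."""
    return q / (rho * L) * SECONDS_PER_YEAR * 1000.0

# Assumed illustrative values
rho = 2500.0   # magma density, kg/m^3
L   = 3.0e5    # latent heat of crystallization, J/kg
q   = 0.35     # surface heat flux in a volcanic area, W/m^2

a_crit = critical_accretion_rate_mm_yr(rho, L, q)
print(f"critical accretion rate ~ {a_crit:.1f} mm/yr")
```

With these assumed values the critical rate comes out on the order of 15 mm/yr, consistent with the abstract's statement that persistent above-solidus bodies demand accretion rates far faster than estimated pluton-filling rates.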
What the diurnal cycle of precipitation tells us about land-atmosphere coupling strength
NASA Astrophysics Data System (ADS)
Ferguson, Craig; Song, Hyojong; Roundy, Joshua
2015-04-01
The key attributes of a coupled forecast model are the coupling strengths of its land-atmosphere and ocean-atmosphere schemes. If a model cannot skillfully capture the diurnal cycle of clouds and precipitation, then it likely cannot be expected to yield accurate long-term climate projections. The seasonal drought forecast skill shortfalls of the U.S. NCEP Coupled Forecast System Version 2 (CFSv2) have been directly linked to its unrealistically strong land-atmosphere coupling strength. Most models can be similarly categorized, which is to say, sensitivity to the land physics (i.e., soil moisture constraints on evapotranspiration) is too strong. In nature, the land signal-to-noise ratio appears to be much lower. Diagnosing land-atmosphere coupling strength requires, at a minimum: surface soil moisture state, surface turbulent heat fluxes, and atmospheric moisture and instability. A full diagnosis would entail instrumenting the model code with a number of tracers. This study addresses the question: what if, given the soil wetness anomaly, model biases in coupling sign and/or strength could be diagnosed from phase shifts in the diurnal precipitation frequency cycle? We use 34 years of output from the North American Regional Reanalysis (NARR) and the North American Land Data Assimilation System Phase 2 (NLDAS-2) to investigate the variation in the diurnal precipitation frequency cycle between so-called "wet-advantage" and "dry-advantage" coupling regimes over the U.S. southern Great Plains. Wet-advantage occurs when the atmospheric state is closer to the wet adiabatic lapse rate and convection is triggered by a strong increase in moist static energy from the surface. In contrast, dry-advantage occurs when the atmosphere is drier and the temperature profile is close to the dry adiabatic lapse rate, which favors convection over areas of large boundary layer growth due to high sensible heat fluxes at the surface.
We find that there is a significant difference in the phase of the diurnal precipitation frequency between coupling regimes. Specifically, maximum frequency occurs at 1600 LT and 0500 LT for wet- and dry-advantage samples, respectively. For each of these contrasting regimes, we investigate the relative extent to which diurnal phasing may be attributed to local land -- PBL processes versus influences of the Great Plains low-level jet and large-scale atmospheric circulation.
NASA Technical Reports Server (NTRS)
Ichoku, Charles; Levy, Robert; Kaufman, Yoram; Remer, Lorraine A.; Li, Rong-Rong; Martins, Vanderlei J.; Holben, Brent N.; Abuhassan, Nader; Slutsker, Ilya; Eck, Thomas F.;
2001-01-01
Five Microtops II sun photometers were studied in detail at the NASA Goddard Space Flight Center (GSFC) to determine their performance in measuring aerosol optical thickness (AOT, or τ_aλ) and precipitable column water vapor (W). Each derives τ_aλ from measured signals at four wavelengths λ (340, 440, 675, and 870 nm), and W from the 936 nm signal measurements. Accuracy of τ_aλ and W determination depends on the reliability of the relevant channel calibration coefficient (V_0). Relative calibration by transfer of parameters from a more accurate sun photometer (such as the Mauna-Loa-calibrated AERONET master sun photometer at GSFC) is more reliable than Langley calibration performed at GSFC. It was found that the factory-determined value of the instrument constant for the 936 nm filter (k = 0.7847) used in the Microtops' internal algorithm is unrealistic, causing large errors in V_0(936), τ_a(936), and W. Thus, when transfer calibration is applied at GSFC, the random variation of V_0 at 340 to 870 nm is quite small, with coefficients of variation (CV) in the range of 0 to 2.4%, whereas at 936 nm the CV goes up to 19%. Also, the systematic temporal variation of V_0 at 340 to 870 nm is very slow, while at 936 nm it is large and exhibits a very high dependence on W. The algorithm also computes τ_a(936) as 0.91 τ_a(870), which is highly simplistic. Therefore, it is recommended to determine τ_a(936) by logarithmic extrapolation from τ_a(675) and τ_a(870). From the operational standpoint of the Microtops, apart from errors that may result from unperceived cloud contamination, the main sources of error include inaccurate pointing to the Sun, neglecting to clean the front quartz window, and neglecting to calibrate correctly.
If these three issues are adequately taken care of, the Microtops can be quite accurate and stable, with root mean square (rms) differences between corresponding retrievals from clean, calibrated Microtops and the AERONET sun photometer being about +/-0.02 at 340 nm, decreasing to about +/-0.01 at 870 nm.
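The recommended logarithmic extrapolation of AOT to 936 nm can be sketched as a power-law (Ångström) fit through the two longest aerosol channels. The τ values below are made up for illustration:

```python
import math

# Sketch of log-log (Angstrom) extrapolation of aerosol optical thickness
# from the 675 and 870 nm channels to 936 nm, as recommended above.
# The input tau values are illustrative, not measured data.

def tau_936_extrapolated(tau_675, tau_870):
    # Angstrom exponent alpha from tau(lambda) ~ lambda^(-alpha)
    alpha = math.log(tau_675 / tau_870) / math.log(870.0 / 675.0)
    # Power-law extrapolation to 936 nm
    return tau_870 * (936.0 / 870.0) ** (-alpha)

tau = tau_936_extrapolated(0.20, 0.15)
print(round(tau, 3))  # ~0.138, slightly below the 870 nm value
```

For typical aerosols (positive Ångström exponent), the extrapolated τ_a(936) falls below τ_a(870), unlike the fixed 0.91 τ_a(870) rule, which ignores the actual spectral slope.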
SIMULATED HUMAN ERROR PROBABILITY AND ITS APPLICATION TO DYNAMIC HUMAN FAILURE EVENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Objectives: Human reliability analysis (HRA) methods typically analyze human failure events (HFEs) at the overall task level. For dynamic HRA, it is important to model human activities at the subtask level. There exists a disconnect between the dynamic subtask level and the static task level that presents issues when modeling dynamic scenarios. For example, the SPAR-H method is typically used to calculate the human error probability (HEP) at the task level. As demonstrated in this paper, quantification in SPAR-H does not translate to the subtask level. Methods: Two different discrete distributions were generated for each SPAR-H Performance Shaping Factor (PSF) to define the frequency of PSF levels. The first was a uniform, or uninformed, distribution that assumed the frequency of each PSF level was equally likely. The second distribution took the frequency of each PSF level as identified from an assessment of the HERA database. These two approaches were created to identify the resulting distribution of the HEP; the one that appears closer to the known distribution, a log-normal centered on 1E-3, is the more desirable. Each approach then has median, average, and maximum HFE calculations applied. To calculate these three values, three events, A, B and C, are generated from the PSF level frequencies comprised of subtasks. The median HFE selects the median PSF level from each PSF and calculates the HEP; the average HFE takes the mean PSF level, and the maximum takes the maximum PSF level. The same data set of subtask HEPs yields starkly different HEPs when aggregated to the HFE level in SPAR-H. Results: Assuming that each PSF level in each HFE is equally likely creates an unrealistic distribution of the HEP that is centered at 1. Next, the observed frequency of PSF levels was applied, with the resulting HEP behaving log-normally with a majority of the values under 2.5% HEP.
The median, average and maximum HFE calculations did yield different answers for the HFE. The maximum grossly overestimates the HFE, while the overall HFE distribution lies below the median-based estimate and above the average-based estimate. Conclusions: Dynamic task modeling can be pursued through the framework of SPAR-H. The distributions associated with each PSF need to be defined, and may change depending upon the scenario. However, it is very unlikely that each PSF level is equally likely, as the resulting HEP distribution is strongly centered at 100%, which is unrealistic. Other distributions may need to be identified for PSFs to facilitate the transition to dynamic task modeling. Additionally, discrete distributions need to be exchanged for continuous ones so that simulations of the HFE can advance further. This paper provides a method to explore dynamic subtask-to-task translation and provides examples of the process using the SPAR-H method.
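The uniform-versus-informed comparison described above can be sketched with a small Monte Carlo experiment. The PSF level multipliers, the number of PSFs, and the "informed" frequencies below are invented for illustration; they are not SPAR-H tables or HERA-derived values:

```python
import random
import statistics

# Toy Monte Carlo comparison of a uniform (uninformed) versus a mostly-nominal
# (informed) distribution over PSF levels. Multipliers and weights are assumed
# for illustration only; HEP = nominal HEP times the product of multipliers.

NOMINAL_HEP = 1e-3
LEVELS = [0.1, 1, 2, 5, 10]           # example multipliers for one PSF
INFORMED_WEIGHTS = [5, 80, 10, 4, 1]  # mostly nominal (weight on multiplier 1)
N_PSF = 8

def sample_hep(rng, weights=None):
    hep = NOMINAL_HEP
    for _ in range(N_PSF):
        if weights is None:
            m = rng.choice(LEVELS)                       # uniform over levels
        else:
            m = rng.choices(LEVELS, weights=weights)[0]  # informed frequencies
        hep *= m
    return min(hep, 1.0)  # a probability cannot exceed 1

rng = random.Random(42)
uniform_heps  = [sample_hep(rng) for _ in range(5000)]
informed_heps = [sample_hep(rng, INFORMED_WEIGHTS) for _ in range(5000)]

print("uniform median HEP :", statistics.median(uniform_heps))
print("informed median HEP:", statistics.median(informed_heps))
```

Even with these mild assumed multipliers, the uniform assumption inflates the HEP well above nominal, while the informed distribution keeps most samples near 1E-3, echoing the abstract's conclusion that equally likely PSF levels produce an unrealistic HEP distribution.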
NASA Technical Reports Server (NTRS)
Li, Xiao-Fan; Sui, C.-H.; Lau, K.-M.; Tao, W.-K.
2004-01-01
Prognostic cloud schemes are increasingly used in weather and climate models in order to better treat cloud-radiation processes. Simplifications are often made in such schemes for computational efficiency; for example, the scheme used in the National Centers for Environmental Prediction models excludes some microphysical processes and the precipitation-radiation interaction. In this study, sensitivity tests with a 2D cloud-resolving model are carried out to examine the effects of the excluded microphysical processes and precipitation-radiation interaction on tropical thermodynamics and cloud properties. The model is integrated for 10 days with the imposed vertical velocity derived from the Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment. The experiment excluding the depositional growth of snow from cloud ice shows anomalous growth of cloud ice and a more than 20% increase in fractional cloud cover, indicating that the lack of depositional snow growth causes an unrealistically large mixing ratio of cloud ice. The experiment excluding the precipitation-radiation interaction displays a significant cooling and drying bias. The analysis of heat and moisture budgets shows that the simulation without the interaction produces a more stable upper troposphere and a more unstable mid and lower troposphere than does the simulation with the interaction. Thus, the suppressed growth of ice clouds in the upper troposphere and stronger radiative cooling in the mid and lower troposphere are responsible for the cooling bias, and less evaporation of rain associated with the large-scale subsidence induces the drying in the mid and lower troposphere.
NASA Astrophysics Data System (ADS)
Liang, X. San; Robinson, Allan R.
2013-10-01
Frontal meanderings are generally difficult to predict. In this study, we demonstrate through an exercise with the Iceland-Faeroe Front (IFF) that satisfactory predictions may be achieved with the aid of hydrodynamic instability analysis. As discovered earlier, underlying the IFF meandering is a convective instability in the western boundary region followed by an absolute instability in the interior; correspondingly, the disturbance growth reveals a switch of pattern from spatial amplification to temporal amplification. To successfully forecast the meandering, the two instability processes must be faithfully reproduced. This sets stringent constraints on the tunable model parameters, e.g., boundary relaxation, temporal relaxation, eddy diffusivity, etc. By analyzing the instability dispersion properties, these parameters can be set rather accurately and their respective ranges of sensitivity estimated. It is shown that too much relaxation inhibits the front from varying; on the other hand, too little relaxation may make the model completely skip the spatial growth phase, leading to meandering much farther upstream along the front. Generally speaking, dissipation/diffusion tends to stabilize the simulation, but unrealistically large dissipation/diffusion could trigger a spurious absolute instability, and hence a premature meandering intrusion. The belief that taking in more data will improve the forecast need not hold; it depends on whether the model setup admits the two instabilities. This study may help relieve modelers of the laborious and tedious work of parameter tuning; it also provides criteria to distinguish a physically relevant forecast from numerical artifacts.
Grandmothers' productivity and the HIV/AIDS pandemic in sub-Saharan Africa.
Bock, John; Johnson, Sara E
2008-06-01
The human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) pandemic has left large numbers of orphans in sub-Saharan Africa. Botswana has an HIV prevalence rate of approximately 40% in adults. Morbidity and mortality are high, and in a population of 1.3 million there are nearly 50,000 children who have lost one or both parents to HIV/AIDS. The extended family, particularly grandparents, absorbs much of the childrearing responsibilities. This creates large amounts of additional work for grandmothers especially. The embodied capital model and the grandmother hypothesis are both derived from life history theory within evolutionary ecology, and both predict that one important factor in the evolution of the human extended family structure is that postreproductive individuals such as grandmothers provide substantial support to their grandchildren's survival. Data collected in the pre-pandemic context in a traditional multi-ethnic community in the Okavango Delta of Botswana are analyzed to calculate the amount of work effort provided to a household by women of different ages. Results show that the contributions of older and younger women to the household, in terms of both productivity and childrearing, are qualitatively and quantitatively different. These results indicate that it is unrealistic to expect older women to be able to compensate for the loss of younger women's contributions to the household, and that interventions should be specifically designed to support older women based on the types of activities in which they engage that affect child survival, growth, and development.
NASA Astrophysics Data System (ADS)
Sulangi, Miguel Antonio; Zaanen, Jan
2018-04-01
We explore the effects of various kinds of random disorder on the quasiparticle density of states of two-dimensional d-wave superconductors using an exact real-space method, incorporating realistic details known about the cuprates. Random on-site energy and pointlike unitary impurity models are found to give rise to a vanishing DOS at the Fermi energy for narrow distributions and low concentrations, respectively, and lead to a finite, but suppressed, DOS at unrealistically large levels of disorder. Smooth disorder arising from impurities located away from the copper-oxide planes meanwhile gives rise to a finite DOS at realistic impurity concentrations. For the case of smooth disorder whose average potential is zero, a resonance is found at zero energy in the quasiparticle DOS at large impurity concentrations. We discuss the implications of these results for the computed low-temperature specific heat, whose behavior we find to be strongly affected by the amount of disorder present in the system. We also compute the localization length as a function of disorder strength for various types of disorder and find that intermediate- and high-energy states are quasiextended for low disorder, and that states near the Fermi energy are strongly localized, with a localization length that exhibits an unusual dependence on the amount of disorder. We comment on the origin of disorder in the cuprates and place constraints on it based on known results from scanning tunneling spectroscopy and specific heat experiments.
NASA Astrophysics Data System (ADS)
Baar, Anne W.; de Smit, Jaco; Uijttewaal, Wim S. J.; Kleinhans, Maarten G.
2018-01-01
Large-scale morphology, in particular meander bend depth, bar dimensions, and bifurcation dynamics, is greatly affected by the deflection of sediment transport on transverse bed slopes due to gravity and by secondary flows. Overestimating the transverse bed slope effect in morphodynamic models leads to flattening of the morphology, while underestimating it leads to unrealistically steep bars and banks and a higher braiding index downstream. However, existing transverse bed slope predictors are based on a small set of experiments covering a narrow range of flow conditions and sediment sizes, and in practice models are calibrated on measured morphology. The objective of this research is to experimentally quantify the transverse bed slope effect for a large range of near-bed flow conditions with varying secondary flow intensity, sediment sizes (0.17-4 mm), sediment transport modes, and bed states, in order to test existing predictors. We conducted over 200 experiments in a rotating annular flume with a counterrotating floor, which allows control of the secondary flow intensity separately from the streamwise flow velocity. Flow velocity vectors were determined with a calibrated analytical model accounting for rough bed conditions. We isolated the separate effects of all important parameters on the transverse slope. The resulting equilibrium transverse slopes show a clear trend with varying sediment mobility and secondary flow intensity that deviates from known predictors depending on the Shields number, and strongly depends on bed state and sediment transport mode. Fitted functions are provided for application in morphodynamic modeling.
How Many Loci Does it Take to DNA Barcode a Crocus?
Seberg, Ole; Petersen, Gitte
2009-01-01
Background DNA barcoding promises to revolutionize the way taxonomists work, facilitating species identification by using small, standardized portions of the genome as substitutes for morphology. The concept has gained considerable momentum in many animal groups, but the higher plant world has been largely recalcitrant to the effort. In plants, efforts are concentrated on various regions of the plastid genome, but no agreement exists as to what kinds of regions are ideal, though most researchers agree that more than one region is necessary. One reason for this discrepancy is differences in the tests that are used to evaluate the performance of the proposed regions. Most tests have been made in a floristic setting, where the genetic distance and therefore the level of variation of the regions between taxa is large, or in a limited set of congeneric species. Methodology and Principal Findings Here we present the first in-depth coverage of a large taxonomic group, all 86 known species (except two doubtful ones) of crocus. Even six average-sized barcode regions do not identify all crocus species. This is currently an unrealistic burden in a barcode context. Whereas most proposed regions work well in a floristic context, the majority will – as is the case in crocus – undoubtedly be less efficient in a taxonomic setting. However, a reasonable but less than perfect level of identification may be reached – even in a taxonomic context. Conclusions/Significance The time is ripe for selecting barcode regions in plants, and for prudent examination of their utility. Thus, there is no reason for the plant community to hold back the barcoding effort by continued search for the Holy Grail. We must acknowledge that an emerging system will be far from perfect, fraught with problems and work best in a floristic setting. PMID:19240801
Blair, Christopher; Bryson, Robert W
2017-11-01
Biodiversity reduction and loss continues to progress at an alarming rate, and thus, there is widespread interest in utilizing rapid and efficient methods for quantifying and delimiting taxonomic diversity. Single-locus species delimitation methods have become popular, in part due to the adoption of the DNA barcoding paradigm. These techniques can be broadly classified into tree-based and distance-based methods depending on whether species are delimited based on a constructed genealogy. Although the relative performance of these methods has been tested repeatedly with simulations, additional studies are needed to assess congruence with empirical data. We compiled a large data set of mitochondrial ND4 sequences from horned lizards (Phrynosoma) to elucidate congruence among four tree-based (single-threshold GMYC, multiple-threshold GMYC, bPTP, mPTP) and one distance-based (ABGD) species delimitation methods. We were particularly interested in cases with highly uneven sampling and/or large differences in intraspecific diversity. Results showed a high degree of discordance among methods, with multiple-threshold GMYC and bPTP suggesting an unrealistically high number of species (29 and 26 species, respectively, within the P. douglasii complex alone). The single-threshold GMYC model was the most conservative, likely a result of difficulty in locating the inflection point in the genealogies. mPTP and ABGD appeared to be the most stable across sampling regimes and suggested the presence of additional cryptic species that warrant further investigation. These results suggest that the mPTP model may be preferable for empirical data sets with highly uneven sampling or large differences in effective population sizes of species. © 2017 John Wiley & Sons Ltd.
The effects of the spatial influence function on orthotropic femur remodelling.
Shang, Y; Bai, J; Peng, L
2008-07-01
The morphology and internal structure of bone are modulated by mechanical stimulus. Osteocytes can sense stimulus signals from adjacent regions and respond to them through bone growth or bone absorption. This mechanism can be modelled as a spatial influence function (SIF) in a bone adaptation algorithm. In this paper, the remodelling process was simulated in human femurs using an adaptation algorithm with and without the SIF, and the trabecular bone was assumed to be orthotropic. Different influence radii and weighting factors were adopted to study the effects of the SIF on the bone density distribution and trabecular alignment. The results showed that the mean density and the L-T ratio (the ratio of longitudinal modulus to transverse modulus) had an excellent linear relationship with the weighting factor when the influence radius was small. The characteristics of the density distribution and L-T ratio agreed with actual observations and measurements when a small weighting factor was used. Large influence radii and weighting factors led to unrealistic results. In contrast, the SIF hardly affected the trabecular alignment, as the mean variation angles of the principal axes were less than 1.0 degree for any influence radius and weighting factor.
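The role of the influence radius can be illustrated with a simple distance-weighted stimulus. The exponential decay form w(r) = exp(-r/R) is a common choice in bone-adaptation models; the radius, weighting, and signal values below are assumptions for this sketch, not the paper's parameters:

```python
import numpy as np

# Sketch of a spatial influence function (SIF) of the common exponential
# form w(r) = exp(-r / R). The radius R, signals, and distances below are
# illustrative assumptions, not values from the study.

def sif_weights(distances, R):
    """Influence of sites at the given distances on a sensing point."""
    return np.exp(-np.asarray(distances) / R)

def perceived_stimulus(local_signals, distances, R):
    """Normalized weighted sum of mechanical signals at a remodelling site."""
    w = sif_weights(distances, R)
    return float(np.sum(w * local_signals) / np.sum(w))

signals   = np.array([1.0, 0.8, 0.5, 0.2])  # stimulus at neighbouring sites
distances = np.array([0.0, 0.1, 0.2, 0.4])  # distance (mm) from sensing point

small_R = perceived_stimulus(signals, distances, R=0.05)
large_R = perceived_stimulus(signals, distances, R=1.00)
print(small_R, large_R)  # small R ~ local signal; large R ~ regional average
```

A small radius leaves the stimulus essentially local, while a large radius smears it toward the regional mean, which is one intuition for why very large influence radii flatten the predicted density distribution.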
Simulations of heart valves by thin shells with non-linear material properties
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez; Hedayat, Mohammadali
2016-11-01
The primary function of a heart valve is to allow blood to flow in only one direction through the heart. A triangular thin-shell finite element formulation that considers only translational degrees of freedom is implemented in a three-dimensional domain to simulate heart valves undergoing large deformations. The formulation is based on the nonlinear Kirchhoff thin-shell theory. The developed method is extensively validated against numerical and analytical benchmarks. This method is added to a previously developed membrane method to obtain more realistic results, since ignoring bending forces can result in unrealistic wrinkling of heart valves. A nonlinear Fung-type constitutive relation, based on experimentally measured biaxial loading tests, is used to model the material properties for the response of the in-plane motion in heart valves. Furthermore, an experimentally measured linear constitutive relation is used to model the material properties that capture the flexural motion of heart valves. The fluid-structure interaction solver adopts a strongly coupled partitioned approach that is stabilized with under-relaxation and the Aitken acceleration technique. This work was supported by American Heart Association (AHA) Grant 13SDG17220022 and the Center of Computational Research (CCR) of University at Buffalo.
Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V
2003-10-12
The distributions of many genome-associated quantities, including the membership of paralogous gene families, can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of the protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that the introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much more compatible with the current estimates of the rates of individual duplication/loss events.
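The birth, death and innovation dynamics can be sketched with a toy discrete-time simulation: each domain in a family duplicates or is deleted with small per-step probabilities, and new single-domain families arise by innovation. The rates below are illustrative assumptions, not fitted to any genome, and this simple scheme only approximates the balanced (duplication ≈ deletion) regime of a linear BDIM:

```python
import random
from collections import Counter

# Toy birth-death-innovation simulation: per step, each domain duplicates
# with probability b and is deleted with probability d; a new single-domain
# family appears with probability nu. Rates are illustrative only.

def simulate_bdim(steps=4000, b=0.01, d=0.01, nu=0.1, seed=1):
    rng = random.Random(seed)
    families = [1]  # start with one single-domain family
    for _ in range(steps):
        for i, n in enumerate(families):
            births = sum(rng.random() < b for _ in range(n))
            deaths = sum(rng.random() < d for _ in range(n))
            families[i] = max(n + births - deaths, 0)
        if rng.random() < nu:
            families.append(1)          # innovation: new family of size 1
        families = [n for n in families if n > 0]  # extinct families vanish
    return Counter(families)            # family size -> number of families

sizes = simulate_bdim()
print(sizes.most_common(5))
```

With duplication and deletion balanced, most families stay small or go extinct while a few drift to large sizes, giving the heavy-tailed, power-law-like family size distribution the abstract describes.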
Neural net diagnostics for VLSI test
NASA Technical Reports Server (NTRS)
Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.
1990-01-01
This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.
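The fault-detection idea described above can be sketched with a small feedforward network trained on synthetic test-response vectors. The measurement model (an ideal response plus Gaussian process noise, with faults distorting a few measurement features) and the network size are assumptions for illustration, not the paper's system:

```python
import numpy as np

# Sketch: a one-hidden-layer feedforward network separating "good" from
# "faulty" synthetic IC test responses. Data model and sizes are assumed.

rng = np.random.default_rng(0)

n, dim = 400, 8
ideal = np.linspace(0.5, 1.5, dim)              # stand-in ideal test response
good   = ideal + 0.05 * rng.standard_normal((n, dim))
faulty = ideal + 0.05 * rng.standard_normal((n, dim))
faulty[:, :3] += 0.3                            # fault distorts some features

X = np.vstack([good, faulty])
y = np.r_[np.zeros(n), np.ones(n)]              # 0 = good, 1 = faulty

W1 = 0.1 * rng.standard_normal((dim, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal(16);        b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    p = sigmoid(h @ W2 + b2)                    # fault probability
    grad_out = (p - y) / len(y)                 # d(cross-entropy)/d(logit)
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum()
    grad_h = np.outer(grad_out, W2) * (1 - h**2)
    W1 -= lr * (X.T @ grad_h); b1 -= lr * grad_h.sum(axis=0)

acc = float(((p > 0.5) == (y == 1)).mean())
print(f"training accuracy: {acc:.2f}")
```

During training the network captures the sample distribution directly from the data, with no a priori assumption about the noise statistics, which is the point the abstract makes in favor of the neural approach over classical statistical decision rules.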
Methodology and Implications of Maximum Paleodischarge Estimates for Mountain Channels
Pruess, J.; Wohl, E.E.; Jarrett, R.D.
1998-01-01
Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m^3 s^-1 km^-2 around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent. © 1998 Regents of the University of Colorado.
Cribb, Victoria L; Haase, Anne M
2016-01-01
As society continues to advocate an unrealistically thin body shape, awareness and internalization of appearance, and their consequent impact upon self-esteem, have become of increasing concern, particularly in adolescent girls. School gender environment may influence these factors, but remains largely unexplored. This study aimed to assess differences between two school environments in appearance attitudes, social influences, and associations with self-esteem. Two hundred and twelve girls (M = 13.8 years) attending either a single-sex or co-educational school completed measures on socio-cultural attitudes towards appearance, social support, and self-esteem. Though differences between school environments were mostly marginal, significantly higher internalization was reported among girls at the co-educational school. School environment moderated relations between internalization and self-esteem, such that girls in co-educational environments had poorer self-esteem stemming from greater internalization. Thus, in a single-sex school environment, protective factors may attenuate negative associations between socio-cultural attitudes towards appearance and self-esteem in adolescent girls. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
A peaceful realm? Trauma and social differentiation at Harappa.
Robbins Schug, Gwen; Gray, Kelsey; Mushrif-Tripathy, V; Sankhyan, A R
Thousands of settlements stippled the third millennium B.C. landscape of Pakistan and northwest India. These communities maintained an extensive exchange network that spanned West and South Asia. They shared remarkably consistent symbolic and ideological systems despite a vast territory, including an undeciphered script, standardized weights, measures, sanitation and subsistence systems, and settlement planning. The city of Harappa (3300-1300 B.C.) sits at the center of this Indus River Valley Civilization. The relatively large skeletal collection from Harappa offers an opportunity to examine biocultural aspects of urban life and its decline in South Asian prehistory. This paper compares evidence for cranial trauma among burial populations at Harappa through time to assess the hypothesis that Indus state formation occurred as a peaceful heterarchy. The prevalence and patterning of cranial injuries, combined with striking differences in mortuary treatment and demography among the three burial areas, indicate interpersonal violence in Harappan society was structured along lines of gender and community membership. The results support a relationship at Harappa among urbanization, access to resources, social differentiation, and risk of interpersonal violence. Further, the results contradict the dehumanizing, unrealistic myth of the Indus Civilization as an exceptionally peaceful prehistoric urban civilization. Copyright © 2012 Elsevier Inc. All rights reserved.
Beyond Measurement and Reward: Methods of Motivating Quality Improvement and Accountability.
Berenson, Robert A; Rice, Thomas
2015-12-01
The article examines public policies designed to improve quality and accountability that do not rely on financial incentives and public reporting of provider performance. Payment policy should help temper the current "more is better" attitude of physicians and provider organizations. Incentive neutrality would better support health professionals' intrinsic motivation to act in their patients' best interests to improve overall quality than would pay-for-performance plans targeted to specific areas of clinical care. Public policy can support clinicians' intrinsic motivation through approaches that support systematic feedback to clinicians and provide concrete opportunities to collaborate to improve care. Some programs administered by the Centers for Medicare & Medicaid Services, including Partnership for Patients and Conditions of Participation, deserve more attention; they represent available, but largely ignored, approaches to support providers to improve quality and protect beneficiaries against substandard care. Public policies related to quality improvement should focus more on methods of enhancing professional intrinsic motivation, while recognizing the potential role of organizations to actively promote and facilitate that motivation. Actually achieving improvement, however, will require a reexamination of the role played by financial incentives embedded in payments and the unrealistic expectations placed on marginal incentives in pay-for-performance schemes. © Health Research and Educational Trust.