Science.gov

Sample records for failure assessment diagram

  1. Failure Assessment Diagram for Titanium Brazed Joints

    NASA Technical Reports Server (NTRS)

    Flom, Yury; Jones, Justin S.; Powell, Mollie M.; Puckett, David F.

    2011-01-01

    The interaction equation was used to predict failure in Ti-4V-6Al joints brazed with Al 1100 filler metal. The joints used in this study were geometrically similar to the joints in the brazed beryllium metering structure considered for the ATLAS telescope. This study confirmed that the interaction equation R(sub sigma) + R(sub tau) = 1, where R(sub sigma) and R(sub tau) are the normal and shear stress ratios, can be used as a conservative lower-bound estimate of the failure criterion in ATLAS brazed joints, as well as for construction of the Failure Assessment Diagram (FAD).
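    The linear interaction criterion above lends itself to a short numerical check. The sketch below (Python; all stress and strength values are hypothetical, chosen only for illustration) computes the stress ratios, the interaction sum, and the resulting margin of safety:

```python
def fad_assessment(sigma, tau, sigma_ult, tau_ult):
    """Check a brazed-joint stress state against the linear interaction
    criterion R_sigma + R_tau = 1 (conservative lower-bound FAD)."""
    r_sigma = sigma / sigma_ult      # normal stress ratio
    r_tau = tau / tau_ult            # shear stress ratio
    interaction = r_sigma + r_tau    # failure predicted when >= 1
    margin_of_safety = 1.0 / interaction - 1.0   # MS > 0 means safe
    return interaction, margin_of_safety

# Hypothetical example: each applied stress at one quarter of its allowable
interaction, ms = fad_assessment(sigma=50.0, tau=30.0,
                                 sigma_ult=200.0, tau_ult=120.0)
print(interaction, ms)  # 0.5 1.0
```

    Any load combination plotting below the line R(sub sigma) + R(sub tau) = 1 on the FAD is conservatively predicted to be safe under this criterion.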

  2. Failure Assessment Diagram for Brazed 304 Stainless Steel Joints

    NASA Technical Reports Server (NTRS)

    Flom, Yury

    2011-01-01

    Interaction equations were proposed earlier to predict failure in Albemet 162 brazed joints. The present study demonstrates that the same interaction equations can be used for a lower-bound estimate of the failure criterion in 304 stainless steel joints brazed with silver-based filler metals, as well as for construction of Failure Assessment Diagrams (FADs).

  3. Evaluation of Brazed Joints Using Failure Assessment Diagram

    NASA Technical Reports Server (NTRS)

    Flom, Yury

    2012-01-01

    A fitness-for-service approach was used to perform structural analysis of brazed joints consisting of several base metal / filler metal combinations. Failure Assessment Diagrams (FADs) based on tensile and shear stress ratios were constructed and experimentally validated. It was shown that such FADs can provide a conservative estimate of safe combinations of stresses in the brazed joints. Based on this approach, Margins of Safety (MS) of brazed joints subjected to multi-axial loading conditions can be evaluated.

  4. Improved reliability analysis method based on the failure assessment diagram

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Zhang, Zheng; Zhong, Qunpeng

    2012-07-01

    With the uncertainties related to operating conditions, in-service non-destructive testing (NDT) measurements and material properties considered in the structural integrity assessment, probabilistic analysis based on the failure assessment diagram (FAD) approach has recently become an important concern. However, the point density revealing the probabilistic distribution characteristics of the assessment points is usually ignored. To obtain more detailed and direct knowledge from the reliability analysis, an improved probabilistic fracture mechanics (PFM) assessment method is proposed. By integrating 2D kernel density estimation (KDE) technology into the traditional probabilistic assessment, the probabilistic density of the randomly distributed assessment points is visualized in the assessment diagram. Moreover, a modified interval sensitivity analysis is implemented and compared with probabilistic sensitivity analysis. The improved reliability analysis method is applied to the assessment of a high pressure pipe containing an axial internal semi-elliptical surface crack. The results indicate that these two methods can give consistent sensitivities of input parameters, but the interval sensitivity analysis is computationally more efficient. Meanwhile, the point density distribution and its contour are plotted in the FAD, thereby better revealing the characteristics of PFM assessment. This study provides a powerful tool for the reliability analysis of critical structures.
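    The density-visualization step described in this abstract can be sketched with a small, self-contained kernel density estimator (Python; the assessment-point coordinates, bandwidth, and the Lr/Kr axis names are illustrative assumptions, not the authors' data):

```python
import numpy as np

def gaussian_kde_2d(points, eval_points, bandwidth=0.05):
    """Naive 2D Gaussian kernel density estimate.

    points: (2, n) sampled assessment-point coordinates
    eval_points: (2, m) coordinates at which to evaluate the density
    """
    diff = eval_points[:, :, None] - points[:, None, :]    # (2, m, n)
    sq_dist = (diff ** 2).sum(axis=0) / bandwidth ** 2     # (m, n)
    kernel = np.exp(-0.5 * sq_dist)
    # each kernel integrates to 2*pi*h**2; average over the n samples
    return kernel.sum(axis=1) / (2 * np.pi * bandwidth ** 2 * points.shape[1])

rng = np.random.default_rng(0)
# Hypothetical Monte Carlo assessment points on the FAD:
# Lr = load ratio, Kr = fracture ratio (values illustrative only)
samples = np.vstack([rng.normal(0.6, 0.08, 1000),
                     rng.normal(0.5, 0.10, 1000)])

# Evaluate the density on a grid covering the FAD for contour plotting
grid = np.mgrid[0.0:1.2:50j, 0.0:1.2:50j].reshape(2, -1)
density = gaussian_kde_2d(samples, grid).reshape(50, 50)
print(density.shape)  # (50, 50)
```

    Contouring the resulting density field over the FAD axes reproduces the kind of point-density plot the paper advocates.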

  5. Application of ISO22000, failure mode and effect analysis (FMEA), cause and effect diagrams and Pareto in conjunction with HACCP and risk assessment for processing of pastry products.

    PubMed

    Varzakas, Theodoros H

    2011-09-01

    The Failure Mode and Effect Analysis (FMEA) model has been applied for the risk assessment of pastry processing. A tentative approach of FMEA application to the pastry industry was attempted in conjunction with ISO22000. Preliminary Hazard Analysis was used to analyze and predict the occurring failure modes in a food chain system (pastry processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes, upon which the system depends. Critical Control points have been identified and implemented in the cause and effect diagram (also known as Ishikawa, tree diagram, and fishbone diagram). In this work a comparison of ISO22000 analysis with HACCP is carried out over pastry processing and packaging. However, the main emphasis was put on the quantification of risk assessment by determining the Risk Priority Number (RPN) per identified processing hazard. Storage of raw materials and storage of final products at -18°C followed by freezing were the processes identified as the ones with the highest RPN (225, 225, and 144 respectively) and corrective actions were undertaken. Following the application of corrective actions, a second calculation of RPN values was carried out leading to considerably lower values (below the upper acceptable limit of 130). It is noteworthy that the application of Ishikawa (Cause and Effect or Tree diagram) led to converging results thus corroborating the validity of conclusions derived from risk assessment and FMEA. Therefore, the incorporation of FMEA analysis within the ISO22000 system of a pastry processing industry is considered imperative.
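    The RPN quantification used in this FMEA study is a simple product of severity, occurrence, and detection scores. A minimal sketch (Python; the individual scores below are hypothetical, chosen only to reproduce the RPN values and the 130 action threshold quoted in the abstract):

```python
ACTION_LIMIT = 130  # upper acceptable RPN limit cited in the study

def rpn(severity, occurrence, detection):
    """Risk Priority Number = S x O x D, each typically scored 1-10."""
    return severity * occurrence * detection

# Hypothetical scoring of the three highest-risk processes
hazards = {
    "raw material storage":  (9, 5, 5),   # RPN 225
    "final product storage": (9, 5, 5),   # RPN 225
    "freezing":              (6, 4, 6),   # RPN 144
}
for name, scores in hazards.items():
    value = rpn(*scores)
    flag = "corrective action" if value > ACTION_LIMIT else "acceptable"
    print(f"{name}: RPN={value} ({flag})")
```

    After corrective actions, rescoring (typically lowering occurrence or raising detectability) should bring each RPN below the action limit, as the study reports.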

  7. Automatically Assessing Graph-Based Diagrams

    ERIC Educational Resources Information Center

    Thomas, Pete; Smith, Neil; Waugh, Kevin

    2008-01-01

    To date there has been very little work on the machine understanding of imprecise diagrams, such as diagrams drawn by students in response to assessment questions. Imprecise diagrams exhibit faults such as missing, extraneous and incorrectly formed elements. The semantics of imprecise diagrams are difficult to determine. While there have been…

  8. Using Dynamic Master Logic Diagram for component partial failure analysis

    SciTech Connect

    Ni, T.; Modarres, M.

    1996-12-01

    A methodology using the Dynamic Master Logic Diagram (DMLD) for the evaluation of component partial failure is presented. Since past PRAs have not focused on partial failure effects, component reliability has been based only on the binary state assumption, i.e. defining a component as either fully failed or fully functioning. This paper develops an approach to predict and estimate component partial failure on the basis of the fuzzy state assumption. One example of the application of this methodology, using the reliability function diagram of a centrifugal pump, is presented.

  9. Failure mode diagram of rubble pile asteroids: Application to (25143) asteroid Itokawa

    NASA Astrophysics Data System (ADS)

    Hirabayashi, Masatoshi; Scheeres, Daniel J.

    2016-01-01

    This paper proposes a diagram, later called the failure mode diagram, that shows the variation in asteroidal failure as a function of spin period, and uses it to consider the failure modes and conditions of asteroid (25143) Itokawa. This diagram is useful for describing when and where failure occurs in an asteroid. Assuming that Itokawa is homogeneous, we use a plastic finite element code to obtain the diagram for this object. The results show that if the bulk cohesive strength is less than 0.1 Pa, Itokawa experiences compressional failure on the neck surface at the current spin period of 12.1 hours. At a spin period shorter than 4.5 hours, tension across the neck causes this asteroid to split into two components. It is also found that if the breakup spin period is longer than 5.2 hours, the motion of the two components is bounded; at shorter breakup periods, once Itokawa splits, the components may escape from one another.

  10. The Problem of Labels in E-Assessment of Diagrams

    ERIC Educational Resources Information Center

    Jayal, Ambikesh; Shepperd, Martin

    2009-01-01

    In this article we explore a problematic aspect of automated assessment of diagrams. Diagrams have partial and sometimes inconsistent semantics. Typically much of the meaning of a diagram resides in the labels; however, the choice of labeling is largely unrestricted. This means a correct solution may utilize differing yet semantically equivalent…

  11. Probabilistic Failure Assessment For Fatigue

    NASA Technical Reports Server (NTRS)

    Moore, Nicholas; Ebbeler, Donald; Newlin, Laura; Sutharshana, Sravan; Creager, Matthew

    1995-01-01

    Probabilistic Failure Assessment for Fatigue (PFAFAT) is a package of software utilizing probabilistic failure-assessment (PFA) methodology to model high- and low-cycle-fatigue modes of failure of structural components. It consists of nine programs. Three programs perform probabilistic fatigue analysis by means of Monte Carlo simulation; the other six are used for generating random processes, characterizing fatigue-life data pertaining to materials, and processing the outputs of computational simulations. Written in FORTRAN 77.

  12. Summary diagrams for coupled hydrodynamic-ecosystem model skill assessment

    NASA Astrophysics Data System (ADS)

    Jolliff, Jason K.; Kindle, John C.; Shulman, Igor; Penta, Bradley; Friedrichs, Marjorie A. M.; Helber, Robert; Arnone, Robert A.

    2009-02-01

    The increasing complexity of coupled hydrodynamic-ecosystem models may require skill assessment methods that both quantify various aspects of model performance and visually summarize these aspects within compact diagrams. Hence summary diagrams, such as the Taylor diagram [Taylor, 2001, Journal of Geophysical Research, 106, D7, 7183-7192], may meet this requirement by exploiting mathematical relationships between widely known statistical quantities in order to succinctly display a suite of model skill metrics in a single plot. In this paper, sensitivity results from a coupled model are compared with Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite ocean color data in order to assess the utility of the Taylor diagram and to develop a set of alternatives. Summary diagrams are only effective as skill assessment tools insofar as the statistical quantities they communicate adequately capture differentiable aspects of model performance. Here we demonstrate how the linear correlation coefficients and variance comparisons (pattern statistics) that constitute a Taylor diagram may fail to identify other potentially important aspects of coupled model performance, even if these quantities appear close to their ideal values. An additional skill assessment tool, the target diagram, is developed in order to provide summary information about how the pattern statistics and the bias (difference of mean values) each contribute to the magnitude of the total Root-Mean-Square Difference (RMSD). In addition, a potential inconsistency in the use of RMSD statistics as skill metrics for overall model and observation agreement is identified: underestimates of the observed field's variance are rewarded when the linear correlation scores are less than unity. An alternative skill score and skill score-based summary diagram is presented.
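    The target-diagram statistics introduced in this abstract rest on the exact identity RMSD² = bias² + unbiased RMSD², with the unbiased (pattern) term conventionally plotted negative when the model underestimates the observed variance. A minimal sketch with synthetic model and observation series (Python; the series are illustrative, not SeaWiFS data):

```python
import numpy as np

def target_diagram_stats(model, obs):
    """Decompose total RMSD into bias and unbiased (pattern) RMSD:
    RMSD**2 == bias**2 + uRMSD**2."""
    bias = model.mean() - obs.mean()
    urmsd = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
    rmsd = np.sqrt(np.mean((model - obs) ** 2))
    # sign convention: plot uRMSD negative when the model underestimates
    # the observed variance
    signed_urmsd = urmsd if model.std() >= obs.std() else -urmsd
    return bias, signed_urmsd, rmsd

rng = np.random.default_rng(1)
obs = rng.normal(1.0, 0.3, 500)                     # synthetic "observations"
model = 0.8 * obs + 0.3 + rng.normal(0, 0.05, 500)  # damped, biased "model"
bias, urmsd, rmsd = target_diagram_stats(model, obs)
print(bias ** 2 + urmsd ** 2 - rmsd ** 2)  # ~0: the decomposition is exact
```

    Plotting (signed uRMSD, bias) for each model run puts total RMSD at the distance from the origin, which is the compact summary the target diagram provides.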

  13. Using Tree Diagrams as an Assessment Tool in Statistics Education

    ERIC Educational Resources Information Center

    Yin, Yue

    2012-01-01

    This study examines the potential of the tree diagram, a type of graphic organizer, as an assessment tool to measure students' knowledge structures in statistics education. Students' knowledge structures in statistics have not been sufficiently assessed, despite their importance. This article first presents the rationale and method…

  14. Application of Failure Mode and Effect Analysis (FMEA), cause and effect analysis, and Pareto diagram in conjunction with HACCP to a corn curl manufacturing plant.

    PubMed

    Varzakas, Theodoros H; Arvanitoyannis, Ioannis S

    2007-01-01

    The Failure Mode and Effect Analysis (FMEA) model has been applied for the risk assessment of corn curl manufacturing. A tentative approach of FMEA application to the snacks industry was attempted in an effort to exclude the presence of GMOs in the final product. This is of crucial importance both from the ethics and the legislation (Regulations EC 1829/2003; EC 1830/2003; Directive EC 18/2001) point of view. The Preliminary Hazard Analysis and the Fault Tree Analysis were used to analyze and predict the occurring failure modes in a food chain system (corn curls processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes, upon which the system depends. Critical Control points have been identified and implemented in the cause and effect diagram (also known as Ishikawa, tree diagram, and the fishbone diagram). Finally, Pareto diagrams were employed towards the optimization of GMOs detection potential of FMEA.

  16. Acute Effects of Vagotomy on Baroreflex Equilibrium Diagram in Rats with Chronic Heart Failure.

    PubMed

    Kawada, Toru; Li, Meihua; Zheng, Can; Sugimachi, Masaru

    2016-01-01

    The arterial baroreflex system can be divided into the neural arc, from pressure input to efferent sympathetic nerve activity (SNA), and the peripheral arc, from SNA to arterial pressure (AP). Plotting the neural and peripheral arcs on a pressure-SNA plane yields a baroreflex equilibrium diagram. We examined the effects of vagotomy on the open-loop static characteristics of the carotid sinus baroreflex in normal control rats (NC, n = 10) and rats with heart failure after myocardial infarction (MI, n = 10). In the NC group, vagotomy shifted the neural arc toward higher SNA and decreased the slope of the peripheral arc. Consequently, the operating-point SNA increased without a significant change in the operating-point AP on the baroreflex equilibrium diagram. These vagotomy-induced effects were not observed in the MI group, suggesting a loss of vagal modulation of the carotid sinus baroreflex function in heart failure. PMID:27594790

  19. Development of partial failure analysis method in probability risk assessments

    SciTech Connect

    Ni, T.; Modarres, M.

    1996-12-01

    This paper presents a new approach to evaluating the partial failure effect in current Probability Risk Assessments (PRAs). An integrated methodology of thermal-hydraulic analysis and fuzzy logic simulation using the Dynamic Master Logic Diagram (DMLD) was developed. The thermal-hydraulic analysis in this approach identifies the partial operation effect of any PRA system function in a plant model. The DMLD is used to simulate the system performance under the partial failure effect and to inspect all minimal cut sets of system functions. This methodology can be applied in the context of a full scope PRA to reduce core damage frequency. An example application of the approach is presented. The partial failure data used in the example are from a survey study of partial failure effects from the Nuclear Plant Reliability Data System (NPRDS).

  20. Failure Assessment of Stainless Steel and Titanium Brazed Joints

    NASA Technical Reports Server (NTRS)

    Flom, Yury A.

    2012-01-01

    Following successful application of Coulomb-Mohr and interaction equations for evaluation of safety margins in Albemet 162 brazed joints, two additional base metal/filler metal systems were investigated. Specimens consisting of stainless steel brazed with silver-base filler metal and titanium brazed with 1100 Al alloy were tested to failure under the combined action of tensile, shear, bending and torsion loads. Finite Element Analysis (FEA), hand calculations and digital image correlation (DIC) techniques were used to estimate failure stresses and construct Failure Assessment Diagrams (FAD). This study confirms that the interaction equation R(sub sigma) + R(sub tau) = 1, where R(sub sigma) and R(sub tau) are the normal and shear stress ratios, can be used as a conservative lower-bound estimate of the failure criterion in stainless steel and titanium brazed joints.

  1. Strength-probability-time (SPT) diagram--an adjunct to the assessment of dental materials?

    PubMed

    Chadwick, R G

    1994-12-01

    This investigation sought to construct and compare strength-probability-time (S-P-T) diagrams for four dental materials. Three of these were resin composites and one was dental plaster. In the case of dental plaster a total of 90 compressive specimens was fabricated, whereas for each of the other materials a total of 75 specimens was prepared. The compressive strength of equal-sized groups of each material was then determined at crosshead (XHD) speeds of 1, 5 and 10 mm min-1 respectively. The data were subjected to Weibull analysis to relate the probability of failure to the applied stress. Where strong correlations were found between (i) mean compressive strength and crosshead speed, and (ii) individual compressive strengths and failure times, the data were used to determine the crack velocity exponent (n) and produce an S-P-T diagram. Although only one of the materials evaluated (P-50) fulfilled all the necessary criteria, yielding a value of n = 16.13 (7.22), it is suggested that this method may enable comparisons to be made amongst other materials satisfying the required conditions. As such diagrams are based upon a crack growth law, they may be of value in assessing the likely clinical wear resistance of new formulations. Consideration, however, should always be given to what levels are deemed acceptable for the intended clinical application of the material. Thus, before this technique can be employed fully in the evaluation of new restorative materials, further work is necessary to determine appropriate design criteria.
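    The Weibull step in this abstract relates failure probability to applied stress. A minimal sketch of such a fit (Python; the median-rank probability estimator, the regression approach, and the synthetic strength data are illustrative assumptions, not the paper's method or data):

```python
import numpy as np

def weibull_fit(strengths):
    """Two-parameter Weibull fit by linear regression on
    ln(-ln(1 - P)) = m*ln(sigma) - m*ln(sigma_0), using median-rank
    probability estimates P_i = (i - 0.3) / (n + 0.4)."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = len(s)
    p = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    x = np.log(s)
    y = np.log(-np.log(1.0 - p))
    m, c = np.polyfit(x, y, 1)        # slope = Weibull modulus m
    sigma_0 = np.exp(-c / m)          # characteristic strength
    return m, sigma_0

rng = np.random.default_rng(2)
# Synthetic compressive strengths (MPa) drawn from a known Weibull law
true_m, true_s0 = 10.0, 300.0
samples = true_s0 * rng.weibull(true_m, 75)
m, s0 = weibull_fit(samples)
print(round(m, 1), round(s0, 1))
```

    With the modulus in hand, the S-P-T diagram follows by combining the fitted failure-probability curve with a crack-growth (time-dependence) law.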

  2. Assessment of failure of cemented polyethylene acetabular component due to bone remodeling: A finite element study.

    PubMed

    Ghosh, Rajesh

    2016-09-01

    The aim of the study is to determine failure of the cemented polyethylene acetabular component, which might occur due to excessive bone resorption, cement-bone interface debonding and fatigue failure of the cement mantle. Three-dimensional finite element models of intact and implanted pelvic bone were developed and bone remodeling algorithm was implemented for present analysis. Soderberg fatigue failure diagram was used for fatigue assessment of the cement mantle. Hoffman failure criterion was considered for prediction of cement-bone interface debonding. Results indicate fatigue failure of the cement mantle and implant-bone interface debonding might not occur due to bone remodeling.

  3. Failure Assessment of Brazed Structures

    NASA Technical Reports Server (NTRS)

    Flom, Yuri

    2012-01-01

    Despite the great advances in analytical methods available to structural engineers, designers of brazed structures have great difficulty in addressing fundamental questions related to the load-carrying capabilities of brazed assemblies. In this chapter we will review why such common engineering tools as Finite Element Analysis (FEA), as well as many well-established theories (Tresca, von Mises, Highest Principal Stress, etc.), don't work well for brazed joints. This chapter will show how the classic approach of using interaction equations and the less known Coulomb-Mohr failure criterion can be employed to estimate Margins of Safety (MS) in brazed joints.
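    The Coulomb-Mohr criterion mentioned above accounts for materials whose tensile and compressive strengths differ. A minimal margin-of-safety sketch (Python; the linear load-line convention and all stress and strength values are hypothetical illustrations, not the chapter's worked numbers):

```python
def coulomb_mohr_ms(sigma_1, sigma_3, s_ut, s_uc):
    """Margin of Safety under the Coulomb-Mohr failure criterion.

    sigma_1 >= sigma_3: max/min principal stresses
    s_ut, s_uc: ultimate tensile and compressive strengths (both > 0)
    Failure is predicted when sigma_1/s_ut - sigma_3/s_uc >= 1.
    """
    usage = sigma_1 / s_ut - sigma_3 / s_uc
    return 1.0 / usage - 1.0   # MS > 0 means the joint is safe

# Hypothetical brazed-joint principal stress state (MPa)
ms = coulomb_mohr_ms(sigma_1=60.0, sigma_3=-40.0, s_ut=180.0, s_uc=360.0)
print(round(ms, 3))  # 1.25
```

    Because the compressive allowable enters separately, this criterion is less conservative in compression-dominated states than a symmetric theory such as von Mises applied with the tensile allowable alone.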

  4. Nanoparticles in the environment: assessment using the causal diagram approach

    PubMed Central

    2012-01-01

    Nanoparticles (NPs) cause concern for health and safety as their impact on the environment and humans is not known. Relatively few studies have investigated the toxicological and environmental effects of exposure to naturally occurring NPs (NNPs) and man-made or engineered NPs (ENPs) that are known to have a wide variety of effects once taken up into an organism. A review of recent knowledge (between 2000-2010) on NP sources, and their behaviour, exposure and effects on the environment and humans was performed. An integrated approach was used to comprise available scientific information within an interdisciplinary logical framework, to identify knowledge gaps and to describe environment and health linkages for NNPs and ENPs. The causal diagram has been developed as a method to handle the complexity of issues on NP safety, from their exposure to the effects on the environment and health. It gives an overview of available scientific information starting with common sources of NPs and their interactions with various environmental processes that may pose threats to both human health and the environment. Effects of NNPs on dust cloud formation and decrease in sunlight intensity were found to be important environmental changes with direct and indirect implication in various human health problems. NNPs and ENPs exposure and their accumulation in biological matrices such as microbiota, plants and humans may result in various adverse effects. The impact of some NPs on human health by ROS generation was found to be one of the major causes to develop various diseases. A proposed cause-effects diagram for NPs is designed considering both NNPs and ENPs. It represents a valuable information package and user-friendly tool for various stakeholders including students, researchers and policy makers, to better understand and communicate on issues related to NPs. PMID:22759495

  5. Assessment of Nonorganic Failure To Thrive.

    ERIC Educational Resources Information Center

    Wooster, Donna M.

    1999-01-01

    This article describes basic assessment considerations for infants and toddlers exhibiting nonorganic failure to thrive. The evaluation process must examine feeding, maternal-child interactions, child temperament, and environmental risks and behaviors. Early identification and intervention are necessary to minimize the long-term developmental…

  6. Insulation failure assessment under random energization overvoltages

    SciTech Connect

    Mahdy, A.M.; Anis, H.I.; El-Morshedy, A.

    1996-03-01

    This paper offers a new simple approach to the evaluation of the risk of failure of external insulation in view of its known probabilistic nature. This is applied to EHV transmission systems subjected to energization overvoltages. The randomness both in the applied stresses and in the insulation's withstand characteristics is numerically simulated and then integrated to assess the risk of failure. Overvoltage control methods are accounted for, such as the use of pre-insertion breaker resistors, series capacitive compensation, and the installation of shunt reactors.

  7. Failure detection system risk reduction assessment

    NASA Technical Reports Server (NTRS)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.

  8. Derivation of Failure Rates and Probability of Failures for the International Space Station Probabilistic Risk Assessment Study

    NASA Technical Reports Server (NTRS)

    Vitali, Roberto; Lutomski, Michael G.

    2004-01-01

    The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions but, more importantly, to compare different operational and management options, determine the lowest-risk option, and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probability of failures of the basic events employed in the PRA model of the ISS. The paper will show how a Bayesian approach was used with different sources of data, including the actual ISS on-orbit failures, to enhance confidence in the results of the PRA. As time progresses and more meaningful data are gathered from on-orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems, such as propulsion, thermal control, or power generation, into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement unit (ORU). The ORU level was chosen consistently with the level of statistically meaningful data that could be obtained from the aerospace industry and from experts in the field. For example, data were gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts, and the individual failure of these parts was not accounted for in the PRA model. In other words, the failure of a spring within a valve was considered a failure of the valve itself.
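    Bayesian updating of failure rates of the kind described here is commonly carried out with a conjugate gamma prior on a Poisson failure rate. Whether the ISS PRA used exactly this form is not stated in the abstract, so the sketch below is a generic illustration with hypothetical prior parameters and failure counts:

```python
def gamma_posterior(alpha_prior, beta_prior, failures, exposure_hours):
    """Conjugate Bayesian update for a Poisson failure process:
    rate ~ Gamma(alpha, beta) prior  ->  Gamma(alpha + k, beta + T)
    posterior after observing k failures in T hours of exposure."""
    return alpha_prior + failures, beta_prior + exposure_hours

# Hypothetical ORU: a diffuse industry prior updated with 2 on-orbit
# failures observed over 100,000 operating hours
alpha, beta = gamma_posterior(0.5, 1.0e5, failures=2, exposure_hours=1.0e5)
mean_rate = alpha / beta   # posterior mean failure rate per hour
print(mean_rate)  # 1.25e-05
```

    As more on-orbit exposure and failure counts accumulate, the posterior tightens around the observed rate, which is the convergence behavior the paper describes.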

  9. Assessment of treatment failure in endodontic therapy.

    PubMed

    Bergenholtz, G

    2016-10-01

    There is a paucity of guidelines for the dental profession to assess failure of endodontic therapy. While a successful treatment can be well defined by the absence of apical periodontitis and clinical symptoms after a period of observation, failed treatment has escaped a distinct definition over the years. This article highlights aspects of significance and concludes that research ought to better explore the general health implications of persistent apical periodontitis in root-filled teeth and finally confirm the extent to which there is an association between apical periodontitis and adverse systemic health effects. Clarifying this condition will determine whether clinicians should take a serious or relaxed attitude to persistent apical periodontitis subsequent to endodontic treatment. PMID:27519460

  10. [A toxicometric assessment of pneumonias and acute respiratory failure in poisonings].

    PubMed

    Iskandarov, A I

    1993-01-01

    The author analyzes clinical and morphologic manifestations of pneumonia and the conditions under which acute respiratory failure formed in 572 subjects who suffered poisoning with psychotropic and soporific drugs, chlorinated hydrocarbons, organophosphorus insecticides, caustic poisons, alcohol and its surrogates. Toxicometric (quantitative) assessment of the toxic effects and measurement of the toxin concentrations at which respiratory failure developed helped detect new mechanisms in the patho- and thanatogenesis of pneumonias and acute respiratory failure in poisonings. These data are of great interest for practical forensic medicine, since they permit substantiation of the causes of death in various types of poisonings. The diagram proposed by the author permits assessment of the initial chemical trauma from the clinical and morphologic picture of poisoning.

  11. Assessment of hoist failure rate for Payload Transporter III

    SciTech Connect

    Demmie, P.N.

    1994-02-01

    An assessment of the hoist failure rate for the Payload Transporter Type III (PT-III) hoist was completed as one of the ground transportation tasks for the Minuteman III (MMIII) Weapon System Safety Assessment. The failures of concern are those that lead to dropping a reentry system (RS) during hoist operations in a silo or in the assembly, storage, and inspection building of an MMIII wing. After providing a brief description of the PT-III hoist system, the author summarizes his search for historical data from industry and the military services on failures of electric hoist systems. Since such information was not found, the strategy for assessing a failure rate was to consider failure mechanisms that lead to load-drop accidents, estimate their rates, and sum them to obtain the PT-III hoist failure rate. The author discusses failure mechanisms and describes his assessment of a chain failure rate based on data from destructive testing of a chain of the type used in the PT-III hoist and on projected usage rates for hoist operations involving the RS. The main result provides upper bounds for chain failure rates based on these data. No test data were found to estimate failure rates due to mechanisms other than chain failure. The author did not attempt to quantify the effects of human factors on the PT-III hoist failure rate.
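Upper-bound estimates from a record with no observed failures, as the abstract's "upper bounds" suggest, are often computed from the binomial zero-failure case. A sketch under that assumption (trial count hypothetical, not the report's actual data):

```python
def zero_failure_upper_bound(n_trials, confidence=0.95):
    """Exact upper confidence bound on a per-demand failure probability
    when zero failures were observed in n_trials independent lifts:
    solve (1 - p)**n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

def rule_of_three(n_trials):
    """Common approximation of the 95% upper bound: 3 / n."""
    return 3.0 / n_trials

# Hypothetical: 1000 hoist lifts of the RS with no chain failures observed.
n = 1000
print(zero_failure_upper_bound(n))  # close to, and slightly below, 3/n
print(rule_of_three(n))
```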

  12. Potential application of influence diagram as a risk assessment tool in Brownfields sites

    SciTech Connect

    Attoh-Okine, N.O.

    1998-12-31

    Brownfields are vacant, abandoned, or underutilized commercial and industrial sites and facilities where real or perceived environmental contamination is an obstacle to redevelopment. These sites are vacant because they often do not meet the strict remediation requirements of the Superfund Law. The sites are accessible locations with much of the infrastructure, albeit deteriorated, in place; thus they also represent an opportunity to slow suburban and rural sprawl. As a liability, the concern stems from the environmental liability of both known and unknown site contamination. Influence diagrams are tools for representing complex decision problems based on incomplete and uncertain information from a variety of sources. They can be used to divide all uncertainties in a Brownfields site infrastructure impact assessment into subfactors until a level is reached at which intuitive judgments are most effective. Given the importance of the uncertainties and the utilities of the Brownfields infrastructure, the use of influence diagrams seems well suited to representing and solving the risks involved in Brownfields infrastructure assessment.

  13. Time-dependent Benioff strain release diagrams

    NASA Astrophysics Data System (ADS)

    Frid, V.; Goldbaum, J.; Rabinovitch, A.; Bahat, D.

    2011-04-01

    New time-dependent Benioff strain (TDBS) release diagrams were analyzed for acoustic emission during various loading tests and for electromagnetic (EM) radiation emanating during compression and tension tests that end in failure. TDBS diagrams are Benioff diagrams built consecutively, each time using a greater number of events (acoustic or EM emissions) and treating the last event as if it were associated with the 'actual failure'. An examination of such TDBS diagrams shows that at a certain time point (denoted the 'alarm' time), a comparatively short interval prior to actual collapse, their decreasing part is broken by a positive 'bulge'. This 'bulge' is quantified and an algorithm is proposed for its assessment. Using the alarm time and other parameters of the failure process (fall, bulge size and escalation factors, bulge slope and slope fall time), a criterion for estimating the time of actual collapse is developed and shown to agree well with laboratory experimental results.
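The building block of a TDBS diagram is the cumulative Benioff strain release curve, a running sum of the square roots of event energies. A minimal sketch with hypothetical event energies:

```python
import math

def benioff_curve(energies):
    """Cumulative Benioff strain release: running sum of the square
    roots of the event energies (acoustic or EM emissions)."""
    total, curve = 0.0, []
    for e in energies:
        total += math.sqrt(e)
        curve.append(total)
    return curve

# Hypothetical event energies (arbitrary units). A TDBS analysis rebuilds
# this curve after every new event, treating the latest event as if it
# were the 'actual failure'.
events = [1.0, 4.0, 9.0, 16.0, 25.0]
print(benioff_curve(events))  # [1.0, 3.0, 6.0, 10.0, 15.0]
```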

  14. Assessment of Steel Reinforcement Corrosion State by Parameters of Potentiodynamic Diagrams

    NASA Astrophysics Data System (ADS)

    Krajči, Ľudovít; Jerga, Ján

    2015-12-01

    The environment of the steel reinforcement has a significant impact on the durability and service life of a concrete structure. It is not only the presence of aggressive substances from the surroundings that matters, but also the composition of the concrete mixture itself. The use of new types of cements, additives and admixtures must be preceded by verification that they will not themselves initiate corrosion. There is a need for a closer physical interpretation of the parameters of potentiodynamic diagrams, allowing reliable assessment of the influence of the surrounding environment on the electrochemical behaviour of reinforcement. An analysis of the zero retardation limits of potentiodynamic curves is presented.

  15. Improving FMEA risk assessment through reprioritization of failures

    NASA Astrophysics Data System (ADS)

    Ungureanu, A. L.; Stan, G.

    2016-08-01

    Most current methods used to assess failures and to identify industrial equipment defects are based on the determination of a Risk Priority Number (RPN). Although the conventional RPN calculation is easy to understand and use, the methodology has some limitations, such as the large number of duplicate values and the difficulty of assessing the RPN indices. In order to eliminate the aforementioned shortcomings, this paper puts forward an easy and efficient computing method, called Failure Developing Mode and Criticality Analysis (FDMCA), which takes into account failures and the evolution of defects over time, from first appearance to breakdown.
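The conventional RPN calculation, and the duplicate-value limitation the authors cite, can be illustrated in a few lines (failure modes and index values hypothetical):

```python
def rpn(severity, occurrence, detection):
    """Conventional FMEA Risk Priority Number; each index on a 1-10 scale."""
    return severity * occurrence * detection

# Hypothetical failure modes illustrating the duplicate-RPN problem:
# three very different modes receive the same priority number.
modes = {
    "bearing wear": (7, 4, 3),
    "seal leak":    (4, 7, 3),
    "sensor drift": (3, 4, 7),
}
ranked = sorted(modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
for name, (s, o, d) in ranked:
    print(name, rpn(s, o, d))  # all three tie at RPN = 84
```

The tie shows why plain RPN ranking cannot separate a high-severity mode from a hard-to-detect one, which is the gap a reprioritization scheme such as FDMCA aims to close.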

  16. Common-Cause Failure Analysis in Event Assessment

    SciTech Connect

    Dana L. Kelly; Dale M. Rasmuson

    2008-09-01

    This paper describes the approach taken by the U.S. Nuclear Regulatory Commission (NRC) to the treatment of common-cause failure in the probabilistic risk assessment of operational events. The approach is based upon the Basic Parameter Model for common-cause failure, and examples are illustrated using the alpha-factor parameterization, the approach adopted by the NRC in its Standardized Plant Analysis Risk (SPAR) models. The cases of a failed component (with and without shared common-cause failure potential) and of a component being unavailable due to preventive maintenance or testing are addressed. The treatment of two related failure modes (e.g., failure to start and failure to run) is a new feature of this paper. These methods are being applied by the NRC in assessing the risk significance of operational events for the Significance Determination Process (SDP) and the Accident Sequence Precursor (ASP) program.
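A sketch of the alpha-factor parameterization for a common-cause group, using the standard non-staggered-testing formula from the Basic Parameter Model (alpha values and q_total are hypothetical, not SPAR data):

```python
from math import comb

def ccf_basic_event_probs(alphas, q_total):
    """Alpha-factor model (non-staggered testing): probability of a basic
    event failing exactly k of m redundant components,
      Q_k = k * alpha_k / (C(m-1, k-1) * alpha_t) * Q_t,
    where alphas[k-1] = alpha_k for k = 1..m, alpha_t = sum(k * alpha_k),
    and q_total is the total failure probability of one component."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return [k * alphas[k - 1] / (comb(m - 1, k - 1) * alpha_t) * q_total
            for k in range(1, m + 1)]

# Hypothetical alpha factors for a group of three redundant pumps.
q = ccf_basic_event_probs([0.95, 0.03, 0.02], q_total=1.0e-3)
print(q)  # [Q1 (independent), Q2, Q3 (all three fail together)]
```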

  17. Reliability assessment on interfacial failure of thermal barrier coatings

    NASA Astrophysics Data System (ADS)

    Guo, Jin-Wei; Yang, Li; Zhou, Yi-Chun; He, Li-Min; Zhu, Wang; Cai, Can-Ying; Lu, Chun-Sheng

    2016-08-01

    Thermal barrier coatings (TBCs) usually exhibit an uncertain lifetime owing to their scattering mechanical properties and severe service conditions. To consider these uncertainties, a reliability assessment method is proposed based on failure probability analysis. First, a limit state equation is established to demarcate the boundary between failure and safe regions, and then the failure probability is calculated by the integration of a probability density function in the failure area according to the first- or second-order moment. It is shown that the parameters related to interfacial failure follow a Weibull distribution in two types of TBC. The interfacial failure of TBCs is significantly affected by the thermal mismatch of material properties and the temperature drop in service.
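The failure-probability calculation from a limit state equation can be sketched as follows. Note the paper fits Weibull distributions; this first-order sketch assumes independent normal variables for brevity, and all values are hypothetical:

```python
import math

def first_order_failure_prob(mu_r, sd_r, mu_s, sd_s):
    """First-order failure probability for the limit state g = R - S with
    independent normal resistance R and load S:
    beta = mu_g / sd_g, Pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # standard normal CDF at -beta
    return beta, pf

# Hypothetical interfacial strength vs. thermal-mismatch stress (MPa).
beta, pf = first_order_failure_prob(mu_r=400.0, sd_r=40.0, mu_s=250.0, sd_s=30.0)
print(beta, pf)  # beta = 3.0, Pf about 1.35e-3
```

Failure occurs where g < 0, so the integration of the probability density over the failure region collapses, to first order, into a single reliability index beta.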

  18. Ranking of ecotoxicity tests for underground water assessment using the Hasse diagram technique.

    PubMed

    Kudłak, Błażej; Tsakovski, Stefan; Simeonov, Vasil; Sagajdakow, Agnieszka; Wolska, Lidia; Namieśnik, Jacek

    2014-01-01

    The present study deals with a novel application of the Hasse diagram technique (HDT) to the ranking of ecotoxicity tests for the assessment of underground water quality. The area studied is a multi-municipal landfill in northern Poland. The monitoring network of the landfill consists of 27 piezometers for underground water monitoring and two observation points at surface water courses. After sampling, chemical analysis of various water parameters was performed (pH, conductivity, temperature, turbidity (TURB), color, taste, smell and atmospheric conditions: temperature, precipitation and cloud cover, heavy metals content (Cu, Zn, Pb, Cd, Cr(6+), Hg), total organic carbon (TOC), sum of Polycyclic Aromatic Hydrocarbons (PAHs), Na, Mg, K, Ca, Mn, Fe, Ni, alkalinity (Alkal), general hardness, total suspended matter (SUSP), Biological Oxygen Demand (BOD), Chemical Oxygen Demand (COD), chlorides, fluorides, sulphides, sulphates, ammonium nitrogen, total nitrogen, nitrate and nitrite nitrogen, volatile phenols, ether extracts (ETHER), dry residues (DRY_RES), dissolved compounds). In parallel with the chemical assessment, six different ecotoxicity tests were applied (% root length (PG) / germination (PR) inhibition of Sorghum saccharatum (PGSS/PRSS), Sinapis alba (PGSA/PRSA), and Lepidium sativum (PGLS/PRLS); % bioluminescence inhibition of Vibrio fischeri (MT); % mortality of Daphnia magna (DM); % mortality of Thamnocephalus platyrus (TN)). In order to determine the applicability of the various ecotoxicity tests, samples from different monitoring levels were ranked according to the tests used (attributes) by means of HDT. Further, the sensitivity of the biotests was determined and compared. From the sensitivity analysis of both monitoring levels it was evident that the choice of ecotoxicity tests could be optimized by use of the HDT strategy. 
Most reliable results could be expected by the application of root

  20. Development and validation of standard area diagrams to aid assessment of pecan scab symptoms on pecan fruit

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Pecan scab (Fusicladium effusum) causes losses of pecan nutmeat yield and quality in the southeastern U.S. Disease assessment relies on visual rating, which can be inaccurate and imprecise, with poor inter-rater reliability. A standard area diagram (SAD) set for pecan scab on fruit valves was develope...

  1. Failure rate data for fusion safety and risk assessment

    SciTech Connect

    Cadwallader, L.C.

    1993-01-01

    The Fusion Safety Program (FSP) at the Idaho National Engineering Laboratory (INEL) conducts safety research in materials, chemical reactions, safety analysis, risk assessment, and in component research and development to support existing magnetic fusion experiments and also to promote safety in the design of future experiments. One of the areas of safety research is applying probabilistic risk assessment (PRA) methods to fusion experiments. To apply PRA, we need a fusion-relevant radiological dose code and a component failure rate data base. This paper describes the FSP effort to develop a failure rate data base for fusion-specific components.

  2. Failure rate data for fusion safety and risk assessment

    SciTech Connect

    Cadwallader, L.C.

    1993-04-01

    The Fusion Safety Program (FSP) at the Idaho National Engineering Laboratory (INEL) conducts safety research in materials, chemical reactions, safety analysis, risk assessment, and in component research and development to support existing magnetic fusion experiments and also to promote safety in the design of future experiments. One of the areas of safety research is applying probabilistic risk assessment (PRA) methods to fusion experiments. To apply PRA, we need a fusion-relevant radiological dose code and a component failure rate data base. This paper describes the FSP effort to develop a failure rate data base for fusion-specific components.

  3. Methods for Assessing Honeycomb Sandwich Panel Wrinkling Failures

    NASA Technical Reports Server (NTRS)

    Zalewski, Bart F.; Dial, William B.; Bednarcyk, Brett A.

    2012-01-01

    Efficient closed-form methods for predicting the facesheet wrinkling failure mode in sandwich panels are assessed. Comparisons were made with finite element model predictions for facesheet wrinkling, and a validated closed-form method was implemented in the HyperSizer structure sizing software.

  4. Methods for assessing nutritional status of patients with renal failure.

    PubMed

    Blumenkrantz, M J; Kopple, J D; Gutman, R A; Chan, Y K; Barbour, G L; Roberts, C; Shen, F H; Gandhi, V C; Tucker, C T; Curtis, F K; Coburn, J W

    1980-07-01

    Since wasting and malnutrition are common problems in patients with renal failure, it is important to develop techniques for the longitudinal assessment of nutritional status. This paper reviews available methods for assessing nutritional status; their possible limitations when applied to uremic patients are discussed. If carefully done, dietary intake can be estimated by recall interviews augmented with dietary diaries. Also, in a stable patient with chronic renal failure, the serum urea nitrogen (N)/creatinine ratio and the rate of urea N appearance reflect dietary protein intake. A comparison of N intake and urea N appearance will give an estimate of N balance. Anthropometric parameters such as the relationship between height and weight, thickness of subcutaneous skinfolds, and midarm muscle circumference are simple methods for evaluating body composition. Other methods for assessing body composition, such as densitometry and total body potassium, may not be readily applicable in patients with renal failure. More traditional biochemical estimates of nutritional status such as serum protein, albumin, transferrin, and selected serum complement determinations show that abnormalities are common among uremic patients. Certain anthropometric and biochemical measurements of nutritional status are abnormal in chronically uremic patients who appear to be particularly robust; thus, factors other than altered nutritional intake may lead to abnormal parameters in such patients. Serial monitoring of selected nutritional parameters in the same individual may improve the sensitivity of these measurements to detect changes. Standards for measuring nutritional status are needed for patients with renal failure so that realistic goals can be established for optimal body nutriture.
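The urea-N comparison described above reduces to simple arithmetic. A sketch (the 16% nitrogen content of dietary protein is a standard conversion; the fixed non-urea N loss term is a simplifying assumption, not the paper's method):

```python
def urea_n_appearance(urinary_urea_n_g, change_in_body_urea_n_g=0.0):
    """Urea nitrogen appearance (g/day): urinary urea N plus any change
    in the body urea N pool (relevant when serum urea N is not steady)."""
    return urinary_urea_n_g + change_in_body_urea_n_g

def nitrogen_balance(protein_intake_g, una_g, non_urea_n_g=2.0):
    """Rough N balance (g/day): dietary N (protein is ~16% nitrogen) minus
    urea N appearance minus an assumed fixed non-urea N loss term."""
    n_intake = protein_intake_g * 0.16
    return n_intake - una_g - non_urea_n_g

# Hypothetical stable patient: 70 g protein/day and a UNA of 7.5 g/day.
print(nitrogen_balance(70.0, urea_n_appearance(7.5)))  # positive N balance
```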

  5. Methods of failure and reliability assessment for mechanical heart pumps.

    PubMed

    Patel, Sonna M; Allaire, Paul E; Wood, Houston G; Throckmorton, Amy L; Tribble, Curt G; Olsen, Don B

    2005-01-01

    Artificial blood pumps are today's most promising bridge-to-recovery (BTR), bridge-to-transplant (BTT), and destination therapy solutions for patients suffering from intractable congestive heart failure (CHF). Due to an increased need for effective, reliable, and safe long-term artificial blood pumps, each new design must undergo failure and reliability testing, an important step prior to approval from the United States Food and Drug Administration (FDA), for clinical testing and commercial use. The FDA has established no specific standards or protocols for these testing procedures and there are only limited recommendations provided by the scientific community when testing an overall blood pump system and individual system components. Product development of any medical device must follow a systematic and logical approach. As the most critical aspects of the design phase, failure and reliability assessments aid in the successful evaluation and preparation of medical devices prior to clinical application. The extent of testing, associated costs, and lengthy time durations to execute these experiments justify the need for an early evaluation of failure and reliability. During the design stages of blood pump development, a failure modes and effects analysis (FMEA) should be completed to provide a concise evaluation of the occurrence and frequency of failures and their effects on the overall support system. Following this analysis, testing of any pump typically involves four sequential processes: performance and reliability testing in simple hydraulic or mock circulatory loops, acute and chronic animal experiments, human error analysis, and ultimately, clinical testing. This article presents recommendations for failure and reliability testing based on the National Institutes of Health (NIH), Society for Thoracic Surgeons (STS) and American Society for Artificial Internal Organs (ASAIO), American National Standards Institute (ANSI), the Association for Advancement of

  6. The Assessment of the Right Heart Failure Syndrome.

    PubMed

    Kholdani, Cyrus A; Oudiz, Ronald J; Fares, Wassim H

    2015-12-01

    The right heart failure (RHF) syndrome is a pathophysiologically complex state commonly associated with dysfunction of the right ventricle (RV). The normal RV is suited for its purposes of distributing venous blood to the low-resistance pulmonary circulation. Myriad stresses imposed upon it, though, can ultimately result in its failure, with the threat of cardiovascular collapse being the most dreaded outcome. Decreased cardiac output with increased central venous pressures are hemodynamic hallmarks of this highly morbid condition. Proper management of RHF is predicated on the accurate assessment of the key hemodynamic and clinical components signaling the syndrome that is the result of the failing RV. Appropriate use of diagnostic tools is paramount for understanding the key components of RV function: the preload state of the RV, its contractility, and the afterload burden placed on it. In making these assessments, it remains crucial to understand the limitations of these tools when managing RHF in the intensive care unit. An understanding of each of these components allows for the understanding of the physiology and the clinical presentation which can guide the use of therapies appropriately tailored to manage the condition. PMID:26595052

  7. Selected Component Failure Rate Values from Fusion Safety Assessment Tasks

    SciTech Connect

    Cadwallader, Lee Charles

    1998-09-01

    This report is a compilation of component failure rate and repair rate values that can be used in magnetic fusion safety assessment tasks. Several safety systems are examined, such as gas cleanup systems and plasma shutdown systems. Vacuum system component reliability values, including large vacuum chambers, have been reviewed. Values for water cooling system components have also been reported here. The report concludes with the examination of some equipment important to personnel safety, atmospheres, combustible gases, and airborne releases of radioactivity. These data should be useful to system designers to calculate scoping values for the availability and repair intervals for their systems, and for probabilistic safety or risk analysts to assess fusion systems for safety of the public and the workers.

  8. Selected component failure rate values from fusion safety assessment tasks

    SciTech Connect

    Cadwallader, L.C.

    1998-09-01

    This report is a compilation of component failure rate and repair rate values that can be used in magnetic fusion safety assessment tasks. Several safety systems are examined, such as gas cleanup systems and plasma shutdown systems. Vacuum system component reliability values, including large vacuum chambers, have been reviewed. Values for water cooling system components have also been reported here. The report concludes with the examination of some equipment important to personnel safety, atmospheres, combustible gases, and airborne releases of radioactivity. These data should be useful to system designers to calculate scoping values for the availability and repair intervals for their systems, and for probabilistic safety or risk analysts to assess fusion systems for safety of the public and the workers.

  9. Assessment of facial form modifications in orthodontics: proposal of a modified computerized mesh diagram analysis.

    PubMed

    Ferrario, V F; Sforza, C; Dalloca, L L; DeFranco, D J

    1996-03-01

    A modified computerized mesh diagram analysis enables rapid, easy, and independent quantifications of facial size, shape, and head position. A lateral cephalometric radiograph is traced, positioned in natural head position (NHP) according to the NHP registered on a lateral photograph, and superimposed by the standard grid of rectangles determined by the patient's upper face height and face depth. The patient's tracing is compared with a reference tracing of a norm. The two tracings are registered at nasion in NHP, and the size difference of the patient's mesh rectangles from the norm is quantified by a size normalization. The shape discrepancies are evaluated by calculating relevant displacement vectors for each cephalometric landmark, and a global difference factor. This computerized method provides a rapid graphic evaluation of the patient's sagittal and vertical discrepancies and can also be applied to determine the effects of orthodontic therapy and/or growth on the skeletal and dental relationships in a single patient.

  10. Ab-initio calculations and phase diagram assessments of An-Al systems (An = U, Np, Pu)

    NASA Astrophysics Data System (ADS)

    Sedmidubský, D.; Konings, R. J. M.; Souček, P.

    2010-02-01

    The enthalpies of formation of the binary intermetallic compounds AnAl_n (n = 2, 3, 4; An = U, Np, Pu) were assessed from first-principles calculations of total energies performed using the full-potential APW+lo technique within density functional theory (WIEN2k). The substantial contribution to the entropies, S°298, arising from lattice vibrations was calculated by the direct method within the harmonic crystal approximation (Phonon software + VASP for obtaining the Hellmann-Feynman forces). The electronic heat capacity and the corresponding contribution to the entropy were estimated from the density of states at the Fermi level obtained from electronic structure calculations. The phase diagrams of the relevant An-Al systems were calculated from the thermodynamic data assessed from the ab-initio calculations, known equilibrium data, and calorimetry data by employing the FactSage program.
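Electronic contributions estimated from the density of states at the Fermi level are commonly taken from the Sommerfeld low-temperature relations; a sketch in the standard textbook form (not necessarily the exact expressions used by the authors):

```latex
C_{el}(T) = \gamma T, \qquad
\gamma = \frac{\pi^{2}}{3}\, k_{B}^{2}\, N(E_F), \qquad
S_{el}(T) = \int_{0}^{T} \frac{C_{el}(T')}{T'}\, dT' = \gamma T .
```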

  11. Comparison of Body Composition Assessment Methods in Pediatric Intestinal Failure

    PubMed Central

    Mehta, Nilesh M.; Raphael, Bram; Guteirrez, Ivan; Quinn, Nicolle; Mitchell, Paul D.; Litman, Heather J.; Jaksic, Tom; Duggan, Christopher P.

    2015-01-01

    Objectives To examine the agreement of multifrequency bioelectric impedance analysis (BIA) and anthropometry with reference methods for body composition assessment in children with intestinal failure (IF). Methods We conducted a prospective pilot study in children 14 years of age or younger with IF resulting from either short bowel syndrome (SBS) or motility disorders. Bland-Altman analysis was used to examine the agreement between BIA and deuterium dilution in measuring total body water (TBW) and lean body mass (LBM), and between BIA and dual X-ray absorptiometry (DXA) techniques in measuring LBM and fat mass (FM). FM and percent body fat (%BF) measurements by BIA and anthropometry were also compared in relation to those measured by deuterium dilution. Results Fifteen children with IF, median (IQR) age 7.2 (5.0, 10.0) years, 10 (67%) male, were studied. BIA and deuterium dilution were in good agreement with a mean bias (limits of agreement) of 0.9 (-3.2, 5.0) for TBW (L) and 0.1 (-5.4 to 5.6) for LBM (kg) measurements. The mean bias (limits) for FM (kg) and %BF measurements were 0.4 (-3.8, 4.6) kg and 1.7 (-16.9, 20.3)% respectively. The limits of agreement were within 1 SD of the mean bias in 12/14 (86%) subjects for TBW and LBM, and in 11/14 (79%) for FM and %BF measurements. Mean bias (limits) for LBM (kg) and FM (kg) between BIA and DXA were 1.6 (-3.0 to 6.3) kg and -0.1 (-3.2 to 3.1) kg, respectively. Mean bias (limits) for FM (kg) and %BF between anthropometry and deuterium dilution were 0.2 (-4.2, 4.6) and -0.2 (-19.5 to 19.1), respectively. The limits of agreement were within 1 SD of the mean bias in 10/14 (71%) subjects. Conclusions In children with intestinal failure, TBW and LBM measurements by the multifrequency BIA method were in agreement with isotope dilution and DXA methods, with small mean bias. In comparison to deuterium dilution, BIA was comparable to anthropometry for FM and %BF assessments with small mean bias. However, the limits of agreement
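Bland-Altman agreement analysis reduces to the mean difference between methods and its 95% limits of agreement. A sketch with hypothetical TBW data, not the study's measurements:

```python
import statistics as st

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical TBW (litres) in five children: BIA vs. deuterium dilution.
bia = [20.1, 18.4, 25.0, 22.3, 19.8]
ref = [19.5, 18.9, 24.1, 21.8, 20.2]
bias, (lo, hi) = bland_altman(bia, ref)
print(f"bias {bias:.2f} L, limits of agreement ({lo:.2f}, {hi:.2f}) L")
```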

  12. A failure diagnosis and impact assessment prototype for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Baker, Carolyn G.; Marsh, Christopher A.

    1991-01-01

    NASA is investigating the use of advanced automation to enhance crew productivity for Space Station Freedom in numerous areas, one being failure management. A prototype is described that diagnoses failure sources and assesses the future impacts of those failures on other Freedom entities.

  13. Thermodynamic Diagrams

    NASA Astrophysics Data System (ADS)

    Chaston, Scot

    1999-02-01

    Thermodynamic data such as equilibrium constants, standard cell potentials, molar enthalpies of formation, and standard entropies of substances can be a very useful basis for an organized presentation of knowledge in diverse areas of applied chemistry. Thermodynamic data can become particularly useful when incorporated into thermodynamic diagrams that are designed to be easy to recall, to serve as a basis for reconstructing previous knowledge, and to determine whether reactions can occur exergonically or only with the help of an external energy source. Few students in our chemistry-based courses would want to acquire the depth of knowledge or rigor of professional thermodynamicists. But they should nevertheless learn how to make good use of thermodynamic data in their professional occupations that span the chemical, biological, environmental, and medical laboratory fields. This article discusses examples of three thermodynamic diagrams that have been developed for this purpose. They are the thermodynamic energy account (TEA), the total entropy scale, and the thermodynamic scale diagrams. These diagrams help in the teaching and learning of thermodynamics by bringing the imagination into the process of developing a better understanding of abstract thermodynamic functions, and by allowing the reader to keep track of specialist thermodynamic discourses in the literature.
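The exergonic-or-not question mentioned above reduces to the sign of the standard Gibbs energy, linked to an equilibrium constant by the standard relation ΔG° = −RT ln K. A minimal sketch:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def delta_g_from_k(K, T=298.15):
    """Standard Gibbs energy change (J/mol) from an equilibrium constant."""
    return -R * T * math.log(K)

def is_exergonic(K, T=298.15):
    """A reaction with K > 1 has negative Delta-G and proceeds exergonically."""
    return delta_g_from_k(K, T) < 0.0

print(delta_g_from_k(1.0e5) / 1000.0)  # about -28.5 kJ/mol
print(is_exergonic(1.0e-3))            # False: needs an external energy source
```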

  14. Cutting Up Text To Make Moveable, Magnetic Diagrams: A Way of Teaching and Assessing Biological Processes.

    ERIC Educational Resources Information Center

    Britton, Lynda A.; Wandersee, James H.

    1997-01-01

    Describes a method of cutting up pages in texts to make moveable, magnetic cards that can be used for instruction and assessment. Uses the diagrammatic illustrations associated with the complicated biotechnology procedures of the polymerase chain reaction and restriction fragment length polymorphism. (JRH)

  15. Hubble Diagram

    NASA Astrophysics Data System (ADS)

    Djorgovski, S.; Murdin, P.

    2000-11-01

    Initially introduced as a way to demonstrate the expansion of the universe, and subsequently to determine the expansion rate (the HUBBLE CONSTANT H0), the Hubble diagram is one of the classical cosmological tests. It is a plot of apparent fluxes (usually expressed as magnitudes) of some types of objects at cosmological distances, against their REDSHIFTS. It is used as a tool to measure the glob...

  16. Consequence assessment of large rock slope failures in Norway

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Hermanns, Reginald L.; Horton, Pascal; Sandøy, Gro; Roberts, Nicholas J.; Jaboyedoff, Michel; Böhme, Martina; Yugsi Molina, Freddy X.

    2014-05-01

    Steep glacially carved valleys and fjords in Norway are prone to many landslide types, including large rockslides, rockfalls, and debris flows. Large rockslides and their secondary effects (rockslide-triggered displacement waves, inundation behind landslide dams, and outburst floods from failure of landslide dams) pose a significant hazard to the population living in the valleys and along the fjord shorelines. The Geological Survey of Norway performs systematic mapping of unstable rock slopes in Norway and has detected more than 230 unstable slopes with significant postglacial deformation. This large number necessitates prioritisation of follow-up activities, such as more detailed investigations, periodic displacement measurements, continuous monitoring and early-warning systems. Prioritisation is achieved through a hazard and risk classification system, which has been developed by a panel of international and Norwegian experts (www.ngu.no/en-gb/hm/Publications/Reports/2012/2012-029). The risk classification system combines a qualitative hazard assessment with a consequence assessment focusing on potential loss of life. The hazard assessment is based on a series of nine geomorphological, engineering geological and structural criteria, as well as displacement rates, past events and other signs of activity. We present a method for consequence assessment comprising four main steps: 1. computation of the volume of the unstable rock slope; 2. run-out assessment based on the volume-dependent angle of reach (Fahrböschung) or detailed numerical run-out modelling; 3. assessment of possible displacement wave propagation and run-up based on empirical relations or modelling in 2D or 3D; and 4. estimation of the number of persons exposed to rock avalanches or displacement waves. 
Volume computation of an unstable rock slope is based on the sloping local base level technique, which uses a digital elevation model to create a second-order curved surface between the mapped extent of
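    The run-out step (step 2) can be sketched with a volume-dependent angle of reach. The following is a minimal illustration assuming a Scheidegger-type regression log10(H/L) = a·log10(V) + b; the coefficients and the 800 m drop height are illustrative assumptions, not the calibrated values used in the Norwegian assessments.

```python
import math

def angle_of_reach_deg(volume_m3):
    """Volume-dependent angle of reach (Fahrboeschung) for rock avalanches,
    from a Scheidegger-type regression log10(H/L) = a*log10(V) + b.
    The coefficients are illustrative assumptions."""
    a, b = -0.15666, 0.62419  # assumed regression coefficients
    tan_alpha = 10.0 ** (a * math.log10(volume_m3) + b)
    return math.degrees(math.atan(tan_alpha))

def runout_length_m(drop_height_m, volume_m3):
    """Horizontal run-out length L = H / tan(alpha)."""
    alpha_rad = math.radians(angle_of_reach_deg(volume_m3))
    return drop_height_m / math.tan(alpha_rad)

for v in (1e5, 1e6, 1e7):  # unstable-volume scenarios, m^3
    print(f"V = {v:.0e} m^3: angle of reach {angle_of_reach_deg(v):.1f} deg, "
          f"run-out {runout_length_m(800.0, v):.0f} m from an 800 m drop")
```

    Larger volumes give lower reach angles and therefore longer run-out, which is why the volume computation of step 1 feeds directly into step 2.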

  17. Reconciling Streamflow Uncertainty Estimation and River Bed Morphology Dynamics. Insights from a Probabilistic Assessment of Streamflow Uncertainties Using a Reliability Diagram

    NASA Astrophysics Data System (ADS)

    Morlot, T.; Mathevet, T.; Perret, C.; Favre Pugin, A. C.

    2014-12-01

    Streamflow uncertainty estimation has recently received considerable attention in the literature. A dynamic rating curve assessment method has been introduced (Morlot et al., 2014). This dynamic method computes a rating curve for each gauging and a continuous streamflow time series, while calculating streamflow uncertainties. Streamflow uncertainty takes into account many sources of uncertainty (water level, rating curve interpolation and extrapolation, gauging aging, etc.) and produces an estimated distribution of streamflow for each day. In order to characterize streamflow uncertainty, a probabilistic framework has been applied to a large sample of hydrometric stations of the Division Technique Générale (DTG) of Électricité de France (EDF) hydrometric network (>250 stations) in France. A reliability diagram (Wilks, 1995) has been constructed for some stations, based on the streamflow distribution estimated for a given day and compared to a real streamflow observation estimated via a gauging. To build a reliability diagram, we computed the probability of an observed streamflow (gauging), given the streamflow distribution. The reliability diagram then allows one to check that the distribution of probabilities of non-exceedance of the gaugings follows a uniform law (i.e., quantiles should be equiprobable). Given the shape of the reliability diagram, the probabilistic calibration is characterized (underdispersion, overdispersion, bias) (Thyer et al., 2009). In this paper, we present case studies where reliability diagrams have different statistical properties for different periods. Compared to our knowledge of the river bed morphology dynamics of these hydrometric stations, we show how the reliability diagram gives us invaluable information on river bed movements, such as continuous digging or backfilling of the hydraulic control due to erosion or sedimentation processes. Hence, careful analysis of reliability diagrams allows us to reconcile statistics and long
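    The reliability-diagram check described above can be sketched as follows. This is a minimal illustration with synthetic Gaussian daily streamflow distributions (an assumption made purely for the demonstration, not EDF's rating-curve model): each gauging is mapped to its probability of non-exceedance under that day's estimated distribution, and the empirical frequencies are compared to the 1:1 line.

```python
import math
import random

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma**2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def reliability_diagram(pit_values, n_bins=10):
    """Observed non-exceedance frequency of the PIT values at nominal
    probability levels; a well-calibrated model plots on the 1:1 line."""
    nominal = [(k + 1) / n_bins for k in range(n_bins)]
    observed = [sum(p <= q for p in pit_values) / len(pit_values)
                for q in nominal]
    return nominal, observed

random.seed(1)
# Synthetic check: each day's estimated streamflow distribution is
# N(mu_i, sigma_i) and the gauging is drawn from the same law, so the
# PIT values should be approximately uniform.
pit = []
for _ in range(5000):
    mu, sigma = random.uniform(5.0, 50.0), random.uniform(1.0, 5.0)
    pit.append(normal_cdf(random.gauss(mu, sigma), mu, sigma))

nominal, observed = reliability_diagram(pit)
max_dev = max(abs(n - o) for n, o in zip(nominal, observed))
print(f"max deviation from the 1:1 line: {max_dev:.3f}")
```

    Systematic departures from the diagonal would signal underdispersion, overdispersion, or bias, as in the abstract's classification.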

  18. Heart failure in 3 patients with acromegaly: echocardiographic assessment.

    PubMed

    Aono, J; Nobuoka, S; Nagashima, J; Hatano, S; Yoshida, A; Ando, H; Miyake, F; Murayama, M

    1998-07-01

    We evaluated 3 patients with acromegaly who developed heart failure. Heart failure appeared to be due to acromegalic cardiomyopathy in 2 patients who did not have hypertension or evidence of coronary artery disease, and it was possibly due to acromegalic cardiomyopathy combined with familial hypertrophic cardiomyopathy in 1 patient. The common echocardiographic findings in the present three cases were: 1) enlargement of the left atrium, 2) markedly dilated left ventricular cavity with diffuse hypokinesis, 3) decrease of indices of the left ventricular systolic function, and 4) no evidence of left ventricular hypertrophy. Echocardiographic findings in acromegaly with congestive heart failure resemble those of idiopathic dilated cardiomyopathy.

  19. Development of failure scenarios for biosolids land application risk assessment.

    PubMed

    Galada, Heather C; Gurian, Patrick L; Olson, Mira S; Teng, Jingjie; Kumar, Arun; Wardell, Michael; Eggers, Sara; Casman, Elizabeth

    2013-02-01

    Although deviations from standard guidance for land application of biosolids occur in practice, their importance is largely unknown. A list of such deviations (plausible failure scenarios) was identified at a workshop of industry, regulators, and academic professionals. Next, a survey of similar professionals was conducted to rank the plausible failure scenarios according to their severity, frequency, incentive to ignore control measures, gaps in existing control processes, public concern, and overall concern. Survey participants rated intentional dumping (unpermitted disposal) as the most severe of the failure scenarios, lack of worker protection as the most frequent scenario, and application of Class A biosolids that have failed to meet treatment standards as the scenario for which incentives to ignore control measures are highest. Failure of public access restrictions to application sites was the scenario for which existing controls were judged the weakest; application of biosolids too close to wells was ranked highest for public concern and for overall concern. Two scenarios for which existing controls were considered weaker, site restriction violations and animal contact leading to human exposure, were also rated as frequently occurring. Both scenarios are related in that they (1) involve inappropriate access to a site before the required time has elapsed, and (2) could be addressed through similar biosolids management measures. PMID:23472330

  20. Proof test diagrams for Zerodur glass-ceramic

    NASA Technical Reports Server (NTRS)

    Tucker, D. S.

    1991-01-01

    Proof test diagrams for Zerodur glass-ceramics are calculated from available fracture mechanics data. It is shown that the environment has a large effect on minimum time-to-failure as predicted by proof test diagrams.
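    The lifetime guarantee behind a proof test diagram is commonly written t_min = B·σp^(N−2)/σa^N, where σp is the proof stress, σa the service stress, and N and B are slow-crack-growth parameters that depend strongly on the environment. A sketch with illustrative parameter values (assumptions, not measured Zerodur data) shows the environment effect the abstract refers to:

```python
def min_time_to_failure(sigma_proof, sigma_service, n, b):
    """Minimum time-to-failure guaranteed by surviving a proof test,
    t_min = B * sigma_p**(N-2) / sigma_a**N (power-law slow crack
    growth). N and B are environment-dependent; the values used below
    are illustrative assumptions."""
    return b * sigma_proof ** (n - 2) / sigma_service ** n

# Same proof ratio sigma_p / sigma_a = 2 in two assumed environments:
t_dry = min_time_to_failure(100.0, 50.0, n=50, b=1e-3)    # inert/dry: high N
t_humid = min_time_to_failure(100.0, 50.0, n=30, b=1e-3)  # humid: lower N
print(f"t_min(dry) / t_min(humid) = {t_dry / t_humid:.3g}")
```

    Because the proof ratio enters as (σp/σa)^N, even a modest drop in N in an aggressive environment shrinks the guaranteed lifetime by many orders of magnitude.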

  1. Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model

    SciTech Connect

    Dana Kelly; Song-Hua Shen; Gary DeMoss; Kevin Coyne; Don Marksberry

    2010-06-01

    Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event’s risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.
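    The Basic Parameter Model baseline that the paper builds on can be illustrated with the standard mapping from alpha factors to basic-event probabilities (the non-staggered-testing form given in NUREG/CR-5485). The alpha factors and total failure probability below are illustrative assumptions, not values from the paper:

```python
from math import comb

def bpm_from_alpha(alphas, q_t):
    """Basic Parameter Model probabilities Q_k for a common-cause group of
    m components from alpha factors (non-staggered-testing formula):
        Q_k = k / C(m-1, k-1) * alpha_k / alpha_t * Q_t,
    where alpha_t = sum_k k * alpha_k and q_t is the total failure
    probability of a single component."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return [k / comb(m - 1, k - 1) * a / alpha_t * q_t
            for k, a in enumerate(alphas, start=1)]

# Illustrative (assumed) alpha factors for a 3-component group:
alphas = [0.95, 0.03, 0.02]   # alpha_1, alpha_2, alpha_3
Q = bpm_from_alpha(alphas, q_t=1e-3)
for k, q_k in enumerate(Q, start=1):
    print(f"Q_{k} = {q_k:.3e}")
```

    In event assessment the issue discussed in the paper is precisely that, after an observed failure, these Q_k cannot simply be left at such baseline values.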

  2. A general cause based methodology for analysis of dependent failures in system risk and reliability assessments

    NASA Astrophysics Data System (ADS)

    O'Connor, Andrew N.

    Traditional parametric Common Cause Failure (CCF) models quantify the soft dependencies between component failures through the use of empirical ratio relationships. Furthermore, CCF modeling has been essentially restricted to identical components in redundant formations. While this has been advantageous in allowing the prediction of system reliability with little or no data, it has been prohibitive in other applications, such as modeling the characteristics of a system design or including the characteristics of failure when assessing the risk significance of a failure or degraded-performance event (known as an event assessment). This dissertation extends the traditional definition of CCF to model soft dependencies between like and non-like components. It does this through the explicit modeling of soft dependencies between systems (coupling factors), such as sharing a maintenance team or sharing a manufacturer. By modeling the soft dependencies explicitly, these relationships can be individually quantified based on the specific design of the system, allowing for more accurate event assessment given knowledge of the failure cause. Since the most data-informed model in use is the Alpha Factor Model (AFM), it has been used as the baseline for the proposed solutions. This dissertation analyzes the US Nuclear Regulatory Commission's Common Cause Failure Database event data to determine the suitability of the data and failure taxonomy for use in the proposed cause-based models. Recognizing that CCF events are characterized by full or partial presence of a "root cause" and "coupling factor", a refined failure taxonomy is proposed which provides a direct link between the failure cause category and the coupling factors.
This dissertation proposes two CCF models: (a) the Partial Alpha Factor Model (PAFM), which accounts for the relevant coupling factors based on system design and provides event assessment with knowledge of the failure cause, and (b) the General Dependency Model (GDM), which uses

  3. A Probabilistic-Micro-mechanical Methodology for Assessing Zirconium Alloy Cladding Failure

    SciTech Connect

    Pan, Y.M.; Chan, K.S.; Riha, D.S.

    2007-07-01

    Cladding failure of fuel rods caused by hydride-induced embrittlement is a reliability concern for spent nuclear fuel after extended burnup. Uncertainties in the cladding temperature, cladding stress, oxide layer thickness, and the critical stress value for hydride reorientation preclude an assessment of the cladding failure risk. A set of micro-mechanical models for treating oxide cracking, blister cracking, delayed hydride cracking, and cladding fracture was developed and incorporated in a computer model. Results obtained from the preliminary model calculations indicate that at temperatures below a critical temperature of 318.5 deg. C [605.3 deg. F], the time to failure by delayed hydride cracking in Zr-2.5%Nb decreased with increasing cladding temperature. The overall goal of this project is to develop a probabilistic-micro-mechanical methodology for assessing the probability of hydride-induced failure in Zircaloy cladding and thereby establish performance criteria. (authors)

  4. Assessment and management of right ventricular failure in left ventricular assist device patients.

    PubMed

    Holman, William L; Acharya, Deepak; Siric, Franjo; Loyaga-Rendon, Renzo Y

    2015-01-01

    Mechanical circulatory support devices, including ventricular assist devices (VADs) and the total artificial heart, have evolved to become accepted therapeutic options for patients with severe congestive heart failure. Continuous-flow left VADs are the most prevalent option for mechanical circulatory assistance and reliably provide years of support. However, problems related to acute and chronic right heart failure in patients with left VADs continue to cause significant mortality and morbidity. This review discusses the assessment and management of right ventricular failure in left VAD patients. The goal is to summarize current knowledge and suggest new approaches to managing this problem.

  5. Development and validation of a questionnaire to assess fear of kidney failure following living donation.

    PubMed

    Rodrigue, James R; Fleishman, Aaron; Vishnevsky, Tanya; Whiting, James; Vella, John P; Garrison, Krista; Moore, Deonna; Kayler, Liise; Baliga, Prabhakar; Chavin, Kenneth D; Karp, Seth; Mandelbrot, Didier A

    2014-06-01

    Living kidney donors (LKDs) may feel more anxious about kidney failure now that they have only one kidney and the security of a second kidney is gone. The aim of this cross-sectional study was to develop and empirically validate a self-report scale for assessing fear of kidney failure in former LKDs. Participants were 364 former LKDs within the past 10 years at five US transplant centers and 219 healthy nondonor controls recruited through Mechanical Turk who completed several questionnaires. Analyses revealed a unidimensional factor structure, excellent internal consistency (α = 0.88), and good convergent validity for the Fear of Kidney Failure questionnaire. Only 13% of former donors reported moderate to high fear of kidney failure. Nonwhite race (OR = 2.9, P = 0.01), genetic relationship with the recipient (OR = 2.46, P = 0.04), and low satisfaction with the donation experience (OR = 0.49, P = 0.002) were significant predictors of higher fear of kidney failure. We conclude that while mild anxiety about kidney failure is common, high anxiety about future renal failure among former LKDs is uncommon. The Fear of Kidney Failure questionnaire is reliable, valid, and easy to use in the clinical setting.
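    The internal-consistency figure reported above (α = 0.88) is Cronbach's alpha. A minimal computation from item-level scores looks like this; the respondent data are synthetic, generated from a single latent trait purely for illustration, not the donor questionnaire data:

```python
import random

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/var(total)).
    `scores` is a list of per-respondent lists of item scores."""
    k = len(scores[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)

# Synthetic respondents (assumption): 6 items driven by one latent trait,
# so the scale should show high internal consistency.
random.seed(7)
rows = []
for _ in range(200):
    latent = random.gauss(0.0, 1.0)
    rows.append([latent + random.gauss(0.0, 0.5) for _ in range(6)])
print(f"alpha = {cronbach_alpha(rows):.2f}")
```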

  7. Failure modes effects and criticality analysis (FMECA) approach to the crystalline silicon photovoltaic module reliability assessment

    NASA Astrophysics Data System (ADS)

    Kuitche, Joseph M.; Tamizh-Mani, Govindasamy; Pan, Rong

    2011-09-01

    Traditional degradation or reliability analysis of photovoltaic (PV) modules has historically consisted of some combination of accelerated stress and field testing, including field deployment and monitoring of modules over long time periods, and analysis of commercial warranty returns. This has been effective in identifying failure mechanisms and developing stress tests that accelerate those failures. For example, BP Solar assessed the long-term reliability of modules deployed outdoors and modules returned from the field in 2003, and presented the types of failures observed. Out of about 2 million modules, the total number of returns over a nine-year period was only 0.13%. Analysis of these returns showed that 86% of the field failures were due to corrosion and cell or interconnect breakage. These failures were eliminated through extended thermal cycling and damp heat tests. Considering that these failures are observed even on modules that have successfully passed conventional qualification tests, it is possible that known failure modes and mechanisms are not well understood. Moreover, when a defect is not easily identifiable, the existing accelerated tests may no longer be sufficient. Thus, a detailed study of all known failure modes observed in field testing is essential. In this paper, we combine physics-of-failure analysis with an empirical study of field inspection data from PV modules deployed in Arizona to develop a FMECA model. This technique examines the failure rates of individual components of fielded modules, along with their severities and detectabilities, to determine the overall effect of a defect on the module's quality and reliability.
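    The FMECA ranking step can be sketched with the usual Risk Priority Number, RPN = severity × occurrence × detection. The failure modes and 1-10 ratings below are invented for illustration; they are not the paper's Arizona field data:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number used in FMEA/FMECA ranking (1-10 scales)."""
    return severity * occurrence * detection

# Hypothetical PV-module failure modes with assumed (severity,
# occurrence, detection) ratings:
modes = {
    "solder-bond fatigue":  (7, 5, 4),
    "encapsulant browning": (4, 6, 2),
    "bypass-diode failure": (8, 3, 6),
    "glass breakage":       (9, 2, 1),
}
ranked = sorted(modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:22s} RPN = {rpn(*scores)}")
```

    Note how a severe but easily detected mode (glass breakage) can rank below a less severe mode that is hard to detect; this is exactly the detectability effect the abstract highlights.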

  8. System Reliability Assessment for a Rock Tunnel with Multiple Failure Modes

    NASA Astrophysics Data System (ADS)

    Lü, Qing; Chan, Chin Loong; Low, Bak Kong

    2013-07-01

    This paper presents a practical procedure for assessing the system reliability of a rock tunnel. Three failure modes, namely, inadequate support capacity, excessive tunnel convergence, and insufficient rockbolt length, are considered and investigated using a deterministic model of ground-support interaction analysis based on the convergence-confinement method (CCM). The failure probability of each failure mode is evaluated from the first-order reliability method (FORM) and the response surface method (RSM) via an iterative procedure. The system failure probability bounds are estimated using the bimodal bounds approach suggested by Ditlevsen (1979), based on the reliability index and design point inferred from the FORM. The proposed approach is illustrated with an example of a circular rock tunnel. The computed system failure probability bounds compare favorably with those generated from Monte Carlo simulations. The results show that the relative importance of different failure modes to the system reliability of the tunnel mainly depends on the timing of support installation relative to the advancing tunnel face. It is also shown that reliability indices based on the second-order reliability method (SORM) can be used to achieve more accurate bounds on the system failure probability for nonlinear limit state surfaces. The system reliability-based design for shotcrete thickness is also demonstrated.
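    The bimodal bounds of Ditlevsen (1979) used above can be computed directly once the individual failure-mode probabilities and the pairwise joint probabilities are known (the latter obtained, e.g., from FORM reliability indices and mode correlations). The numbers below are illustrative assumptions, not the tunnel example's values:

```python
def ditlevsen_bounds(p, p2):
    """Ditlevsen (1979) bimodal bounds on the failure probability of a
    series system. p[i]: individual failure-mode probabilities (listed
    in decreasing order for the tightest bounds); p2[i][j] (j < i):
    pairwise joint failure probabilities."""
    lower = upper = p[0]
    for i in range(1, len(p)):
        lower += max(0.0, p[i] - sum(p2[i][j] for j in range(i)))
        upper += p[i] - max(p2[i][j] for j in range(i))
    return lower, upper

# Three correlated failure modes with assumed (illustrative) values:
p = [1.0e-3, 5.0e-4, 2.0e-4]
p2 = [[0.0, 0.0, 0.0],
      [2.0e-4, 0.0, 0.0],
      [5.0e-5, 3.0e-5, 0.0]]
lo, hi = ditlevsen_bounds(p, p2)
print(f"{lo:.3e} <= P_sys <= {hi:.3e}")
```

    The bounds always lie between max(p) and sum(p); the stronger the positive correlation between modes, the further the system probability drops below the simple sum.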

  9. 26 CFR 301.6685-1 - Assessable penalties with respect to private foundations' failure to comply with section 6104(d).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... foundations' failure to comply with section 6104(d). 301.6685-1 Section 301.6685-1 Internal Revenue INTERNAL... Additional Amounts § 301.6685-1 Assessable penalties with respect to private foundations' failure to comply... private foundations' annual returns, and who fails so to comply, if such failure is willful, shall pay...

  10. 26 CFR 301.6685-1 - Assessable penalties with respect to private foundations' failure to comply with section 6104(d).

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... foundations' failure to comply with section 6104(d). 301.6685-1 Section 301.6685-1 Internal Revenue INTERNAL... Additional Amounts § 301.6685-1 Assessable penalties with respect to private foundations' failure to comply... private foundations' annual returns, and who fails so to comply, if such failure is willful, shall pay...

  11. 26 CFR 301.6685-1 - Assessable penalties with respect to private foundations' failure to comply with section 6104(d).

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... foundations' failure to comply with section 6104(d). 301.6685-1 Section 301.6685-1 Internal Revenue INTERNAL... Additional Amounts § 301.6685-1 Assessable penalties with respect to private foundations' failure to comply... private foundations' annual returns, and who fails so to comply, if such failure is willful, shall pay...

  12. 26 CFR 301.6685-1 - Assessable penalties with respect to private foundations' failure to comply with section 6104(d).

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... foundations' failure to comply with section 6104(d). 301.6685-1 Section 301.6685-1 Internal Revenue INTERNAL... Additional Amounts § 301.6685-1 Assessable penalties with respect to private foundations' failure to comply... private foundations' annual returns, and who fails so to comply, if such failure is willful, shall pay...

  13. 26 CFR 301.6685-1 - Assessable penalties with respect to private foundations' failure to comply with section 6104(d).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... foundations' failure to comply with section 6104(d). 301.6685-1 Section 301.6685-1 Internal Revenue INTERNAL... Additional Amounts § 301.6685-1 Assessable penalties with respect to private foundations' failure to comply... private foundations' annual returns, and who fails so to comply, if such failure is willful, shall pay...

  14. ANALYSIS OF SEQUENTIAL FAILURES FOR ASSESSMENT OF RELIABILITY AND SAFETY OF MANUFACTURING SYSTEMS. (R828541)

    EPA Science Inventory

    Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...

  15. When Summative Computer-Aided Assessments Go Wrong: Disaster Recovery after a Major Failure

    ERIC Educational Resources Information Center

    Harwood, Ian

    2005-01-01

    This case study outlines the events of a recent summative computer-aided assessment (CAA) failure involving 280 first-year undergraduate students. Post-test analysis found that the central server had become unexpectedly overloaded, thereby causing the CAA to be abandoned. Practical advice on just what to do in the event of a summative CAA failure…

  16. Application of ISO22000 and Failure Mode and Effect Analysis (fmea) for Industrial Processing of Poultry Products

    NASA Astrophysics Data System (ADS)

    Varzakas, Theodoros H.; Arvanitoyannis, Ioannis S.

    A Failure Mode and Effect Analysis (FMEA) model has been applied for the risk assessment of poultry slaughtering and manufacturing. In this work, a comparison of ISO 22000 analysis with HACCP is carried out for poultry slaughtering, processing and packaging. Critical Control Points (CCPs) and Prerequisite Programs (PrPs) have been identified and implemented in the cause-and-effect diagram (also known as the Ishikawa, tree, or fishbone diagram).

  17. 77 FR 5857 - Common-Cause Failure Analysis in Event and Condition Assessment: Guidance and Research, Draft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-06

    ... COMMISSION Common-Cause Failure Analysis in Event and Condition Assessment: Guidance and Research, Draft...: On November 2, 2011 (76 FR 67764), the U.S. Nuclear Regulatory Commission (NRC) published for public comment Draft NUREG, ``Common- Cause Failure Analysis in Event and Condition Assessment: Guidance...

  18. 76 FR 67764 - Common-Cause Failure Analysis in Event and Condition Assessment: Guidance and Research, Draft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-02

    ... COMMISSION Common-Cause Failure Analysis in Event and Condition Assessment: Guidance and Research, Draft...-xxxx, Revision 0, ``Common-Cause Failure Analysis in Event and Condition Assessment: Guidance and...-0254 in the subject line of your comments. For additional instructions on submitting comments...

  19. Assessing performance and validating finite element simulations using probabilistic knowledge

    SciTech Connect

    Dolin, Ronald M.; Rodriguez, E. A.

    2002-01-01

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability that each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
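    The stochastic-sampling step can be illustrated with a plain Latin hypercube sampler on the unit hypercube. This is a generic sketch of the sampling scheme, not the authors' influence-diagram implementation:

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube sample on the unit hypercube: each dimension is
    cut into n_samples equal strata, and each stratum is hit exactly
    once."""
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        columns.append([(k + rng.random()) / n_samples for k in strata])
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

rng = random.Random(3)
pts = latin_hypercube(10, 2, rng)
# Every one of the 10 bins in each dimension contains exactly one point:
for d in range(2):
    assert sorted(int(x[d] * 10) for x in pts) == list(range(10))
print(pts[:3])
```

    The stratification guarantees marginal coverage with far fewer samples than plain Monte Carlo, which is why it is attractive for expensive finite element evaluations.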

  20. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
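    The core PFA idea of modifying an analysis-based failure probability with test or flight experience can be caricatured with a conjugate Beta update. This is a deliberately simplified stand-in with assumed numbers, not the PFA statistical procedure itself:

```python
def beta_update(a, b, successes, failures):
    """Conjugate update of a Beta(a, b) prior on a failure probability
    with observed demand outcomes: failures add to a, successes to b.
    A simplified illustration of folding test/flight experience into an
    analysis-based estimate."""
    return a + failures, b + successes

# Assumed analysis-based prior: mean failure probability 1/100, Beta(1, 99).
a, b = 1.0, 99.0
a, b = beta_update(a, b, successes=50, failures=0)  # 50 failure-free tests
post_mean = a / (a + b)
print(f"posterior mean failure probability: {post_mean:.4f}")
```

    Failure-free experience pulls the estimate down, while an observed failure would pull it up, mirroring how PFA distributions are "modified to reflect any test or flight experience."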

  1. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Astrophysics Data System (ADS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-06-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  2. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Astrophysics Data System (ADS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-06-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  3. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  4. Caution: Venn Diagrams Ahead!

    ERIC Educational Resources Information Center

    Kimmins, Dovie L.; Winters, J. Jeremy

    2015-01-01

    Two perspectives of the term "Venn diagram" reflect the typical differences in the uses of Venn diagrams in the subject areas of mathematics and language arts. These differences are subtle; nevertheless, they can potentially be confusing. In language arts, the circles in a Venn diagram typically represent things that can be compared and…

  5. Assessment of current structural design methodology for high-temperature reactors based on failure tests

    SciTech Connect

    Corum, J.M.; Sartory, W.K.

    1985-01-01

    A mature design methodology, consisting of inelastic analysis methods, provided in Department of Energy guidelines, and failure criteria, contained in ASME Code Case N-47, exists in the United States for high-temperature reactor components. The objective of this paper is to assess the adequacy of this overall methodology by comparing predicted inelastic deformations and lifetimes with observed results from structural failure tests and from an actual service failure. Comparisons are presented for three types of structural situations: (1) nozzle-to-spherical shell specimens, where stresses at structural discontinuities lead to cracking, (2) welded structures, where metallurgical discontinuities play a key role in failures, and (3) thermal shock loadings of cylinders and pipes, where thermal discontinuities can lead to failure. The comparisons between predicted and measured inelastic responses are generally reasonably good; quantities are sometimes somewhat overpredicted and sometimes underpredicted. However, even seemingly small discrepancies can have a significant effect on structural life, and lifetimes are not always as closely predicted. For a few cases, the lifetimes are substantially overpredicted, which raises questions regarding the adequacy of existing design margins.

  6. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  7. Soft tissue facial growth and development as assessed by the three-dimensional computerized mesh diagram analysis.

    PubMed

    Ferrario, V F; Sforza, C; Serrao, G; Colombo, A; Ciusa, V

    1999-08-01

    The normal growth and development of facial soft tissues from 6 years to adulthood has been studied by the 3D computerized mesh diagram analysis. The analysis allows independent quantifications of size and shape modifications both between different age groups, and between males and females. Normal age-related and sex-related references are provided. The three-dimensional facial morphometry method has been used for the collection of the x, y, z coordinates of 22 soft tissue landmarks in 2023 examinations performed on 1157 healthy white children and adolescents between 6 and 17 years of age and 191 young adults. The method detects the three-dimensional coordinates of retroreflective, wireless markers positioned on selected facial landmarks using two charge-coupled device cameras working in the infrared field. For each sex and age class, mean values were computed, and a standardized mesh of equidistant horizontal, vertical, and anteroposterior lines was consequently constructed. Within each age group, male meshes were superimposed on female meshes. Moreover, within each sex, the adult reference mesh was superimposed on the reference mesh of each age group. The global (size plus shape) difference was then evaluated by the calculation of the relevant displacement vectors for each soft tissue landmark. A global difference factor was calculated as the sum of the modules of all the displacement vectors. Consequently, a size normalization was performed, and the shape difference (size standardized) was then evaluated by the calculation of new relevant displacement vectors for each landmark, as well as a shape-global difference factor. When compared to the young adult situation, the largest child discrepancies were found in the soft tissue profile. After size standardization, shape differences were found in the forehead, nose, and chin. 
The soft tissue facial dimensions of boys and girls grow with similar characteristics and at the same rate between 6 and 11 years of age, but
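The "global difference factor" described above, the sum of the moduli of the landmark displacement vectors, can be sketched as follows. The landmark coordinates are invented for illustration, and the published mesh construction and size normalization are more involved than this simplified reading.

```python
import math

def global_difference_factor(landmarks_a, landmarks_b):
    """Sum of the moduli of the displacement vectors between two
    homologous sets of 3D landmarks -- a simplified reading of the
    global difference factor; the published method also applies a
    size normalization before the shape comparison."""
    assert len(landmarks_a) == len(landmarks_b)
    return sum(math.dist(a, b) for a, b in zip(landmarks_a, landmarks_b))

# Invented coordinates for three landmarks (the study uses 22):
child = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.5)]
adult = [(0.0, 0.2, 0.0), (1.3, 0.0, 0.0), (0.0, 1.0, 1.0)]
print(round(global_difference_factor(child, adult), 6))  # → 1.0
```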

  8. The assessment of low probability containment failure modes using dynamic PRA

    NASA Astrophysics Data System (ADS)

    Brunett, Acacia Joann

    Although low probability containment failure modes in nuclear power plants may lead to large releases of radioactive material, these modes are typically crudely modeled in system level codes and have large associated uncertainties. Conventional risk assessment techniques (i.e. the fault-tree/event-tree methodology) are capable of accounting for these failure modes to some degree; however, they require the analyst to pre-specify the ordering of events, which can vary within the range of uncertainty of the phenomena. More recently, dynamic probabilistic risk assessment (DPRA) techniques have been developed which remove the dependency on the analyst. Through DPRA, it is now possible to perform a mechanistic and consistent analysis of low probability phenomena, with the timing of the possible events determined by the computational model simulating the reactor behavior. The purpose of this work is to utilize DPRA tools to assess low probability containment failure modes and the driving mechanisms. Particular focus is given to the risk-dominant containment failure modes considered in NUREG-1150, which has long been the standard for PRA techniques. More specifically, this work focuses on the low probability phenomena occurring during a station blackout (SBO) with late power recovery in the Zion Nuclear Power Plant, a Westinghouse pressurized water reactor (PWR). Subsequent to the major risk study performed in NUREG-1150, significant experimentation and modeling regarding the mechanisms driving containment failure modes have been performed. In light of this improved understanding, NUREG-1150 containment failure modes are reviewed in this work using the current state of knowledge. For some unresolved mechanisms, such as containment loading from high pressure melt ejection and combustion events, additional analyses are performed using the accident simulation tool MELCOR to explore the bounding containment loads for realistic scenarios. A dynamic treatment in the

  9. Gravity wave transmission diagram

    NASA Astrophysics Data System (ADS)

    Tomikawa, Yoshihiro

    2016-07-01

    A possibility of gravity wave propagation from a source region to the airglow layer around the mesopause has been discussed based on the gravity wave blocking diagram taking into account the critical level filtering alone. This paper proposes a new gravity wave transmission diagram in which both the critical level filtering and turning level reflection of gravity waves are considered. It shows a significantly different distribution of gravity wave transmissivity from the blocking diagram.
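The two filtering mechanisms named above can be sketched with the standard non-rotating, Boussinesq dispersion relation m² = N²/(c − U)² − k_h²: a wave is absorbed at a critical level where c = U(z) and reflected where m² < 0. The wind profile and wave parameters below are invented, and the paper's transmission diagram is more complete than this check.

```python
def transmitted(c_phase, k_h, wind_profile, N=0.02):
    """Crude transmissivity check for a gravity wave with ground-based
    phase speed c_phase (m/s) and horizontal wavenumber k_h (1/m)
    propagating through the wind profile U(z). Non-rotating, Boussinesq
    sketch with constant buoyancy frequency N (1/s)."""
    # Critical-level filtering: U(z) matches the phase speed somewhere,
    # so the intrinsic frequency goes to zero and the wave is absorbed.
    if min(wind_profile) <= c_phase <= max(wind_profile):
        return False
    # Turning-level reflection: the squared vertical wavenumber
    # m^2 = N^2 / (c - U)^2 - k_h^2 becomes negative (evanescence).
    return all((N / (c_phase - U)) ** 2 - k_h ** 2 >= 0.0
               for U in wind_profile)

winds = [0.0, 10.0, 20.0, 30.0]  # invented winds (m/s) at successive levels
print(transmitted(50.0, 2e-4, winds))   # True: reaches the airglow layer
print(transmitted(20.0, 2e-4, winds))   # False: critical level at U = 20
print(transmitted(60.0, 1e-3, winds))   # False: reflected (m^2 < 0)
```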

  10. Endothelial dysfunction as assessed with magnetic resonance imaging - A major determinant in chronic heart failure.

    PubMed

    Kovačić, Slavica; Plazonić, Željko; Batinac, Tanja; Miletić, Damir; Ružić, Alen

    2016-05-01

    Chronic heart failure (CHF) is a clinical syndrome resulting from the interaction of different structural and functional disturbances that reduce the heart's ability to supply tissues with adequately oxygenated blood and to meet metabolic needs under normal or increased afterload. Endothelial dysfunction (ED) is a pathological condition characterized by a general imbalance of all major endothelial mechanisms, with a key role in the development and progression of atherosclerotic disease. ED has been associated with most cardiovascular risk factors. There is increasing interest in assessing endothelial function non-invasively, leading to the development and evaluation of new diagnostic methods. We suggest that MRI is a safe and reliable test that offers important advantages over ultrasound for the detection of ED and for monitoring of the expected therapeutic effect. We believe that ED plays a pivotal role in chronic heart failure development and progression, regardless of its etiology, and that MRI should be introduced as a "gold standard" in diagnostic procedure and treatment.

  11. MRI Assessment of Diastolic and Systolic Intraventricular Pressure Gradients in Heart Failure.

    PubMed

    Jain, Snigdha; Londono, Francisco J; Segers, Patrick; Gillebert, Thierry C; De Buyzere, Marc; Chirinos, Julio A

    2016-02-01

    A deep phenotypic characterization of heart failure (HF) is important for a better understanding of its pathophysiology. In particular, novel noninvasive techniques for the characterization of functional abnormalities in HF with preserved ejection fraction are currently needed. While echocardiography is widely used to assess ventricular function, standard echocardiographic techniques provide a limited understanding of ventricular filling. The application of fluid dynamics theory, along with assessments of flow velocity fields in multiple dimensions in the ventricle, can be used to assess intraventricular pressure gradients (IVPGs), which in turn may provide valuable insights into ventricular diastolic and systolic function. Advances in imaging techniques now allow for accurate estimations of systolic and diastolic IVPGs, using noninvasive methods that are easily applicable in clinical research. In this review, we describe the basic concepts regarding intraventricular flow measurements and the derivation of IVPGs. We also review existing literature exploring the role of IVPGs in HF. PMID:26780916

  12. Factors affecting nurses' intent to assess for depression in heart failure patients.

    PubMed

    Lea, Patricia

    2014-01-01

    The association between depression and cardiovascular disease has been well established and has been shown to decrease patients' quality of life and increase the risk of mortality, frequency and duration of hospitalization, and health care costs. The inpatient setting provides a potentially valuable opportunity to assess and treat depression among patients with acute cardiac illness, allowing for daily monitoring of treatment side effects. Although systematic depression screening appears to be feasible, efficient, and well accepted on inpatient cardiac units, the current lack of consistent inpatient assessment for depression in heart failure patients suggests the presence of barriers influencing the effective diagnosis and treatment of depression among inpatients with heart failure. The theory of planned behavior describes the cognitive mechanism by which behavioral intent is formed, giving some insight into how nurses' attitudes and beliefs affect their performance of routine depression screening. In addition, application of this cognitive theory suggests that nurses may be influenced to adopt more positive attitudes and beliefs about depression through educational intervention, leading to greater likelihood of routine assessment for depression, ultimately leading to more timely diagnosis and treatment and improved patient outcomes.

  13. Hemodynamic assessment in heart failure: role of physical examination and noninvasive methods.

    PubMed

    Almeida Junior, Gustavo Luiz; Xavier, Sérgio Salles; Garcia, Marcelo Iorio; Clausell, Nadine

    2012-01-01

    Among the cardiovascular diseases, heart failure (HF) has a high rate of hospitalization, morbidity and mortality, consuming vast resources of the public health system in Brazil and other countries. The correct determination of the filling pressures of the left ventricle by noninvasive or invasive assessment is critical to the proper treatment of patients with decompensated chronic HF, considering that congestion is the main determinant of symptoms and hospitalization. Physical examination has been shown to be inadequate to predict the hemodynamic pattern. Several studies have suggested that agreement on physical findings by different physicians is small and that, ultimately, adaptive physiological alterations in chronic HF mask important aspects of the physical examination. As the clinical assessment fails to predict hemodynamic aspects and because the use of the Swan-Ganz catheter is not routinely recommended for this purpose in patients with HF, noninvasive hemodynamic assessment methods, such as BNP, echocardiography and cardiographic bioimpedance, are being increasingly used. The present study intends to carry out, for the clinician, a review of the role of each of these tools in defining the hemodynamic status of patients with decompensated heart failure, aiming at a more rational and individualized treatment.

  15. Hertzsprung-Russell Diagram

    NASA Astrophysics Data System (ADS)

    Chiosi, C.; Murdin, P.

    2000-11-01

    The Hertzsprung-Russell diagram (HR-diagram), pioneered independently by EJNAR HERTZSPRUNG and HENRY NORRIS RUSSELL, is a plot of the star luminosity versus the surface temperature. It stems from the basic relation for an object emitting thermal radiation as a black body: ...
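The black-body relation mentioned above connects the two axes of the HR diagram; a quick numerical check with solar values, using the Stefan-Boltzmann law L = 4πR²σT⁴, recovers roughly the solar luminosity:

```python
import math

STEFAN_BOLTZMANN = 5.670374419e-8  # W m^-2 K^-4

def blackbody_luminosity(radius_m, temp_k):
    """Luminosity of a spherical black body, L = 4*pi*R^2 * sigma * T^4 --
    the thermal-radiation relation underlying the HR diagram's axes."""
    return 4.0 * math.pi * radius_m**2 * STEFAN_BOLTZMANN * temp_k**4

# Solar radius and effective temperature:
L_SUN = blackbody_luminosity(6.957e8, 5772.0)
print(f"{L_SUN:.3e} W")  # ~3.8e26 W, about one solar luminosity
```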

  16. Risk assessment for Industrial Control Systems quantifying availability using mean failure cost (MFC)

    DOE PAGES

    Chen, Qian; Abercrombie, Robert K; Sheldon, Frederick T.

    2015-09-23

    Industrial Control Systems (ICS) are commonly used in industries such as oil and natural gas, transportation, electric, water and wastewater, chemical, pharmaceutical, pulp and paper, food and beverage, as well as discrete manufacturing (e.g., automotive, aerospace, and durable goods.) SCADA systems are generally used to control dispersed assets using centralized data acquisition and supervisory control. Originally, ICS implementations were susceptible primarily to local threats because most of their components were located in physically secure areas (i.e., ICS components were not connected to IT networks or systems). The trend toward integrating ICS systems with IT networks (e.g., efficiency and the Internet of Things) provides significantly less isolation for ICS from the outside world thus creating greater risk due to external threats. Albeit, the availability of ICS/SCADA systems is critical to assuring safety, security and profitability. Such systems form the backbone of our national cyber-physical infrastructure. We extend the concept of mean failure cost (MFC) to address quantifying availability to harmonize well with ICS security risk assessment. This new measure is based on the classic formulation of Availability combined with Mean Failure Cost (MFC). The metric offers a computational basis to estimate the availability of a system in terms of the loss that each stakeholder stands to sustain as a result of security violations or breakdowns (e.g., deliberate malicious failures).
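The stakeholder-oriented MFC measure extended above can be sketched as a stakes-times-probability sum. The classic MFC is this product in matrix form, chained through dependency and impact matrices; all stakes and probabilities below are invented.

```python
# Sketch of the mean-failure-cost (MFC) idea: each stakeholder's expected
# loss is their stake in each requirement times the probability that the
# requirement is violated. All figures below are illustrative only.
stakes = {                       # $ per hour of violated requirement
    "operator":  {"availability": 500.0, "integrity": 200.0},
    "regulator": {"availability": 50.0,  "integrity": 800.0},
}
p_violation = {"availability": 0.01, "integrity": 0.001}   # per hour

def mean_failure_cost(stakeholder):
    """Expected hourly loss for one stakeholder: sum over requirements
    of stake * violation probability."""
    return sum(stake * p_violation[req]
               for req, stake in stakes[stakeholder].items())

for s in stakes:
    print(f"{s}: ${mean_failure_cost(s):.2f}/h expected loss")
```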

  17. Risk assessment for Industrial Control Systems quantifying availability using mean failure cost (MFC)

    SciTech Connect

    Chen, Qian; Abercrombie, Robert K; Sheldon, Frederick T.

    2015-09-23

    Industrial Control Systems (ICS) are commonly used in industries such as oil and natural gas, transportation, electric, water and wastewater, chemical, pharmaceutical, pulp and paper, food and beverage, as well as discrete manufacturing (e.g., automotive, aerospace, and durable goods.) SCADA systems are generally used to control dispersed assets using centralized data acquisition and supervisory control. Originally, ICS implementations were susceptible primarily to local threats because most of their components were located in physically secure areas (i.e., ICS components were not connected to IT networks or systems). The trend toward integrating ICS systems with IT networks (e.g., efficiency and the Internet of Things) provides significantly less isolation for ICS from the outside world thus creating greater risk due to external threats. Albeit, the availability of ICS/SCADA systems is critical to assuring safety, security and profitability. Such systems form the backbone of our national cyber-physical infrastructure. We extend the concept of mean failure cost (MFC) to address quantifying availability to harmonize well with ICS security risk assessment. This new measure is based on the classic formulation of Availability combined with Mean Failure Cost (MFC). The metric offers a computational basis to estimate the availability of a system in terms of the loss that each stakeholder stands to sustain as a result of security violations or breakdowns (e.g., deliberate malicious failures).

  19. Engineering holographic phase diagrams

    NASA Astrophysics Data System (ADS)

    Chen, Jiunn-Wei; Dai, Shou-Huang; Maity, Debaprasad; Zhang, Yun-Long

    2016-10-01

    By introducing interacting scalar fields, we tried to engineer physically motivated holographic phase diagrams which may be interesting in the context of various known condensed matter systems. We introduce an additional scalar field in the bulk which provides a tunable parameter in the boundary theory. By exploiting the way the tuning parameter changes the effective masses of the bulk interacting scalar fields, desired phase diagrams can be engineered for the boundary order parameters dual to those scalar fields. We give a few examples of generating phase diagrams with phase boundaries which are strikingly similar to the known quantum phases at low temperature such as the superconducting phases. However, the important difference is that all the phases we have discussed are characterized by neutral order parameters. At the end, we discuss if there exists any emerging scaling symmetry associated with a quantum critical point hidden under the dome in this phase diagram.

  20. Square Source Type Diagram

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ohta, K.; Ide, S.

    2014-12-01

    Deformation in a small volume of the Earth's interior is expressed by a symmetric moment tensor located at a point source. The tensor contains information on characteristic directions, source amplitude, and source types such as isotropic, double-couple, or compensated-linear-vector-dipole (CLVD). Although we often assume a double couple as the source type of an earthquake, a significant non-double-couple component, including an isotropic component, is often reported for induced earthquakes and volcanic earthquakes. For discussions of source types including double-couple and non-double-couple components, it is helpful to display them using visual diagrams. Since the information on source type has two degrees of freedom, it can be displayed on a two-dimensional flat plane. Although the diagram developed by Hudson et al. [1989] is popular, the trace corresponding to a mechanism combining two mechanisms is not always a smooth line. To overcome this problem, Chapman and Leaney [2012] developed a new diagram. This diagram has the advantage that a straight line passing through the center corresponds to the mechanism obtained by a combination of an arbitrary mechanism and a double-couple [Tape and Tape, 2012], but it has some difficulties in use. First, it is slightly difficult to produce the diagram because of its curved shape. Second, it is also difficult to read out the ratios among isotropic, double-couple, and CLVD components, which we want to obtain from the estimated moment tensors, because they do not appear directly on the horizontal or vertical axes. In the present study, we developed another new square diagram that overcomes the difficulties of previous diagrams. This diagram is an orthogonal system of isotropic and deviatoric axes, so it is easy to read the ratios among isotropic, double-couple, and CLVD components. Our diagram has the further advantage that the probability density is obtained simply from the area within the diagram if the probability density
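The ratios such a diagram is designed to expose can be computed directly from the moment tensor's eigenvalues. The sketch below uses the ε parameter of Jost & Herrmann (1989) for the DC/CLVD split; this is one of several conventions, and the normalization is a simplification rather than the paper's construction.

```python
import numpy as np

def decompose_moment_tensor(M):
    """Fractions of isotropic (ISO), double-couple (DC), and CLVD moment
    for a symmetric moment tensor, using the epsilon parameter of
    Jost & Herrmann (1989). Conventions differ between authors."""
    eigvals = np.linalg.eigvalsh(M)
    m_iso = eigvals.sum() / 3.0
    dev = eigvals - m_iso                       # deviatoric eigenvalues
    dev_sorted = dev[np.argsort(np.abs(dev))]   # |d0| <= |d1| <= |d2|
    d_max = dev_sorted[-1]
    if np.isclose(d_max, 0.0):                  # purely isotropic source
        return {"iso": 1.0, "dc": 0.0, "clvd": 0.0}
    eps = -dev_sorted[0] / abs(d_max)           # 0 = pure DC, +/-0.5 = CLVD
    f_iso = abs(m_iso) / (abs(m_iso) + abs(d_max))
    f_dev = 1.0 - f_iso
    return {"iso": f_iso,
            "dc": f_dev * (1.0 - 2.0 * abs(eps)),
            "clvd": f_dev * 2.0 * abs(eps)}

print(decompose_moment_tensor(np.diag([1.0, 0.0, -1.0])))   # pure double-couple
print(decompose_moment_tensor(np.diag([2.0, -1.0, -1.0])))  # pure CLVD
```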

  1. Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations

    SciTech Connect

    Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.

    2014-02-18

    The study presents a method and application of risk assessment methodology for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic modeling for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely the ultrasonic inspection with an identified flaw indication and the ultrasonic inspection without flaw indication, are considered in the derivation. To perform estimations for fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application of steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.
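A toy version of the flaw-size inference described above: conditioning a prior flaw-size distribution on a clean inspection through an assumed probability-of-detection (POD) curve. The log-logistic POD shape and all numbers are invented, not the qualified curves or Bayesian machinery of the study.

```python
def pod(a, a50=1.0, slope=4.0):
    """Toy log-logistic probability-of-detection curve: rises from 0
    toward 1 around flaw size a50 (units arbitrary)."""
    return 1.0 / (1.0 + (a / a50) ** (-slope))

def posterior_no_indication(prior, sizes):
    """Posterior flaw-size PMF for the 'no flaw indication' scenario:
    proportional to prior(a) * (1 - POD(a)), renormalized. (With an
    identified indication one would weight by POD and a sizing-error
    model instead.)"""
    weights = [p * (1.0 - pod(a)) for p, a in zip(prior, sizes)]
    total = sum(weights)
    return [w / total for w in weights]

sizes = [0.25 * k for k in range(1, 13)]   # candidate flaw sizes
prior = [1.0 / len(sizes)] * len(sizes)    # flat prior over the grid
post = posterior_no_indication(prior, sizes)
# A clean inspection shifts probability mass toward small flaws:
print(f"P(a <= 0.5): prior {sum(prior[:2]):.3f} -> posterior {sum(post[:2]):.3f}")
```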

  2. Noninvasive assessment of right and left ventricular function in acute and chronic respiratory failure

    SciTech Connect

    Matthay, R.A.; Berger, H.J.

    1983-05-01

    This review evaluates noninvasive techniques for assessing cardiovascular performance in acute and chronic respiratory failure. Radiographic, radionuclide, and echocardiographic methods for determining ventricular volumes, right (RV) and left ventricular (LV) ejection fractions, and pulmonary artery pressure (PAP) are emphasized. These methods include plain chest radiography, radionuclide angiocardiography, thallium-201 myocardial imaging, and M mode and 2-dimensional echocardiography, which have recently been applied in patients to detect pulmonary artery hypertension (PAH), right ventricular enlargement, and occult ventricular performance abnormalities at rest or exercise. Moreover, radionuclide angiocardiography has proven useful in combination with hemodynamic measurements, for evaluating the short-and long-term cardiovascular effects of therapeutic agents, such as oxygen, digitalis, theophylline, beta-adrenergic agents, and vasodilators.

  3. Modeling of Electrical Cable Failure in a Dynamic Assessment of Fire Risk

    NASA Astrophysics Data System (ADS)

    Bucknor, Matthew D.

    complexity to existing cable failure techniques and tuned to empirical data can better approximate the temperature response of cables located in tightly packed cable bundles. The new models also provide a way to determine the conditions inside a cable bundle, which allows for separate treatment of cables on the interior of the bundle from cables on the exterior of the bundle. The results from the DET analysis show that the overall assessed probability of cable failure can be significantly reduced by more realistically accounting for the influence that the fire brigade has on a fire progression scenario. The shielding analysis results demonstrate a significant reduction in the temperature response of a shielded versus a non-shielded cable bundle; however, the computational cost of using a fire progression model that can capture these effects may be prohibitive for performing DET analyses with currently available computational fluid dynamics models and computational resources.

  4. Upgrading Diagnostic Diagrams

    NASA Astrophysics Data System (ADS)

    Proxauf, B.; Kimeswenger, S.; Öttl, S.

    2014-04-01

    Diagnostic diagrams of forbidden lines have been a useful tool for observers in astrophysics for many decades now. They are used to obtain information on the basic physical properties of thin gaseous nebulae. Moreover, they are also the initial tool to derive thermodynamic properties of the plasma from observations, to get ionization correction factors, and thus to obtain proper abundances of the nebulae. Some diagnostic diagrams lie in wavelength domains that were difficult to observe, either due to missing wavelength coverage or the low resolution of older spectrographs. Thus they were hardly used in the past. An upgrade of this useful tool is necessary because most of the diagrams were calculated treating the species involved as a single-atom gas, although several are affected by well-known fluorescence mechanisms as well. Additionally, the atomic data have improved up to the present time. The new diagnostic diagrams are calculated by using large grids of parameter space in the photoionization code CLOUDY. For a given basic parameter, the input radiation field is varied to find the solutions with cooling-heating equilibrium. Empirical numerical functions are fitted to provide formulas usable in e.g. data reduction pipelines. The resulting diagrams differ significantly from those used up to now and will improve the thermodynamic calculations.

  5. Weyl card diagrams

    SciTech Connect

    Jones, Gregory; Wang, John E.

    2005-06-15

    To capture important physical properties of a spacetime we construct a new diagram, the card diagram, which accurately draws generalized Weyl spacetimes in arbitrary dimensions by encoding their global spacetime structure, singularities, horizons, and some aspects of causal structure including null infinity. Card diagrams draw only nontrivial directions providing a clearer picture of the geometric features of spacetimes as compared to Penrose diagrams, and can change continuously as a function of the geometric parameters. One of our main results is to describe how Weyl rods are traversable horizons and the entirety of the spacetime can be mapped out. We review Weyl techniques and as examples we systematically discuss properties of a variety of solutions including Kerr-Newman black holes, black rings, expanding bubbles, and recent spacelike-brane solutions. Families of solutions will share qualitatively similar cards. In addition we show how card diagrams not only capture information about a geometry but also its analytic continuations by providing a geometric picture of analytic continuation. Weyl techniques are generalized to higher dimensional charged solutions and applied to generate perturbations of bubble and S-brane solutions by Israel-Khan rods.

  6. Performance of the Automated Neuropsychological Assessment Metrics (ANAM) in Detecting Cognitive Impairment in Heart Failure Patients

    PubMed Central

    Xie, Susan S.; Goldstein, Carly M.; Gathright, Emily C.; Gunstad, John; Dolansky, Mary A.; Redle, Joseph; Hughes, Joel W.

    2015-01-01

    Objective Evaluate capacity of the Automated Neuropsychological Assessment Metrics (ANAM) to detect cognitive impairment (CI) in heart failure (HF) patients. Background CI is a key prognostic marker in HF. Though the most widely used cognitive screen in HF, the Mini-Mental State Examination (MMSE) is insufficiently sensitive. The ANAM has demonstrated sensitivity to cognitive domains affected by HF, but has not been assessed in this population. Methods Investigators administered the ANAM and MMSE to 57 HF patients, compared against a composite model of cognitive function. Results ANAM efficiency (p < .05) and accuracy scores (p < .001) successfully differentiated CI and non-CI. ANAM efficiency and accuracy scores classified 97.7% and 93.0% of non-CI patients, and 14.3% and 21.4% with CI, respectively. Conclusions The ANAM is more effective than the MMSE for detecting CI, but further research is needed to develop a more optimal cognitive screen for routine use in HF patients. PMID:26354858

  7. A Model for Assessment of Failure of LWR Fuel during an RIA

    SciTech Connect

    Liu, Wenfeng; Kazimi, Mujid S.

    2007-07-01

    This paper presents a model for Pellet-Cladding Mechanical Interaction (PCMI) failure of LWR fuel during an RIA. The model uses the J-integral as a driving parameter to characterize the failure potential during PCMI. The model is implemented in the FRAPTRAN code and is validated against CABRI and NSRR simulated RIA test data. Simulations of PWR and BWR conditions are conducted with FRAPTRAN to evaluate the fuel failure potential using this model. Model validation and simulation results are compared with the strain-based failure model of PNNL and the SED/CSED model of EPRI. Our fracture mechanics model has good capability to differentiate failure from non-failure cases. The results reveal a significant effect of power pulse width: a wider pulse generally increases the threshold for fuel failure. However, this effect is less obvious for highly corroded cladding. (authors)

  8. Impulse-Momentum Diagrams

    ERIC Educational Resources Information Center

    Rosengrant, David

    2011-01-01

    Multiple representations are a valuable tool to help students learn and understand physics concepts. Furthermore, representations help students learn how to think and act like real scientists. These representations include: pictures, free-body diagrams, energy bar charts, electrical circuits, and, more recently, computer simulations and…

  9. Assessment and prevalence of pulmonary oedema in contemporary acute heart failure trials: a systematic review

    PubMed Central

    Platz, Elke; Jhund, Pardeep S.; Campbell, Ross T.; McMurray, John J.

    2015-01-01

    Aims: Pulmonary oedema is a common and important finding in acute heart failure (AHF). We conducted a systematic review to describe the methods used to assess pulmonary oedema in recent randomized AHF trials and to report its prevalence in these trials. Methods and results: Of 23 AHF trials published between 2002 and 2013, six were excluded because they were very small, were not randomized, or lacked full-length publications. Of the remaining 17 trials (n = 200–7141), six enrolled patients with HF and reduced ejection fraction (HF-REF) and 11 enrolled patients with both HF-REF and HF with preserved ejection fraction (HF-PEF). Pulmonary oedema was an essential inclusion criterion in most trials, based upon findings on physical examination (‘rales’), radiographic criteria (‘signs of congestion’), or both. The prevalence of pulmonary oedema in HF-REF trials ranged from 75% to 83%, and in combined HF-REF and HF-PEF trials from 51% to 100%. Five trials did not report the prevalence or extent of pulmonary oedema assessed by either clinical examination or chest X-ray. Improvement of pulmonary congestion with treatment was inconsistently reported and commonly grouped with other signs of congestion into a score. One trial suggested that patients with rales over >2/3 of the lung fields on admission were at higher risk of adverse outcomes than those without. Conclusion: Although pulmonary oedema is a common finding in AHF, represents a therapeutic target, and may be of prognostic importance, recent trials used inconsistent criteria to define it and did not consistently report its severity at baseline or its response to treatment. Consistent, and ideally quantitative, methods for the assessment of pulmonary oedema in AHF trials are needed. PMID:26230356

  10. Probabilistic exposure assessment model to estimate aseptic-UHT product failure rate.

    PubMed

    Pujol, Laure; Albert, Isabelle; Magras, Catherine; Johnson, Nicholas Brian; Membré, Jeanne-Marie

    2015-01-01

    Aseptic-Ultra-High-Temperature (UHT) products are manufactured to be free of microorganisms capable of growing in the food at normal non-refrigerated conditions at which the food is likely to be held during manufacture, distribution and storage. Two important phases within the process are widely recognised as critical in controlling microbial contamination: the sterilisation steps and the following aseptic steps. Of the microbial hazards, the pathogen spore formers Clostridium botulinum and Bacillus cereus are deemed the most pertinent to be controlled. In addition, due to a relatively high thermal resistance, Geobacillus stearothermophilus spores are considered a concern for spoilage of low acid aseptic-UHT products. A probabilistic exposure assessment model has been developed in order to assess the aseptic-UHT product failure rate associated with these three bacteria. It was a Modular Process Risk Model, based on nine modules. They described: i) the microbial contamination introduced by the raw materials, either from the product (i.e. milk, cocoa and dextrose powders and water) or the packaging (i.e. bottle and sealing component), ii) the sterilisation processes, of either the product or the packaging material, iii) the possible recontamination during subsequent processing of both product and packaging. The Sterility Failure Rate (SFR) was defined as the sum of bottles contaminated for each batch, divided by the total number of bottles produced per process line run (10^6 batches simulated per process line). The SFR associated with the three bacteria was estimated at the last step of the process (i.e. after Module 9) but also after each module, allowing for the identification of modules, and responsible contamination pathways, with higher or lower intermediate SFR. The model contained 42 controlled settings associated with factory environment, process line or product formulation, and more than 55 probabilistic inputs corresponding to inputs with variability
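    The SFR definition is simply contaminated bottles over total bottles across a simulated line run. A toy Monte Carlo version of that bookkeeping (the per-bottle contamination probability and batch size are illustrative placeholders, not the model's calibrated nine-module inputs):

```python
import random

def sterility_failure_rate(n_batches, bottles_per_batch, p_contaminated, seed=0):
    """Toy SFR estimate: contaminated bottles / total bottles over a line run."""
    rng = random.Random(seed)
    contaminated = 0
    for _ in range(n_batches):
        # Each bottle is independently contaminated with a fixed probability;
        # the real model derives this from nine process modules.
        contaminated += sum(rng.random() < p_contaminated
                            for _ in range(bottles_per_batch))
    return contaminated / (n_batches * bottles_per_batch)

sfr = sterility_failure_rate(n_batches=1000, bottles_per_batch=100,
                             p_contaminated=1e-3)
print(f"estimated SFR = {sfr:.2e}")
```

    The estimate converges to the assumed per-bottle probability; the value of the modular model is that the same ratio can be read off after each module to localize the dominant contamination pathways.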


  12. 76 FR 70768 - Common-Cause Failure Analysis in Event and Condition Assessment: Guidance and Research, Draft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-15

    ... From the Federal Register Online via the Government Publishing Office NUCLEAR REGULATORY COMMISSION Common-Cause Failure Analysis in Event and Condition Assessment: Guidance and Research, Draft... November 2, 2011 (76 FR 67764). This action is necessary to correct an erroneous date for submission...

  13. Assessing the Value-Added by the Environmental Testing Process with the Aid of Physics/Engineering of Failure Evaluations

    NASA Technical Reports Server (NTRS)

    Cornford, S.; Gibbel, M.

    1997-01-01

    NASA's Code QT Test Effectiveness Program is funding a series of applied research activities focused on utilizing the principles of physics and engineering of failure and those of engineering economics to assess and improve the value-added by the various validation and verification activities to organizations.

  14. Curricular Change Agenda for Failure-Experienced Mathematics Students: Can Success-Promoting Assessment Make a Difference?

    ERIC Educational Resources Information Center

    Tzur, Ron; Movshovitz-Hadar, Nitsa

    1998-01-01

    Whether a success-promoting assessment schema (SPAS) can be designed to have a positive impact on the mathematics learning of failure-experienced students was studied with 200 noncollege-bound students in Israel. Results suggest that the SPAS in conjunction with suitable curricular materials could support student learning by changing student…

  15. Materials Degradation & Failure: Assessment of Structure and Properties. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1991

    1991-01-01

    This module provides information on materials destruction (through corrosion, oxidation, and degradation) and failure. A design brief includes objective, student challenge, resources, student outcomes, and quiz. (SK)

  16. Direct and indirect assessment of skeletal muscle blood flow in chronic congestive heart failure

    SciTech Connect

    LeJemtel, T.H.; Scortichini, D.; Katz, S.

    1988-09-09

    In patients with chronic congestive heart failure (CHF), skeletal muscle blood flow can be measured directly by the continuous thermodilution technique and by the xenon-133 clearance method. The continuous thermodilution technique requires retrograde catheterization of the femoral vein and, thus, cannot be repeated conveniently in patients during evaluation of pharmacologic interventions. The xenon-133 clearance, which requires only an intramuscular injection, allows repeated determination of skeletal muscle blood flow. In patients with severe CHF, a fixed capacity of the skeletal muscle vasculature to dilate appears to limit maximal exercise performance. Moreover, the changes in peak skeletal muscle blood flow noted during long-term administration of captopril, an angiotensin-converting enzyme inhibitor, appear to correlate with the changes in aerobic capacity. In patients with CHF, resting supine deep femoral vein oxygen content can be used as an indirect measurement of resting skeletal muscle blood flow. The absence of a steady state complicates the determination of peak skeletal muscle blood flow reached during graded bicycle or treadmill exercise in patients with chronic CHF. Indirect assessments of skeletal muscle blood flow and metabolism during exercise performed at submaximal work loads are currently being developed in patients with chronic CHF.

  17. Tectonic discrimination diagrams revisited

    NASA Astrophysics Data System (ADS)

    Vermeesch, Pieter

    2006-06-01

    The decision boundaries of most tectonic discrimination diagrams are drawn by eye. Discriminant analysis is a statistically more rigorous way to determine the tectonic affinity of oceanic basalts based on their bulk-rock chemistry. This method was applied to a database of 756 oceanic basalts of known tectonic affinity (ocean island, mid-ocean ridge, or island arc). For each of these training data, up to 45 major, minor, and trace elements were measured. Discriminant analysis assumes multivariate normality. If the same covariance structure is shared by all the classes (i.e., tectonic affinities), the decision boundaries are linear, hence the term linear discriminant analysis (LDA). In contrast with this, quadratic discriminant analysis (QDA) allows the classes to have different covariance structures. To solve the statistical problems associated with the constant-sum constraint of geochemical data, the training data must be transformed to log-ratio space before performing a discriminant analysis. The results can be mapped back to the compositional data space using the inverse log-ratio transformation. An exhaustive exploration of 14,190 possible ternary discrimination diagrams yields the Ti-Si-Sr system as the best linear discrimination diagram and the Na-Nb-Sr system as the best quadratic discrimination diagram. The best linear and quadratic discrimination diagrams using only immobile elements are Ti-V-Sc and Ti-V-Sm, respectively. As little as 5% of the training data are misclassified by these discrimination diagrams. Testing them on a second database of 182 samples that were not part of the training data yields a more reliable estimate of future performance. Although QDA misclassifies fewer training data than LDA, the opposite is generally true for the test data. Therefore LDA is a cruder but more robust classifier than QDA. Another advantage of LDA is that it provides a powerful way to reduce the dimensionality of the multivariate geochemical data in a similar
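    The log-ratio-then-LDA recipe above can be sketched in a few lines of numpy. The three-element synthetic compositions below are illustrative stand-ins for the 45-element basalt training data, and the pooled-covariance classifier is a minimal hand-rolled LDA, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the training set: three "tectonic affinities",
# three elements whose concentrations obey the constant-sum constraint.
def make_class(center, n=100):
    x = np.exp(rng.normal(center, 0.1, size=(n, 3)))
    return x / x.sum(axis=1, keepdims=True)          # closed compositions

X = np.vstack([make_class([0.0, 0.5, 1.0]),
               make_class([0.5, 1.0, 0.0]),
               make_class([1.0, 0.0, 0.5])])
y = np.repeat([0, 1, 2], 100)

# Additive log-ratio transform: move compositions off the simplex
# before discriminant analysis, as the abstract prescribes.
Z = np.log(X[:, :2] / X[:, 2:3])

# Linear discriminant analysis with a shared (pooled) covariance.
means = np.array([Z[y == k].mean(axis=0) for k in range(3)])
pooled = sum(np.cov(Z[y == k].T) for k in range(3)) / 3
inv = np.linalg.inv(pooled)
scores = Z @ inv @ means.T - 0.5 * np.einsum('ki,ij,kj->k', means, inv, means)
pred = scores.argmax(axis=1)
print(f"training misclassification: {(pred != y).mean():.1%}")
```

    QDA would replace the pooled covariance with one covariance per class; as the abstract notes, that fits the training data better but generalizes worse.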

  18. Assessing Strategies for Heart Failure with Preserved Ejection Fraction at the Outpatient Clinic

    PubMed Central

    Jorge, Antonio José Lagoeiro; Rosa, Maria Luiza Garcia; Ribeiro, Mario Luiz; Fernandes, Luiz Claudio Maluhy; Freire, Monica Di Calafiori; Correia, Dayse Silva; Teixeira, Patrick Duarte; Mesquita, Evandro Tinoco

    2014-01-01

    Background: Heart failure with preserved ejection fraction (HFPEF) is the most common form of heart failure (HF), its diagnosis being a challenge to the outpatient clinic practice. Objective: To describe and compare two strategies derived from algorithms of the European Society of Cardiology Diastology Guidelines for the diagnosis of HFPEF. Methods: Cross-sectional study with 166 consecutive ambulatory patients (67.9 ± 11.7 years; 72% women). The strategies to confirm HFPEF were established according to the European Society of Cardiology Diastology Guidelines criteria. In strategy 1 (S1), tissue Doppler echocardiography (TDE) and electrocardiography (ECG) were used; in strategy 2 (S2), B-type natriuretic peptide (BNP) measurement was included. Results: In S1, patients were divided into groups based on the E/E' ratio as follows: GI, E/E' > 15 (n = 16; 9%); GII, E/E' 8 to 15 (n = 79; 48%); and GIII, E/E' < 8 (n = 71; 43%). HFPEF was confirmed in GI and excluded in GIII. In GII, TDE [left atrial volume index (LAVI) ≥ 40 mL/m2; left ventricular mass index (LVMI) > 122 g/m2 for women and > 149 g/m2 for men] and ECG (atrial fibrillation) parameters were assessed, confirming HFPEF in 33 more patients, adding up to 49 (29%). In S2, patients were divided into three groups based on BNP levels. GI (BNP > 200 pg/mL) consisted of 12 patients, HFPEF being confirmed in all of them. GII (BNP ranging from 100 to 200 pg/mL) consisted of 20 patients with LAVI > 29 mL/m2, or LVMI ≥ 96 g/m2 for women or ≥ 116 g/m2 for men, or E/E' ≥ 8, or atrial fibrillation on ECG, and the diagnosis of HFPEF was confirmed in 15. GIII (BNP < 100 pg/mL) consisted of 134 patients, 26 of whom had the diagnosis of HFPEF confirmed when GII parameters were used. Measuring BNP levels in S2 identified 4 more patients (8%) with HFPEF as compared with those identified in S1. Conclusion: The association of BNP measurement and TDE data is better than the isolated use of those parameters. BNP can be useful in
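    Strategy S2 reads as a tiered decision rule. A sketch encoding the thresholds quoted above; the function name and the simplification that the 100–200 pg/mL and <100 pg/mL tiers share the same fallback criteria are our assumptions, and clinical use requires the full ESC diastology algorithm:

```python
def hfpef_s2(bnp, lavi, lvmi, sex, e_over_e_prime, afib):
    """Sketch of strategy S2 as read from the abstract: BNP > 200 pg/mL
    confirms HFPEF outright; lower BNP tiers fall back on TDE/ECG criteria."""
    if bnp > 200:
        return True                                  # GI: confirmed by BNP alone
    lvmi_cut = 96 if sex == "F" else 116             # g/m2, sex-specific
    return (lavi > 29                                # mL/m2
            or lvmi >= lvmi_cut
            or e_over_e_prime >= 8
            or afib)

print(hfpef_s2(bnp=250, lavi=25, lvmi=80, sex="F",
               e_over_e_prime=6, afib=False))        # GI case: True
```
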

  19. Impulse-Momentum Diagrams

    NASA Astrophysics Data System (ADS)

    Rosengrant, David

    2011-01-01

    Multiple representations are a valuable tool to help students learn and understand physics concepts. Furthermore, representations help students learn how to think and act like real scientists.2 These representations include: pictures, free-body diagrams,3 energy bar charts,4 electrical circuits, and, more recently, computer simulations and animations.5 However, instructors have limited choices when they want to help their students understand impulse and momentum. One of the only available options is the impulse-momentum bar chart.6 The bar charts can effectively show the magnitude of the momentum as well as help students understand conservation of momentum, but they do not easily show the actual direction. This paper highlights a new representation instructors can use to help their students with momentum and impulse—the impulse-momentum diagram (IMD).

  20. TEP process flow diagram

    SciTech Connect

    Wilms, R Scott; Carlson, Bryan; Coons, James; Kubic, William

    2008-01-01

    This presentation describes the development of the proposed Process Flow Diagram (PFD) for the Tokamak Exhaust Processing System (TEP) of ITER. A brief review of design efforts leading up to the PFD is followed by a description of the hydrogen-like, air-like, and water-like processes. Two new design values are described: the most-common and most-demanding design values. The proposed PFD is shown to meet specifications under both the most-common and most-demanding design values.

  1. The Construction of Venn Diagrams.

    ERIC Educational Resources Information Center

    Grunbaum, Branko

    1984-01-01

    The study and use of "Venn diagrams" can lead to many interesting problems of a geometric, topological, or combinatorial character. The general nature of these diagrams is discussed and two new results are formulated. (JN)

  2. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
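    The core PFA idea, driving an engineering failure model with uncertain parameters to obtain a failure probability, can be caricatured as a stress-strength Monte Carlo. The lognormal distributions and their parameters below are illustrative assumptions, not the methodology's documented models:

```python
import random

def pfa_failure_probability(n=100_000, seed=1):
    """Toy probabilistic failure assessment: sample uncertain parameters of a
    stress/strength engineering model and count failures (stress >= strength).
    Distributions are illustrative, not those of the PFA methodology."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        strength = rng.lognormvariate(mu=2.0, sigma=0.10)  # material capability
        stress = rng.lognormvariate(mu=1.7, sigma=0.15)    # applied load effect
        if stress >= strength:
            failures += 1
    return failures / n

print(f"estimated failure probability: {pfa_failure_probability():.3%}")
```

    In the real methodology the resulting failure probability distribution is then updated against test and flight experience; this sketch covers only the forward sampling step.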

  3. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Astrophysics Data System (ADS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-06-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  4. A Modified Sequential Organ Failure Assessment (MSOFA) Score for Critical Care Triage

    PubMed Central

    Grissom, Colin K.; Brown, Samuel M.; Kuttler, Kathryn G.; Boltax, Jonathan P.; Jones, Jason; Jephson, Al R.; Orme, James F.

    2013-01-01

    Objective: The Sequential Organ Failure Assessment (SOFA) score has been recommended for triage during a mass influx of critically-ill patients, but requires laboratory measurement of four parameters, which may be impractical with constrained resources. We hypothesized that a modified SOFA (MSOFA) score requiring only one laboratory measurement would predict patient outcome as well as the SOFA score. Methods: After a retrospective derivation, in a prospective observational study in a 24-bed medical, surgical, and trauma intensive care unit, we determined serial SOFA and MSOFA scores on all patients admitted during calendar year 2008 and compared their ability to predict mortality and need for mechanical ventilation. Results: 1,770 patients (56% male) with a 30-day mortality of 10.5% were included in the study. Day 1 SOFA and MSOFA scores performed equally well at predicting mortality, with areas under the receiver operating characteristic curve (AUC) of 0.83 (95% CI: 0.81-0.85) and 0.84 (95% CI: 0.82-0.85), respectively (p = 0.33 for comparison). Day 3 SOFA and MSOFA predicted mortality for the 828 patients remaining in the ICU with AUCs of 0.78 and 0.79, respectively. Day 5 scores performed less well at predicting mortality. Day 1 SOFA and MSOFA predicted need for mechanical ventilation on Day 3 with AUCs of 0.83 and 0.82, respectively. Mortality for the highest category of SOFA and MSOFA score (>11 points) was 53% and 58%, respectively. Conclusions: The MSOFA predicts mortality as well as the SOFA and is easier to implement in resource-constrained settings, but using either score as a triage tool would exclude many patients who would otherwise survive. PMID:21149228
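    The AUC figures above come from ROC analysis; equivalently, the AUC is the Mann-Whitney probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor. A sketch of that rank-based computation with hypothetical score lists, not trial data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the fraction
    of (non-survivor, survivor) pairs where the non-survivor scores higher,
    counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical Day-1 scores: non-survivors tend to score higher.
sofa_nonsurv = [12, 9, 14, 5, 11]
sofa_surv = [3, 5, 2, 7, 4, 6, 1]
print(f"AUC = {auc(sofa_nonsurv, sofa_surv):.2f}")   # prints AUC = 0.93
```
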

  5. Assessing the interaction of respiration and heart rate in heart failure and controls using ambulatory Holter recordings.

    PubMed

    Haigney, Mark; Zareba, Wojciech; La Rovere, Maria Teresa; Grasso, Ian; Mortara, David

    2014-01-01

    Breathing is a critical component of cardiopulmonary function, but few tools exist to evaluate respiration in ambulatory patients. Holter monitoring allows accurate diagnosis of a host of cardiac issues, and several investigators have demonstrated the ability to detect respiratory effort on the electrocardiogram. In this study we introduce a myogram signal derived from 12-lead, high-frequency Holter as a means of detecting respiratory effort. Using the combined myogram and ECG signal, four novel variables were created: the total number of Cheyne-Stokes episodes; the BWRatio, the ratio of power (above baseline) measured one second after peak-to-peak respiratory power, an assessment of the "shape" of the respiratory effort; DRR, the change in RR interval centered on peak inspiration; and minutes of synchronized breathing, a fixed ratio of heart beats to respiratory cycles. These variables were assessed in 24-hour recordings from three cohorts: healthy volunteers (n=33), heart failure subjects from the GISSI HF trial (n=383), and subjects receiving implantable defibrillators with severely depressed left ventricular function enrolled in the M2Risk trial (n=470). We observed a statistically significant 6-fold increase in the number of Cheyne-Stokes episodes (p=0.01 by ANOVA) and decreases in BWRatio (p<0.001) and DRR in heart failure subjects; only minutes of synchronized breathing was not significantly decreased in heart failure. This study provides proof of concept that novel variables incorporating Holter-derived respiration can distinguish healthy subjects from those with heart failure. The utility of these variables for predicting heart failure, arrhythmia, and death risk in prospective studies needs to be assessed.

  6. Radiographic and echocardiographic assessment of left atrial size in 100 cats with acute left-sided congestive heart failure.

    PubMed

    Schober, Karsten E; Wetli, Ellen; Drost, Wm Tod

    2014-01-01

    The aim of this study was to evaluate left atrial size in cats with acute left-sided congestive heart failure. We hypothesized that left atrial size as determined by thoracic radiography can be normal in cats with acute left-sided congestive heart failure. One hundred cats with acute left-sided congestive heart failure in which thoracic radiography and echocardiography were performed within 12 h were identified. Left atrial size was evaluated using right lateral and ventrodorsal radiographs. Measurements were compared to two-dimensional echocardiographic variables of left atrial size and left ventricular size. On echocardiography, left atrial enlargement was observed in 96% of cats (subjective assessment) whereas maximum left atrial dimension was increased (>15.7 mm) in 93% of cats. On radiographs left atrial enlargement (subjective assessment) was found in 48% (lateral view), 53% (ventrodorsal view), and 64% (any view) of cats, whereas left atrial enlargement was absent in 36% of cats in both views. Agreement between both methods of left atrial size estimation was poor (Cohen's kappa 0.17). Receiver operating characteristic curve analysis identified a maximum echocardiographic left atrial dimension of approximately 20 mm as the best compromise (Youden index) between sensitivity and specificity in the prediction of radiographic left atrial enlargement. Left atrial enlargement as assessed by thoracic radiography may be absent in a clinically relevant number of cats with congestive heart failure. Therefore, normal left atrial size on thoracic radiographs does not rule out presence of left-sided congestive heart failure in cats with clinical signs of respiratory distress.
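    Two statistics quoted above are easy to make concrete: Cohen's kappa for agreement between the two imaging methods, and the Youden index used to pick the ~20 mm cutoff. A sketch with hypothetical counts, not the study's 2x2 table:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both methods positive, b = only method 1 positive,
    c = only method 2 positive, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                   # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

def youden_j(sensitivity, specificity):
    """Youden index J = sensitivity + specificity - 1; the cutoff that
    maximizes J is the 'best compromise' the abstract refers to."""
    return sensitivity + specificity - 1

# Hypothetical counts and rates for illustration.
print(f"kappa = {cohens_kappa(40, 10, 10, 40):.2f}")
print(f"J     = {youden_j(0.80, 0.70):.2f}")
```
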

  7. Fluid Volume Overload and Congestion in Heart Failure: Time to Reconsider Pathophysiology and How Volume Is Assessed.

    PubMed

    Miller, Wayne L

    2016-08-01

    Volume regulation, assessment, and management remain basic issues in patients with heart failure. The discussion presented here is directed at opening a reassessment of the pathophysiology of congestion in congestive heart failure and the methods by which we determine volume overload status. Peer-reviewed historical and contemporary literatures are reviewed. Volume overload and fluid congestion remain primary issues for patients with chronic heart failure. The pathophysiology is complex, and the simple concept of intravascular fluid accumulation is not adequate. The dynamics of interstitial and intravascular fluid compartment interactions and fluid redistribution from venous splanchnic beds to central pulmonary circulation need to be taken into account in strategies of volume management. Clinical bedside evaluations and right heart hemodynamic assessments can alert clinicians of changes in volume status, but only the quantitative measurement of total blood volume can help identify the heterogeneity in plasma volume and red blood cell mass that are features of volume overload in patients with chronic heart failure and help guide individualized, appropriate therapy; not all volume overload is the same.

  8. Phase diagram of QCD

    SciTech Connect

    Halasz, M.A.; Verbaarschot, J.J.; Jackson, A.D.; Shrock, R.E.; Stephanov, M.A.

    1998-11-01

    We analyze the phase diagram of QCD with two massless quark flavors in the space of temperature T and chemical potential of the baryon charge μ using available experimental knowledge of QCD, insights gained from various models, as well as general and model-independent arguments including continuity, universality, and thermodynamic relations. A random matrix model is used to describe the chiral symmetry restoration phase transition at finite T and μ. In agreement with general arguments, this model predicts a tricritical point in the T-μ plane. Certain critical properties at such a point are universal and can be relevant to heavy ion collision experiments. © 1998 The American Physical Society

  9. Assessment of the probability of failure for EC nondestructive testing based on intrusive spectral stochastic finite element method

    NASA Astrophysics Data System (ADS)

    Oudni, Zehor; Féliachi, Mouloud; Mohellebi, Hassane

    2014-06-01

    This work is undertaken to study the reliability of eddy current nondestructive testing (EC-NDT) when the defect concerns a change in a physical property of the material. An intrusive spectral stochastic finite element method (SSFEM) is developed for the 2D electromagnetic harmonic equation. The electrical conductivity is considered a random variable and is expanded in a series of Hermite polynomials. The developed model is validated against measurements on an NDT device and is applied to the assessment of the probability of failure in steam generator tubing of nuclear power plants. The exploitation of the model concerns the impedance calculation of the sensor and the assessment of the probability of failure. A random defect geometry is also considered and results are given.
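    The core SSFEM ingredient, expanding a random input in Hermite polynomials (a polynomial chaos expansion), can be sketched for a lognormal conductivity. The numerical values are illustrative, and this shows only the expansion itself, not the finite element machinery:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Expand sigma = exp(mu + s*xi), xi ~ N(0,1), in probabilists' Hermite
# polynomials He_k using exp(s*xi) = exp(s^2/2) * sum_k (s^k / k!) He_k(xi).
mu, s, order = math.log(1e6), 0.2, 6      # illustrative conductivity (S/m)
coef = [math.exp(mu + s**2 / 2) * s**k / math.factorial(k)
        for k in range(order + 1)]

rng = np.random.default_rng(0)
xi = rng.standard_normal(200_000)
sigma_pce = hermeval(xi, coef)            # truncated Hermite series
sigma_exact = np.exp(mu + s * xi)

print(f"mean (PCE)   = {sigma_pce.mean():.4e}")
print(f"mean (exact) = {sigma_exact.mean():.4e}")
print(f"max relative truncation error on samples: "
      f"{np.max(np.abs(sigma_pce - sigma_exact) / sigma_exact):.2e}")
```

    In the intrusive SSFEM the unknown field is expanded in the same basis and the Galerkin system is solved for the expansion coefficients; this snippet only verifies that a low-order Hermite series reproduces the random input accurately.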

  10. Knot probabilities in random diagrams

    NASA Astrophysics Data System (ADS)

    Cantarella, Jason; Chapman, Harrison; Mastin, Matt

    2016-10-01

    We consider a natural model of random knotting—choose a knot diagram at random from the finite set of diagrams with n crossings. We tabulate diagrams with 10 and fewer crossings and classify the diagrams by knot type, allowing us to compute exact probabilities for knots in this model. As expected, most diagrams with 10 and fewer crossings are unknots (about 78% of the roughly 1.6 billion 10 crossing diagrams). For these crossing numbers, the unknot fraction is mostly explained by the prevalence of ‘tree-like’ diagrams which are unknots for any assignment of over/under information at crossings. The data shows a roughly linear relationship between the log of knot type probability and the log of the frequency rank of the knot type, analogous to Zipf’s law for word frequency. The complete tabulation and all knot frequencies are included as supplementary data.
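    The reported Zipf-like relationship is a straight line in log-log space. A sketch of the corresponding least-squares fit, using an exact synthetic power law rather than the paper's tabulated knot frequencies:

```python
import math

# Zipf-style check: fit log(probability) against log(rank) by least squares.
ranks = range(1, 21)
probs = [0.5 * r ** -1.2 for r in ranks]           # synthetic p ∝ rank^(-1.2)

xs = [math.log(r) for r in ranks]
ys = [math.log(p) for p in probs]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(f"fitted log-log slope: {slope:.2f}")        # recovers -1.20
```

    With real frequency data the points scatter around the line, and the fitted slope summarizes how fast knot-type probability decays with frequency rank.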

  11. Wilson Loop Diagrams and Positroids

    NASA Astrophysics Data System (ADS)

    Agarwala, Susama; Marin-Amat, Eloi

    2016-07-01

    In this paper, we study a new application of the positive Grassmannian to Wilson loop diagrams (or MHV diagrams) for scattering amplitudes in N= 4 Super Yang-Mill theory (N = 4 SYM). There has been much interest in studying this theory via the positive Grassmannians using BCFW recursion. This is the first attempt to study MHV diagrams for planar Wilson loop calculations (or planar amplitudes) in terms of positive Grassmannians. We codify Wilson loop diagrams completely in terms of matroids. This allows us to apply the combinatorial tools in matroid theory used to identify positroids (non-negative Grassmannians) to Wilson loop diagrams. In doing so, we find that certain non-planar Wilson loop diagrams define positive Grassmannians. While non-planar diagrams do not have physical meaning, this finding suggests that they may have value as an algebraic tool, and deserve further investigation.

  12. Proactive Risk Assessment of Blood Transfusion Process, in Pediatric Emergency, Using the Health Care Failure Mode and Effects Analysis (HFMEA)

    PubMed Central

    Dehnavieh, Reza; Ebrahimipour, Hossein; Molavi-Taleghani, Yasamin; Vafaee-Najar, Ali; Hekmat, Somayeh Noori; Esmailzdeh, Hamid

    2015-01-01

Introduction: Pediatric emergency has been considered a high-risk area, and blood transfusion is a critical clinical measure; therefore, this study was conducted to perform a proactive risk assessment of the blood transfusion process in the Pediatric Emergency of the Qaem education–treatment center in Mashhad, using the Healthcare Failure Mode and Effects Analysis (HFMEA) methodology. Methodology: This cross-sectional study analyzed the failure modes and effects of the blood transfusion process using a mixed quantitative-qualitative method. The proactive HFMEA was used to identify and analyze the potential failures of the process. Information for the items on the HFMEA forms was collected after obtaining consensus from an expert panel via interviews and focus group discussion sessions. Results: A total of 77 failure modes were identified for the 24 sub-processes in the 8 processes of blood transfusion. In all, 13 failure modes were identified as non-acceptable risks (a hazard score above 8) in the blood transfusion process and were transferred to the decision tree. Root causes of high-risk modes were discussed in cause-effect meetings and classified based on the classification model approved by the UK National Health Service (NHS). Action types were classified as acceptance (11.6%), control (74.2%), and elimination (14.2%). Recommendations were placed in 7 categories using TRIZ (the “Theory of Inventive Problem Solving”). Conclusion: Re-engineering the process for the required changes, standardizing and updating the blood transfusion procedure, root cause analysis of catastrophic blood transfusion events, patient identification bracelets, training classes and educational pamphlets to raise personnel awareness, and monthly meetings of the transfusion medicine committee have all been included as executive strategies in the work agenda of the pediatric emergency department. PMID:25560332
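The hazard-scoring step referred to above (failure modes with a hazard score above 8 go to the decision tree) can be sketched as follows. In standard HFMEA practice the hazard score is the product of severity and probability ratings, each on a 1–4 scale; the failure modes and ratings below are hypothetical, not taken from the study:

```python
# Minimal sketch of the HFMEA hazard-scoring step. Severity and
# probability are each rated 1-4; their product is the hazard score,
# and modes scoring above 8 are forwarded to the decision tree.
# The failure modes and ratings here are hypothetical examples.
failure_modes = [
    ("wrong patient identification", 4, 3),  # (name, severity, probability)
    ("mislabeled blood sample", 4, 2),
    ("delayed transfusion start", 2, 3),
]

def hazard_score(severity, probability):
    return severity * probability

high_risk = [(name, hazard_score(s, p))
             for name, s, p in failure_modes
             if hazard_score(s, p) > 8]
print(high_risk)  # → [('wrong patient identification', 12)]
```

In a full HFMEA, modes that cross the threshold are then screened with the decision tree (criticality, detectability, existing controls) before corrective actions are assigned.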

  13. Warped penguin diagrams

    SciTech Connect

    Csaki, Csaba; Grossman, Yuval; Tanedo, Philip; Tsai, Yuhsin

    2011-04-01

We present an analysis of the loop-induced magnetic dipole operator in the Randall-Sundrum model of a warped extra dimension with anarchic bulk fermions and an IR brane-localized Higgs. These operators are finite at one-loop order and we explicitly calculate the branching ratio for μ → eγ using the mixed position/momentum space formalism. The particular bound on the anarchic Yukawa and Kaluza-Klein (KK) scales can depend on the flavor structure of the anarchic matrices. It is possible for a generic model to either be ruled out or unaffected by these bounds without any fine-tuning. We quantify how these models realize this surprising behavior. We also review tree-level lepton flavor bounds in these models and show that these are on the verge of tension with the μ → eγ bounds from typical models with a 3 TeV Kaluza-Klein scale. Further, we illuminate the nature of the one-loop finiteness of these diagrams and show how to accurately determine the degree of divergence of a five-dimensional loop diagram using both the five-dimensional and KK formalism. This power counting can be obfuscated in the four-dimensional Kaluza-Klein formalism and we explicitly point out subtleties that ensure that the two formalisms agree. Finally, we remark on the existence of a perturbative regime in which these one-loop results give the dominant contribution.

  14. Traditional and new composite endpoints in heart failure clinical trials: facilitating comprehensive efficacy assessments and improving trial efficiency.

    PubMed

    Anker, Stefan D; Schroeder, Stefan; Atar, Dan; Bax, Jeroen J; Ceconi, Claudio; Cowie, Martin R; Crisp, Adam; Dominjon, Fabienne; Ford, Ian; Ghofrani, Hossein-Ardeschir; Gropper, Savion; Hindricks, Gerhard; Hlatky, Mark A; Holcomb, Richard; Honarpour, Narimon; Jukema, J Wouter; Kim, Albert M; Kunz, Michael; Lefkowitz, Martin; Le Floch, Chantal; Landmesser, Ulf; McDonagh, Theresa A; McMurray, John J; Merkely, Bela; Packer, Milton; Prasad, Krishna; Revkin, James; Rosano, Giuseppe M C; Somaratne, Ransi; Stough, Wendy Gattis; Voors, Adriaan A; Ruschitzka, Frank

    2016-05-01

    Composite endpoints are commonly used as the primary measure of efficacy in heart failure clinical trials to assess the overall treatment effect and to increase the efficiency of trials. Clinical trials still must enrol large numbers of patients to accrue a sufficient number of outcome events and have adequate power to draw conclusions about the efficacy and safety of new treatments for heart failure. Additionally, the societal and health system perspectives on heart failure have raised interest in ascertaining the effects of therapy on outcomes such as repeat hospitalization and the patient's burden of disease. Thus, novel methods for using composite endpoints in clinical trials (e.g. clinical status composite endpoints, recurrent event analyses) are being applied in current and planned trials. Endpoints that measure functional status or reflect the patient experience are important but used cautiously because heart failure treatments may improve function yet have adverse effects on mortality. This paper discusses the use of traditional and new composite endpoints, identifies qualities of robust composites, and outlines opportunities for future research. PMID:27071916

  15. Software Tools for Lifetime Assessment of Thermal Barrier Coatings Part II — Bond Coat Aluminum Depletion Failure

    NASA Astrophysics Data System (ADS)

    Renusch, Daniel; Rudolphi, Mario; Schütze, Michael

The use of thermal barrier coatings (TBCs) made from yttria-stabilized zirconia (YSZ) on superalloy base materials has been a significant step to a new level of operational limits in high-temperature applications. By applying a TBC in conjunction with cooling of the component material, the operating temperature can be raised and higher efficiencies achieved. As a consequence of the raised temperature, failure of a TBC leads to increased oxidative attack of the underlying bond coat material and therefore needs to be avoided. Lifetime prediction of thermal barrier coatings is therefore of interest to ensure safe operation within inspection intervals. Several mechanisms have been identified to play a critical role in the degradation of TBC systems. Here we discuss failure of TBC systems due to bond coat aluminum depletion. This type of chemical failure may occur when the bond coat material is critically depleted of aluminum and, instead of a dense, slow-growing α-alumina, the formation of voluminous and fast-growing spinels is promoted. Lifetime prediction for this failure mode requires a fundamental understanding of diffusion mechanisms and, in particular, of the interaction of different diffusion rates in the bond coat and substrate material. The aim of this work was therefore to develop software tools that allow user-friendly analysis of measured Al profiles for the assessment of diffusion rates and, consequently, for lifetime prediction.

  16. Pregnancy in women with heart disease: risk assessment and management of heart failure.

    PubMed

    Grewal, Jasmine; Silversides, Candice K; Colman, Jack M

    2014-01-01

    Heart disease, present in 0.5% to 3% of pregnant women, is an important cause of morbidity and the leading cause of death among pregnant women in the developed world. Certain heart conditions are associated with an increased risk of heart failure during pregnancy or the postpartum period; for these conditions, management during pregnancy benefits from multidisciplinary care at a center with expertise in pregnancy and heart disease. This article focuses on cardiac risks and management strategies for women with acquired and congenital heart disease who are at increased risk of heart failure during pregnancy.

  17. Using Eye Tracking to Investigate Semantic and Spatial Representations of Scientific Diagrams during Text-Diagram Integration

    ERIC Educational Resources Information Center

    Jian, Yu-Cin; Wu, Chao-Jung

    2015-01-01

    We investigated strategies used by readers when reading a science article with a diagram and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while tracking their eye movements and then completed a reading comprehension test. Our…

  18. Heart Failure

    MedlinePlus

What is Heart Failure? In heart failure, the heart cannot pump enough … failure often experience tiredness and shortness of breath. Heart Failure is Serious: Heart failure is a serious and …

  19. Using Eye Tracking to Investigate Semantic and Spatial Representations of Scientific Diagrams During Text-Diagram Integration

    NASA Astrophysics Data System (ADS)

    Jian, Yu-Cin; Wu, Chao-Jung

    2015-02-01

We investigated strategies used by readers when reading a science article with a diagram and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while their eye movements were tracked and then completed a reading comprehension test. Our results showed that the text-diagram referencing strategy was commonly used. However, some readers adopted other reading strategies, such as reading the diagram or text first. We found that all readers who had referred to the diagram spent roughly the same amount of time reading and performed equally well. However, some participants who ignored the diagram performed more poorly on questions that tested understanding of basic facts. This result indicates that dual coding theory may explain the phenomenon. Eye movement patterns indicated that at least some readers had extracted semantic information from the scientific terms when first looking at the diagram. Readers who read the scientific terms on the diagram first tended to spend less time looking at the same terms in the text, which they read afterwards. Moreover, clearly presented diagrams can help readers process both semantic and spatial information, thereby facilitating an overall understanding of the article. In addition, although text-first and diagram-first readers spent similar total reading time on the text and diagram parts of the article, respectively, text-first readers made significantly fewer saccades between text and diagram than diagram-first readers. This result might be explained by text-directed reading.

  20. Risk assessment of the emergency processes: Healthcare failure mode and effect analysis

    PubMed Central

    Taleghani, Yasamin Molavi; Rezaei, Fatemeh; Sheikhbardsiri, Hojat

    2016-01-01

BACKGROUND: Ensuring patient safety is the first vital step in improving the quality of care, and the emergency ward is known as a high-risk area in health care. The present study was conducted to evaluate selected high-risk processes of the emergency surgery department of the Qaem treatment-educational center in Mashhad using Healthcare Failure Mode and Effects Analysis (HFMEA). METHODS: In this combined study (qualitative action research and quantitative cross-sectional), the failure modes and effects of 5 high-risk procedures of the emergency surgery department were identified and analyzed according to HFMEA. The “nursing errors in clinical management model” (NECM) was used to classify the failure modes, the “Eindhoven model” to classify the effective causes of error, and the “theory of solving problems by an inventive method” to determine improvement strategies. Descriptive statistics (total scores) were used to analyze the quantitative data; content analysis and agreement among the members' comments were used to analyze the qualitative data. RESULTS: In the 5 processes selected by a rating-based voting method, 23 steps, 61 sub-processes, and 217 potential failure modes were identified by HFMEA. Of these, 25 (11.5%) failure modes were detected as high-risk errors and transferred to the decision tree. The most and the fewest failure modes fell into the categories of care errors (54.7%) and knowledge and skill (9.5%), respectively. Also, 29.4% of the preventive measures were in the category of human resource management strategy. CONCLUSION: “Revision and re-engineering of processes”, “continuous monitoring of the work”, “preparation and revision of operating procedures and policies”, “developing criteria for evaluating the performance of the personnel”, “designing educational content suited to employees' needs”,

  1. Program Synthesizes UML Sequence Diagrams

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2006-01-01

A computer program called "Rational Sequence" generates Unified Modeling Language (UML) sequence diagrams of a target Java program running on a Java virtual machine (JVM). Rational Sequence thereby performs a reverse engineering function that aids in the design documentation of the target Java program. Whereas previously the construction of sequence diagrams was a tedious manual process, Rational Sequence generates UML sequence diagrams automatically from the running Java code.

2. FeynChois: a Feynman Diagram Generator

    NASA Astrophysics Data System (ADS)

    Choi, Chul-Woo; Gonsalves, Richard J.

A Feynman diagram generator, named FeynChois, is described. It provides the user with a full GUI (Graphical User Interface) environment which enables the generation of diagrams automatically with a few mouse operations. The diagram generator is built on an Application Programming Interface (API) called ViewableBeans, which provides a framework for programming graphically representable objects. We also present a means for describing Feynman rules in a computer-friendly manner using the XML (Extensible Markup Language) format.

  3. Micro-compression: a novel technique for the nondestructive assessment of local bone failure.

    PubMed

    Müller, R; Gerber, S C; Hayes, W C

    1998-12-01

    Many bones within the axial and appendicular skeleton are subjected to repetitive, cyclic loading during the course of ordinary daily activities. If this repetitive loading is of sufficient magnitude or duration, fatigue failure of the bone tissue may result. In clinical orthopedics, trabecular fatigue fractures are observed as compressive stress fractures in the proximal femur, vertebrae, calcaneus and tibia, and are often preceded by buckling and bending of microstructural elements. However, the relative importance of bone density and architecture in the etiology of these fractures is poorly understood. The aim of the study was to investigate failure mechanisms of 3D trabecular bone using micro-computed tomography (microCT). Because of its nondestructive nature, microCT represents an ideal approach for performing not only static measurements of bone architecture but also dynamic measurements of failure initiation and propagation as well as damage accumulation. For the purpose of the study, a novel micro-compression device was devised to measure loaded trabecular bone specimens directly in a micro-tomographic system. The measurement window in the device was made of a radiolucent, highly stiff plastic to enable X-rays to penetrate the material. The micro-compressor has an outer diameter of 19 mm and a total length of 65 mm. The internal load chamber fits wet or dry bone specimens with maximal diameters of 9 mm and maximal lengths of 22 mm. For the actual measurement, first, the unloaded bone is measured in the microCT. Second, a load-displacement curve is recorded where the load is measured with an integrated mini-button load cell and the displacement is computed directly from the microCT scout-view. For each load case, a 3D snap-shot of the structure under load is taken providing 34 microm nominal resolution. Initial measurements included specimens from bovine tibiae and whale spine to investigate the influence of the structure type on the failure mechanism. 

  4. Diagonal Slices of 3D Young Diagrams in the Approach of Maya Diagrams

    NASA Astrophysics Data System (ADS)

    Cai, Li-Qiang; Wang, Li-Fang; Wu, Ke; Yang, Jie

    2014-09-01

    According to the correspondence between 2D Young diagrams and Maya diagrams and the relation between 2D and 3D Young diagrams, we construct 3D Young diagrams in the approach of Maya diagrams. Moreover, we formulate the generating function of 3D Young diagrams, which is the MacMahon function in terms of Maya diagrams.
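The MacMahon function mentioned above, M(q) = ∏_{n≥1} (1 − q^n)^(−n), can be expanded term by term to recover the counts of 3D Young diagrams (plane partitions); a minimal sketch of that expansion:

```python
# Expand the MacMahon function M(q) = prod_{n>=1} (1 - q^n)^(-n);
# the coefficient of q^N counts the plane partitions (3D Young
# diagrams) of N.
N = 8  # truncation order

coeffs = [0] * (N + 1)
coeffs[0] = 1  # M(q) = 1 + ...
for n in range(1, N + 1):
    # Multiply by (1 - q^n)^(-n), one factor (1 - q^n)^(-1) at a time;
    # the ascending in-place update implements the geometric series.
    for _ in range(n):
        for k in range(n, N + 1):
            coeffs[k] += coeffs[k - n]
print(coeffs)  # → [1, 1, 3, 6, 13, 24, 48, 86, 160]
```

Factors with n > N do not contribute below order q^(N+1), so truncating the product at n = N is exact to this order.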

  5. Potential-pH Diagrams.

    ERIC Educational Resources Information Center

    Barnum, Dennis W.

    1982-01-01

Potential-pH diagrams show the domains of redox potential and pH in which major species are most stable. Constructing such diagrams provides students with opportunities to decide what species must be considered, to search the literature for equilibrium constants and free energies of formation, and to practice using the Nernst equation. (Author/JN)
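As the abstract notes, constructing such a diagram relies on the Nernst equation: at 25 °C, the boundary for a half-reaction consuming m protons and n electrons is a straight line of slope −0.0592·(m/n) V per pH unit. A minimal sketch, using the hydrogen line 2H⁺ + 2e⁻ → H₂ (E⁰ = 0 V) and assuming unit activities for all other species:

```python
# Nernst-equation sketch for one boundary line of a potential-pH
# (Pourbaix) diagram. For a half-reaction consuming m protons and
# n electrons, at 25 degrees C and unit activities otherwise:
#   E = E0 - (0.0592 * m / n) * pH   (volts vs. SHE)
# Shown for the hydrogen line 2H+ + 2e- -> H2 (E0 = 0 V, m = n = 2).

def boundary_potential(E0, m, n, pH):
    return E0 - 0.0592 * (m / n) * pH

for pH in (0, 7, 14):
    E = boundary_potential(0.0, 2, 2, pH)
    print(f"pH {pH:2d}: E = {E:+.3f} V")
```

Boundaries between two dissolved species at fixed activity ratio follow the same form with the appropriate E⁰, m, and n, which is where the literature search for equilibrium data comes in.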

  6. Risk Assessment of Using Entonox for the Relief of Labor Pain: A Healthcare Failure Modes and Effects Analysis Approach

    PubMed Central

    Najafi, Tahereh Fathi; Bahri, Narjes; Ebrahimipour, Hosein; Najar, Ali Vafaee; Taleghani, Yasamin Molavi

    2016-01-01

Introduction: In order to prevent medical errors, it is important to know why they occur and to identify their causes. Healthcare failure modes and effects analysis (HFMEA) is a qualitative, descriptive method used to evaluate risk. The aim of this study was to assess the risks of using Entonox for labor pain by HFMEA. Methods: A mixed-methods design (qualitative action research and quantitative cross-sectional research) was used. The modes and effects of failures in the process of using Entonox were detected and analyzed during 2013–2014 at Hefdahe Shahrivar Hospital, Mashhad, Iran. Overall, 52 failure modes were identified, with 25 recognized as high-risk modes. Results: The results revealed that 48.5% of these errors fall into the care process type, 22.05% belong to the communicative type, 19.1% fall into the administrative type, and 10.2% are of the knowledge and skills type. Strategies were presented in the forms of acceptance (3.2%), control (90.3%), and elimination (6.4%). Conclusion: The following actions are suggested for improving the process of using Entonox: close supervision by the midwife, precise recording of all stages of the process in the woman's medical record, ensuring the presence of the anesthesiologist at the woman's bedside during labor, confirming the indications for use of Entonox, and close monitoring to ensure the safety of the gas cylinder guards. PMID:27123224

  7. Failure assessment of aluminum liner based filament-wound hybrid riser subjected to internal hydrostatic pressure

    NASA Astrophysics Data System (ADS)

    Dikshit, Vishwesh; Seng, Ong Lin; Maheshwari, Muneesh; Asundi, A.

    2015-03-01

The present study describes the burst behavior of an aluminum-liner-based prototype filament-wound hybrid riser under internal hydrostatic pressure. The main objectives were to develop an internal pressure test rig for the filament-wound hybrid riser and to investigate its failure modes under internal hydrostatic burst pressure loading. The prototype filament-wound hybrid riser used for the burst test consists of an internal aluminum liner and an outer composite layer. The carbon-epoxy composite layers of the filament-wound hybrid risers were manufactured with a [±55°] lay-up pattern, with a total composite layer thickness of 1.6 mm, using a CNC filament-winding machine. The burst test was monitored by a video camera, which helped in analyzing the failure mechanism of the fractured filament-wound hybrid riser. A fiber Bragg grating (FBG) sensor was used to monitor and record the strain changes during the burst test of the prototype filament-wound hybrid riser. This study shows a good improvement in the burst strength of the filament-wound hybrid riser compared to a monolithic metallic riser. Since strain measurement using FBG sensors proved to be a reliable method, we aim to use this technique for further detailed understanding.

  8. Development of Methodology to Assess the Failure Behaviour of Bamboo Single Fibre by Acoustic Emission Technique

    NASA Astrophysics Data System (ADS)

    Alam, Md. Saiful; Gulshan, Fahmida; Ahsan, Qumrul; Wevers, Martine; Pfeiffer, Helge; van Vuure, Aart-Willem; Osorio, Lina; Verpoest, Ignaas

    2016-06-01

Acoustic emission (AE) was used as a tool for detecting, evaluating, and better understanding the damage mechanisms and failure behavior of composites during mechanical loading. A methodology was developed for the tensile testing of natural fibres (bamboo single fibres). A series of experiments was performed, and load drops (one or two) were observed in the load-versus-time graphs. From the observed AE parameters, such as amplitude, energy, and duration, significant information corresponding to the load drops was found. These AE signals at the load drops originated from failures such as debonding between two elementary fibres or at joins of elementary fibres at an edge. The load at the first load drop was not consistent across samples (for one particular sample the value was 8 N, at a stress of 517.51 MPa). Final breaking of the fibre corresponded to the saturation level of the preamplifier AE amplitude (99.9 dB) for all samples; therefore, it was not possible to determine the exact AE energy value for final breaking. The same methodology was used for the tensile testing of three single fibres, which gave a clear indication of a load drop before the final breaking of the first and second fibres.

  9. An assessment of BWR (boiling water reactor) Mark-II containment challenges, failure modes, and potential improvements in performance

    SciTech Connect

    Kelly, D.L.; Jones, K.R.; Dallman, R.J. ); Wagner, K.C. )

    1990-07-01

This report assesses challenges to BWR Mark II containment integrity that could potentially arise from severe accidents. Also assessed are some potential improvements that could prevent core damage or containment failure, or could mitigate the consequences of such failure by reducing the release of fission products to the environment. These challenges and improvements are analyzed via a limited quantitative risk/benefit analysis of a generic BWR/4 reactor with Mark II containment. Point estimate frequencies of the dominant core damage sequences are obtained and simple containment event trees are constructed to evaluate the response of the containment to these severe accident sequences. The resulting containment release modes are then binned into source term release categories, which provide inputs to the consequence analysis. The output of the consequence analysis is used to construct an overall base case risk profile. Potential improvements and sensitivities are evaluated by modifying the event tree split fractions, thus generating a revised risk profile. Several important sensitivity cases are examined to evaluate the impact of phenomenological uncertainties on the final results. 75 refs., 25 figs., 65 tabs.

  10. Students' different understandings of class diagrams

    NASA Astrophysics Data System (ADS)

    Boustedt, Jonas

    2012-03-01

    The software industry needs well-trained software designers and one important aspect of software design is the ability to model software designs visually and understand what visual models represent. However, previous research indicates that software design is a difficult task to many students. This article reports empirical findings from a phenomenographic investigation on how students understand class diagrams, Unified Modeling Language (UML) symbols, and relations to object-oriented (OO) concepts. The informants were 20 Computer Science students from four different universities in Sweden. The results show qualitatively different ways to understand and describe UML class diagrams and the "diamond symbols" representing aggregation and composition. The purpose of class diagrams was understood in a varied way, from describing it as a documentation to a more advanced view related to communication. The descriptions of class diagrams varied from seeing them as a specification of classes to a more advanced view, where they were described to show hierarchic structures of classes and relations. The diamond symbols were seen as "relations" and a more advanced way was seeing the white and the black diamonds as different symbols for aggregation and composition. As a consequence of the results, it is recommended that UML should be adopted in courses. It is briefly indicated how the phenomenographic results in combination with variation theory can be used by teachers to enhance students' possibilities to reach advanced understanding of phenomena related to UML class diagrams. Moreover, it is recommended that teachers should put more effort in assessing skills in proper usage of the basic symbols and models and students should be provided with opportunities to practise collaborative design, e.g. using whiteboards.

  11. Scattering equations and Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Baadsgaard, Christian; Bjerrum-Bohr, N. E. J.; Bourjaily, Jacob L.; Damgaard, Poul H.

    2015-09-01

We show a direct matching between individual Feynman diagrams and integration measures in the scattering equation formalism of Cachazo, He and Yuan. The connection is most easily explained in terms of triangular graphs associated with planar Feynman diagrams in φ³ theory. We also discuss the generalization to general scalar field theories with φ^p interactions, corresponding to polygonal graphs involving vertices of order p. Finally, we describe how the same graph-theoretic language can be used to provide the precise link between individual Feynman diagrams and string theory integrands.

  12. Assessing cell fusion and cytokinesis failure as mechanisms of clone 9 hepatocyte multinucleation in vitro.

    PubMed

    Simic, Damir; Euler, Catherine; Thurby, Christina; Peden, Mike; Tannehill-Gregg, Sarah; Bunch, Todd; Sanderson, Thomas; Van Vleet, Terry

    2012-08-01

    In this in vitro model of hepatocyte multinucleation, separate cultures of rat Clone 9 cells are labeled with either red or green cell tracker dyes (Red Cell Tracker CMPTX or Vybrant CFDA SE Cell Tracer), plated together in mixed-color colonies, and treated with positive or negative control agents for 4 days. The fluorescent dyes become cell-impermeant after entering cells and are not transferred to adjacent cells in a population, but are inherited by daughter cells after fusion. The mixed-color cultures are then evaluated microscopically for multinucleation and analysis of the underlying mechanism (cell fusion/cytokinesis). Multinucleated cells containing only one dye have undergone cytokinesis failure, whereas dual-labeled multinucleated cells have resulted from fusion.

  13. Outpatient Use of Focused Cardiac Ultrasound to Assess the Inferior Vena Cava in Patients With Heart Failure.

    PubMed

    Saha, Narayan M; Barbat, Julian J; Fedson, Savitri; Anderson, Allen; Rich, Jonathan D; Spencer, Kirk T

    2015-10-15

    Accurate assessment of volume status is critical in the management of patients with heart failure (HF). We studied the utility of a pocket-sized ultrasound device in an outpatient cardiology clinic as a tool to guide volume assessment. Inferior vena cava (IVC) size and collapsibility were assessed in 95 patients by residents briefly trained in focused cardiac ultrasound (FCU). Cardiologist assessment of volume status and changes in diuretic medication were also recorded. Patients were followed for occurrence of 30-day events. There was a 94% success rate of obtaining IVC size and collapsibility, and agreement between visual and calculated IVC parameters was excellent. Most patients were euvolemic by both FCU IVC and clinical bedside assessment (51%) and had no change in diuretic dose. Thirty-two percent had discrepant FCU IVC and clinical volume assessments. In clinically hypervolemic patients, the FCU evaluation of the IVC suggested that the wrong diuretic management plan might have been made 46% of the time. At 30 days, 14 events occurred. The incidence of events increased significantly with FCU IVC imaging categorization, from 11% to 23% to 36% in patients with normal, intermediate, and plethoric IVCs. By comparison, when grouped in a binary manner, there was no significant difference in event rates for patients who were deemed to be clinically volume overloaded. Assessment of volume status in an outpatient cardiology clinic using FCU imaging of the IVC is feasible in a high percentage of patients. A group of patients were identified with volume status discordant between FCU IVC and routine clinic assessment, suggesting that IVC parameters may provide a valuable supplement to the in-office physical examination.

  14. Reliability of burst superimposed technique to assess central activation failure during fatiguing contraction.

    PubMed

    Dousset, Erick; Jammes, Yves

    2003-04-01

Recording a superimposed electrically-induced contraction at the limit of endurance during voluntary contraction is used as an indicator of failure of muscle activation by the central nervous system, ruling out peripheral muscle fatigue. We questioned the reliability of this method by using other means to explore peripheral muscle failure. Fifteen normal subjects sustained a handgrip at 60% of maximal voluntary contraction (MVC) until exhaustion. During the sustained contraction, power spectrum analysis of the flexor digitorum surface electromyogram allowed us to calculate the leftward shift of the median frequency (MF). A superimposed 60 Hz, 3 s pulse train (burst superimposition) was delivered to the muscle when force levelled off close to the preset value. Immediately after the fatigue trial had ended, the subject was asked to perform a 5 s 60% MVC and we measured the peak contractile response to a 60 Hz, 3 s burst stimulation. Recordings of the compound evoked muscle action potential (M-wave) allowed us to explore any impairment of neuromuscular propagation. A superimposed contraction was measured in both forearms of 7 subjects, whereas it was absent in the 8 others. Despite these discrepancies, all subjects were able to reproduce a 3 s 60% MVC immediately after the fatigue trial ended, and there was no post-fatigue decrease of the contraction elicited by the 60 Hz, 3 s burst stimulation, nor any decrease in M-wave amplitude or conduction time. Thus, there was no indication of peripheral muscle fatigue. The MF decrease was present in all individuals throughout the fatiguing contraction and was not correlated with the magnitude of the superimposed force. These observations indicate that the absence of a superimposed electrically-induced muscle contraction does not allow one to conclude that muscle fatigue is solely peripheral in these circumstances.

  15. Alterations in left ventricular diastolic function in chronic ischemic heart failure. Assessment by radionuclide angiography.

    PubMed

    Bareiss, P; Facello, A; Constantinesco, A; Demangeat, J L; Brunot, B; Arbogast, R; Roul, G

    1990-02-01

    Using radionuclide angiography at rest, we studied several parameters of left ventricular systolic and diastolic function in 60 patients divided into three groups: a control group (G1) of 15 patients and two groups of patients with chronic ischemic heart disease and previous anterior wall myocardial infarction but without aneurysm or dyskinetic wall motion, a second group (G2) of 23 patients with no history of heart failure, and a third group (G3) of 22 patients in New York Heart Association (NYHA) class II or III of heart failure. Ejection fraction, peak emptying, and peak filling rates, in addition to times to reach peak rates, were evaluated after constructing a global time-activity curve and its first time derivative. In addition, we computed the first time-derivative curves for each image pixel and obtained functional images (MIN/MAX images) representing the distribution of times to peak emptying or filling rates. Using a left ventricular region of interest, time histograms were generated, and indexes of dispersion of times to peak rates, defined as the full width at half maximum of the histograms, were obtained. Significant (p less than or equal to 0.01) differences were observed among all groups for ejection fraction, peak emptying rate, and peak filling rate. The decrease of the peak filling rate remained significant from group G1 to group G3 even after adjustment for differences in ejection fraction and heart rate. Peak filling rate was linearly correlated with ejection fraction in the population with ischemic heart disease (G2 + G3) (r = 0.68, p less than or equal to 0.0001). (ABSTRACT TRUNCATED AT 250 WORDS) PMID:2297884

  16. Particles, Feynman Diagrams and All That

    ERIC Educational Resources Information Center

    Daniel, Michael

    2006-01-01

    Quantum fields are introduced in order to give students an accurate qualitative understanding of the origin of Feynman diagrams as representations of particle interactions. Elementary diagrams are combined to produce diagrams representing the main features of the Standard Model.

  17. The Hertzsprung-Russell Diagram.

    ERIC Educational Resources Information Center

    Woodrow, Janice

    1991-01-01

    Describes a classroom use of the Hertzsprung-Russell diagram to infer not only the properties of a star but also the star's probable stage in evolution, life span, and age of the cluster in which it is located. (ZWH)

  18. Atemporal diagrams for quantum circuits

    SciTech Connect

    Griffiths, Robert B.; Wu Shengjun; Yu Li; Cohen, Scott M.

    2006-05-15

    A system of diagrams is introduced that allows the representation of various elements of a quantum circuit, including measurements, in a form which makes no reference to time (hence 'atemporal'). It can be used to relate quantum dynamical properties to those of entangled states (map-state duality), and suggests useful analogies, such as the inverse of an entangled ket. Diagrams clarify the role of channel kets, transition operators, dynamical operators (matrices), and Kraus rank for noisy quantum channels. Positive (semidefinite) operators are represented by diagrams with a symmetry that aids in understanding their connection with completely positive maps. The diagrams are used to analyze standard teleportation and dense coding, and for a careful study of unambiguous (conclusive) teleportation. A simple diagrammatic argument shows that a Kraus rank of 3 is impossible for a one-qubit channel modeled using a one-qubit environment in a mixed state.

  19. Program Helps In Analysis Of Failures

    NASA Technical Reports Server (NTRS)

    Stevenson, R. W.; Austin, M. E.; Miller, J. G.

    1993-01-01

    Failure Environment Analysis Tool (FEAT) computer program developed to enable people to see and better understand effects of failures in system. User selects failures from either engineering schematic diagrams or digraph-model graphics, and effects or potential causes of failures highlighted in color on same schematic-diagram or digraph representation. Uses digraph models to answer two questions: What will happen to system if set of failure events occurs? and What are possible causes of set of selected failures? Helps design reviewers understand exactly what redundancies built into system and where there is need to protect weak parts of system or remove them by redesign. Program also useful in operations, where it helps identify causes of failure after they occur. FEAT reduces costs of evaluation of designs, training, and learning how failures propagate through system. Written using Macintosh Programmers Workshop C v3.1. Can be linked with CLIPS 5.0 (MSC-21927, available from COSMIC).

  20. BRIEF REPORT: Failure of an Electronic Medical Record Tool to Improve Pain Assessment Documentation

    PubMed Central

    Saigh, Orit; Triola, Marc M; Link, R Nathan

    2006-01-01

    OBJECTIVE To comply with pain management standards, Bellevue Hospital in New York City implemented a mandatory computerized pain assessment screen (PAS) in its electronic medical record (EMR) system for every outpatient encounter. We assessed provider acceptance of the instrument and examined whether the intervention led to increased documentation of pain-related diagnoses or inquiries. DESIGN Cross-sectional survey; a pre-and posthistorically controlled observational study. SUBJECTS AND MEASUREMENTS The utility of the computerized tool to medicine housestaff and attendings was assessed by an anonymous survey. We conducted an electronic chart review comparing all adult primary care patient encounters over a 2-day period 6 months prior to implementation of the PAS and on 2 days 6 months after its implementation. RESULTS Forty-seven percent of survey respondents felt that the computerized assessment tool was “somewhat difficult” or “very difficult” to use. The majority of respondents (79%) felt the tool did not change their pain assessment practice. Of 265 preintervention patients and 364 postintervention patients seen in the clinic, 42% and 37% had pain-related diagnoses, respectively (P=.29). Pain inquiry by the physician was noted for 49% of preintervention patients and 44% of the postintervention patients (P=.26). In 55% of postintervention encounters, there was discordance between the pain documentation using the PAS tool and the free text section of the medical note. CONCLUSION A mandatory computerized pain assessment tool did not lead to an increase in pain-related diagnoses and may have hindered the documentation of pain assessment because of the perceived burden of using the application. PMID:16606379

  1. Risk assessment of drain valve failure in the K-West basin south loadout pit

    SciTech Connect

    MORGAN, R.G.

    1999-06-23

    The drain valve located in the bottom of the K-West Basin South Loadout Pit (SLOP) could provide an additional leak path from the K Basins if the drain valve were damaged during construction, installation, or operation of the cask loading system. For the K-West Basin SLOP the immersion pail support structure (IPSS) has already been installed, but the immersion pail has not been installed in the IPSS. The objective of this analysis is to evaluate the risk of damaging the drain valve during the remaining installation activities or operation of the cask loading system. Valve damage, as used in this analysis, does not necessarily imply that large amounts of water will be released quickly from the basin; rather, valve damage implies that the valve's integrity has been compromised. The analysis process is a risk-based uncertainty analysis in which best engineering judgement is used to represent each variable. The uncertainty associated with each variable is represented by a probability distribution, and the uncertainty is propagated through the analysis by Monte Carlo convolution techniques. The results are developed as a probability distribution and the risk is expressed in terms of the corresponding complementary cumulative distribution function (''risk curve''). The total risk is the area under the ''risk curve''. The risk of dropping a cask into or onto the IPSS and damaging the drain valve is approximately 1 x 10{sup -4} to 2 x 10{sup -5} per year. The risk of objects falling behind the IPSS and damaging the valve is 3 x 10{sup -2} to 6 x 10{sup -3} per year. Both risks are expressed as drain valve failure frequencies. The risk of objects falling behind the IPSS and damaging the valve can be significantly reduced by an impact limiter and/or by installing a grating or plate over the area bounded by the back of the IPSS and the wall of the SLOP.
With either of these actions there is a 90 percent confidence that the frequency of drain valve
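The risk-curve construction described above (propagate each variable's uncertainty by Monte Carlo, then report the complementary cumulative distribution function of the failure frequency) can be sketched in a few lines. The input distributions and variable names below are purely illustrative assumptions, not the report's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo trials

# Illustrative (assumed) input distributions for a drop-damage scenario.
drop_freq_per_year = rng.lognormal(mean=np.log(1e-3), sigma=0.7, size=n)
p_valve_damage_given_drop = rng.uniform(0.05, 0.2, size=n)

# Propagate uncertainty: one failure frequency per sampled trial.
failure_freq = drop_freq_per_year * p_valve_damage_given_drop

# "Risk curve": complementary CDF, i.e. the probability that the
# failure frequency exceeds each threshold level.
levels = np.logspace(-7, -2, 60)
ccdf = np.array([(failure_freq > x).mean() for x in levels])

# Total risk = expected failure frequency (area under the risk curve).
total_risk = failure_freq.mean()
```

Plotting `ccdf` against `levels` on log-log axes gives the "risk curve" described in the abstract; the spread of the curve visualizes the uncertainty that a single point estimate would hide.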

  2. Chronic Liver Failure-Sequential Organ Failure Assessment is better than the Asia-Pacific Association for the Study of Liver criteria for defining acute-on-chronic liver failure and predicting outcome

    PubMed Central

    Dhiman, Radha K; Agrawal, Swastik; Gupta, Tarana; Duseja, Ajay; Chawla, Yogesh

    2014-01-01

    AIM: To compare the utility of the Chronic Liver Failure-Sequential Organ Failure Assessment (CLIF-SOFA) and Asia-Pacific Association for the Study of Liver (APASL) definitions of acute-on-chronic liver failure (ACLF) in predicting short-term prognosis of patients with ACLF. METHODS: Consecutive patients of cirrhosis with acute decompensation were prospectively included. They were grouped into ACLF and no ACLF groups as per CLIF-SOFA and APASL criteria. Patients were followed up for 3 mo from inclusion or mortality whichever was earlier. Mortality at 28-d and 90-d was compared between no ACLF and ACLF groups as per both criteria. Mortality was also compared between different grades of ACLF as per CLIF-SOFA criteria. Prognostic scores like CLIF-SOFA, Acute Physiology and Chronic Health Evaluation (APACHE)-II, Child-Pugh and Model for End-Stage Liver Disease (MELD) scores were evaluated for their ability to predict 28-d mortality using area under receiver operating curves (AUROC). RESULTS: Of 50 patients, 38 had ACLF as per CLIF-SOFA and 19 as per APASL criteria. Males (86%) were predominant, alcoholic liver disease (68%) was the most common etiology of cirrhosis, sepsis (66%) was the most common cause of acute decompensation while infection (66%) was the most common precipitant of acute decompensation. The 28-d mortality in no ACLF and ACLF groups was 8.3% and 47.4% (P = 0.018) as per CLIF-SOFA and 39% and 37% (P = 0.895) as per APASL criteria. The 28-d mortality in patients with no ACLF (n = 12), ACLF grade 1 (n = 11), ACLF grade 2 (n = 14) and ACLF grade 3 (n = 13) as per CLIF-SOFA criteria was 8.3%, 18.2%, 42.9% and 76.9% (χ2 for trend, P = 0.002) and 90-d mortality was 16.7%, 27.3%, 78.6% and 100% (χ2 for trend, P < 0.0001) respectively. Patients with prior decompensation had similar 28-d and 90-d mortality (39.3% and 53.6%) as patients without prior decompensation (36.4% and 63.6%) (P = NS). AUROCs for 28-d mortality were 0.795, 0.787, 0.739 and 0.710 for

  3. Ion mixing and phase diagrams

    NASA Astrophysics Data System (ADS)

    Lau, S. S.; Liu, B. X.; Nicolet, M.-A.

    1983-05-01

    Interactions induced by ion irradiation are generally considered to be non-equilibrium processes, whereas phase diagrams are determined by phase equilibria. These two entities are seemingly unrelated. However, if one assumes that quasi-equilibrium conditions prevail after the prompt events, subsequent reactions are driven toward equilibrium by thermodynamical forces. Under this assumption, ion-induced reactions are related to equilibrium and therefore to phase diagrams. This relationship can be seen in the similarity that exists in thin films between reactions induced by ion irradiation and reactions induced by thermal annealing. In the latter case, phase diagrams have been used to predict the phase sequence of stable compound formation, notably so in cases of silicide formation. Ion-induced mixing not only can lead to stable compound formation, but also to metastable alloy formation. In some metal-metal systems, terminal solubilities can be greatly extended by ion mixing. In other cases, where the two constituents of the system have different crystal structures, extension of terminal solubility from both sides of the phase diagram eventually becomes structurally incompatible and a glassy (amorphous) mixture can form. The composition range where this bifurcation is likely to occur is in the two-phase regions of the phase diagram. These concepts are potentially useful guides in selecting metal pairs that form metallic glasses by ion mixing. In this report, the phenomenological correlation between stable (and metastable) phase formation and phase diagrams is discussed in terms of recent experimental data.

  4. Assessing the Effects of the "Rocket Math" Program with a Primary Elementary School Student at Risk for School Failure: A Case Study

    ERIC Educational Resources Information Center

    Smith, Christina R.; Marchand-Martella, Nancy E.; Martella, Ronald C.

    2011-01-01

    This study assessed the effects of the "Rocket Math" program on the math fluency skills of a first grade student at risk for school failure. The student received instruction in the "Rocket Math" program over 6 months. He was assessed using a pre- and posttest curriculum-based measurement (CBM) and individualized fluency checkouts within the…

  5. Assessment of systolic and diastolic function in heart failure using ambulatory monitoring with acoustic cardiography.

    PubMed

    Dillier, Roger; Zuber, Michel; Arand, Patricia; Erne, Susanne; Erne, Paul

    2011-08-01

    INTRODUCTION. The circadian variation of heart function and heart sounds in patients with and without heart failure (HF) is poorly understood. We hypothesized HF patients would exhibit less circadian variation with worsened cardiac function and sleep apnea. METHODS. We studied 67 HF patients (age 67.4 ± 8.2 years; 42% acute HF) and 63 asymptomatic control subjects with no history of HF (age 61.6 ± 7.7 years). Subjects wore a heart sound/ECG/respiratory monitor. The data were analyzed for sleep apnea, diastolic heart sounds, and systolic time intervals. RESULTS. The HF group had significantly greater prevalence of the third heart sound and prolongation of electro-mechanical activation time, while the control group had an age-related increase in the prevalence of the fourth heart sound. The control group showed more circadian variation in cardiac function. The HF subjects had more sleep apnea and higher occurrence of heart rate non-dipping. CONCLUSIONS. The control subjects demonstrated an increasing incidence of diastolic dysfunction with age, while systolic function was mostly unchanged with aging. Parameters related to systolic function were significantly worse in the HF group with little diurnal variation, indicating a constant stimulation of sympathetic tone in HF and reduction of diurnal regulation. PMID:21361859

  6. Failure Impact Analysis of Key Management in AMI Using Cybernomic Situational Assessment (CSA)

    SciTech Connect

    Abercrombie, Robert K; Sheldon, Frederick T; Hauser, Katie R; Lantz, Margaret W; Mili, Ali

    2013-01-01

    In earlier work, we presented a computational framework for quantifying the security of a system in terms of the average loss a stakeholder stands to sustain as a result of threats to the system. We named this system the Cyberspace Security Econometrics System (CSES). In this paper, we refine the framework and apply it to cryptographic key management within the Advanced Metering Infrastructure (AMI) as an example. The stakeholders, requirements, components, and threats are determined. We then populate the matrices with justified values by addressing the AMI at a higher level, rather than trying to consider every piece of hardware and software involved. We accomplish this task by leveraging the recently established NISTIR 7628 guideline for smart grid security, which allowed us to choose the stakeholders, requirements, components, and threats realistically. We reviewed the literature and worked with an industry technical working group to select three representative threats from a collection of 29. From this subset, we populate the stakes, dependency, and impact matrices, and the threat vector, with realistic numbers. Each stakeholder's Mean Failure Cost is then computed.

  7. Assessment of filter dust characteristics that cause filter failure during hot-gas filtration

    SciTech Connect

    John P. Hurley; Biplab Mukherjee; Michael D. Mann

    2006-08-15

    The high-temperature filtration of particulates from gases is greatly limited by the development of dust cakes that are difficult to remove and can bridge between candle filters, causing them to break. Understanding the conditions leading to the formation of cohesive dust can prevent costly filter failures and ensure higher efficiency of solid-fuel, direct-fired turbine power generation systems. The University of North Dakota Energy & Environmental Research Center is working with the New Energy and Industrial Technology Development Organization and the U.S. Department of Energy to characterize and determine the factors that cause the development of such dust cakes. Changes in the tensile strength, bridging propensity, and plasticity of filter dust cakes were measured as a function of temperature and filter pressure drop for a coal and a biomass filter dust. The biomass filter dust indicated that potential filtering problems can exist at temperatures as low as 400{sup o}C, while the coal filter dust showed good filtering characteristics up to 750{sup o}C. A statistically valid model that can indicate the propensity of filters to fail under given system operating conditions was developed. A detailed analysis of the chemical aspects of the dusts is also presented in order to explore the causes of such stickiness. 16 refs., 10 figs., 3 tabs.

  8. 30 CFR 1218.40 - Assessments for incorrect or late reports and failure to report.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... as follows: (1) For coal and other solid minerals leases, a report is each line on Form MMS-4430, Solid Minerals Production and Royalty Report; or on Form MMS-2014, Report of Sales and Royalty... Form MMS-2014. (d) An assessment under this section shall not be shared with a State, Indian tribe,...

  9. 30 CFR 218.40 - Assessments for incorrect or late reports and failure to report.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... INTERIOR MINERALS REVENUE MANAGEMENT COLLECTION OF MONIES AND PROVISION FOR GEOTHERMAL CREDITS AND... MMS by the designated due date for geothermal, solid minerals, and Indian oil and gas leases. (b) An... geothermal, solid minerals, and Indian oil and gas leases. (c) For purpose of assessments discussed in...

  10. Cardiac status assessment with a multi-signal device for improved home-based congestive heart failure management.

    PubMed

    Muehlsteff, Jens; Carvalho, Paulo; Henriques, Jorge; Paiva, Rui P; Reiter, Harald

    2011-01-01

    State-of-the-art disease management for Congestive Heart Failure (CHF) patients is still based on easy-to-acquire measures such as heart rate (HR), weight and blood pressure (BP). However, these measures respond late to changes in the patient's health status and provide limited information for personalizing and adapting medication therapy. This paper describes our concept, called "Cardiac Status Assessment", which we have been investigating within the European project "HeartCycle" towards next-generation home-based disease management of CHF. In our concept we analyze non-invasive surrogate measures of cardiovascular function, in particular systolic time intervals and pulse wave characteristics, to estimate Cardiac Output (CO) and Systemic Vascular Resistance (SVR), both of which are established clinical measures. We discuss the underlying concept, a developed measurement system and first results. PMID:22254450

  11. Australia's pesticide environmental risk assessment failure: the case of diuron and sugarcane.

    PubMed

    Holmes, Glen

    2014-11-15

    In November 2012, the Australian Pesticide and Veterinary Medicines Authority (APVMA) concluded a 12 year review of the PSII herbicide diuron. One of the primary concerns raised during the review was the potential impact on aquatic ecosystems, particularly in the catchments draining to the Great Barrier Reef. The environmental risk assessment process used by the APVMA utilised a runoff risk model developed and validated under European farming conditions. However, the farming conditions in the sugarcane regions of the Great Barrier Reef catchments have environmental parameters beyond the currently validated bounds of the model. The use of the model to assess environmental risk in these regions is therefore highly inappropriate, demonstrating the pitfalls of a one size fits all approach. PMID:25152182

  12. Incorporating cumulative effects into environmental assessments of mariculture: Limitations and failures of current siting methods

    SciTech Connect

    King, Sarah C. Pushchak, Ronald

    2008-11-15

    Assessing and evaluating the cumulative impacts of multiple marine aquaculture facilities has proved difficult in environmental assessment. A retrospective review of 23 existing mariculture farms in southwestern New Brunswick was conducted to determine whether cumulative interactions would have justified site approvals. Based on current scientific evidence of cumulative effects, six new criteria were added to a set of far-field impacts and other existing criteria were expanded to include regional and cumulative environmental impacts in Hargrave's [Hargrave BT. A traffic light decision system for marine finfish aquaculture siting. Ocean Coast Manag 2002; 45:215-35.] Traffic Light Decision Support System (DSS) presently used in Canadian aquaculture environmental assessments. Before mitigation, 19 of the 23 sites failed the amended set of criteria and after considering mitigation, 8 sites failed. Site and ecosystem indices yielded varying site acceptability scores; however, many sites would not have been approved if siting decisions had been made within a regional management framework and cumulative impact criteria were considered in the site evaluation process.

  13. Development and validation of a survey to assess barriers to drug use in patients with chronic heart failure.

    PubMed

    Simpson, Scot H; Johnson, Jeffrey A; Farris, Karen B; Tsuyuki, Ross T

    2002-09-01

    Objective. To report the development of and initial experience with a survey designed to assess patient-perceived barriers to drug use in ambulatory patients with heart failure. Methods. The Barriers to Medication Use (BMU) survey, developed from previous qualitative work by our group, was administered to 128 consecutive patients attending an outpatient heart failure clinic. The first 42 patients to return the survey were mailed a second survey to evaluate response stability over time. The survey contained 31 questions in five barrier domains (knowledge, previous drug therapy experiences, social support, communication, and relationship with health care professionals). Patients also completed the Minnesota Living with Heart Failure (MLHF) questionnaire and a self-reported drug use scale. Frequency of drug refills was used to estimate adherence. Reliability and construct validity of the BMU survey were assessed using correlation coefficients. Results. Response rates were 89% and 93% for the first and retest surveys, respectively. The BMU survey showed modest internal consistency in the overall survey and in two of the five barrier domains. Responses to the first and retest surveys showed good stability over time in the overall survey and in four of the five barrier domains. Patients with good adherence reported few barriers; however, the association was not strong (Pearson correlation coefficient r = -0.14, p = 0.14). Patients who reported few barriers also reported better MLHF scores (r = 0.42, p < 0.001), with the strongest association in the social support domain (r = 0.53, p < 0.001). All respondents reported having a good relationship with health care professionals. The most common barriers to drug use were poor support networks and previous adverse reactions. Conclusion. The BMU survey demonstrated reasonable reliability and validity.

  14. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    NASA Astrophysics Data System (ADS)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  15. An assessment of the state of the art in predicting the failure of ceramics: Final report

    SciTech Connect

    Boulet, J.A.M.

    1988-03-01

    The greatest weakness in existing design strategies for brittle fracture is in the narrow range of conditions for which the strategies are adequate. The primary reason for this weakness is the use of simplistic mechanical models of fracture processes and unverified statistical models of materials. To improve the design methodology, the models must first be improved. Specifically recommended research goals are: to develop models of cracks with realistic geometry under arbitrary stress states; to identify and model the most important relationships between fracture processes and microstructural features; to assess the technology available for acquiring statistical data on microstructure and flaw populations, and to establish the amount of data required for verification of statistical models; and to establish a computer-based fracture simulation that can incorporate a wide variety of mechanical and statistical models and crack geometries, as well as arbitrary stress states. 204 refs., 2 tabs.

  16. Einstein-Cartan, Bianchi I and the Hubble diagram

    NASA Astrophysics Data System (ADS)

    ZouZou, Sami R.; Tilquin, André; Schücker, Thomas

    2016-04-01

    We try to solve the dark matter problem in the fit between theory and the Hubble diagram of supernovae by allowing for torsion via Einstein-Cartan gravity and for anisotropy via the axial Bianchi I metric. Otherwise we are conservative and admit only the cosmological constant and dust. The failure of our model is quantified by the relative amount of dust in our best fit: Ω_{m0} = 27 ± 5% at the 1σ level.

  17. Bootstrapping & Separable Monte Carlo Simulation Methods Tailored for Efficient Assessment of Probability of Failure of Dynamic Systems

    NASA Astrophysics Data System (ADS)

    Jehan, Musarrat

    The response of a dynamic system is random. There is randomness in both the applied loads and the strength of the system. Therefore, to account for the uncertainty, the safety of the system must be quantified using its probability of survival (reliability). Monte Carlo Simulation (MCS) is a widely used method for probabilistic analysis because of its robustness. However, a challenge in reliability assessment using MCS is that the high computational cost limits the accuracy of MCS. Haftka et al. [2010] developed an improved sampling technique for reliability assessment called separable Monte Carlo (SMC) that can significantly increase the accuracy of estimation without increasing the cost of sampling. However, this method was applied to time-invariant problems involving two random variables only. This dissertation extends SMC to random vibration problems with multiple random variables. This research also develops a novel method for estimation of the standard deviation of the probability of failure of a structure under static or random vibration. The method is demonstrated on quarter car models and a wind turbine. The proposed method is validated using repeated standard MCS.
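The separable Monte Carlo idea referenced in this abstract exploits the independence of load and capacity: every load sample can be compared against every capacity sample instead of pairing them one-to-one, so the same samples yield far more comparisons and a lower-variance estimate of the probability of failure. A minimal sketch under assumed normal distributions (all numbers illustrative, not from the dissertation):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
m = n = 2000  # sample sizes for load and capacity

# Assumed independent load and capacity distributions (illustrative).
loads = rng.normal(loc=100.0, scale=15.0, size=m)
capacities = rng.normal(loc=160.0, scale=20.0, size=n)

# Crude MCS: pair the i-th load with the i-th capacity (m comparisons).
p_crude = np.mean(loads > capacities)

# Separable MCS: compare every load with every capacity
# (m * n comparisons from the same m + n samples).
p_separable = np.mean(loads[:, None] > capacities[None, :])

# Analytic reference for this toy case: load - capacity is normal with
# mean -60 and standard deviation hypot(15, 20) = 25.
p_exact = norm.sf(0.0, loc=-60.0, scale=np.hypot(15.0, 20.0))
```

The broadcasted comparison is the whole trick: for fixed sampling cost, the all-pairs estimator converges on the analytic value noticeably faster than the crude paired estimator, which is the accuracy gain the abstract attributes to SMC.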

  18. A new Doppler method of assessing left ventricular ejection force in chronic congestive heart failure.

    PubMed

    Isaaz, K; Ethevenot, G; Admant, P; Brembilla, B; Pernot, C

    1989-07-01

    A noninvasive method using Doppler echocardiography was developed to determine the force exerted by the left ventricle in accelerating the blood into the aorta. The value of this new Doppler ejection index in the assessment of left ventricular (LV) performance was tested in 36 patients with chronic congestive heart disease undergoing cardiac catheterization and in 11 age-matched normal control subjects. The 36 patients were subgrouped into 3 groups based on angiographic ejection fraction (LV ejection fraction greater than 60, 41 to 60 and less than or equal to 40%). According to Newton's second law of motion (force = mass X acceleration), the LV ejection force was derived from the product of the mass of blood ejected during the acceleration time with the mean acceleration undergone during that time. In patients with LV ejection fraction less than or equal to 40%, LV ejection force, peak aortic velocity and mean acceleration were severely depressed when compared with the other groups (p less than 0.001). In patients with LV ejection fraction of 41 to 60%, LV ejection force was significantly reduced (22 +/- 3 kdynes) when compared with normal subjects (29 +/- 5 kdynes, p = 0.002) and with patients with LV ejection fraction greater than 60% (29 +/- 7 kdynes, p = 0.009); peak velocity and mean acceleration did not differ between these 3 groups. The LV ejection force showed a good linear correlation with LV ejection fraction (r = 0.86) and a better power fit (r = 0.91). Peak aortic blood velocity and mean acceleration showed weaker linear correlations with LV ejection fraction (r = 0.73 and r = 0.66, respectively). The mass of blood ejected during the acceleration time also showed a weak linear correlation with LV ejection fraction (r = 0.64). An LV ejection force less than 20 kdynes was associated with a depressed LV performance (LV ejection fraction less than 50%) with 91% sensitivity and 90% specificity.
Thus, these findings suggest that LV ejection force is a new
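The Newtonian definition used in this abstract (force = mass × acceleration, with the mass taken as the blood ejected during the aortic acceleration time and the acceleration as the mean value over that time) reduces to a few lines of arithmetic. The numeric inputs below are illustrative assumptions, not patient data from the study:

```python
# Doppler-derived LV ejection force via Newton's second law, in CGS
# units so the force comes out in dynes (1 dyne = 1 g*cm/s^2).
# All numeric inputs are illustrative assumptions.
RHO_BLOOD = 1.06     # g/cm^3, blood density
aortic_area = 5.0    # cm^2, cross-sectional area at the measurement site
v_peak = 100.0       # cm/s, peak aortic velocity from the Doppler trace
t_acc = 0.09         # s, time from flow onset to peak velocity

# Mass ejected during the acceleration time; the velocity ramp is
# approximated as linear, so the mean velocity over t_acc is v_peak / 2.
ejected_volume = aortic_area * (v_peak / 2.0) * t_acc   # cm^3
mass = RHO_BLOOD * ejected_volume                       # g

mean_acceleration = v_peak / t_acc                      # cm/s^2

force_kdynes = mass * mean_acceleration / 1000.0        # kilodynes
```

With these assumed inputs the force lands near 26.5 kdynes, which is consistent with the normal-range values (29 +/- 5 kdynes) reported in the abstract.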

  19. Risk assessment of failure by stress corrosion cracking in shrunk-on disks of low pressure turbines

    SciTech Connect

    Rosario, D.A.; Roberts, B.W.; Steakley, M.F.

    1996-12-31

    Several large LP turbines in the TVA system utilize shrunk-on disks which are keyed to the shaft. The 7th, 8th, and 9th stage disks experience wetness in operation which renders them potentially subject to stress corrosion cracking (SCC) in the keyway of the shrink fit area. To minimize SCC concerns in the disk keyway, TVA has embarked on a phased approach to refurbish the LP rotors with a fat shaft and a tab design replacing the rectangular keyways. Nondestructive examinations have been performed on rotors being refurbished and those continuing in service to assure that the schedule for refurbishment does not place operating units with the original design at undue risk. Many variables are involved in the quantification of the risk of disk failure from SCC: incubation time, stress intensity distribution as a function of crack size, material SCC rate as a function of strength and environment, fatigue crack growth rate, loading history (especially overspeed events), and fracture toughness. For a given set of assumptions and parameters, a deterministic model may be used to assess the risk of failure. Unfortunately, if mostly conservative assumptions are made for the random variables, it can be concluded that small initial flaws will grow to critical sizes in two years of additional operation. As an alternative to the deterministic approach, a probabilistic Monte Carlo simulation was performed using ten critical variables. Variable distributions were examined by binning data values within three standard deviations above and below the mean. One million iterations were performed for each disk.

  20. Assessment of formulation robustness for nano-crystalline suspensions using failure mode analysis or derisking approach.

    PubMed

    Nakach, Mostafa; Authelin, Jean-René; Voignier, Cecile; Tadros, Tharwat; Galet, Laurence; Chamayou, Alain

    2016-06-15

    The small particle size of nano-crystalline suspensions can be responsible for their physical instability during drug product preparation (downstream processing), storage and administration. For that purpose, the commercial formulation needs to be sufficiently robust to various triggering conditions, such as ionic strength, shear rate, wetting/dispersing agent desorption by dilution, temperature and pH variation. In our previous work we described a systematic approach to select the suitable wetting/dispersant agent for the stabilization of nano-crystalline suspension. In this paper, we describe the assessment of the robustness of the formulation (stabilized using a mixture of sodium dodecyl sulfate (SDS) and polyvinylpyrrolidone (PVP)) by measuring the rate of perikinetic (diffusion-controlled) and orthokinetic (shear-induced) aggregation as a function of ionic strength, temperature, pH and dilution. The results showed that, using the SDS/PVP system, the critical coagulation concentration is about five times higher than that observed in the literature for suspensions that are colloidally stable at high concentration. The nano-suspension was also found to be very stable at ambient temperature and at different pH conditions. The desorption test confirmed the high affinity between the API and the wetting/dispersing agent. However, the suspension undergoes aggregation at high temperature due to the desorption of the wetting/dispersing agent and disaggregation of SDS micelles. Furthermore, aggregation occurs at very high shear rate (orthokinetic aggregation) by overcoming the energy barrier responsible for the colloidal stability of the system. PMID:27102992

  1. Assessment of the Hf-N, Zr-N and Ti-N phase diagrams at high pressures and temperatures: balancing between MN and M3N4 (M = Hf, Zr, Ti)

    NASA Astrophysics Data System (ADS)

    Kroll, Peter

    2004-04-01

    We study the nitrogen-rich part of the phase diagrams Hf-N, Zr-N and Ti-N, employing first-principles calculations for an assessment of energy and enthalpy as a function of pressure. At zero pressure the novel cubic Th3P4-type structures are metastable modifications of M3N4 (M = Hf, Zr). The lowest energy configuration of both compounds is an orthorhombic Zr3N4-type. This orthorhombic structure will transform into the Th3P4-type at 9 and 6 GPa, for Hf3N4 and Zr3N4, respectively. The lowest energy configuration of Ti3N4 is a CaTi2O4-type structure. It will first transform into the orthorhombic Zr3N4-type at 3.8 GPa, then further transform into the cubic Th3P4-type at 15 GPa. The spinel type is metastable throughout the phase diagram for all three systems. The phase boundary between the mononitrides MN and the M3N4 phases is assessed as a function of pressure. We include the entropy of gaseous nitrogen from tabulated data to estimate the free enthalpy ΔG of the nitride phases. The orthorhombic modification of Hf3N4 turns out to be thermodynamically stable with respect to a decomposition into the mononitrides and nitrogen for temperatures up to about 1000 °C. The stability of Zr3N4 is in question; within the estimated error no final conclusion can be drawn. Ti3N4, on the other hand, will only be metastable. At higher pressures, however, the free energy of nitrogen is substantially reduced and the 3:4 compositions become more stable. We reproduce the experimental requirements (18 GPa and 2800 K) for the synthesis of the novel Hf3N4. At 2800 K the pressures needed to synthesize cubic phases of Zr3N4 and Ti3N4 are estimated to be 40 and 100 GPa, respectively.
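
The role of the gaseous-nitrogen entropy term can be sketched for the decomposition M3N4 -> 3 MN + 1/2 N2(g). Only the standard entropy of N2 below is tabulated data; the solid-state energy difference is a hypothetical placeholder chosen to reproduce a ~1000 °C stability limit, and solid entropies and the temperature dependence of S(N2) are neglected:

```python
# Hedged sketch: decomposition free enthalpy for M3N4 -> 3 MN + 1/2 N2(g).
# dE_decomp is a hypothetical placeholder; S_N2 is the tabulated standard
# entropy of N2 at 298 K. Solid entropies are neglected (rough approximation).

S_N2 = 191.6       # J/(mol K), standard entropy of N2 gas
dE_decomp = 120e3  # J/mol, hypothetical energy cost of decomposition

def dG_decomposition(T):
    """Free enthalpy of decomposition; positive => M3N4 is stable."""
    return dE_decomp - T * 0.5 * S_N2

# Crossover temperature where decomposition becomes favourable:
T_star = dE_decomp / (0.5 * S_N2)
print(f"stable up to ~{T_star - 273:.0f} deg C")  # ~980 deg C for this placeholder
```

The sketch shows why higher temperature destabilizes the nitrogen-rich phase at ambient pressure: the released N2 carries a large entropy, so the -T*S term eventually wins.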

  2. Voronoi Diagrams and Spring Rain

    ERIC Educational Resources Information Center

    Perham, Arnold E.; Perham, Faustine L.

    2011-01-01

    The goal of this geometry project is to use Voronoi diagrams, a powerful modeling tool across disciplines, and the integration of technology to analyze spring rainfall from rain gauge data over a region. In their investigation, students use familiar equipment from their mathematical toolbox: triangles and other polygons, circumcenters and…

  3. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. 
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the
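
Two of the quoted results can be illustrated with a toy computation: the expected loss given failure as a probability-weighted sum over mutually exclusive failure modes, and the collapse of such a branch into one equivalent component with a summed hazard rate. The numbers below are made up for illustration:

```python
# Hedged sketch with invented numbers: expected losses for a component with
# mutually exclusive failure modes, and the equivalent-component reduction.

# failure modes: (hazard rate per year, expected loss given that mode)
modes = [(0.02, 50_000.0), (0.005, 200_000.0), (0.001, 1_000_000.0)]

total_rate = sum(lam for lam, _ in modes)  # equivalent hazard rate (rates add)
# probability that each mode is the one initiating failure:
p_mode = [lam / total_rate for lam, _ in modes]
# linear combination of per-mode losses scaled by those probabilities:
expected_loss_given_failure = sum(p * loss for p, (_, loss) in zip(p_mode, modes))

expected_yearly_loss = total_rate * expected_loss_given_failure
print(total_rate, round(expected_loss_given_failure), round(expected_yearly_loss))
```

Note that the final product equals the sum of rate*loss over the modes (here 3000 per year), which is why the branch can be replaced by a single equivalent component without changing the expected losses.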

  4. Kidney Failure

    MedlinePlus

  5. 30 CFR 1218.41 - Assessments for failure to submit payment of same amount as Form MMS-2014 or bill document or to...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... same amount as Form MMS-2014 or bill document or to provide adequate information. 1218.41 Section 1218... General Provisions § 1218.41 Assessments for failure to submit payment of same amount as Form MMS-2014 or... Form MMS-2014, Form MMS-4430, or a bill document, unless ONRR has authorized the difference in...

  6. Sequential organ failure assessment scoring and prediction of patient's outcome in Intensive Care Unit of a tertiary care hospital

    PubMed Central

    Jain, Aditi; Palta, Sanjeev; Saroa, Richa; Palta, Anshu; Sama, Sonu; Gombar, Satinder

    2016-01-01

    Background and Aims: The objective was to determine the accuracy of sequential organ failure assessment (SOFA) score in predicting outcome of patients in Intensive Care Unit (ICU). Material and Methods: Forty-four consecutive patients between 15 and 80 years admitted to ICU over an 8-week period were studied prospectively. Three patients were excluded. SOFA score was determined 24 h postadmission to ICU and subsequently every 48 h for the first 10 days. Patients were followed till discharge/death/transfer from the ICU. Initial SOFA score, highest and mean SOFA scores were calculated and correlated with mortality and duration of stay in ICU. Results: The mortality rate was 39% and the mean duration of stay in the ICU was 9 days. The maximum score in survivors (3.92 ± 2.17) was significantly lower than nonsurvivors (8.9 ± 3.45). The initial SOFA score had a strong statistical correlation with mortality. Cardiovascular score on day 1 and 3, respiratory score on day 7, and coagulation profile on day 3 correlated significantly with the outcome. Duration of stay did not correlate with survival (P = 0.461). Conclusion: SOFA score is a simple, but effective prognostic indicator and evaluator for patient progress in ICU. Day 1 SOFA can triage the patients into risk categories. For further management, mean and maximum score help determine the severity of illness and can act as a guide for the intensity of therapy required for each patient. PMID:27625487
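
The three summary statistics used in the study (initial, highest and mean SOFA from serial 48-hourly scores) are simple to compute. The serial scores below are invented for illustration:

```python
# Hedged sketch of the SOFA summary statistics described in the abstract.
# The two patient trajectories are invented, not study data.

def sofa_summary(serial_scores):
    """Return (initial, maximum, mean) of a patient's serial SOFA scores."""
    initial = serial_scores[0]
    return initial, max(serial_scores), sum(serial_scores) / len(serial_scores)

survivor = [5, 4, 4, 3, 2]        # illustrative improving course
nonsurvivor = [7, 9, 10, 12, 13]  # illustrative deteriorating course

print(sofa_summary(survivor))     # (5, 5, 3.6)
print(sofa_summary(nonsurvivor))  # (7, 13, 10.2)
```

The initial score supports day-1 triage, while the maximum and mean capture the severity of the whole ICU course, as the conclusion suggests.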

  7. White is green: new schematic diagrams

    NASA Astrophysics Data System (ADS)

    Glicksman, Hal

    2002-06-01

    Two new schematic diagrams are presented here that derive from the study of the value relationships of the primary colors of RGB computer and video color. The first diagram is a 'Truth Table' that presents true-false, on-off states of the three colors of RGB so that the colors are presented in the order of their brightness values. The second diagram is a triple Venn diagram based on the perception of color. This diagram is presented as an alternative to the Venn diagrams of additive and subtractive color usually used to explain color.
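
The "truth table" idea of ordering the on/off RGB states by brightness can be reproduced with standard luma weights. The abstract does not specify the exact weighting used, so the Rec. 601 coefficients below are an assumption:

```python
# Hedged sketch: order the eight on/off RGB corner colours by perceived
# brightness using Rec. 601 luma weights (an assumed stand-in for the
# diagram's actual brightness ordering).

LUMA = (0.299, 0.587, 0.114)  # R, G, B weights; green dominates

colors = {
    "black": (0, 0, 0), "blue": (0, 0, 1), "red": (1, 0, 0),
    "magenta": (1, 0, 1), "green": (0, 1, 0), "cyan": (0, 1, 1),
    "yellow": (1, 1, 0), "white": (1, 1, 1),
}

def luma(name):
    return sum(w * v for w, v in zip(LUMA, colors[name]))

by_brightness = sorted(colors, key=luma)
print(by_brightness)
# ['black', 'blue', 'red', 'magenta', 'green', 'cyan', 'yellow', 'white']
```

The ordering makes the title's point: green is the brightest primary, so pure green outranks even magenta (red + blue), and white is dominated by its green component.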

  8. Determination of forming limit diagram for tube hydroforming based on the strain rate change criterion

    NASA Astrophysics Data System (ADS)

    Hu, Guolin; Yang, Lianfa; Liu, Jianwei

    2013-12-01

    Tube hydroforming (THF) is an attractive manufacturing process in the automotive industry, and the forming limit diagram (FLD) is a significant strategy for assessing the formability of THF. In the present study, a method of predicting the FLD for THF is developed based on finite element (FE) simulation with the strain rate change criterion (SRCC) as a failure criterion to identify localized necking. FE simulations under various linear loading paths are carried out to obtain the strain information. The equivalent strain rates at the potential fracture node and its adjacent nodes are calculated and used with SRCC to distinguish the onset of fracture in the FE simulation. The fracture strains at the two nodes under various linear loading paths are extracted to establish the FLD. Tube hydro-bulging experiments under the linear loading paths have been conducted to verify the prediction method of the FLD, and the results show that this prediction method is in good agreement with experimental data.
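
The strain rate change criterion can be sketched as a simple divergence test: necking is flagged when the equivalent strain rate at the potential fracture node runs away from that of its neighbour. The threshold and the strain-rate histories below are illustrative assumptions, not values from the study:

```python
# Hedged sketch of the strain rate change criterion (SRCC) as described:
# localized necking is flagged when the neck-node equivalent strain rate
# diverges from its neighbour's. Threshold and histories are illustrative.

def necking_onset(rate_neck, rate_neighbour, ratio_limit=7.0):
    """First time index where the neck/neighbour equivalent strain-rate
    ratio exceeds ratio_limit, or None if it never does."""
    for i, (rn, ra) in enumerate(zip(rate_neck, rate_neighbour)):
        if ra > 0 and rn / ra > ratio_limit:
            return i
    return None

# Illustrative histories: both nodes deform together, then the neck localizes.
neck = [0.10, 0.12, 0.15, 0.30, 0.90, 2.50]
neighbour = [0.10, 0.11, 0.13, 0.15, 0.16, 0.16]
print(necking_onset(neck, neighbour))  # 5  (2.50 / 0.16 > 7)
```

The strains recorded at the two nodes at that onset step would then supply one point of the FLD for the given loading path.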

  9. Failure of Passive Immune Transfer in Calves: A Meta-Analysis on the Consequences and Assessment of the Economic Impact.

    PubMed

    Raboisson, Didier; Trillat, Pauline; Cahuzac, Clélia

    2016-01-01

    Low colostrum intake at birth results in the failure of passive transfer (FPT) due to the inadequate ingestion of colostral immunoglobulins (Ig). FPT is associated with an increased risk of mortality and decreased health and longevity. Despite the known management practices associated with low FPT, it remains an important issue in the field. Neither a quantitative analysis of FPT consequences nor an assessment of its total cost are available. To address this point, a meta-analysis on the adjusted associations between FPT and its outcomes was first performed. Then, the total costs of FPT in European systems were calculated using a stochastic method with adjusted values as the input parameters. The adjusted risks (and 95% confidence intervals) for mortality, bovine respiratory disease, diarrhoea and overall morbidity in the case of FPT were 2.12 (1.43-3.13), 1.75 (1.50-2.03), 1.51 (1.05-2.17) and 1.91 (1.63-2.24), respectively. The mean (and 95% prediction interval) total costs per calf with FPT were estimated to be €60 (€10-109) and €80 (€20-139) for dairy and beef, respectively. As a result of the double-step stochastic method, the proposed economic estimation constitutes the first estimate available for FPT. The results are presented in a way that facilitates their use in the field and, with limited effort, combines the cost of each contributor to increase the applicability of the economic assessment to the situations farm-advisors may face. The present economic estimates are also an important tool to evaluate the profitability of measures that aim to improve colostrum intake and FPT prevention. PMID:26986832
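
One ingredient of the double-step stochastic method can be illustrated: sampling a meta-analytic risk ratio from its reported 95% confidence interval under a log-normal assumption, here for mortality (RR 2.12, 95% CI 1.43-3.13, taken from the abstract). The log-normal assumption itself is ours, not stated in the paper:

```python
# Hedged sketch: Monte Carlo draws of a risk ratio from its 95% CI,
# assuming log-normality. Point estimate and CI are from the abstract.
import math, random

random.seed(1)

def sample_rr(point, lo, hi, n=50_000):
    mu = math.log(point)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width on log scale
    return [math.exp(random.gauss(mu, se)) for _ in range(n)]

draws = sample_rr(2.12, 1.43, 3.13)
draws.sort()
print(round(draws[len(draws) // 2], 2),            # median, ~2.12
      round(draws[int(0.025 * len(draws))], 2),    # ~1.43
      round(draws[int(0.975 * len(draws))], 2))    # ~3.13
```

Propagating such draws (together with incidence and unit-cost inputs) through the cost model is what yields the prediction intervals around the euro 60 and euro 80 per-calf estimates.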

  12. Hero's journey in bifurcation diagram

    NASA Astrophysics Data System (ADS)

    Monteiro, L. H. A.; Mustaro, P. N.

    2012-06-01

    The hero's journey is a narrative structure identified by several authors in comparative studies on folklore and mythology. This storytelling template presents the stages of inner metamorphosis undergone by the protagonist after being called to an adventure. In a simplified version, this journey is divided into three acts separated by two crucial moments. Here we propose a discrete-time dynamical system for representing the protagonist's evolution. The suffering along the journey is taken as the control parameter of this system. The bifurcation diagram exhibits stationary, periodic and chaotic behaviors. In this diagram, there are transitions from fixed point to chaos and from limit cycle to fixed point. We found that the values of the control parameter corresponding to these two transitions are in quantitative agreement with the two critical moments of the three-act hero's journey identified in 10 movies appearing in the list of the 200 worldwide highest-grossing films.
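
The abstract does not give its map explicitly, so the classic logistic map x -> r x (1 - x) stands in below to show how such a bifurcation diagram is built: sweep the control parameter, discard the transient, and record the attractor (one point for a fixed point, two for a period-2 cycle, a cloud for chaos):

```python
# Hedged sketch: attractor sampling for a bifurcation diagram, using the
# logistic map as a stand-in for the (unspecified) system in the abstract.

def attractor_samples(r, n_transient=500, n_keep=100, x0=0.5):
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = r * x * (1 - x)
    seen = set()
    for _ in range(n_keep):       # collect the attractor
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

print(len(attractor_samples(2.8)))  # 1: stable fixed point
print(len(attractor_samples(3.2)))  # 2: period-2 limit cycle
print(len(attractor_samples(3.9)))  # many points: chaotic band
```

Plotting the sampled attractor against r produces the familiar diagram with the fixed-point, periodic and chaotic regimes the abstract refers to.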

  13. Quantum Dimer Model: Phase Diagrams

    NASA Astrophysics Data System (ADS)

    Goldstein, Garry; Chamon, Claudio; Castelnovo, Claudio

    We present a new theoretical analysis of the Quantum Dimer Model. We study dimer models on square, cubic and triangular lattices and reproduce their phase diagrams (which were previously known only numerically). We show that there are several types of dimer liquids and solids. We present a preliminary analysis of several other models, including doped dimers and planar spin ice, and some results on the Kagome and hexagonal lattices.

  14. INCONEL 718: A solidification diagram

    NASA Astrophysics Data System (ADS)

    Knorovsky, G. A.; Cieslak, M. J.; Headley, T. J.; Romig, A. D.; Hammetter, W. F.

    1989-10-01

    As part of a program studying weldability of Ni-base superalloys, results of an integrated analytical approach are used to generate a constitution diagram for INCONEL 718* in the temperature range associated with solidification. Differential thermal analysis of wrought material and optical and scanning electron microscopy, electron probe microanalysis, and analytical electron microscopy of gas tungsten arc welds are used in conjunction with solidification theory to generate data points for this diagram. The important features of the diagram are an austenite (γ)/Laves phase eutectic which occurs at ≈19.1 wt pct Nb between austenite containing ≈9.3 wt pct Nb and a Laves phase which contains ≈22.4 wt pct Nb. The distribution coefficient for Nb was found to be ≈0.5. The solidification sequence of INCONEL 718 was found to be (1) proeutectic γ, followed by (2) a γ/NbC eutectic at ≈1250°C, followed by (3) continued γ solidification, followed by (4) a γ/Laves phase eutectic at ≈1200°C. An estimate of the volume fraction eutectic is made using the Scheil solidification model, and the fraction of each phase in the eutectic is calculated via the lever rule. These are compared with experimentally determined values and found to be in good agreement.
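
The Scheil estimate mentioned at the end of the abstract follows from C_L = C0 (1 - f_s)^(k-1): the liquid fraction remaining when the liquid reaches the eutectic composition approximates the fraction of eutectic. The distribution coefficient and eutectic composition below are from the abstract; the nominal Nb content C0 is an assumed typical IN718 value:

```python
# Hedged sketch of the Scheil estimate of eutectic fraction in IN718.
# k and c_eut are quoted in the abstract; c0 is an assumption.

k = 0.5       # Nb distribution coefficient (from the abstract)
c_eut = 19.1  # wt% Nb at the gamma/Laves eutectic (from the abstract)
c0 = 5.0      # wt% Nb nominal alloy content (assumed, not from the abstract)

# Scheil: C_L = C0 * f_L**(k - 1)  =>  f_L = (C_eut / C0)**(1 / (k - 1))
f_eutectic = (c_eut / c0) ** (1.0 / (k - 1.0))
print(f"estimated eutectic fraction: {f_eutectic:.3f}")  # ~0.069
```

The lever rule applied within the eutectic (austenite at ~9.3 wt% Nb, Laves at ~22.4 wt% Nb) would then split this fraction between the two constituent phases.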

  15. Causal diagrams in systems epidemiology

    PubMed Central

    2012-01-01

    Methods of diagrammatic modelling have been greatly developed in the past two decades. Outside the context of infectious diseases, systematic use of diagrams in epidemiology has been mainly confined to the analysis of a single link: that between a disease outcome and its proximal determinant(s). Transmitted causes ("causes of causes") tend not to be systematically analysed. The infectious disease epidemiology modelling tradition models the human population in its environment, typically with the exposure-health relationship and the determinants of exposure being considered at individual and group/ecological levels, respectively. Some properties of the resulting systems are quite general, and are seen in unrelated contexts such as biochemical pathways. Confining analysis to a single link misses the opportunity to discover such properties. The structure of a causal diagram is derived from knowledge about how the world works, as well as from statistical evidence. A single diagram can be used to characterise a whole research area, not just a single analysis - although this depends on the degree of consistency of the causal relationships between different populations - and can therefore be used to integrate multiple datasets. Additional advantages of system-wide models include: the use of instrumental variables - now emerging as an important technique in epidemiology in the context of mendelian randomisation, but under-used in the exploitation of "natural experiments"; the explicit use of change models, which have advantages with respect to inferring causation; and in the detection and elucidation of feedback. PMID:22429606

  16. Assessment of the risk of failure of high voltage substations due to environmental conditions and pollution on insulators.

    PubMed

    Castillo Sierra, Rafael; Oviedo-Trespalacios, Oscar; Candelo, John E; Soto, Jose D

    2015-07-01

    Pollution on electrical insulators is one of the greatest causes of failure of substations subjected to high levels of salinity and environmental pollution. Considering leakage current as the main indicator of pollution on insulators, this paper focuses on establishing the effect of the environmental conditions on the risk of failure due to pollution on insulators and determining the significant change in the magnitude of the pollution on the insulators during dry and humid periods. Hierarchical segmentation analysis was used to establish the effect of environmental conditions on the risk of failure due to pollution on insulators. The Kruskal-Wallis test was utilized to determine the significant changes in the magnitude of the pollution due to climate periods. An important result was the discovery that leakage current was more common on insulators during dry periods than humid ones. There was also a higher risk of failure due to pollution during dry periods. During the humid period, various temperatures and wind directions produced a small change in the risk of failure. As a technical result, operators of electrical substations can now identify the cause of an increase in risk of failure due to pollution in the area. The research provides a contribution towards understanding the behaviour of the leakage current under conditions similar to those of the Colombian Caribbean coast and how they affect the risk of failure of the substation due to pollution.
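
The Kruskal-Wallis comparison between climate periods can be sketched from scratch. The measurements below are invented placeholders and the tie correction is omitted; only the H statistic is computed (in practice `scipy.stats.kruskal` also returns the p-value):

```python
# Hedged sketch of the Kruskal-Wallis H statistic used to compare
# leakage-current magnitudes between climate periods. No tie correction;
# the data are illustrative, not from the study.

def kruskal_h(*groups):
    pooled = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes no tied values
    n_total = len(pooled)
    h = sum(len(g) * (sum(rank[x] for x in g) / len(g)) ** 2 for g in groups)
    return 12.0 / (n_total * (n_total + 1)) * h - 3 * (n_total + 1)

dry = [8.1, 9.4, 7.7, 10.2, 8.8]  # leakage current, mA (illustrative)
humid = [5.2, 6.0, 4.9, 5.6, 6.3]
print(round(kruskal_h(dry, humid), 2))  # 6.82: large H => distributions differ
```

Here the two groups separate completely, giving the maximum possible H for two groups of five, well above the chi-squared critical value (~3.84 at alpha = 0.05, 1 degree of freedom).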

  17. ¹⁸F-Fluorodeoxyglucose Positron Emission Tomography-Based Assessment of Local Failure Patterns in Non-Small-Cell Lung Cancer Treated With Definitive Radiotherapy

    SciTech Connect

    Sura, Sonal; Greco, Carlo; Gelblum, Daphna; Yorke, Ellen D.; Jackson, Andrew; Rosenzweig, Kenneth E.

    2008-04-01

    Purpose: To assess the pattern of local failure using ¹⁸F-fluorodeoxyglucose (FDG)-positron emission tomography (PET) scans after radiotherapy (RT) in non-small-cell lung cancer (NSCLC) patients treated with definitive RT whose gross tumor volumes (GTVs) were defined with the aid of pre-RT PET data. Method and Materials: The data from 26 patients treated with involved-field RT who had local failure and a post-RT PET scan were analyzed. The patterns of failure were visually scored and defined as follows: (1) within the GTV/planning target volume (PTV); (2) within the GTV, PTV, and outward; (3) within the PTV and outward; and (4) outside the PTV. Local failure was also evaluated as originating from nodal areas vs. the primary tumor. Results: We analyzed 34 lesions. All 26 patients had recurrence originating from their primary tumor. Of the 34 lesions, 8 (24%) were in nodal areas, 5 of which (63%) were marginal or geographic misses compared with only 1 (4%) of the 26 primary recurrences (p = 0.001). Of the eight primary tumors that had received a dose of <60 Gy, six (75%) had failure within the GTV and two (25%) at the GTV margin. At doses of ≥60 Gy, 6 (33%) of 18 had failure within the GTV and 11 (61%) at the GTV margin, and 1 (6%) was a marginal miss (p < 0.05). Conclusion: At lower doses, the pattern of recurrences was mostly within the GTV, suggesting that the dose might have been a factor for tumor control. At greater doses, the treatment failures were mostly at the margin of the GTV. This suggests that visual incorporation of PET data for GTV delineation might be inadequate, and more sophisticated approaches of PET registration should be evaluated.

  18. Differential Effectiveness of Two Science Diagram Types.

    ERIC Educational Resources Information Center

    Holliday, William G.

    Reported is an Aptitude Treatment Instruction (ATI) study designed to evaluate the aptitude of verbal comprehension in terms of two unitary complex science diagram types: a single complex block word diagram and a single complex picture word diagram. ATI theory and research indicate that different effective instructional treatments tend to help…

  19. Relation of Longitudinal Changes in Quality of Life Assessments to Changes in Functional Capacity in Patients With Heart Failure With and Without Anemia.

    PubMed

    Cooper, Trond J; Anker, Stefan D; Comin-Colet, Josep; Filippatos, Gerasimos; Lainscak, Mitja; Lüscher, Thomas F; Mori, Claudio; Johnson, Patrick; Ponikowski, Piotr; Dickstein, Kenneth

    2016-05-01

    Clinical status in heart failure is conventionally assessed by the physician's evaluation, patients' own perception of their symptoms, quality of life (QoL) tools, and a measure of functional capacity. These aspects can be measured with tools such as the New York Heart Association functional class, QoL tools such as the EuroQoL-5 dimension, the Kansas City Cardiomyopathy Questionnaire, patient global assessment (PGA), and by 6-minute walk test (6MWT), respectively. The ferric carboxymaltose in patients with heart failure and iron deficiency (FAIR-HF) trial demonstrated that treatment with intravenous ferric carboxymaltose in iron-deficient patients with symptomatic heart failure and reduced left ventricular function significantly improved all 5 outcome measures. This analysis assessed the correlations between the longitudinal changes in the measures of clinical status, as measured by QoL tools, and the changes in the measures of functional capacity as measured by the 6MWT. This analysis used the database from the FAIR-HF trial, which randomized 459 patients with chronic heart failure (reduced left ventricular ejection fraction) and iron deficiency, with or without anemia, to ferric carboxymaltose or placebo. The degree of correlation between QoL tools and the 6MWT was assessed at 4, 12, and 24 weeks. The data demonstrate highly significant correlations between QoL and functional capacity, as measured by the 6MWT, at all time points (p <0.001). Changes in PGA, Kansas City Cardiomyopathy Questionnaire, and EuroQoL-5D correlated increasingly over time with changes in 6MWT performance. Interestingly, the strongest correlation at 24 weeks is for the PGA, which is a simple numerical scale (r = -0.57, p <0.001). This analysis provides evidence that QoL assessments show a significant correlation with functional capacity, as measured by the 6MWT. The strength of these correlations increased over time. PMID:27015889

  1. Diagram, a Learning Environment for Initiation to Object-Oriented Modeling with UML Class Diagrams

    ERIC Educational Resources Information Center

    Py, Dominique; Auxepaules, Ludovic; Alonso, Mathilde

    2013-01-01

    This paper presents Diagram, a learning environment for object-oriented modelling (OOM) with UML class diagrams. Diagram is an open environment, in which the teacher can add new exercises without constraints on the vocabulary or the size of the diagram. The interface includes methodological help, encourages self-correcting and self-monitoring, and…

  2. Origin and use of crystallization phase diagrams.

    PubMed

    Rupp, Bernhard

    2015-03-01

    Crystallization phase diagrams are frequently used to conceptualize the phase relations and also the processes taking place during the crystallization of macromolecules. While a great deal of freedom is given in crystallization phase diagrams owing to a lack of specific knowledge about the actual phase boundaries and phase equilibria, crucial fundamental features of phase diagrams can be derived from thermodynamic first principles. Consequently, there are limits to what can be reasonably displayed in a phase diagram, and imagination may start to conflict with thermodynamic realities. Here, the commonly used 'crystallization phase diagrams' are derived from thermodynamic excess properties and their limitations and appropriate use is discussed.

  3. Voronoi Diagrams Without Bounding Boxes

    NASA Astrophysics Data System (ADS)

    Sang, E. T. K.

    2015-10-01

    We present a technique for presenting geographic data in Voronoi diagrams without having to specify a bounding box. The method restricts Voronoi cells to points within a user-defined distance of the data points. The mathematical foundation of the approach is presented as well. The cell clipping method is particularly useful for presenting geographic data that is spread in an irregular way over a map, as for example the Dutch dialect data displayed in Figure 2. The automatic generation of reasonable cell boundaries also makes redundant a frequently used solution to this problem that requires data owners to specify region boundaries, as in Goebl (2010) and Nerbonne et al (2011).
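
The clipping rule described above amounts to a simple membership test: a location belongs to the clipped cell of a data point iff that point is its nearest site and lies within the user-defined distance. A minimal sketch of that test (a rasterized membership check rather than the paper's exact polygon construction; the sites are illustrative):

```python
# Hedged sketch of distance-clipped Voronoi membership: nearest site wins,
# but only within d_max of that site. Sites below are illustrative.
import math

def clipped_cell_owner(p, sites, d_max):
    """Index of the site owning point p under distance clipping, else None."""
    dists = [math.dist(p, s) for s in sites]
    i = min(range(len(sites)), key=dists.__getitem__)
    return i if dists[i] <= d_max else None

sites = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
print(clipped_cell_owner((0.5, 0.2), sites, d_max=1.0))  # 0
print(clipped_cell_owner((2.0, 1.0), sites, d_max=1.0))  # None: too far from all
```

Points that are farther than d_max from every site fall outside all cells, which is exactly how the method avoids needing an explicit bounding box.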

  4. Assessment of the magnitude and associated factors of immunological failure among adult and adolescent HIV-infected patients in St. Luke and Tulubolo Hospital, Oromia Region, Ethiopia

    PubMed Central

    Bayou, Bekelech; Sisay, Abay; Kumie, Abera

    2015-01-01

    Introduction The use of antiretroviral therapy (ART) has become a standard of care for the treatment of HIV infection. However, cost and resistance to ART are major obstacles to access to treatment, especially in resource-limited settings. In this study, we aimed to assess the magnitude and associated factors of immunological failure among adult and adolescent HIV-infected patients (aged ≥15 years) on Highly Active Antiretroviral Therapy (HAART) in St. Luke and Tulu Bolo Hospitals, Oromia Region, Ethiopia. Methods A retrospective follow-up study was conducted among HIV-infected patients initiated on first-line ART at St. Luke and Tulu Bolo Hospitals, South West Shoa Zone, Oromia, Ethiopia. Results A total of 828 patient charts were reviewed. 477 (57.6%) patients were female and the median age was 32 years. The median baseline CD4 count was 148 cells/mm3. The most commonly prescribed ART regimen was TDF-based (36.7%). Of the 828 charts reviewed, 56 (6.8%) patients developed immunological failure; of these, only 20 (2.4%) were detected and put on a second-line regimen. The incidence of immunological failure was 1.8 cases per 100 person-years of follow-up. Patients who had not disclosed their HIV status to anyone had a higher risk of immunological failure compared with patients who had disclosed their status (AHR, 0.429; 95% CI 0.206 - 0.893; P-value=0.024). Conclusion Non-disclosure of HIV status and an ambulatory baseline functional status were found to be predictors of immunological failure. Most immunological failure cases were not detected early and were not switched to a second-line ARV regimen, so patients with the above risk factors should be considered for a timely switch to second-line HAART. PMID:26587140

  5. The NaNO3-KNO3 phase diagram

    NASA Astrophysics Data System (ADS)

    Benages-Vilau, R.; Calvet, T.; Cuevas-Diarte, M. A.; Oonk, H. A. J.

    2016-01-01

    Many papers have been published on the determination of the NaNO3-KNO3 phase diagram over the last 160 years. These papers fall into two categories: (1) the solid-liquid equilibrium is assumed to be of the eutectic type, and (2) the solid-liquid equilibrium is considered a loop with a minimum. The discordance between the two views is related to the slow transition kinetics, which complicate the assessment of thermal 'fluctuations', and also to the appearance of a metastable form of potassium nitrate. The main result of this paper is an experimental phase diagram constructed with new experimental data, which allows us to confirm that the second option is correct. This phase diagram is defined by a eutectoid invariant, an asymmetric immiscibility gap, and a continuous solid solution with a minimum melting point. Additionally, the ABθ model correctly simulates the experimental evidence.

  6. Respiratory Failure

    MedlinePlus

    Respiratory failure happens when not enough oxygen passes from your lungs into your blood. Your body's organs, ... brain, need oxygen-rich blood to work well. Respiratory failure also can happen if your lungs can' ...

  7. Phase Diagrams of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Caplan, Matthew; Horowitz, Chuck; Berry, Don; da Silva Schneider, Andre

    2016-03-01

    In the inner crust of neutron stars, where matter is near the saturation density, protons and neutrons arrange themselves into complex structures called nuclear pasta. Early theoretical work predicted a simple graduated hierarchy of pasta phases, consisting of spheres, cylinders, slabs, and uniform matter with voids. Previous work has simulated these phases with a simple classical model and has shown that the formation of these structures is dependent on the temperature, density, and proton fraction. However, previous work only studied a limited range of these parameters due to computational limitations. Thanks to recent advances in computing, it is now possible to survey the structure of nuclear pasta for a larger range of parameters. By simulating nuclear pasta with constant temperature and proton fraction in an expanding simulation volume, we are able to study the phase transitions in nuclear pasta, and thus produce a set of phase diagrams. We report on these phase diagrams as well as newly identified phases of nuclear pasta and discuss their implications for neutron star observables.

  8. The Importance of Design in Learning from Node-Link Diagrams

    ERIC Educational Resources Information Center

    van Amelsvoort, Marije; van der Meij, Jan; Anjewierden, Anjo; van der Meij, Hans

    2013-01-01

    Diagrams organize by location. They give spatial cues for finding and recognizing information and for making inferences. In education, diagrams are often used to help students understand and recall information. This study assessed the influence of perceptual cues on reading behavior and subsequent retention. Eighty-two participants were assigned…

  9. Mechanical assessment of local bone quality to predict failure of locked plating in a proximal humerus fracture model.

    PubMed

    Röderer, Götz; Brianza, Stefano; Schiuma, Damiano; Schwyn, Ronald; Scola, Alexander; Gueorguiev, Boyko; Gebhard, Florian; Tami, Andrea

    2013-09-01

    The importance of osteoporosis in proximal humerus fractures is well recognized. However, the local distribution of bone quality in the humeral head may also have a significant effect, because it remains unclear in what quality of bone the screws of standard implants purchase. The goal of this study was to investigate whether the failure of proximal humerus locked plating can be predicted by the DensiProbe (ARI, Davos, Switzerland). A 2-part fracture with metaphyseal impaction was simulated in 12 fresh-frozen human cadaveric humeri. Using the DensiProbe, local bone quality was determined in the humeral head along the course of 6 proximal screws of a standard locking plate (Philos; Synthes GmbH, Solothurn, Switzerland). Cyclic mechanical testing with increasing axial loading until failure was performed. Bone mineral density (BMD) significantly correlated with cycles until failure. Head migration significantly increased between 1000 and 2000 loading cycles and significantly correlated with BMD after 3000 cycles. DensiProbe peak torque in all screw positions, and their respective mean torque, correlated significantly with the BMD values. In 3 positions, the peak torque significantly correlated with cycles to failure; here BMD significantly influenced mechanical stability. The validity of the DensiProbe was proven by the correlation between its peak torque measurements and BMD. The correlation between the peak torque and cycles to failure revealed the potential of the DensiProbe to predict the failure of locked plating in vitro. This method provides information about local bone quality, potentially making it suitable for intraoperative use by allowing the surgeon to take measures to improve stability.

  10. Increased crop failure due to climate change: assessing adaptation options using models and socio-economic data for wheat in China

    NASA Astrophysics Data System (ADS)

    Challinor, Andrew J.; Simelton, Elisabeth S.; Fraser, Evan D. G.; Hemming, Debbie; Collins, Mathew

    2010-07-01

    Tools for projecting crop productivity under a range of conditions, and assessing adaptation options, are an important part of the endeavour to prioritize investment in adaptation. We present ensemble projections of crop productivity that account for biophysical processes, inherent uncertainty and adaptation, using spring wheat in Northeast China as a case study. A parallel 'vulnerability index' approach uses quantitative socio-economic data to account for autonomous farmer adaptation. The simulations show crop failure rates increasing under climate change, due to increasing extremes of both heat and water stress. Crop failure rates increase with mean temperature, with increases in maximum failure rates being greater than those in median failure rates. The results suggest that significant adaptation is possible through either socio-economic measures such as greater investment, or biophysical measures such as drought or heat tolerance in crops. The results also show that adaptation becomes increasingly necessary as mean temperature and the associated number of extremes rise. The results, and the limitations of this study, also suggest directions for research for linking climate and crop models, socio-economic analyses and crop variety trial data in order to prioritize options such as capacity building, plant breeding and biotechnology.

  11. Systolic Longitudinal Function of the Left Ventricle Assessed by Speckle Tracking in Heart Failure Patients with Preserved Ejection Fraction

    PubMed Central

    Toufan, Mehrnoush; Mohammadzadeh Gharebaghi, Saeed; Pourafkari, Leili; Delir Abdolahinia, Elham

    2015-01-01

    Background: Echocardiographic evaluations of the longitudinal axis of left ventricular (LV) function have been used in the diagnosis and assessment of heart failure with normal ejection fraction (HFNEF). The evaluation of the global and segmental peak systolic longitudinal strains (PSLSs) by two-dimensional speckle tracking echocardiography (STE) may correlate with conventional echocardiography findings. We aimed to use STE to evaluate the longitudinal function of the LV in patients with HFNEF. Methods: In this study, 126 patients with HFNEF and diastolic dysfunction and 60 normal subjects on conventional echocardiography underwent STE evaluations, including LV end-diastolic and end-systolic dimensions; interventricular septal thickness; posterior wall thickness; LV volume; LV ejection fraction; left atrial volume index; early diastolic peak flow velocity (E); late diastolic peak flow velocity (A); E/A ratio; deceleration time of E; early diastolic myocardial velocity (e′); late diastolic myocardial velocity (A′); systolic myocardial velocity (S); and global, basal, mid, and apical PSLSs. The correlations between these methods were assessed. Results: The mean age was 57.50 ± 10.07 years in the HFNEF patients and 54.90 ± 7.17 years in the control group. The HFNEF group comprised 69.8% males and 30.2% females, and the normal group consisted of 70% males and 30% females. The global, basal, mid, and apical PSLSs were significantly lower in the HFNEF group (p value < 0.001 for all). There was a significant positive correlation between the global PSLS and the septal e' (p value < 0.001). There was a negative correlation between the global PSLS and the E/e' ratio (p value = 0.001). There was a significant negative correlation between the E/e' ratio and the mid PSLS (p value = 0.002) and the basal PSLS (p value = 0.001). There was a weak positive correlation between the septal e' and the mid PSLS (p value = 0.001) and the

  12. Hubble's diagram and cosmic expansion

    NASA Astrophysics Data System (ADS)

    Kirshner, Robert P.

    2004-01-01

    Edwin Hubble's classic article on the expanding universe appeared in PNAS in 1929 [Hubble, E. P. (1929) Proc. Natl. Acad. Sci. USA 15, 168-173]. The chief result, that a galaxy's distance is proportional to its redshift, is so well known and so deeply embedded into the language of astronomy through the Hubble diagram, the Hubble constant, Hubble's Law, and the Hubble time, that the article itself is rarely referenced. Even though Hubble's distances have a large systematic error, Hubble's velocities come chiefly from Vesto Melvin Slipher, and the interpretation in terms of the de Sitter effect is out of the mainstream of modern cosmology, this article opened the way to investigation of the expanding, evolving, and accelerating universe that engages today's burgeoning field of cosmology.

  13. Hubble's diagram and cosmic expansion.

    PubMed

    Kirshner, Robert P

    2004-01-01

    Edwin Hubble's classic article on the expanding universe appeared in PNAS in 1929 [Hubble, E. P. (1929) Proc. Natl. Acad. Sci. USA 15, 168-173]. The chief result, that a galaxy's distance is proportional to its redshift, is so well known and so deeply embedded into the language of astronomy through the Hubble diagram, the Hubble constant, Hubble's Law, and the Hubble time, that the article itself is rarely referenced. Even though Hubble's distances have a large systematic error, Hubble's velocities come chiefly from Vesto Melvin Slipher, and the interpretation in terms of the de Sitter effect is out of the mainstream of modern cosmology, this article opened the way to investigation of the expanding, evolving, and accelerating universe that engages today's burgeoning field of cosmology. PMID:14695886

  14. Phase diagram of ammonium nitrate.

    PubMed

    Dunuwille, Mihindra; Yoo, Choong-Shik

    2013-12-01

    Ammonium Nitrate (AN) is a fertilizer, yet becomes an explosive upon a small addition of chemical impurities. The origin of enhanced chemical sensitivity in impure AN (or AN mixtures) is not well understood, posing significant safety issues in using AN even today. To remedy the situation, we have carried out an extensive study to investigate the phase stability of AN and its mixtures with hexane (ANFO-AN mixed with fuel oil) and Aluminum (Ammonal) at high pressures and temperatures, using diamond anvil cells (DAC) and micro-Raman spectroscopy. The results indicate that pure AN decomposes to N2, N2O, and H2O at the onset of the melt, whereas the mixtures, ANFO and Ammonal, decompose at substantially lower temperatures. The present results also confirm the recently proposed phase IV-IV' transition above 17 GPa and provide new constraints for the melting and phase diagram of AN to 40 GPa and 400°C.

  15. Phase diagram of ammonium nitrate

    NASA Astrophysics Data System (ADS)

    Dunuwille, Mihindra; Yoo, Choong-Shik

    2013-12-01

    Ammonium Nitrate (AN) is a fertilizer, yet becomes an explosive upon a small addition of chemical impurities. The origin of enhanced chemical sensitivity in impure AN (or AN mixtures) is not well understood, posing significant safety issues in using AN even today. To remedy the situation, we have carried out an extensive study to investigate the phase stability of AN and its mixtures with hexane (ANFO-AN mixed with fuel oil) and Aluminum (Ammonal) at high pressures and temperatures, using diamond anvil cells (DAC) and micro-Raman spectroscopy. The results indicate that pure AN decomposes to N2, N2O, and H2O at the onset of the melt, whereas the mixtures, ANFO and Ammonal, decompose at substantially lower temperatures. The present results also confirm the recently proposed phase IV-IV' transition above 17 GPa and provide new constraints for the melting and phase diagram of AN to 40 GPa and 400°C.

  16. Hubble's diagram and cosmic expansion

    PubMed Central

    Kirshner, Robert P.

    2004-01-01

    Edwin Hubble's classic article on the expanding universe appeared in PNAS in 1929 [Hubble, E. P. (1929) Proc. Natl. Acad. Sci. USA 15, 168–173]. The chief result, that a galaxy's distance is proportional to its redshift, is so well known and so deeply embedded into the language of astronomy through the Hubble diagram, the Hubble constant, Hubble's Law, and the Hubble time, that the article itself is rarely referenced. Even though Hubble's distances have a large systematic error, Hubble's velocities come chiefly from Vesto Melvin Slipher, and the interpretation in terms of the de Sitter effect is out of the mainstream of modern cosmology, this article opened the way to investigation of the expanding, evolving, and accelerating universe that engages today's burgeoning field of cosmology. PMID:14695886

  17. Phase diagram of ammonium nitrate

    NASA Astrophysics Data System (ADS)

    Dunuwille, M.; Yoo, C. S.

    2014-05-01

    Ammonium Nitrate (AN) has often been used in improvised explosive devices, due to its wide availability as a fertilizer and its capability of becoming explosive with slight additions of organic and inorganic compounds. Yet, the origin of the enhanced energetic properties of impure AN (or AN mixtures) is neither chemically unique nor well understood, resulting in rather catastrophic disasters in the past [1] and thereby a significant burden on safety in using ammonium nitrates even today. To remedy this situation, we have carried out an extensive study to investigate the phase stability of AN at high pressure and temperature, using diamond anvil cells and micro-Raman spectroscopy. The present results confirm the recently proposed phase IV-to-IV′ transition above 17 GPa [2] and provide new constraints for the melting and phase diagram of AN to 40 GPa and 400 °C.

  18. A Regime Diagram for Subduction

    NASA Astrophysics Data System (ADS)

    Stegman, D. R.; Farrington, R.; Capitanio, F. A.; Schellart, W. P.

    2009-12-01

    Regime diagrams and associated scaling relations have profoundly influenced our understanding of planetary dynamics. Previous regime diagrams characterized the regimes of stagnant-lid, small viscosity contrast, transitional, and no-convection for temperature-dependent (Moresi and Solomatov, 1995) and non-linear power law rheologies (Solomatov and Moresi, 1997), as well as stagnant-lid, sluggish-lid, and mobile-lid regimes once the finite strength of rock was considered (Moresi and Solomatov, 1998). Scalings derived from such models have been the cornerstone of parameterized models of the thermal evolution of rocky planets and icy moons for the past decade. While such a theory can predict the tectonic state of a planetary body, it is still rather incomplete in regards to predicting tectonics. For example, the mobile-lid regime is unspecific as to how continuous lithospheric recycling should occur on a terrestrial planet. Towards this goal, Gerya et al. (2008) advanced a new regime diagram aiming to characterize when subduction would manifest itself as a one-sided or two-sided downwelling, and as either symmetric or asymmetric. Here, we present a regime diagram for the case of single-sided, asymmetric subduction (the most Earth-like type). Using a 3-D numerical model of free subduction, we describe a total of 5 different styles of subduction that can possibly occur. Each style is distinguished by its upper mantle slab morphology resulting from the sinking kinematics. We provide movies to illustrate the different styles and their progressive time-evolution. In each regime, subduction is accommodated by a combination of plate advance and slab rollback, with associated motions of forward plate velocity and trench retreat, respectively. We demonstrate that the preferred subduction mode depends upon two essential controlling factors: 1) the buoyancy of the downgoing plate and 2) the strength of the plate in resisting bending at the hinge. We propose that a variety of subduction

  19. Mathematical review on source-type diagrams

    NASA Astrophysics Data System (ADS)

    Aso, Naofumi; Ohta, Kazuaki; Ide, Satoshi

    2016-03-01

    A source-type diagram is a visualization tool used to display earthquake sources, including double-couples, compensated linear vector dipoles, and isotropic deformation. Together with recent observations of non-double-couple events in a variety of tectonic settings, it is important to be able to recognize the source type intuitively from a representative diagram. Since previous works have proposed diagrams created using a range of projections, we review these diagrams in the framework of the moment tensor eigenvalue space. For further applications, we also provide complete formulas for conversion between moment tensor representation and the coordinate system of each diagram style. Using both a global catalog and synthetic data, we discuss differences between types of diagrams and the relative effectiveness of each.

  20. Continuation of point clouds via persistence diagrams

    NASA Astrophysics Data System (ADS)

    Gameiro, Marcio; Hiraoka, Yasuaki; Obayashi, Ippei

    2016-11-01

    In this paper, we present a mathematical and algorithmic framework for the continuation of point clouds by persistence diagrams. A key property used in the method is that the persistence map, which assigns a persistence diagram to a point cloud, is differentiable. This allows us to apply the Newton-Raphson continuation method in this setting. Given an original point cloud P, its persistence diagram D, and a target persistence diagram D′, we gradually move from D to D′, by successively computing intermediate point clouds until we finally find a point cloud P′ having D′ as its persistence diagram. Our method can be applied to a wide variety of situations in topological data analysis where it is necessary to solve an inverse problem, from persistence diagrams to point cloud data.
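The continuation scheme the abstract describes, moving the target gradually and seeding Newton's method at each step with the previous solution, can be illustrated on a toy one-dimensional map. Here `f` is a differentiable stand-in for the persistence map, which in practice is far higher-dimensional; the functions and step counts are illustrative only.

```python
def newton_solve(f, df, x0, target, tol=1e-10, max_iter=50):
    """Newton-Raphson: solve f(x) = target starting from x0."""
    x = x0
    for _ in range(max_iter):
        r = f(x) - target
        if abs(r) < tol:
            break
        x -= r / df(x)
    return x

def continuation(f, df, x_start, y_start, y_end, steps=20):
    """Follow the solution of f(x) = y as y moves from y_start to y_end,
    re-solving at each intermediate target with the previous solution as seed."""
    x = x_start
    for k in range(1, steps + 1):
        y = y_start + (y_end - y_start) * k / steps
        x = newton_solve(f, df, x, y)
    return x

# Toy monotone map standing in for the persistence map.
f = lambda x: x ** 3 + x
df = lambda x: 3 * x ** 2 + 1
x_end = continuation(f, df, x_start=1.0, y_start=2.0, y_end=10.0)
```

The small intermediate steps keep each Newton solve inside its basin of convergence, which is the point of continuation over a single direct solve from a distant seed.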

  1. The neptunium-iron phase diagram

    NASA Astrophysics Data System (ADS)

    Gibson, J. K.; Haire, R. G.; Beahm, E. C.; Gensini, M. M.; Maeda, A.; Ogawa, T.

    1994-08-01

    The phase relations in the Np-Fe alloy system have been elucidated using differential thermal analysis. A phase diagram for this system is postulated based upon the experimental results, regular-solution model calculations, and an expected correspondence to the U-Fe and Pu-Fe diagrams. The postulated Np-Fe diagram is characterized by limited terminal solid solubilities, two intermetallic solid phases, NpFe2 and Np6Fe, and two eutectics.

  2. [ASSESSING THE IMPACT OF CARDIAC DYSSYNCHRONY ON QUALITY OF LIFE IN PATIENTS WITH CHRONIC HEART FAILURE IN COMBINATION WITH TYPE 2 DIABETES MELLITUS].

    PubMed

    Vlasenko, M A; Rodionova, Iu V; Lopin, D O

    2014-01-01

    This article considers the influence of type 2 diabetes mellitus (T2DM) and cardiac dyssynchrony (DYS) on health-related quality of life (HRQoL) in patients with chronic heart failure (CHF), by means of the Minnesota Living with Heart Failure Questionnaire (MLHFQ). A negative impact of T2DM was found both on the overall assessment and on the physical and emotional components of HRQoL in patients with CHF. DYS also has a negative effect on HRQoL, both in patients with isolated CHF and in patients with CHF and concomitant T2DM, but its influence is mediated mainly by the physical component. A number of factors contributing to the development of DYS in the studied groups were identified, including poor compensation of carbohydrate and lipid metabolism and activation of systemic inflammation. Studying HRQoL is expedient for optimizing therapeutic strategies in these patients.

  3. Symmetric Monotone Venn Diagrams with Seven Curves

    NASA Astrophysics Data System (ADS)

    Cao, Tao; Mamakani, Khalegh; Ruskey, Frank

    An n-Venn diagram consists of n curves drawn in the plane in such a way that each of the 2^n possible intersections of the interiors and exteriors of the curves forms a connected non-empty region. A k-region in a diagram is a region that is in the interior of precisely k curves. An n-Venn diagram is symmetric if it has a point of rotation about which rotation of the plane by 2π/n radians leaves the diagram fixed; it is polar symmetric if it is symmetric and its stereographic projection about the infinite outer face is isomorphic to the projection about the innermost face. A Venn diagram is monotone if every k-region is adjacent to both some (k-1)-region (if k > 0) and some (k+1)-region (if k < n). A Venn diagram is simple if at most two curves intersect at any point. We prove that the "Grünbaum" encoding uniquely identifies monotone simple symmetric n-Venn diagrams and describe an algorithm that produces an exhaustive list of all of the monotone simple symmetric n-Venn diagrams. There are exactly 23 simple monotone symmetric 7-Venn diagrams, of which 6 are polar symmetric.

  4. Influence Diagram Use With Respect to Technology Planning and Investment

    NASA Technical Reports Server (NTRS)

    Levack, Daniel J. H.; DeHoff, Bryan; Rhodes, Russel E.

    2009-01-01

    Influence diagrams are relatively simple, but powerful, tools for assessing the impact of choices or resource allocations on goals or requirements. They are very general and can be used on a wide range of problems: any problem that has defined goals, a set of factors that influence the goals or the other factors, and a set of inputs. Influence diagrams show the relationships among a set of results, the attributes that influence them, and the inputs that influence the attributes. If the results are goals or requirements of a program, then the influence diagram can be used to examine how the requirements are affected by changes to technology investment. This paper uses an example to show how to construct and interpret influence diagrams, how to assign weights to the inputs and attributes, how to assign weights to the transfer functions (influences), and how to calculate the resulting influences of the inputs on the results. A study is also presented as an example of how influence diagrams can help in technology planning and investment. The Space Propulsion Synergy Team (SPST) used this technique to examine the impact of R&D spending on the Life Cycle Cost (LCC) of a space transportation system. The question addressed was how the recurring and non-recurring portions of LCC are affected by the proportion of R&D resources spent to impact technology objectives versus the proportion spent to impact operational dependability objectives. The goals, attributes, and inputs were established; all of the linkages (influences) were determined; and the weighting of each attribute and each linkage was determined. Finally, the inputs were varied and the resulting impacts on the LCC are presented. The paper discusses how each of these steps was accomplished, both for credibility and as an example for future studies using influence diagrams for technology and investment planning.
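The weighted-influence calculation described above, in which inputs influence attributes and attributes influence results, can be sketched as successive weighted sums. The node names and weights below are invented for illustration and are not the SPST study's actual values.

```python
# Toy influence diagram: inputs -> attributes -> results, with a weight on
# each link. All names and numbers are hypothetical.
inputs = {"tech_rnd": 0.6, "ops_rnd": 0.4}  # resource-allocation fractions

# weights[child][parent] = strength of parent's influence on child
attr_weights = {
    "dependability": {"tech_rnd": 0.3, "ops_rnd": 0.9},
    "performance":   {"tech_rnd": 0.8, "ops_rnd": 0.2},
}
result_weights = {
    "recurring_lcc":     {"dependability": 0.7, "performance": 0.3},
    "non_recurring_lcc": {"dependability": 0.2, "performance": 0.8},
}

def propagate(parents, weights):
    """Weighted-sum influence of parent-node scores on each child node."""
    return {child: sum(w * parents[p] for p, w in ws.items())
            for child, ws in weights.items()}

attributes = propagate(inputs, attr_weights)   # input -> attribute layer
results = propagate(attributes, result_weights)  # attribute -> result layer
```

Varying the input allocations and re-running the two propagation steps is exactly the "vary the inputs, observe the impact on the results" exercise the paper describes.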

  5. Application of Failure Mode and Effect Analysis (FMEA) and cause and effect analysis in conjunction with ISO 22000 to a snails (Helix aspersa) processing plant; A case study.

    PubMed

    Arvanitoyannis, Ioannis S; Varzakas, Theodoros H

    2009-08-01

    Failure Mode and Effect Analysis (FMEA) has been applied to the risk assessment of snail manufacturing. A tentative approach to applying FMEA in the snail industry was attempted in conjunction with ISO 22000. Preliminary Hazard Analysis was used to analyze and predict the failure modes occurring in a food chain system (a snail processing plant), based on the functions, characteristics, and/or interactions of the ingredients or processes upon which the system depends. Critical Control Points were identified and implemented in the cause-and-effect diagram (also known as the Ishikawa, tree, or fishbone diagram). In this work a comparison of ISO 22000 analysis with HACCP is carried out for snail processing and packaging. The main emphasis, however, was put on the quantification of risk assessment by determining the RPN per identified processing hazard. Sterilization of tins, bioaccumulation of heavy metals, packaging of shells, and poisonous mushrooms were the processes identified with the highest RPNs (280, 240, 147, and 144, respectively), and corrective actions were undertaken. Following the application of corrective actions, a second calculation of RPN values was carried out, leading to considerably lower values (below the upper acceptable limit of 130). It is noteworthy that the application of the Ishikawa (cause-and-effect or tree) diagram led to converging results, thus corroborating the validity of conclusions derived from the risk assessment and FMEA. Therefore, the incorporation of FMEA analysis within the ISO 22000 system of a snail processing industry is considered imperative. PMID:19582641

  6. Application of Failure Mode and Effect Analysis (FMEA) and cause and effect analysis in conjunction with ISO 22000 to a snails (Helix aspersa) processing plant; A case study.

    PubMed

    Arvanitoyannis, Ioannis S; Varzakas, Theodoros H

    2009-08-01

    Failure Mode and Effect Analysis (FMEA) has been applied to the risk assessment of snail manufacturing. A tentative approach to applying FMEA in the snail industry was attempted in conjunction with ISO 22000. Preliminary Hazard Analysis was used to analyze and predict the failure modes occurring in a food chain system (a snail processing plant), based on the functions, characteristics, and/or interactions of the ingredients or processes upon which the system depends. Critical Control Points were identified and implemented in the cause-and-effect diagram (also known as the Ishikawa, tree, or fishbone diagram). In this work a comparison of ISO 22000 analysis with HACCP is carried out for snail processing and packaging. The main emphasis, however, was put on the quantification of risk assessment by determining the RPN per identified processing hazard. Sterilization of tins, bioaccumulation of heavy metals, packaging of shells, and poisonous mushrooms were the processes identified with the highest RPNs (280, 240, 147, and 144, respectively), and corrective actions were undertaken. Following the application of corrective actions, a second calculation of RPN values was carried out, leading to considerably lower values (below the upper acceptable limit of 130). It is noteworthy that the application of the Ishikawa (cause-and-effect or tree) diagram led to converging results, thus corroborating the validity of conclusions derived from the risk assessment and FMEA. Therefore, the incorporation of FMEA analysis within the ISO 22000 system of a snail processing industry is considered imperative.
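The RPN quantification this study relies on is the standard FMEA product Severity × Occurrence × Detection. A minimal sketch follows; the S/O/D factorings are invented so that the products match the RPNs reported in the abstract (280, 240, 147, 144) and the cited acceptability limit of 130.

```python
# Standard FMEA risk priority number: RPN = Severity * Occurrence * Detection.
RPN_LIMIT = 130  # upper acceptable limit cited in the study

# (severity, occurrence, detection) per hazard; the factorings are
# hypothetical, chosen only so the products equal the reported RPNs.
hazards = {
    "tin sterilization":           (8, 7, 5),  # 8*7*5 = 280
    "heavy-metal bioaccumulation": (8, 6, 5),  # 240
    "shell packaging":             (7, 7, 3),  # 147
    "poisonous mushrooms":         (8, 6, 3),  # 144
}

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

# Hazards above the limit require corrective action and re-scoring.
needs_action = {name: rpn(*sod) for name, sod in hazards.items()
                if rpn(*sod) > RPN_LIMIT}
```

After corrective actions, the same calculation is repeated with the revised S/O/D scores, which is how the study verifies that all RPNs fall below 130.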

  7. Influence Diagrams as Decision-Making Tools for Pesticide Risk Management

    EPA Science Inventory

    The pesticide policy arena is filled with discussion of probabilistic approaches to assess ecological risk, however, similar discussions about implementing formal probabilistic methods in pesticide risk decision making are less common. An influence diagram approach is proposed f...

  8. Phase Diagram of Ammonium Nitrate

    NASA Astrophysics Data System (ADS)

    Dunuwille, Mihindra; Yoo, Choong-Shik

    2013-06-01

    Ammonium Nitrate (AN) has often been used in improvised explosive devices, due to its wide availability as a fertilizer and its capability of becoming explosive with slight additions of organic and inorganic compounds. Yet, the origin of the enhanced energetic properties of impure AN (or AN mixtures) is neither chemically unique nor well understood, resulting in rather catastrophic disasters in the past [1] and thereby a significant burden on safety in using ammonium nitrates even today. To remedy this situation, we have carried out an extensive study to investigate the phase stability of AN, in different chemical environments, at high pressure and temperature, using diamond anvil cells and micro-Raman spectroscopy. The present results confirm the recently proposed phase IV-to-IV′ transition above 15 GPa [2] and provide new constraints for the melting and phase diagram of AN to 40 GPa and 673 K. The present study has been supported by the U.S. DHS under Award Number 2008-ST-061-ED0001.

  9. Phase diagram of ammonium nitrate

    SciTech Connect

    Dunuwille, Mihindra; Yoo, Choong-Shik

    2013-12-07

    Ammonium Nitrate (AN) is a fertilizer, yet becomes an explosive upon a small addition of chemical impurities. The origin of enhanced chemical sensitivity in impure AN (or AN mixtures) is not well understood, posing significant safety issues in using AN even today. To remedy the situation, we have carried out an extensive study to investigate the phase stability of AN and its mixtures with hexane (ANFO–AN mixed with fuel oil) and Aluminum (Ammonal) at high pressures and temperatures, using diamond anvil cells (DAC) and micro-Raman spectroscopy. The results indicate that pure AN decomposes to N2, N2O, and H2O at the onset of the melt, whereas the mixtures, ANFO and Ammonal, decompose at substantially lower temperatures. The present results also confirm the recently proposed phase IV-IV′ transition above 17 GPa and provide new constraints for the melting and phase diagram of AN to 40 GPa and 400°C.

  10. No Such Thing as Failure, Only Feedback: Designing Innovative Opportunities for E-Assessment and Technology-Mediated Feedback

    ERIC Educational Resources Information Center

    Miller, Charles; Doering, Aaron; Scharber, Cassandra

    2010-01-01

    In this paper we challenge designers, researchers, teachers, students, and parents to re-assess and re-envision the value of technology-mediated feedback and e-assessment by examining the innovative roles feedback and assessment played in the design of three contemporary web-based learning environments. These environments include 1) an…

  11. Economic impact assessment from the use of a mobile app for the self-management of heart diseases by patients with heart failure in a Spanish region.

    PubMed

    Cano Martín, José Antonio; Martínez-Pérez, Borja; de la Torre-Díez, Isabel; López-Coronado, Miguel

    2014-09-01

    Currently, cardiovascular diseases are the deadliest diseases, causing a total of 17 million deaths worldwide. Hence, they are the focus of many mobile applications for smartphones and tablets. This paper assesses the ex-ante economic impact, through a cost-effectiveness analysis, of the use of one such app, CardioManager, by patients with heart failure in a Spanish community, Castile and Leon. For this, a cost-effectiveness analysis using a hidden Markov model was performed on a hypothetical cohort of patients diagnosed with heart failure, based on epidemiological parameters and the costs derived from the management and care of heart failure patients by the Public Health Care System of Castile and Leon. The costs of patient care were estimated from the perspective of the Ministry of Health of Spain using a discount rate of 3%. Finally, the ex-ante impact of introducing CardioManager into the Health Care System was estimated. It is concluded that the introduction of CardioManager may generate a 33% reduction in the cost of management and treatment of the disease. This means that CardioManager may be able to save more than 9,000 € per patient for the local Health Care System of Castile and Leon, which translates into a saving of 0.31% of the total health expenditure of the region.
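
The discounting step described in the abstract can be sketched as follows. The annual cost figures and time horizon below are illustrative placeholders, not the study's data; only the 3% discount rate comes from the abstract.

```python
def discounted_cost(annual_cost, years, rate=0.03):
    """Total cost of `years` annual payments, discounting year t at `rate`
    (the study used a 3% discount rate)."""
    return sum(annual_cost / (1 + rate) ** t for t in range(years))

# Hypothetical per-patient annual costs (EUR); the 33% reduction mirrors the
# abstract's headline figure, but the absolute numbers are made up.
usual_care = discounted_cost(annual_cost=6000, years=5)
with_app = discounted_cost(annual_cost=4000, years=5)
saving_per_patient = usual_care - with_app
print(f"Saving per patient over 5 years: {saving_per_patient:,.0f} EUR")
```
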

  12. Free-Body Diagrams: Necessary or Sufficient?

    NASA Astrophysics Data System (ADS)

    Rosengrant, David; Van Heuvelen, Alan; Etkina, Eugenia

    2005-09-01

    The Rutgers PAER group is working to help students develop various scientific abilities. One of these abilities is to create, understand, and learn to use different representations of physical processes (pictorial representations, motion diagrams, free-body diagrams, and energy bar charts) for qualitative reasoning and problem solving. Physics education literature indicates that using multiple representations is beneficial for student understanding of physics ideas and for problem solving. We developed a special approach to constructing and using free-body diagrams for representing physical phenomena and for problem solving. We will examine whether students draw free-body diagrams in solving problems when they know they will not receive credit for it; the consistency of their use across different conceptual areas; and whether students who use free-body diagrams while solving problems in different areas of physics are more successful than those who do not.

  13. Reading fitness landscape diagrams through HSAB concepts

    NASA Astrophysics Data System (ADS)

    Vigneresse, Jean-Louis

    2014-10-01

    Fitness landscapes are conceived as ranges of mountains, with local peaks and valleys. In terms of potential, such topographic variations indicate places of local instability or stability. The chemical potential, or electronegativity with its sign changed, carries similar information. In addition to the chemical descriptors defined through hard-soft acid-base (HSAB) concepts and computed through density functional theory (DFT), the principles that rule chemical reactions allow the design of such landscape diagrams. The simplest diagram uses electrophilicity and hardness as coordinates and allows examining the influence of the maximum hardness or minimum electrophilicity principles. A third dimension is introduced within such a diagram by mapping the topography of electronegativity, polarizability, or charge exchange. Introducing charge exchange during chemical reactions, or mapping a third parameter (e.g., polarizability), reinforces the information carried by a simple binary diagram. Examples of such diagrams are provided, using data from Earth Sciences, simple oxides, or ligands.
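
The HSAB descriptors named in the abstract (electronegativity, hardness, electrophilicity) are commonly obtained from the ionization energy I and electron affinity A. A minimal sketch, assuming the standard finite-difference definitions; the abstract does not state its exact working formulas:

```python
def hsab_descriptors(I, A):
    """Finite-difference HSAB descriptors from ionization energy I and
    electron affinity A (both in eV).
    chi:   Mulliken electronegativity (= minus the chemical potential mu)
    eta:   chemical hardness (convention eta = (I - A)/2; some authors use I - A)
    omega: Parr electrophilicity index"""
    chi = (I + A) / 2.0
    eta = (I - A) / 2.0
    omega = chi ** 2 / (2.0 * eta)
    return chi, eta, omega

# Example with hydrogen-atom-like values (I = 13.6 eV, A = 0.75 eV).
chi, eta, omega = hsab_descriptors(I=13.6, A=0.75)
print(chi, eta, omega)
```
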

  14. Assessing the Quality and Comparative Effectiveness of Team-Based Care for Heart Failure: Who, What, Where, When, and How.

    PubMed

    Cooper, Lauren B; Hernandez, Adrian F

    2015-07-01

    Team-based or multidisciplinary care may be a potential way to positively impact outcomes for heart failure (HF) patients by improving clinical outcomes, managing patient symptoms, and reducing costs. The multidisciplinary team includes the HF cardiologist, HF nurses, clinical pharmacists, dieticians, exercise specialists, mental health providers, social workers, primary care providers, and additional subspecialty providers. The timing and setting of multidisciplinary care depends on the needs of the patient and the resources available. Multidisciplinary HF teams should be evaluated based on their ability to achieve goals, as well as their potential for sustainability over time.

  15. Looking Forward, Looking Back: Assessing Variations in Hospital Resource Use and Outcomes for Elderly Patients with Heart Failure

    PubMed Central

    Ong, Michael K.; Mangione, Carol M.; Romano, Patrick S.; Zhou, Qiong; Auerbach, Andrew D.; Chun, Alein; Davidson, Bruce; Ganiats, Theodore G.; Greenfield, Sheldon; Gropper, Michael A.; Malik, Shaista; Rosenthal, J. Thomas; Escarce, José J.

    2009-01-01

    Background Recent studies have found substantial variation in hospital resource utilization by expired Medicare beneficiaries with chronic illnesses. By analyzing only expired patients, these studies cannot identify differences across hospitals in health outcomes like mortality. This study examines the association between mortality and resource utilization at the hospital level, when all Medicare beneficiaries hospitalized for heart failure are examined. Methods and Results 3,999 individuals hospitalized with a principal diagnosis of heart failure at six California teaching hospitals between January 1, 2001 and June 30, 2005 were analyzed with multivariate risk-adjustment models for total hospital days, total hospital direct costs, and mortality within 180 days after initial admission (“Looking Forward”). A subset of 1,639 individuals who died during the study period were analyzed with multivariate risk-adjustment models for total hospital days and total hospital direct costs within 180 days prior to death (“Looking Back”). “Looking Forward” risk-adjusted hospital means ranged from 17.0% to 26.0% for mortality, 7.8 to 14.9 days for total hospital days, and 0.66 to 1.30 times the mean value for indexed total direct costs. Spearman rank correlation coefficients were −0.68 between mortality and hospital days, and −0.93 between mortality and indexed total direct costs. “Looking Back” risk-adjusted hospital means ranged from 9.1 to 21.7 days for total hospital days and 0.91 to 1.79 times the mean value for indexed total direct costs. Differences in resource utilization site ranks between expired individuals and all individuals were insignificant. Conclusions California teaching hospitals that used more resources caring for patients hospitalized for heart failure had lower mortality rates. Focusing only on expired individuals may overlook mortality variation as well as associations between greater resource utilization and lower mortality.

  16. Testicular failure

    MedlinePlus

    ... LH . Your doctor may also order a semen analysis to examine the number of healthy sperm you are producing. Sometimes, an ultrasound of the testes will be ordered. Testicular failure and low testosterone level may be hard to ...

  17. A Hubble Diagram for Quasars

    NASA Astrophysics Data System (ADS)

    Risaliti, G.; Lusso, E.

    2015-12-01

    We present a new method to test the ΛCDM cosmological model and to estimate cosmological parameters based on the nonlinear relation between the ultraviolet and X-ray luminosities of quasars. We built a data set of 1138 quasars by merging several samples from the literature with X-ray measurements at 2 keV and SDSS photometry, which was used to estimate the extinction-corrected 2500 Å flux. We obtained three main results: (1) we checked the nonlinear relation between X-ray and UV luminosities in small redshift bins up to z ~ 6, confirming that the relation holds at all redshifts with the same slope; (2) we built a Hubble diagram for quasars up to z ~ 6, which is well matched to that of supernovae in the common z = 0-1.4 redshift interval and extends the test of the cosmological model up to z ~ 6; and (3) we showed that this nonlinear relation is a powerful tool for estimating cosmological parameters. Using the present data and assuming a ΛCDM model, we obtain Ω_M = 0.22 (+0.10/−0.08) and Ω_Λ = 0.92 (+0.18/−0.30) (Ω_M = 0.28 ± 0.04 and Ω_Λ = 0.73 ± 0.08 from a joint quasar-SNe fit). Much more precise measurements will be achieved with future surveys. A few thousand SDSS quasars already have serendipitous X-ray observations from Chandra or XMM-Newton, and at least 100,000 quasars with UV and X-ray data will be made available by the extended ROentgen Survey with an Imaging Telescope Array all-sky survey in a few years. The Euclid, Large Synoptic Survey Telescope, and Advanced Telescope for High ENergy Astrophysics surveys will further increase the sample size to at least several hundred thousand. Our simulations show that these samples will provide tight constraints on the cosmological parameters and will allow us to test for possible deviations from the standard model with higher precision than is possible today.

  18. The Art of Free-Body Diagrams.

    ERIC Educational Resources Information Center

    Puri, Avinash

    1996-01-01

    Discusses the difficulty of drawing free-body diagrams which only show forces exerted on a body from its neighbors. Presents three ways a body may be modeled: a particle, rigid extended, and nonrigid extended. (MKR)

  19. Phase diagram for passive electromagnetic scatterers.

    PubMed

    Lee, Jeng Yi; Lee, Ray-Kuang

    2016-03-21

    With the conservation of power, a phase diagram defined by the squared amplitude and phase of the scattering coefficients for each spherical harmonic channel is introduced as a universal map for any passive electromagnetic scatterer. Physically allowable solutions for scattering coefficients in this diagram clearly show power competitions among scattering and absorption. It also illustrates a variety of exotic scattering or absorption phenomena, from resonant scattering and invisibility cloaking to the coherent perfect absorber. With electrically small core-shell scatterers as an example, we demonstrate a systematic method to design field-controllable structures based on the allowed trajectories in this diagram. The proposed phase diagram and inverse design can provide tools to design functional electromagnetic devices. PMID:27136839

  20. An Improved Mnemonic Diagram for Thermodynamic Relationships.

    ERIC Educational Resources Information Center

    Rodriguez, Joaquin; Brainard, Alan J.

    1989-01-01

    Considers pressure, volume, entropy, temperature, Helmholtz free energy, Gibbs free energy, enthalpy, and internal energy. Suggests the mnemonic diagram is for use with simple systems that are defined as macroscopically homogeneous, isotropic, uncharged, and chemically inert. (MVL)

  1. Attribute Reduction Based on Property Pictorial Diagram

    PubMed Central

    Wan, Qing; Wei, Ling

    2014-01-01

    This paper mainly studies attribute reduction which keeps the lattice structure in formal contexts based on the property pictorial diagram. Firstly, the property pictorial diagram of a formal context is defined. Based on such diagram, an attribute reduction approach of concept lattice is achieved. Then, through the relation between an original formal context and its complementary context, an attribute reduct of complementary context concept lattice is obtained, which is also based on the property pictorial diagram of the original formal context. Finally, attribute reducts in property oriented concept lattice and object oriented concept lattice can be acquired by the relations of attribute reduction between these two lattices and concept lattice of complementary context. In addition, a detailed illustrative example is presented. PMID:25247200

  2. Veitch diagram plotter simplifies Boolean functions

    NASA Technical Reports Server (NTRS)

    Rubin, D. K.

    1964-01-01

    This device for simplifying the plotting of a Veitch diagram consists of several overlays for blocking out the unwanted squares. This method of plotting the various input combinations to a computer is used in conjunction with the Boolean functions.

  3. A universal structured-design diagramer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Program (FLOWCHARTER) generates standardized flowcharts and concordances for development and debugging of programs in any language. User describes programming-language grammar, providing syntax rules in Backus-Naur form (BNF), list of semantic rules, and set of concordance rules. Once grammar is described, user supplies only source code of program to be diagrammed. FLOWCHARTER automatically produces flow diagram and concordance. Source code for program is written for PASCAL Release 2 compiler, as distributed by University of Minnesota.

  4. Elementary diagrams in nuclear and neutron matter

    SciTech Connect

    Wiringa, R.B.

    1995-08-01

    Variational calculations of nuclear and neutron matter are currently performed using a diagrammatic cluster expansion with the aid of nonlinear integral equations for evaluating expectation values. These are the Fermi hypernetted chain (FHNC) and single-operator chain (SOC) equations, which are a way of doing partial diagram summations to infinite order. A more complete summation can be made by adding elementary diagrams to the procedure. The simplest elementary diagrams appear at the four-body cluster level; there is one such E4 diagram in Bose systems, but 35 diagrams in Fermi systems, which gives a level of approximation called FHNC/4. We developed a novel technique for evaluating these diagrams, by computing and storing 6 three-point functions, S_xyz(r12, r13, r23), where xyz (= ccd, cce, ddd, dde, dee, or eee) denotes the exchange character at the vertices 1, 2, and 3. All 35 Fermi E4 diagrams can be constructed from these 6 functions and other two-point functions that are already calculated. The elementary diagrams are known to be important in some systems like liquid ³He. We expect them to be small in nuclear matter at normal density, but they might become significant at higher densities appropriate for neutron star calculations. This year we programmed the FHNC/4 contributions to the energy and tested them in a number of simple model cases, including liquid ³He and Bethe's homework problem. We get reasonable, but not exact, agreement with earlier published work. In nuclear and neutron matter with the Argonne v14 interaction these contributions are indeed small corrections at normal density and grow to only 5-10 MeV/nucleon at 5 times normal density.

  5. Lattice and Phase Diagram in QCD

    SciTech Connect

    Lombardo, Maria Paola

    2008-10-13

    Model calculations have produced a number of very interesting expectations for the QCD phase diagram, and the task of lattice calculations is to put these studies on quantitative grounds. I will give an overview of the current status of the lattice analysis of the QCD phase diagram, from the quantitative results of mature calculations at zero and small baryochemical potential to the exploratory studies of the colder, denser phase.

  6. Fluctuations and the QCD phase diagram

    SciTech Connect

    Schaefer, B.-J.

    2012-06-15

    In this contribution the role of quantum fluctuations for the QCD phase diagram is discussed. This concerns in particular the importance of the matter back-reaction to the gluonic sector. The impact of these fluctuations on the location of the confinement/deconfinement and the chiral transition lines as well as their interrelation are investigated. Consequences of our findings for the size of a possible quarkyonic phase and location of a critical endpoint in the phase diagram are drawn.

  7. ISS EPS Orbital Replacement Unit Block Diagrams

    NASA Technical Reports Server (NTRS)

    Schmitz, Gregory V.

    2001-01-01

    The attached documents are being provided to Switching Power Magazine for information purposes. This magazine is writing a feature article on the International Space Station Electrical Power System, focusing on the switching power processors. These units include the DC-DC Converter Unit (DDCU), the Bi-directional Charge/Discharge Unit (BCDU), and the Sequential Shunt Unit (SSU). These diagrams are high-level schematics/block diagrams depicting the overall functionality of each unit.

  8. Use of influence diagrams in gas transfer system option prioritization

    SciTech Connect

    Heger, A.S.; Garcia, M.D.

    1995-08-01

    A formal decision-analysis methodology was applied to aid the Department of Energy (DOE) in deciding which of several gas transfer system (GTS) options should be selected. The decision objectives for this case study, i.e., risk and cost, were directly derived from the DOE guidelines. Influence diagrams were used to define the structure of the decision problem and clearly delineate the flow of information. A set of performance measures was used in conjunction with the influence diagrams to assess and evaluate the degree to which the objectives of the case study were met. These performance measures were extracted from technical models, design and operating data, and professional judgments. The results were aggregated to provide an overall evaluation of the different design options of the gas transfer system. Consequently, the results of this analysis were used as an aid to DOE in selecting a viable GTS option.

  9. Use of serial assessment of disease severity and liver biopsy for indication for liver transplantation in pediatric Epstein-Barr virus-induced fulminant hepatic failure.

    PubMed

    Nakazawa, Atsuko; Nakano, Natsuko; Fukuda, Akinari; Sakamoto, Seisuke; Imadome, Ken-Ichi; Kudo, Toyoichiro; Matsuoka, Kentaro; Kasahara, Mureo

    2015-03-01

    The decision to perform liver transplantation (LT) in patients with Epstein-Barr virus (EBV)-induced fulminant hepatic failure (FHF) relies on a precise assessment of laboratory and pathological findings. In this study, we analyzed clinical and laboratory data as well as the pathological features of the liver in order to evaluate the pathogenesis and the need for LT in 5 patients with EBV-induced FHF. According to the King's College criteria, the Acute Liver Failure Early Dynamic (ALFED) model, and the Japanese criteria (from the Acute Liver Failure Study Group of Japan), only 1 patient was considered to be a candidate for LT. However, explanted liver tissues in 3 cases exhibited massive hepatocellular necrosis together with diffuse CD8-positive T cell infiltration in both the portal area and the sinusoid. EBV was detected in the liver, plasma, and peripheral blood mononuclear cells (PBMNCs). In 2 cases indicated to be at moderate risk by the ALFED model, liver biopsy showed CD8-positive and EBV-encoded RNA signal-positive lymphocytic infiltration predominantly in the portal area, but massive hepatocellular necrosis was not observed. These patients were treated with immunosuppressants and etoposide under the diagnosis of EBV-induced hemophagocytic lymphohistiocytosis or systemic EBV-positive T cell lymphoproliferative disease of childhood. EBV DNA was detected at a high level in PBMNCs, although it was negative in plasma. On the basis of the pathological analysis of the explanted liver tissues, LT was proposed for the restoration of liver function and the removal of the EBV-infected lymphocytes concentrated in the liver. Detecting EBV DNA by a quantitative polymerase chain reaction in plasma and PBMNCs was informative. An accurate evaluation of the underlying pathogenesis is essential for developing a treatment strategy in patients with EBV-induced FHF.

  10. Assessment of RELAP5/MOD3 with the LOFT L9-1/L3-3 experiment simulating an anticipated transient with multiple failures

    SciTech Connect

    Bang, Y.S.; Seul, K.W.; Kim, H.J.

    1994-02-01

    The RELAP5/MOD3 5m5 code is assessed using the L9-1/L3-3 test carried out in the LOFT facility, a 1/60-scaled experimental reactor, simulating a loss-of-feedwater accident with multiple failures and the sequentially induced small-break loss-of-coolant accident. The code predictability is evaluated for four separate sub-periods with respect to the system response: the initial heatup phase, the spray and power-operated relief valve (PORV) cycling phase, the blowdown phase, and the recovery phase. Based on comparisons of the calculation results with the experiment data, it is shown that the overall thermal-hydraulic behavior important to the scenario, such as heat removal between the primary side and the secondary side and system depressurization, can be well predicted, and that the code could be applied to a full-scale nuclear power plant for an anticipated transient with multiple failures within reasonable accuracy. Minor discrepancies between the prediction and the experiment are identified in the reactor scram time, the post-scram behavior in the initial heatup phase, an excessive heatup rate in the cycling phase, insufficient energy convected out of the PORV under the hot-leg stratified condition in the saturated blowdown phase, and the void distribution in the secondary side in the recovery phase. These may come from code uncertainties in predicting the spray mass flow rate, the associated condensation in the pressurizer, and the junction fluid density under stratified conditions.

  11. Learning from Failures.

    ERIC Educational Resources Information Center

    Saffran, Murray

    1991-01-01

    Describes mistakes made in trying to change the Nutrition and Digestion section of a medical biochemistry course. Author tried to make the section student taught and reports nine mistakes including the following: ignoring active opposition of colleagues, failure to assess the receptivity of the class to a new form of teaching, and overestimating…

  12. Troponins in heart failure.

    PubMed

    Omland, T; Røsjø, H; Giannitsis, E; Agewall, S

    2015-03-30

    The signs and symptoms of heart failure are frequently unspecific and correlate poorly with objective indices of cardiac function. Objective assessment of cardiac function by echocardiography or other imaging modalities also correlates poorly with symptomatic status and functional capacity. Accordingly, there is a need for circulating biomarkers that can provide incremental diagnostic and prognostic information to the existing armamentarium of tests. The introduction of more sensitive assays that allow determination of very low circulating concentrations of the myofibrillar proteins cardiac troponin I and T has not only resulted in improved diagnostic accuracy in the setting of acute coronary syndromes. The high-sensitivity assays have also shown that cardiac troponins are frequently found chronically circulating in a variety of acute and chronic, cardiac and non-cardiac disease conditions, including acute heart failure and chronic symptomatic and asymptomatic left ventricular dysfunction. Cardiac troponins I and T may provide clinically useful prognostic information, both concerning the future risk of developing heart failure in asymptomatic subjects and the risk of fatal events and hospital admissions in those with already established heart failure. This review summarizes current literature on the clinical performance and utility of cardiac troponin measurements as diagnostic and prognostic tools in patients with symptomatic heart failure, as well as in those with asymptomatic left ventricular dysfunction, and clinical phenotypes at high risk for developing heart failure, including stable coronary artery disease, left ventricular hypertrophy, and aortic stenosis.

  13. Margins in high temperature leak-before-break assessments

    SciTech Connect

    Budden, P.J.; Hooton, D.G.

    1997-04-01

    Developments in the defect assessment procedure R6 to include high-temperature mechanisms in Leak-before-Break arguments are described. In particular, the effect of creep on the time available to detect a leak and on the crack opening area, and hence leak rate, is discussed. The competing influence of these two effects is emphasized by an example. The application to Leak-before-Break of the time-dependent failure assessment diagram approach for high temperature defect assessment is then outlined. The approach is shown to be of use in assessing the erosion of margins by creep.
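
As a rough illustration of the failure assessment diagram approach this abstract builds on, an assessment point (Lr, Kr) can be checked against the material-independent curve; this sketch assumes the commonly quoted R6 Option 1 form and omits the creep and time-dependence effects that are the subject of the paper:

```python
import math

def fad_option1(Lr):
    """Commonly quoted R6 Option 1 failure assessment curve (no creep terms):
    Kr_limit(Lr) = (1 - 0.14 Lr^2) * (0.3 + 0.7 exp(-0.65 Lr^6))."""
    return (1 - 0.14 * Lr ** 2) * (0.3 + 0.7 * math.exp(-0.65 * Lr ** 6))

def is_acceptable(Kr, Lr):
    """An assessment point (Lr, Kr) is acceptable if it lies on or below the curve."""
    return Kr <= fad_option1(Lr)

# A point well inside the diagram is acceptable; one far outside is not.
print(is_acceptable(Kr=0.5, Lr=0.8), is_acceptable(Kr=1.0, Lr=1.2))
```

The margin eroded by creep, in the paper's terms, corresponds to the assessment point drifting toward this curve as damage accumulates with time.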

  14. Heart failure.

    PubMed

    2014-12-15

    Essential facts Heart failure affects about 900,000 people in the UK. The condition can affect people of all ages, but it is more common in older people, with more than half of all patients over the age of 75. It is caused by the heart failing to pump enough blood around the body at the right pressure, usually because the heart muscle has become too weak or stiff to work properly. Acute heart failure, which occurs when symptoms develop quickly, is the leading cause of hospital admission in people over 65. PMID:25492766

  15. Rationale and design of a pilot randomized controlled trial to assess the role of intravenous ferric carboxymaltose in Asian patients with heart failure (PRACTICE‐ASIA‐HF)

    PubMed Central

    Yeo, Poh Shuan Daniel; Hadi, Farid Abdul; Cushway, Timothy; Lee, Kim Yee; Tai, Bee Choo

    2015-01-01

    Abstract Aims Iron deficiency (ID) is highly prevalent in patients with heart failure (HF) worldwide regardless of haemoglobin levels. Results from therapeutic trials of intermittently dosed intravenous (i.v.) iron are promising in the ambulatory Caucasian population with HF with reduced left ventricular ejection fraction, although evidence is scarce in Asia. The Pilot Randomized Controlled Trial to Assess the Role of Intravenous Ferric Carboxymaltose in Asian Patients with Heart Failure aims to assess the effect of single‐dose i.v. ferric carboxymaltose (FCM) in a multi‐ethnic Asian population with HF and ID. Methods and results This is an open‐label, randomized, placebo‐controlled trial recruiting stabilized inpatients with decompensated HF (regardless of left ventricular ejection fraction), ID [defined as serum ferritin <300 ng/mL if transferrin saturation <20%] and haemoglobin ≤14 g/dL. Patients from two tertiary institutions were randomized in a 1:1 ratio to receive a single dose of either i.v. FCM (Ferinject®) 1000 mg or i.v. normal saline. The primary endpoint is the change in 6‐min walk distance at Weeks 4 and 12, and secondary endpoints are changes at Weeks 4 and 12 in (i) quality of life as measured by the Kansas City Cardiomyopathy Questionnaire and Visual Analogue Scale scores and (ii) New York Heart Association functional class. Conclusions Preliminary efficacy data on i.v. FCM therapy in Asian HF are expected from this pilot study to support larger‐scale multicentre therapeutic i.v. FCM trials within Asia.

  16. Industrial Power Distribution System Reliability Assessment utilizing Markov Approach

    NASA Astrophysics Data System (ADS)

    Guzman-Rivera, Oscar R.

    A method to perform power system reliability analysis using the Markov approach, reliability block diagrams, and fault tree analysis has been presented. The Markov method used is a state-space model based on state diagrams generated for a one-line industrial power distribution system. The reliability block diagram (RBD) method is a graphical and calculation tool used to model the distribution power system of an industrial facility. Quantitative reliability estimates in this work are based on CARMS and BlockSim simulations as well as state-space, RBD, and failure mode analyses. The power system reliability was assessed and the main contributors to power system reliability have been identified, both qualitatively and quantitatively. Methods to improve reliability have also been provided, including redundancies and protection systems that might be added to the system in order to improve reliability.
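
A minimal sketch of the state-space calculation the abstract refers to: a single repairable component modeled as a two-state (up/down) Markov chain, with series components combined as in an RBD. The rates below are hypothetical, not the study's data:

```python
def steady_state_availability(lambda_, mu):
    """Steady-state availability of a two-state (up/down) Markov model with
    failure rate lambda_ and repair rate mu: A = mu / (lambda_ + mu)."""
    return mu / (lambda_ + mu)

def series_availability(availabilities):
    """Series RBD: the system is up only if every block is up, so the
    block availabilities multiply (independence assumed)."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Hypothetical per-hour rates for two series components (feeder, transformer).
a_feeder = steady_state_availability(lambda_=0.01, mu=1.0)
a_transformer = steady_state_availability(lambda_=0.002, mu=0.5)
print(series_availability([a_feeder, a_transformer]))
```
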

  17. Application of ISO 22000 and Failure Mode and Effect Analysis (FMEA) for industrial processing of salmon: a case study.

    PubMed

    Arvanitoyannis, Ioannis S; Varzakas, Theodoros H

    2008-05-01

    The Failure Mode and Effect Analysis (FMEA) model was applied for risk assessment of salmon manufacturing. A tentative approach of FMEA application to the salmon industry was attempted in conjunction with ISO 22000. Preliminary Hazard Analysis was used to analyze and predict the occurring failure modes in a food chain system (salmon processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points were identified and implemented in the cause and effect diagram (also known as the Ishikawa, tree, or fishbone diagram). In this work, a comparison of the ISO 22000 analysis with HACCP is carried out over salmon processing and packaging. However, the main emphasis was put on the quantification of risk assessment by determining the RPN per identified processing hazard. Fish receiving, casing/marking, blood removal, evisceration, filet-making, cooling/freezing, and distribution were the processes identified as those with the highest RPNs (252, 240, 210, 210, 210, 210, and 200, respectively), and corrective actions were undertaken. After the application of corrective actions, a second calculation of RPN values was carried out, resulting in substantially lower values (below the upper acceptable limit of 130). It is noteworthy that the application of Ishikawa (cause and effect or tree) diagrams led to converging results, thus corroborating the validity of conclusions derived from risk assessment and FMEA. Therefore, the incorporation of FMEA analysis within the ISO 22000 system of a salmon processing industry is anticipated to prove advantageous to industrialists, state food inspectors, and consumers. PMID:18464031
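
The RPN quantification the abstract describes follows the standard FMEA product of severity, occurrence, and detection scores (each typically rated 1-10). A sketch with hypothetical scores chosen to reproduce the reported value for fish receiving; the study reports only the resulting RPNs, not the individual scores:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three 1-10 FMEA scores."""
    return severity * occurrence * detection

RPN_LIMIT = 130  # upper acceptable limit used in the study

# Hypothetical scores chosen to reproduce the reported RPN for fish receiving.
fish_receiving = rpn(severity=7, occurrence=6, detection=6)
print(fish_receiving, fish_receiving > RPN_LIMIT)  # 252, needs corrective action
```
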

  19. Heart Failure

    MedlinePlus

    ... of breath Common causes of heart failure are coronary artery disease, high blood pressure and diabetes. It is more common in people who are 65 years old or older, African Americans, people who are ... treatments fail. NIH: National Heart, Lung, and Blood Institute

  20. TIME-TEMPERATURE-TRANSFORMATION (TTT) DIAGRAMS FOR FUTURE WASTE COMPOSITIONS

    SciTech Connect

    Billings, A.; Edwards, T.

    2010-07-08

    As part of the Waste Acceptance Product Specifications (WAPS) for Vitrified High-Level Waste Forms defined by the Department of Energy - Office of Environmental Management, the waste form stability must be determined for each of the projected high-level waste (HLW) types at the Savannah River Site (SRS). Specifically, WAPS 1.4.1 requires the glass transition temperature (T{sub g}) to be defined and time-temperature-transformation (TTT) diagrams to be developed. The T{sub g} of a glass is an indicator of the approximate temperature where the supercooled liquid converts to a solid on cooling or, conversely, where the solid begins to behave as a viscoelastic solid on heating. A TTT diagram identifies the crystalline phases that can form as a function of time and temperature for a given waste type or, more specifically, the borosilicate glass waste form. In order to assess durability, the Product Consistency Test (PCT) was used and the durability results compared to the Environmental Assessment (EA) glass. The measurement of glass transition temperature and the development of TTT diagrams have already been performed for the seven Defense Waste Processing Facility (DWPF) projected compositions as defined in the Waste Form Compliance Plan (WCP) and in SRNL-STI-2009-00025. Additional phase transformation information exists for other projected compositions, but overall these compositions did not cover the composition regions estimated for future waste processing. To develop TTT diagrams for future waste types, the Savannah River National Laboratory (SRNL) fabricated two caches of glass from reagent-grade oxides to simulate glass compositions likely to be processed with and without Al dissolution. These were used for glass transition temperature measurement and TTT diagram development. The glass transition temperatures of both glasses were measured using differential scanning calorimetry (DSC) and were recorded to be 448 °C and 452 °C. Using the previous TTT diagrams as

  1. The spectroscopic Hertzsprung-Russell diagram of Galactic massive stars

    NASA Astrophysics Data System (ADS)

    Castro, N.; Fossati, L.; Langer, N.; Simón-Díaz, S.; Schneider, F. R. N.; Izzard, R. G.

    2014-10-01

    The distribution of stars in the Hertzsprung-Russell diagram narrates their evolutionary history and directly assesses their properties. Placing stars in this diagram, however, requires knowledge of their distances and interstellar extinctions, which are often poorly known for Galactic stars. The spectroscopic Hertzsprung-Russell diagram (sHRD) tells similar evolutionary tales, but is independent of distance and extinction measurements. Based on spectroscopically derived effective temperatures and gravities of almost 600 stars, we derive for the first time the observational distribution of Galactic massive stars in the sHRD. While biases and statistical limitations in the data prevent detailed quantitative conclusions at this time, we see several clear qualitative trends. By comparing the observational sHRD with different state-of-the-art stellar evolutionary predictions, we conclude that convective core overshooting may be mass-dependent and, at high mass (≳15 M⊙), stronger than previously thought. Furthermore, we find evidence for an empirical upper limit in the sHRD for stars with Teff between 10 000 and 32 000 K and a strikingly large number of objects below this line. This over-density may be due to envelope inflation in massive main-sequence stars near the Eddington limit. Appendix A is available in electronic form at http://www.aanda.org
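The distance independence of the sHRD comes from its ordinate being built purely from spectroscopic quantities. A minimal sketch, assuming the usual definition L_spec = Teff^4/g and assumed solar calibration constants (neither is stated in the abstract):

```python
import math

# Sketch of the quantity plotted on the vertical axis of the spectroscopic
# HR diagram (sHRD), L_spec = Teff^4 / g, which needs neither distance nor
# extinction. The definition and solar constants are assumptions for
# illustration, not taken from the abstract.

TEFF_SUN = 5777.0   # K, assumed solar effective temperature
LOG_G_SUN = 4.438   # assumed solar log g (cgs)

def log_l_spec(teff_k: float, log_g_cgs: float) -> float:
    """log10(L_spec / L_spec,sun) from Teff [K] and log g [cgs]."""
    return 4.0 * math.log10(teff_k / TEFF_SUN) - (log_g_cgs - LOG_G_SUN)

# A star at the hot edge of the reported upper-limit range: Teff = 32 000 K, log g = 3.8
print(round(log_l_spec(32000.0, 3.8), 2))  # -> 3.61
```

Because Teff and log g both come from the spectrum alone, two observers with different (or unknown) distance estimates place the star at the same point in the sHRD.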

  2. Failure of Homeostatic Model Assessment of Insulin Resistance to Detect Marked Diet-Induced Insulin Resistance in Dogs

    PubMed Central

    Ader, Marilyn; Stefanovski, Darko; Richey, Joyce M.; Kim, Stella P.; Kolka, Cathryn M.; Ionut, Viorica; Kabir, Morvarid; Bergman, Richard N.

    2014-01-01

    Accurate quantification of insulin resistance is essential for determining efficacy of treatments to reduce diabetes risk. Gold-standard methods to assess resistance are available (e.g., hyperinsulinemic clamp or minimal model), but surrogate indices based solely on fasting values have attractive simplicity. One such surrogate, the homeostatic model assessment of insulin resistance (HOMA-IR), is widely applied despite known inaccuracies in characterizing resistance across groups. Of greater significance is whether HOMA-IR can detect changes in insulin sensitivity induced by an intervention. We tested the ability of HOMA-IR to detect high-fat diet–induced insulin resistance in 36 healthy canines using clamp and minimal model analysis of the intravenous glucose tolerance test (IVGTT) to document progression of resistance. The influence of pancreatic function on HOMA-IR accuracy was assessed using the acute insulin response during the IVGTT (AIRG). Diet-induced resistance was confirmed by both clamp and minimal model (P < 0.0001), and measures were correlated with each other (P = 0.001). In striking contrast, HOMA-IR ([fasting insulin (μU/mL) × fasting glucose (mmol)]/22.5) did not detect reduced sensitivity induced by fat feeding (P = 0.22). In fact, 13 of 36 animals showed an artifactual decrease in HOMA-IR (i.e., increased sensitivity). The ability of HOMA-IR to detect diet-induced resistance was particularly limited under conditions when insulin secretory function (AIRG) is less than robust. In conclusion, HOMA-IR is of limited utility for detecting diet-induced deterioration of insulin sensitivity quantified by glucose clamp or minimal model. Caution should be exercised when using HOMA-IR to detect insulin resistance when pancreatic function is compromised. It is necessary to use other accurate indices to detect longitudinal changes in insulin resistance with any confidence. PMID:24353184
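The HOMA-IR formula quoted in the abstract is simple enough to state as code; the example input values below are invented for illustration.

```python
# The HOMA-IR surrogate index exactly as quoted in the abstract:
# (fasting insulin [uU/mL] x fasting glucose [mmol/L]) / 22.5.
# The example values are hypothetical.

def homa_ir(fasting_insulin_uu_ml: float, fasting_glucose_mmol_l: float) -> float:
    """Homeostatic model assessment of insulin resistance."""
    return (fasting_insulin_uu_ml * fasting_glucose_mmol_l) / 22.5

# e.g. fasting insulin 10 uU/mL and fasting glucose 5.0 mmol/L
print(round(homa_ir(10.0, 5.0), 2))  # -> 2.22
```

The index's reliance on only two fasting values is exactly what the study probes: it cannot register a diet-induced sensitivity change that the clamp and minimal model both detect.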

  3. The Semiotic Structure of Geometry Diagrams: How Textbook Diagrams Convey Meaning

    ERIC Educational Resources Information Center

    Dimmel, Justin K.; Herbst, Patricio G.

    2015-01-01

    Geometry diagrams use the visual features of specific drawn objects to convey meaning about generic mathematical entities. We examine the semiotic structure of these visual features in two parts. One, we conduct a semiotic inquiry to conceptualize geometry diagrams as mathematical texts that comprise choices from different semiotic systems. Two,…

  4. A method for developing design diagrams for ceramic and glass materials using fatigue data

    NASA Technical Reports Server (NTRS)

    Heslin, T. M.; Magida, M. B.; Forrest, K. A.

    1986-01-01

    The service lifetime of glass and ceramic materials can be expressed as a plot of time-to-failure versus applied stress, parametric in percent probability of failure. This type of plot is called a design diagram. Confidence interval estimates for such plots depend on the type of test used to generate the data, on assumptions made concerning the statistical distribution of the test results, and on the type of analysis used. This report outlines the development of design diagrams for glass and ceramic materials in engineering terms using static or dynamic fatigue tests, assuming either no particular statistical distribution of test results or a Weibull distribution, and using either median-value or homologous-ratio analysis of the test results.
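One way such a diagram can be tabulated is sketched below. This is not the report's method: it assumes lifetimes follow a two-parameter Weibull distribution whose scale falls off as a power law in stress (a common static-fatigue model for glass), and every constant is invented for illustration.

```python
import math

# Hedged sketch of a design diagram: time-to-failure vs applied stress,
# parametric in failure probability. Assumed model: Weibull-distributed
# lifetimes with a power-law stress dependence of the scale parameter.
# A, N_FATIGUE, and M_WEIBULL are all invented constants.

A = 1.0e30         # scale constant (assumed units give lifetimes in seconds)
N_FATIGUE = 16.0   # power-law fatigue exponent (assumed)
M_WEIBULL = 5.0    # Weibull shape parameter (assumed)

def time_to_failure(stress_mpa: float, prob_failure: float) -> float:
    """Lifetime by which a fraction prob_failure of specimens has failed."""
    tau = A * stress_mpa ** (-N_FATIGUE)  # stress-dependent Weibull scale
    return tau * (-math.log(1.0 - prob_failure)) ** (1.0 / M_WEIBULL)

# One curve of the diagram: the 1% failure-probability lifetime vs stress
for stress in (40.0, 50.0, 60.0):
    print(stress, f"{time_to_failure(stress, 0.01):.3g}")
```

Sweeping prob_failure over several values yields the family of curves that constitutes the design diagram; confidence bounds would then be attached per the report's analysis choices.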

  5. 49 CFR 1152.10 - System diagram map.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of its rail system or file only a narrative description of its lines that provides all of the... date upon which the diagram or narrative, or any amended diagram or narrative, is filed with the Board... pending before the Board on the date upon which the diagram or narrative, or any amended diagram...

  6. 49 CFR 1152.10 - System diagram map.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of its rail system or file only a narrative description of its lines that provides all of the... date upon which the diagram or narrative, or any amended diagram or narrative, is filed with the Board... pending before the Board on the date upon which the diagram or narrative, or any amended diagram...

  7. 49 CFR 1152.10 - System diagram map.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of its rail system or file only a narrative description of its lines that provides all of the... date upon which the diagram or narrative, or any amended diagram or narrative, is filed with the Board... pending before the Board on the date upon which the diagram or narrative, or any amended diagram...

  8. The Use of Computational Diagrams and Nomograms in Higher Education.

    ERIC Educational Resources Information Center

    Brandenburg, Richard K.; Simpson, William A.

    1984-01-01

    The use of computational diagrams and nomographs for the calculations that frequently occur in college administration is examined. Steps in constructing a nomograph and a four-dimensional computational diagram are detailed, and uses of three- and four-dimensional diagrams are covered. Diagrams and nomographs are useful in the following cases: (1)…

  9. 49 CFR 1152.10 - System diagram map.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of its rail system or file only a narrative description of its lines that provides all of the... date upon which the diagram or narrative, or any amended diagram or narrative, is filed with the Board... pending before the Board on the date upon which the diagram or narrative, or any amended diagram...

  10. 49 CFR 1152.10 - System diagram map.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of its rail system or file only a narrative description of its lines that provides all of the... date upon which the diagram or narrative, or any amended diagram or narrative, is filed with the Board... pending before the Board on the date upon which the diagram or narrative, or any amended diagram...

  11. Fishbone Diagrams: Organize Reading Content with a "Bare Bones" Strategy

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James

    2010-01-01

    Fishbone diagrams, also known as Ishikawa diagrams or cause-and-effect diagrams, are one of the many problem-solving tools created by Dr. Kaoru Ishikawa, a University of Tokyo professor. Part of the brilliance of Ishikawa's idea resides in the simplicity and practicality of the diagram's basic model--a fish's skeleton. This article describes how…

  12. The Butterfly diagram leopard skin pattern

    NASA Astrophysics Data System (ADS)

    Ternullo, Maurizio

    2011-08-01

    A time-latitude diagram in which spot groups are given relevance proportional to their area is presented. The diagram reveals that the spotted area distribution is highly inhomogeneous, most of it being concentrated in a few small portions (``knots'') of the Butterfly Diagram; because of this structure, the BD may properly be described as a cluster of knots. The conventional description, in which spots scatter around a ``spot mean latitude'' that drifts steadily equatorward, is challenged. Indeed, spots cluster at as many latitudes as there are knots; a knot may appear at either lower or higher latitudes than previous ones, in a seemingly random way; accordingly, the spot mean latitude drifts abruptly equatorward, or even poleward, at any knot activation, in spite of any smoothing procedure. Preliminary analyses suggest that the activity splits, in each hemisphere, into two or more distinct ``activity waves'', drifting equatorward at a rate higher than that of the spot zone as a whole.

  13. Phase diagram of a truncated tetrahedral model.

    PubMed

    Krcmar, Roman; Gendiar, Andrej; Nishino, Tomotoshi

    2016-08-01

    Phase diagram of a discrete counterpart of the classical Heisenberg model, the truncated tetrahedral model, is analyzed on the square lattice, when the interaction is ferromagnetic. Each spin is represented by a unit vector that can point to one of the 12 vertices of the truncated tetrahedron, which is a continuous interpolation between the tetrahedron and the octahedron. Phase diagram of the model is determined by means of the statistical analog of the entanglement entropy, which is numerically calculated by the corner transfer matrix renormalization group method. The obtained phase diagram consists of four different phases, which are separated by five transition lines. In the parameter region, where the octahedral anisotropy is dominant, a weak first-order phase transition is observed. PMID:27627273

  14. Phase diagram of a truncated tetrahedral model

    NASA Astrophysics Data System (ADS)

    Krcmar, Roman; Gendiar, Andrej; Nishino, Tomotoshi

    2016-08-01

    Phase diagram of a discrete counterpart of the classical Heisenberg model, the truncated tetrahedral model, is analyzed on the square lattice, when the interaction is ferromagnetic. Each spin is represented by a unit vector that can point to one of the 12 vertices of the truncated tetrahedron, which is a continuous interpolation between the tetrahedron and the octahedron. Phase diagram of the model is determined by means of the statistical analog of the entanglement entropy, which is numerically calculated by the corner transfer matrix renormalization group method. The obtained phase diagram consists of four different phases, which are separated by five transition lines. In the parameter region, where the octahedral anisotropy is dominant, a weak first-order phase transition is observed.

  15. Adding point of care ultrasound to assess volume status in heart failure patients in a nurse-led outpatient clinic. A randomised study

    PubMed Central

    Gundersen, Guri Holmen; Norekval, Tone M; Haug, Hilde Haugberg; Skjetne, Kyrre; Kleinau, Jens Olaf; Graven, Torbjorn; Dalen, Havard

    2016-01-01

    Objectives Medical history, physical examination and laboratory testing are not optimal for the assessment of volume status in heart failure (HF) patients. We aimed to study the clinical influence of focused ultrasound of the pleural cavities and inferior vena cava (IVC) performed by specialised nurses to assess volume status in HF patients at an outpatient clinic. Methods HF outpatients were prospectively included and underwent laboratory testing, history recording and clinical examination by two nurses with and without an ultrasound examination of the pleural cavities and IVC using a pocket-size imaging device, in random order. Each nurse worked in a team with a cardiologist. The influence of the different diagnostic tests on diuretic dosing was assessed descriptively and in linear regression analyses. Results Sixty-two patients were included and 119 examinations were performed. Mean±SD age was 74±12 years, EF was 34±14%, and N-terminal pro-brain natriuretic peptide (NT-proBNP) value was 3761±3072 ng/L. Dosing of diuretics differed between the teams in 31 out of 119 consultations. Weight change and volume status assessed clinically with and without ultrasound predicted dose adjustment of diuretics at follow-up (p<0.05). Change of oedema, NT-proBNP, creatinine, and symptoms did not (p≥0.10). In adjusted analyses, only volume status based on ultrasound predicted dose adjustments of diuretics at first visit and follow-up (all ultrasound p≤0.01, all other p≥0.2). Conclusions Ultrasound examinations of the pleural cavities and IVC by nurses may improve diagnostics and patient care in HF patients at an outpatient clinic, but more studies are needed to determine whether these examinations have an impact on clinical outcomes. Trial registration number NCT01794715. PMID:26438785

  16. A pseudo-haptic knot diagram interface

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Weng, Jianguang; Hanson, Andrew J.

    2011-01-01

    To make progress in understanding knot theory, we will need to interact with the projected representations of mathematical knots, which are of course continuous in 3D but significantly interrupted in the projective images. One way to achieve such a goal would be to design an interactive system that allows us to sketch 2D knot diagrams by taking advantage of a collision-sensing controller and explore their underlying smooth structures through a continuous motion. Recent advances in interaction techniques allow progress to be made in this direction. Pseudo-haptics, which simulates haptic effects using pure visual feedback, can be used to develop such an interactive system. This paper outlines one such pseudo-haptic knot diagram interface. Our interface derives from the familiar pencil-and-paper process of drawing 2D knot diagrams and provides haptic-like sensations to facilitate the creation and exploration of knot diagrams. A centerpiece of the interaction model simulates a "physically" reactive mouse cursor, which is exploited to resolve the apparent conflict between the continuous structure of the actual smooth knot and the visual discontinuities in the knot diagram representation. Another value in exploiting pseudo-haptics is that an acceleration (or deceleration) of the mouse cursor (or surface locator) can be used to indicate the slope of the curve (or surface) of which the projective image is being explored. By exploiting these additional visual cues, we proceed to a full-featured extension to a pseudo-haptic 4D visualization system that simulates continuous navigation on 4D objects and allows us to sense the bumps and holes in the fourth dimension. Preliminary tests of the software show that the main features of the interface overcome some expected perceptual limitations in our interaction with 2D knot diagrams of 3D knots and 3D projective images of 4D mathematical objects.

  17. B-Fe-U Phase Diagram

    NASA Astrophysics Data System (ADS)

    Dias, Marta; Carvalho, Patrícia Almeida; Mardolcar, Umesh Vinaica; Tougait, Olivier; Noël, Henri; Gonçalves, António Pereira

    2014-04-01

    The liquidus projection of the U-rich corner of the B-Fe-U phase diagram is proposed based on X-ray powder diffraction measurements, differential thermal analysis, and scanning electron microscopy observations complemented with energy- and wavelength-dispersive X-ray spectroscopies. Two ternary reactions in this U-rich region were observed and their approximate temperatures were established. In addition, an overview of the complete phase diagram is given, including the liquidus projection; isothermal sections at 1053 K, 1223 K, and 1373 K (780 °C, 950 °C, and 1100 °C); and a U:(Fe,B) = 1:5 isopleth.

  18. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2015-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a set of dependent failures that can be caused by, for example: system environments; manufacturing; transportation; storage; maintenance; and assembly. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failure, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) group of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
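The effect of a beta factor on a one-out-of-two system can be sketched with the standard beta-factor model: a fraction beta of each channel's failure probability is attributed to a shared cause that fails both channels at once. The numbers below are invented for illustration and are not launch vehicle data.

```python
# Sketch of the beta-factor common cause model for a one-out-of-two
# redundant system. q is a channel's total failure probability; a fraction
# beta of it is assumed common cause, defeating both channels together.
# All values are hypothetical.

def one_out_of_two_failure(q: float, beta: float) -> float:
    """System failure probability for 1oo2 redundancy under the beta-factor model."""
    q_indep = (1.0 - beta) * q  # independent portion of each channel's failure probability
    q_ccf = beta * q            # common cause portion, failing both channels at once
    return q_indep ** 2 + q_ccf

# With q = 1e-3 per channel: full independence vs. a 5% beta factor
print(one_out_of_two_failure(1e-3, 0.0))   # ~1e-06: redundancy at full effect
print(one_out_of_two_failure(1e-3, 0.05))  # ~5.1e-05: the CCF term dominates
```

Even a modest beta factor raises the system failure probability by more than an order of magnitude here, which is why the response surface over independent rate and beta matters for redundant designs.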

  19. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2016-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a set of dependent failures that can be caused by, for example: system environments; manufacturing; transportation; storage; maintenance; and assembly. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failure, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) group of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.

  20. Assessment of vasodilator therapy in patients with severe congestive heart failure: limitations of measurements of left ventricular ejection fraction and volumes

    SciTech Connect

    Firth, B.G.; Dehmer, G.J.; Markham, R.V. Jr.; Willerson, J.T.; Hillis, L.D.

    1982-11-01

    Although noninvasive techniques are often used to assess the effect of vasodilator therapy in patients with congestive heart failure, it is unknown whether changes in noninvasively determined left ventricular ejection fraction, volume, or dimension reliably reflect alterations in intracardiac pressure and flow. Accordingly, we compared the acute effect of sodium nitroprusside on left ventricular volume and ejection fraction (determined scintigraphically) with its effect on intracardiac pressure and forward cardiac index (determined by thermodilution) in 12 patients with severe, chronic congestive heart failure and a markedly dilated left ventricle. Nitroprusside (infused at 1.3 +/- 1.1 (mean +/- standard deviation) microgram/kg/min) caused a decrease in mean systemic arterial, mean pulmonary arterial, and mean pulmonary capillary wedge pressure as well as a concomitant increase in forward cardiac index. Simultaneously, left ventricular end-diastolic and end-systolic volume indexes decreased, but the scintigraphically determined cardiac index did not change significantly. Left ventricular ejection fraction averaged 0.19 +/- 0.05 before nitroprusside administration and increased by less than 0.05 units in response to nitroprusside in 11 of 12 patients. The only significant correlation between scintigraphically and invasively determined variables was that between the percent change in end-diastolic volume index and the percent change in pulmonary capillary wedge pressure (r = 0.68, p = 0.01). Although nitroprusside produced changes in scintigraphically determined left ventricular ejection fraction, end-systolic volume index, and cardiac index, these alterations bore no predictable relation to changes in intracardiac pressure, forward cardiac index, or vascular resistance. Furthermore, nitroprusside produced a considerably greater percent change in the invasively measured variables than in the scintigraphically determined ones.

  1. Clinical usefulness of blood metal measurements to assess the failure of metal-on-metal hip implants

    PubMed Central

    Sampson, Barry; Hart, Alister

    2012-01-01

    In April 2010, a Medicines and Healthcare Products Regulatory Agency safety alert concerning all metal-on-metal (MOM) hip replacements recommended measuring chromium and cobalt concentrations when managing patients with painful prostheses. The need for this review is illustrated by the recent surge in requests for these blood tests from orthopaedic surgeons following this alert. The aim is to provide guidance to laboratories in assessing these requests and advising clinicians on interpretation. First, we summarize the basic terminology regarding the types of hip replacements, with emphasis on the MOM type. Second, we describe the clinical concerns over implant-derived wear debris in the local tissues and distant sites. Analytical aspects of the measurement of the relevant metal ions and what factors affect the levels measured are discussed. The application of inductively coupled plasma mass spectrometry techniques to the measurement of these metals is considered in detail. The biological effects of metal wear products are summarized with local toxicity and systemic biological effects considered, including carcinogenicity, genotoxicity and systemic toxicity. Clinical cases are used to illustrate pertinent points. PMID:22155921

  2. Identifying conservation successes, failures and future opportunities; assessing recovery potential of wild ungulates and tigers in Eastern Cambodia.

    PubMed

    O'Kelly, Hannah J; Evans, Tom D; Stokes, Emma J; Clements, Tom J; Dara, An; Gately, Mark; Menghor, Nut; Pollard, Edward H B; Soriyun, Men; Walston, Joe

    2012-01-01

    Conservation investment, particularly for charismatic and wide-ranging large mammal species, needs to be evidence-based. Despite the prevalence of this theme within the literature, examples of robust data being generated to guide conservation policy and funding decisions are rare. We present the first published case-study of tiger conservation in Indochina, from a site where an evidence-based approach has been implemented for this iconic predator and its prey. Despite the persistence of extensive areas of habitat, Indochina's tiger and ungulate prey populations are widely supposed to have precipitously declined in recent decades. The Seima Protection Forest (SPF), and broader Eastern Plains Landscape, was identified in 2000 as representing Cambodia's best hope for tiger recovery; reflected in its designation as a Global Priority Tiger Conservation Landscape. Since 2005 distance sampling, camera-trapping and detection-dog surveys have been employed to assess the recovery potential of ungulate and tiger populations in SPF. Our results show that while conservation efforts have ensured that small but regionally significant populations of larger ungulates persist, and density trends in smaller ungulates are stable, overall ungulate populations remain well below theoretical carrying capacity. Extensive field surveys failed to yield any evidence of tiger, and we contend that there is no longer a resident population within the SPF. This local extirpation is believed to be primarily attributable to two decades of intensive hunting; but importantly, prey densities are also currently below the level necessary to support a viable tiger population. Based on these results and similar findings from neighbouring sites, Eastern Cambodia does not currently constitute a Tiger Source Site nor meet the criteria of a Global Priority Tiger Landscape. However, SPF retains global importance for many other elements of biodiversity. It retains high regional importance for ungulate

  4. Study flow diagrams in Cochrane systematic review updates: an adapted PRISMA flow diagram.

    PubMed

    Stovold, Elizabeth; Beecher, Deirdre; Foxlee, Ruth; Noel-Storr, Anna

    2014-05-29

    Cochrane systematic reviews are conducted and reported according to rigorous standards. A study flow diagram must be included in a new review, and there is clear guidance from the PRISMA statement on how to do this. However, for a review update, there is currently no guidance on how study flow diagrams should be presented. To address this, a working group was formed to find a solution and produce guidance on how to use these diagrams in review updates. A number of different options were devised for how these flow diagrams could be used in review updates, and also in cases where multiple searches for a review or review update have been conducted. These options were circulated to the Cochrane information specialist community for consultation and feedback. Following the consultation period, the working group refined the guidance and made the recommendation that, for review updates, an adapted PRISMA flow diagram should be used, which includes an additional box with the number of previously included studies feeding into the total. Where multiple searches have been conducted, the results should be added together and treated as one set of results. There is no existing guidance for using study flow diagrams in review updates. Our adapted diagram is a simple and pragmatic solution for showing the flow of studies in review updates.

  5. Nonverbal Poetry: Family Life-Space Diagrams.

    ERIC Educational Resources Information Center

    Bardill, Donald R.

    2001-01-01

    Examines life-space diagrams as a form of nonverbal poetry which taps personal feelings, tells a story, and characterizes a particular life situation, forming a useful therapy technique that provides a family the opportunity to examine its internal family relationships. Offers two case studies, discusses five levels of knowing and awareness, and…

  6. Computer-Generated Diagrams for the Classroom.

    ERIC Educational Resources Information Center

    Carle, Mark A.; Greenslade, Thomas B., Jr.

    1986-01-01

    Describes 10 computer programs used to draw diagrams usually drawn on chalkboards, such as addition of three vectors, vector components, range of a projectile, lissajous figures, beats, isotherms, Snell's law, waves passing through a lens, magnetic field due to Helmholtz coils, and three curves. Several programming tips are included. (JN)

  7. Spin wave Feynman diagram vertex computation package

    NASA Astrophysics Data System (ADS)

    Price, Alexander; Javernick, Philip; Datta, Trinanjan

    Spin wave theory is a well-established theoretical technique that can correctly predict the physical behavior of ordered magnetic states. However, computing the effects of an interacting spin wave theory incorporating magnons involves a laborious by-hand derivation of Feynman diagram vertices. The process is tedious and time consuming. Hence, to improve productivity and to have another means of checking the analytical calculations, we have devised a Feynman Diagram Vertex Computation package. In this talk, we will describe our research group's effort to implement a Mathematica-based symbolic Feynman diagram vertex computation package that computes spin wave vertices. Utilizing the non-commutative algebra package NCAlgebra as an add-on to Mathematica, symbolic expressions for the Feynman diagram vertices of a Heisenberg quantum antiferromagnet are obtained. Our existing code reproduces the well-known expressions for a nearest-neighbor square lattice Heisenberg model. We also discuss the case of a triangular lattice Heisenberg model, where non-collinear terms contribute to the vertex interactions.

  8. Journeys on the H-R diagram

    SciTech Connect

    Kaler, J.B.

    1988-05-01

    The evolution of various types of stars along the H-R diagram is discussed. Star birth and youth is addressed, and the events that occur due to core contraction, shell burning, and double-shell burning are described. The evolutionary courses of planetary nebulae, white dwarfs, and supernovas are examined.

  9. Complexities of One-Component Phase Diagrams

    ERIC Educational Resources Information Center

    Ciccioli, Andrea; Glasser, Leslie

    2011-01-01

    For most materials, the solid at and near the triple-point temperature is denser than the liquid with which it is in equilibrium. However, for water and certain other materials, the densities of the phases are reversed, with the solid being less dense. The profound consequences for the appearance of the "pVT" diagram of one-component materials…

  10. Constructing Causal Diagrams to Learn Deliberation

    ERIC Educational Resources Information Center

    Easterday, Matthew W.; Aleven, Vincent; Scheines, Richard; Carver, Sharon M.

    2009-01-01

    Policy problems like "What should we do about global warming?" are ill-defined in large part because we do not agree on a system to represent them the way we agree Algebra problems should be represented by equations. As a first step toward building a policy deliberation tutor, we investigated: (a) whether causal diagrams help students learn to…

  11. Fog Machines, Vapors, and Phase Diagrams

    ERIC Educational Resources Information Center

    Vitz, Ed

    2008-01-01

    A series of demonstrations is described that elucidate the operation of commercial fog machines by using common laboratory equipment and supplies. The formation of fogs, or "mixing clouds", is discussed in terms of the phase diagram for water and other chemical principles. The demonstrations can be adapted for presentation suitable for elementary…

  12. Image Attributes: A Study of Scientific Diagrams.

    ERIC Educational Resources Information Center

    Brunskill, Jeff; Jorgensen, Corinne

    2002-01-01

    Discusses advancements in imaging technology and increased user access to digital images, as well as efforts to develop adequate indexing and retrieval methods for image databases. Describes preliminary results of a study of undergraduates that explored the attributes naive subjects use to describe scientific diagrams. (Author/LRW)

  13. Dynamic Tactile Diagram Simplification on Refreshable Displays

    ERIC Educational Resources Information Center

    Rastogi, Ravi; Pawluk, Dianne T. V.

    2013-01-01

    The increasing use of visual diagrams in educational and work environments, and even our daily lives, has created obstacles for individuals who are blind or visually impaired to "independently" access the information they represent. Although physical tactile pictures can be created to convey the visual information, it is typically a slow,…

  14. Phase diagram of spiking neural networks

    PubMed Central

    Seyed-allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that any two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error. Here I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, the dominant frequency of population activity, the total duration of activity, the maximum population rate, and the occurrence time of that maximum rate. The results are organized in a phase diagram that gives insight into the space of parameters: the excitatory-to-inhibitory ratio, the sparseness of connections, and the synaptic weights. This phase diagram can be used to choose the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that this dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate in α or β frequencies, independent of external stimuli. PMID:25788885
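
    A minimal sketch can make the "common values" above concrete: the snippet builds a random connectivity matrix with a 2% pairwise connection probability and a 20%/80% inhibitory/excitatory split, as stated in the abstract. The network size and the synaptic weight values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000           # number of neurons (illustrative size, not from the paper)
p_connect = 0.02   # 2% pairwise connection probability (common value)
frac_inhib = 0.2   # 20% inhibitory, 80% excitatory (common value)
w_exc, w_inh = 0.5, -2.0  # synaptic weights: hypothetical choices

# Assign neuron types: True marks an inhibitory neuron
inhibitory = rng.random(N) < frac_inhib

# Sparse random connectivity: entry (i, j) couples neuron j to neuron i
mask = rng.random((N, N)) < p_connect
np.fill_diagonal(mask, False)           # no self-connections
weights = np.where(inhibitory[None, :], w_inh, w_exc) * mask

print(f"inhibitory fraction: {inhibitory.mean():.2f}")
print(f"connection density:  {mask.mean():.3f}")
```

    Sweeping p_connect, frac_inhib, and the weights over a grid of such networks, and measuring the responses to pulse stimulation, is what populates the phase diagram described above.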

  15. NFHS Court and Field Diagram Guide.

    ERIC Educational Resources Information Center

    Gillis, John, Ed.

    This guide contains a comprehensive collection of diagrams and specifications of playing fields and courts used in interscholastic and recreational sports, along with information on how to set up various formats of tournament drawings, how to compute golf handicaps, and how to convert metric-to-English distances. Lists are provided of national…

  16. Weight diagram construction of Lax operators

    SciTech Connect

    Carbon, S.L.; Piard, E.J.

    1991-10-01

    We review and expand methods introduced in our previous paper. It is proved that cyclic weight diagrams corresponding to representations of affine Lie algebras allow one to construct the associated Lax operator. The resultant Lax operator is in the Miura-like form and generates the modified KdV equations. The algorithm is extended to the supersymmetric case.

  17. The Binary Temperature-Composition Phase Diagram

    ERIC Educational Resources Information Center

    Sanders, Philip C.; Reeves, James H.; Messina, Michael

    2006-01-01

    The equations for the liquid and gas lines in the binary temperature-composition phase diagram are derived by approximating that delta(H)[subscript vap] of the two liquids are equal. It is shown that within this approximation, the resulting equations are not too difficult to present in an undergraduate physical chemistry lecture.

  18. The role of post-failure brittleness of soft rocks in the assessment of stability of intact masses: FDEM technique applications to ideal problems

    NASA Astrophysics Data System (ADS)

    Lollino, Piernicola; Andriani, Gioacchino Francesco; Fazio, Nunzio Luciano; Perrotti, Michele

    2016-04-01

    Strain-softening under low confinement stress, i.e. the drop of strength that occurs in the post-failure stage, represents a key factor of the stress-strain behavior of rocks. However, this feature of rock behavior is generally underestimated, or even neglected, in the assessment of boundary value problems involving intact soft rock masses. This is typically the case when the stability of intact rock masses is treated by means of limit equilibrium or finite element analyses, for which rigid-plastic or elastic perfectly-plastic constitutive models, generally implementing the peak strength conditions of the rock, are respectively used. These numerical techniques have intrinsic limitations that prevent material brittleness from being accounted for, whether because of the methods' assumptions or because of numerical stability problems, as in the case of the finite element method, unless sophisticated regularization techniques are implemented. However, for problems that concern the stability of intact soft rock masses at low stress levels, such as the stability of shallow underground caves or of rock slopes, the brittle stress-strain response of rock in the post-failure stage cannot be disregarded, owing to the risk of overestimating the stability factor. This work is aimed at highlighting the role of post-peak brittleness of soft rocks in the analysis of specific ideal problems by means of a hybrid finite-discrete element technique (FDEM) that allows the brittle stress-strain behavior of the rock to be simulated properly. In particular, the stability of two ideal cases, a shallow underground rectangular cave and a vertical cliff, has been analyzed by implementing a post-peak brittle behavior of the rock, and the comparison with a non-brittle response of the rock mass is also explored. To this purpose, the mechanical behavior of a soft calcarenite belonging to the Calcarenite di Gravina formation, extensively

  19. Heart failure in South America.

    PubMed

    Bocchi, Edimar Alcides

    2013-05-01

    Continued assessment of temporal trends in the mortality and epidemiology of heart failure in South America is needed to provide a scientific basis for rational allocation of limited health care resources, strategies to reduce risk, and prediction of the future burden of heart failure. The epidemiology of heart failure in South America was reviewed. Heart failure is the main cause of hospitalization based on available data from approximately 50% of the South American population. The main etiologies of heart failure are ischemic, idiopathic dilated cardiomyopathy, valvular, hypertensive and chagasic. In endemic areas, Chagas heart disease may be responsible for 41% of the HF cases. Heart failure also carries high mortality, especially in patients with Chagas etiology. Heart failure and its associated etiologies may be responsible for 6.3% of causes of death. Rheumatic fever is the leading cause of valvular heart disease. However, a trend toward reduced HF mortality due to Chagas heart disease from 1985 to 2006, and reduced mortality due to HF from 1999 to 2005, was observed in selected states in Brazil. These findings have important public health implications because the allocation of health care resources and strategies to reduce the risk of heart failure should also consider the control of neglected Chagas disease and rheumatic fever in South American countries.

  1. Improvement in exercise capacity despite cardiac deterioration: noninvasive assessment of long-term therapy with amrinone in severe heart failure.

    PubMed

    Siegel, L A; LeJemtel, T H; Strom, J; Maskin, C; Forman, R; Frishman, W; Wexler, J; Ribner, H; Sonnenblick, E H

    1983-11-01

    Seven patients with severe congestive heart failure (CHF) were treated with oral amrinone for a mean duration of 39 weeks (range 16 to 72). During the first week of therapy, exercise capacity, as assessed on a treadmill using the Naughton protocol, increased substantially from 7.6 ± 4.2 to 12.1 ± 4.4 minutes (p < 0.01). At an early period of follow-up (8 to 12 weeks), a further significant increase in exercise capacity to 14.7 ± 5.0 minutes (p < 0.05) was demonstrated, while at a later follow-up exercise capacity had decreased to 11.4 ± 6.8 minutes (p < 0.05). This was still significantly greater than prior to amrinone therapy (p < 0.01). Left ventricular ejection fraction increased from 14 ± 4 to 19 ± 4% (p < 0.05) during the first week of therapy, but was not significantly different from control at the early and late periods of follow-up. Left ventricular end-diastolic dimension index increased from a control value of 43 ± 5 to 47 ± 7 mm/m² (p < 0.01) at the late period of follow-up. Thus, long-term amrinone therapy resulted in a substantial improvement in exercise capacity despite a slow but progressive decline in cardiac performance.

  2. New web-based applications for mechanistic case diagramming

    PubMed Central

    Dee, Fred R.; Haugen, Thomas H.; Kreiter, Clarence D.

    2014-01-01

    The goal of mechanistic case diagramming (MCD) is to provide students with a more in-depth understanding of cause-and-effect relationships and basic mechanistic pathways in medicine. This will enable them to better explain how observed clinical findings develop from preceding pathogenic and pathophysiological events. The pedagogic function of MCD is to relate risk factors, disease entities and morphology, signs and symptoms, and test and procedure findings in a specific case scenario with etiologic, pathogenic, and pathophysiological sequences within a flow diagram. In this paper, we describe the addition of automation and predetermined lists to further develop the original concept of MCD as described by Engelberg in 1992 and Guerrero in 2001. We demonstrate that with these modifications, MCD is effective and efficient in small-group case-based teaching for second-year medical students (ratings of ~3.4 on a 4.0 scale). There was also a significant correlation with other measures of competency, with a 'true' score correlation of 0.54. A traditional calculation of reliability showed promising results (α = 0.47) within a low-stakes, ungraded environment. Further, we have demonstrated MCD's potential for use in independent learning and TBL. Future studies are needed to evaluate MCD's potential for use in medium-stakes assessment or self-paced independent learning and assessment. MCD may be especially relevant in returning students to the application of basic medical science mechanisms in the clinical years. PMID:25059836

  3. A Simple Approach for Boundary Improvement of Euler Diagrams.

    PubMed

    Simonetto, Paolo; Archambault, Daniel; Scheidegger, Carlos

    2016-01-01

    General methods for drawing Euler diagrams tend to generate irregular polygons. Yet, empirical evidence indicates that smoother contours make these diagrams easier to read. In this paper, we present a simple method to smooth the boundaries of any Euler diagram drawing. When refining the diagram, the method must ensure that set elements remain inside their appropriate boundaries and that no region is removed or created in the diagram. Our approach uses a force system that improves the diagram while at the same time ensuring its topological structure does not change. We demonstrate the effectiveness of the approach through case studies and quantitative evaluations.

  4. Initial Sequential Organ Failure Assessment score versus Simplified Acute Physiology score to analyze multiple organ dysfunction in infectious diseases in Intensive Care Unit

    PubMed Central

    Nair, Remyasri; Bhandary, Nithish M.; D’Souza, Ashton D.

    2016-01-01

    Aims: To investigate the initial Sequential Organ Failure Assessment (SOFA) score of patients in the Intensive Care Unit (ICU) who were diagnosed with an infectious disease, as an indicator of multiple organ dysfunction, and to examine whether the initial SOFA score is a better mortality predictor than the Simplified Acute Physiology Score (SAPS). Materials and Methods: A hospital-based study conducted in a medical ICU from June to September 2014, with a sample size of 48. Patients aged 18 years and above diagnosed with an infectious disease were included. Patients with a history of chronic illness (renal/hepatic/pulmonary/cardiovascular), diabetes, hypertension, chronic obstructive pulmonary disease, heart disease, those on immunosuppressive therapy or chemoradiotherapy for malignancy, and patients in an immunocompromised state were excluded. Blood investigations were obtained. Six organ dysfunctions were assessed using the initial SOFA score and graded from 0 to 4. SAPS was calculated as the sum of points assigned to each of the 17 variables (12 physiological, age, type of admission, and three underlying diseases). The outcome measure was survival status at ICU discharge. Results: We categorized infectious diseases into dengue fever, leptospirosis, malaria, respiratory tract infections, and others, which included undiagnosed febrile illness, meningitis, urinary tract infection, and gastroenteritis. The initial SOFA score was both sensitive and specific; SAPS lacked sensitivity. We found no significant association between age and survival status. Both SAPS and the initial SOFA score were found to be statistically significant mortality predictors. There is a significant association of the initial SOFA score with organ dysfunction in infectious diseases (P < 0.001). SAPS showed no statistical significance. There was a statistically significant (P = 0.015) percentage of nonsurvivors with moderate and severe dysfunction, based on the SOFA score. Nonsurvivors had higher SAPS, but this was not statistically significant (P
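
    As the abstract notes, the initial SOFA score is the sum of six organ-system subscores, each graded from 0 to 4 (so totals range from 0 to 24). A minimal sketch of that arithmetic; the organ-system labels follow the standard SOFA components, and the example grades are invented for illustration, not patient data from this study.

```python
# The six SOFA organ systems (standard components; assumed here).
ORGAN_SYSTEMS = ("respiration", "coagulation", "liver",
                 "cardiovascular", "cns", "renal")

def total_sofa(subscores: dict) -> int:
    """Sum the six organ subscores after range-checking each grade (0-4)."""
    for organ in ORGAN_SYSTEMS:
        grade = subscores[organ]
        if not 0 <= grade <= 4:
            raise ValueError(f"{organ} grade {grade} outside 0-4")
    return sum(subscores[organ] for organ in ORGAN_SYSTEMS)

# Invented example grades, not data from the study
example = {"respiration": 2, "coagulation": 1, "liver": 0,
           "cardiovascular": 3, "cns": 1, "renal": 2}
print(total_sofa(example))  # 9
```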

  5. Clinical relationship of myocardial sympathetic nervous activity to cardiovascular functions in chronic heart failure: assessment by myocardial scintigraphy with 123I-metaiodobenzylguanidine.

    PubMed

    Wada, Yukoh; Miura, Masaetsu; Fujiwara, Satomi; Mori, Shunpei; Seiji, Kazumasa; Kimura, Tokihisa

    2003-12-01

    The aim of this study was to clarify the relationship between cardiac sympathetic nervous activity (SNA), assessed by radioiodinated metaiodobenzylguanidine (123I-MIBG), an analogue of norepinephrine, and cardiovascular functions in patients with chronic heart failure (CHF). Subjects were 17 patients with CHF. A dose of 111 MBq of 123I-MIBG was administered intravenously, and 5-minute anterior planar images were obtained 15 minutes (early image) and 3 hours (delayed image) after the injection. The heart/mediastinum (H/M) count ratio was defined to quantify cardiac 123I-MIBG uptake. The washout ratio (WR) of 123I-MIBG from the heart was calculated as follows: (early counts - delayed counts)/early counts x 100 (%). Echocardiography was performed on all patients within 1 week of 123I-MIBG scintigraphy to measure stroke volume index (SVI). Blood pressure and heart rate (HR) in the resting state were also recorded to calculate cardiovascular functions including cardiac output, pulse pressure (PP), and mean blood pressure. Significant linear correlations were found between the early H/M ratio of 123I-MIBG and SVI, and between the delayed H/M ratio of 123I-MIBG and SVI, respectively. WR of 123I-MIBG was correlated with HR, and was inversely correlated with SVI and with PP, respectively. It is likely that a decrease in SVI is associated with enhanced cardiac SNA in severe CHF. 123I-MIBG scintigraphy is effective in assessing the cardiac functional status and SNA in patients with CHF in vivo. Moreover, changes in PP and HR are good indicators of alterations in SNA. PMID:14690018
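
    The two indices defined in the abstract reduce to simple arithmetic. A minimal sketch using the H/M and WR formulas quoted above; the count values are hypothetical, invented for illustration.

```python
def heart_mediastinum_ratio(heart_counts: float, mediastinum_counts: float) -> float:
    """H/M count ratio used to quantify cardiac 123I-MIBG uptake."""
    return heart_counts / mediastinum_counts

def washout_ratio(early_counts: float, delayed_counts: float) -> float:
    """WR (%) = (early counts - delayed counts) / early counts x 100."""
    return (early_counts - delayed_counts) / early_counts * 100.0

# Hypothetical planar-image counts (not data from the study)
early_heart, delayed_heart, mediastinum = 1800.0, 1350.0, 900.0
print(f"early H/M: {heart_mediastinum_ratio(early_heart, mediastinum):.2f}")  # 2.00
print(f"washout:   {washout_ratio(early_heart, delayed_heart):.1f}%")         # 25.0%
```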

  6. Assessment of the effects of physical training in patients with chronic heart failure: the utility of effort-independent exercise variables.

    PubMed

    Kemps, Hareld M C; de Vries, Wouter R; Schmikli, Sandor L; Zonderland, Maria L; Hoogeveen, Adwin R; Thijssen, Eric J M; Schep, Goof

    2010-02-01

    Traditionally, the effects of physical training in patients with chronic heart failure (CHF) are evaluated by changes in peak oxygen uptake (peak VO(2)). The assessment of peak VO(2), however, is highly dependent on the patients' motivation. The aim of the present study was to evaluate the clinical utility of effort-independent exercise variables for detecting training effects in CHF patients. In a prospective controlled trial, patients with stable CHF were allocated to an intervention group (N = 30), performing a 12-week combined cycle interval and muscle resistance training program, or a control group (N = 18) that was matched for age, gender, body composition and left ventricular ejection fraction. The following effort-independent exercise variables were evaluated: the ventilatory anaerobic threshold (VAT), oxygen uptake efficiency slope (OUES), the V(E)/VCO(2) slope and the time constant of VO(2) kinetics during recovery from submaximal constant-load exercise (tau-rec). In addition to post-training increases in peak VO(2) and peak V(E), the intervention group showed significant within and between-group improvements in VAT, OUES and tau-rec. There were no significant differences between relative improvements of the effort-independent exercise variables in the intervention group. In contrast with VAT, which could not be determined in 9% of the patients, OUES and tau-rec were determined successfully in all patients. Therefore, we conclude that OUES and tau-rec are useful in clinical practice for the assessment of training effects in CHF patients, especially in cases of poor subject effort during symptom-limited exercise testing or when patients are unable to reach a maximal exercise level.

  7. Fibrosis assessment by integrated backscatter and its relationship with longitudinal deformation and diastolic function in heart failure with preserved ejection fraction.

    PubMed

    Carluccio, Erberto; Biagioli, Paolo; Zuchi, Cinzia; Bardelli, Giuliana; Murrone, Adriano; Lauciello, Rosanna; D'Addario, Sandra; Mengoni, Anna; Alunni, Gianfranco; Ambrosio, Giuseppe

    2016-07-01

    Myocardial reflectivity, as assessed by calibrated integrated backscatter (cIB) analysis, is a non-invasive surrogate for the amount of left ventricular (LV) fibrosis. The aim of this study was to assess the myocardial reflectivity pattern in patients with heart failure and preserved ejection fraction (HFpEF), and to evaluate its relationship with longitudinal systolic deformation of the LV by 2D speckle-tracking echocardiography, and with the degree of diastolic dysfunction. Transthoracic echocardiography, myocardial Doppler-derived systolic (Sm) and early diastolic velocity (E'), global longitudinal strain (GLS), and tissue characterization by cIB were obtained in 86 subjects, 46 with HFpEF and 40 controls. GLS was significantly impaired in HFpEF patients (-15.4 ± 3.5% vs -21.5 ± 2.9% in controls; P < 0.0001). Increased myocardial reflectivity, as evidenced by less negative values of cIB, was also found in HFpEF compared to controls (-21.2 ± 4.4 dB vs -25.3 ± 3.9 dB, P < 0.0001). In HFpEF patients, myocardial reflectivity was positively related to GLS (r = 0.68, P < 0.0001), E/E' ratio (r = 0.38, P = 0.009), and Tau (r = 0.43, P = 0.002), and inversely related to E' velocity (r = -0.46, P = 0.0012). These associations remained significant after adjustment for age, preload and afterload indices. Patients with HFpEF show changes of LV structure consistent with enhanced fibrosis, as evidenced by increased myocardial reflectivity, which parallels the degree of diastolic dysfunction and of longitudinal systolic dysfunction.

  8. Layout pattern analysis using the Voronoi diagram of line segments

    NASA Astrophysics Data System (ADS)

    Dey, Sandeep Kumar; Cheilaris, Panagiotis; Gabrani, Maria; Papadopoulou, Evanthia

    2016-01-01

    Early identification of problematic patterns in very large scale integration (VLSI) designs is of great value as the lithographic simulation tools face significant timing challenges. To reduce the processing time, such a tool selects only a fraction of possible patterns which have a probable area of failure, with the risk of missing some problematic patterns. We introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of line-segments as derived from the edges of VLSI design shapes. Designers put line segments around the problematic locations in patterns called "gauges," along which the critical distance is measured. The gauge center is the midpoint of a gauge. We first use the Voronoi diagram of VLSI shapes to identify possible problematic locations, represented as gauge centers. Then we use the derived locations to extract windows containing the problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We perform experiments for pattern selection in a portion of a 22-nm random logic design layout. The design layout had 38,584 design polygons (consisting of 199,946 line segments) on layer Mx, and 7079 markers generated by an optical rule checker (ORC) tool. The optical rules specify requirements for printing circuits with minimum dimension. Markers are the locations of some optical rule violations in the layout. We verify our approach by comparing the coverage of our extracted patterns to the ORC-generated markers. We further derive a similarity measure between patterns and between layouts. The similarity measure helps to identify a set of representative gauges that reduces the number of patterns for analysis.

  9. Phase Coexistence in a Dynamic Phase Diagram.

    PubMed

    Gentile, Luigi; Coppola, Luigi; Balog, Sandor; Mortensen, Kell; Ranieri, Giuseppe A; Olsson, Ulf

    2015-08-01

    Metastability and phase coexistence are important concepts in colloidal science. Typically, the phase diagram of colloidal systems is considered at equilibrium, without the presence of an external field. However, several studies have reported phase transitions under mechanical deformation. The reason behind phase coexistence under shear flow is not fully understood. Here, multilamellar vesicle (MLV)-to-sponge (L3) and MLV-to-Lα transitions upon increasing temperature are detected using flow small-angle neutron scattering techniques. Coexistence of Lα and MLV phases at 40 °C under shear flow is detected by using flow NMR spectroscopy. The unusual rheological behavior observed by studying the lamellar phase of a non-ionic surfactant is explained using ²H NMR and diffusion flow NMR spectroscopy, with the coexistence of planar lamellae and multilamellar vesicles. Moreover, a dynamic phase diagram over a wide range of temperatures is proposed.

  10. Penguin diagrams for improved staggered fermions

    SciTech Connect

    Lee, Weonjong

    2005-01-01

    We calculate, at the one-loop level, penguin diagrams for improved staggered fermion operators constructed using various fat links. The main result is that diagonal mixing coefficients with penguin operators are identical between the unimproved operators and the improved operators using such fat links as Fat7, Fat7+Lepage, Fat7, HYP (I) and HYP (II). In addition, it turns out that the off-diagonal mixing vanishes for those constructed using fat links of Fat7, Fat7 and HYP (II). This is a consequence of the fact that the improvement by various fat links changes only the mixing with higher-dimension operators and off-diagonal operators. The results of this paper, combined with those for current-current diagrams, provide complete matching at the one-loop level with all corrections of O(g²) included.

  11. Prediction of boron carbon nitrogen phase diagram

    NASA Astrophysics Data System (ADS)

    Yao, Sanxi; Zhang, Hantao; Widom, Michael

    We studied the phase diagram of boron, carbon and nitrogen, including the boron-carbon and boron-nitrogen binaries and the boron-carbon-nitrogen ternary. Based on the idea of electron counting and using a technique of mixing similar primitive cells, we constructed many "electron precise" structures. First-principles calculations are performed on these structures, at either zero or high pressure. For the BN binary, our calculation confirms that a rhombohedral phase can be stabilized at high pressure, consistent with some experimental results. For the BCN ternary, a new ground-state structure is discovered and an Ising-like phase transition is suggested. Moreover, we modeled the BCN ternary phase diagram and show continuous solubility from boron carbide to the boron subnitride phase.

  12. Phase diagram of a single lane roundabout

    NASA Astrophysics Data System (ADS)

    Echab, H.; Lakouari, N.; Ez-Zahraouy, H.; Benyoussef, A.

    2016-03-01

    Using a cellular automata model, we numerically study the traffic dynamics in a single-lane roundabout system with four entry/exit points. The boundaries are controlled by the injecting rates α1 and α2 and the extracting rate β. Systems both with and without splitter islands of width Lsp are considered. The phase diagram in the (α1, β) space, and its variation with the roundabout size, Pagg (i.e. the probability of aggressive entry), and Pexit (i.e. the probability of preferential exit), are constructed. The results show that the phase diagram in both cases consists of three phases: free flow, congested and jammed. However, as Lsp increases the free-flow phase enlarges while the congested and jammed ones shrink. On the other hand, the short roundabout shows better performance in the free-flow phase, while the large one is more optimal in the congested phase. The density profiles are also investigated.

  13. Direct Measurement of the Fluid Phase Diagram.

    PubMed

    Bao, Bo; Riordon, Jason; Xu, Yi; Li, Huawei; Sinton, David

    2016-07-19

    The thermodynamic phase of a fluid (liquid, vapor or supercritical) is fundamental to all chemical processes, and the critical point is particularly important for supercritical chemical extraction. Conventional phase measurement methods require hours to obtain a single datum on the pressure and temperature diagram. Here, we present the direct measurement of the full pressure-temperature phase diagram, with 10 000 microwells. Orthogonal, linear, pressure and temperature gradients are obtained with 100 parallel microchannels (spanning the pressure range), each with 100 microwells (spanning the temperature range). The phase-mapping approach is demonstrated with both a pure substance (CO2) and a mixture (95% CO2 + 5% N2). Liquid, vapor, and supercritical regions are clearly differentiated, and the critical pressure is measured at 1.2% error with respect to the NIST standard. This approach provides over 100-fold improvement in measurement speed over conventional methods. PMID:27331613
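
The orthogonal-gradient layout described above (100 parallel channels spanning pressure, 100 wells per channel spanning temperature) amounts to a simple linear mapping from well indices to conditions. The ranges in this sketch are illustrative placeholders, not the paper's operating conditions:

```python
def well_conditions(i, j, p_range=(6.0, 9.0), t_range=(20.0, 40.0),
                    n_channels=100, n_wells=100):
    """Map microwell (channel i, well j) to its (pressure, temperature).

    Pressure varies linearly across the channels and temperature varies
    linearly along the wells of each channel, so the 100 x 100 array samples
    a rectangular region of the pressure-temperature plane in one shot.
    """
    p0, p1 = p_range
    t0, t1 = t_range
    p = p0 + (p1 - p0) * i / (n_channels - 1)
    t = t0 + (t1 - t0) * j / (n_wells - 1)
    return p, t
```

Labeling each well with its observed phase (liquid, vapor, or supercritical) at these coordinates then yields the full phase map directly.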

  14. The effect of rotation on Petersen Diagrams

    NASA Astrophysics Data System (ADS)

    Suárez, J. C.; Garrido, R.

    The well-known Petersen diagrams are a useful technique to constrain the mass and metallicity of models for double-mode radial pulsators. However, when moderately rotating stellar models are considered this method may fail. A preliminary study of the effect of rotation on the first-overtone to fundamental period ratios is discussed for slow to moderate rotational velocities. The impact on the mass and metallicity determination is examined.

  15. Statistics-related and reliability-physics-related failure processes in electronics devices and products

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    2014-05-01

    The well-known and widely used experimental reliability "passport" of a mass-manufactured electronic or photonic product — the bathtub curve — reflects the combined contribution of the statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second process, associated with material aging and degradation, leads to an increased failure rate. An attempt has been made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent burn-in testing and therefore the obtained bathtub curve does not contain the infant mortality portion. It has also been assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the numerical example carried out, the Rayleigh distribution was used for the statistical failure rate, for the sake of a relatively simple illustration. The developed methodology can be used in reliability physics evaluations, when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes in reliability evaluations. The future work should include investigations on how powerful and flexible methods and approaches of statistical mechanics can be effectively employed, in addition to reliability physics techniques, to model the operational reliability of electronic and photonic products.
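
The deduction step described above is straightforward to express in code. Under the paper's independence assumption, the physics-related rate is the bathtub ordinate minus the statistical rate, with the Rayleigh hazard h(t) = t/σ² playing the statistical role; the sample numbers below are illustrative only:

```python
def rayleigh_hazard(t, sigma):
    """Hazard (failure) rate of a Rayleigh distribution: h(t) = t / sigma**2."""
    return t / sigma**2

def physics_failure_rate(bathtub, times, sigma):
    """Deduct the assumed statistical (Rayleigh) failure rate from the
    bathtub-curve ordinates; the non-negative remainder is attributed to
    the aging/degradation (physics-of-failure) process."""
    return [max(b - rayleigh_hazard(t, sigma), 0.0)
            for b, t in zip(bathtub, times)]
```

In practice the bathtub ordinates would come from field or test data sampled at the given times.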

  16. Nonthermal Radio Emission and the HR Diagram

    NASA Technical Reports Server (NTRS)

    Gibson, D. M.

    1985-01-01

    Perhaps the most reliable indicator of non-radiative heating/momentum in a stellar atmosphere is the presence of nonthermal radio emission. To date, 77 normal stellar objects have been detected and identified as nonthermal sources. These stellar objects are tabulated herein. It is apparent that non-thermal radio emission is not ubiquitous across the HR diagram. This is clearly the case for the single stars; it is not as clear for the binaries unless the radio emission is associated with their late-type components. Choosing to make this association, the single stars and the late-type components are plotted together. The following picture emerges: (1) there are four locations on the HR diagram where non-thermal radio stars are found; (2) the peak incoherent 5 GHz luminosities show a surprisingly small range for stars within each class; (3) the fraction of stellar energy that escapes as radio emission can be estimated by comparing the integrated maximum radio luminosity to the bolometric luminosity; (4) there are no apparent differences in L_R between binaries with two cool components, binaries with one hot and one cool component, and single stars for classes C and D; and (5) the late-type stars (classes B, C, and D) are located in parts of the HR diagram where there is reason to suspect that the surfaces of the stars are being braked with respect to their interiors.

  17. Refined phase diagram of boron nitride

    SciTech Connect

    Solozhenko, V.; Turkevich, V.Z.; Holzapfel, W.B.

    1999-04-15

    The equilibrium phase diagram of boron nitride thermodynamically calculated by Solozhenko in 1988 has now been refined on the basis of new experimental data on BN melting and extrapolation of heat capacities of BN polymorphs into the high-temperature region using the adapted pseudo-Debye model. As compared with the above diagram, the hBN ⇌ cBN equilibrium line is displaced by 60 K toward higher temperatures. The hBN-cBN-L triple point has been calculated to be at 3480 ± 10 K and 5.9 ± 0.1 GPa, while the hBN-L-V triple point is at T = 3400 ± 20 K and p = 400 ± 20 Pa, which indicates that the region of thermodynamic stability of vapor in the BN phase diagram is extremely small. It has been found that the slope of the cBN melting curve is positive whereas the slope of the hBN melting curve varies from positive between ambient pressure and 3.4 GPa to negative at higher pressures.

  18. Recognition and processing of logic diagrams

    NASA Astrophysics Data System (ADS)

    Darwish, Ahmed M.; Bashandy, Ahmed R.

    1996-03-01

    In this paper we present a vision system that is capable of interpreting schematic logic diagrams, i.e. determining the output as a logic function of the inputs. The system is composed of a number of modules, each designed to perform a specific subtask. Each module bears a minor contribution in the form of a new mixture of known algorithms or extensions to handle actual real-life image imperfections which researchers tend to ignore when they develop their theoretical foundations. The main contribution, thus, is not in any individual module; it is rather in their integration to achieve the target job. The system is organized more or less in a classical fashion. Aside from the image acquisition and preprocessing modules, interesting modules include: the segmenter, the identifier, the connector and the grapher. A good segmentation output is one reason for the success of the presented system. Several novelties exist in the presented approach. Following segmentation, the type of each logic gate and its topological connectivity are determined. The logic diagram is then transformed into a directed acyclic graph in which the final node is the output logic gate. The logic function is then determined by backtracking techniques. The system is not only aimed at recognition applications. In fact its main usage may be to target other processing applications such as storage compression and graphics modification and manipulation of the diagram, as is explained.
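
The final two stages, building the DAG and backtracking from the output gate toward the inputs, can be sketched as a memoized recursive evaluation. The data layout and gate set here are assumptions for illustration, not the paper's actual representation:

```python
def evaluate(gates, node, inputs, memo=None):
    """Evaluate the output of a gate network stored as a DAG.

    `gates` maps a node name to (gate_type, [source nodes]); leaf names are
    looked up in `inputs`. Recursing from the output node back toward the
    inputs mirrors the backtracking step described above.
    """
    if memo is None:
        memo = {}
    if node in inputs:
        return inputs[node]          # reached a diagram input
    if node in memo:
        return memo[node]            # shared fan-out: reuse earlier result
    kind, srcs = gates[node]
    vals = [evaluate(gates, s, inputs, memo) for s in srcs]
    out = {"AND": all, "OR": any,
           "NOT": lambda v: not v[0],
           "XOR": lambda v: v[0] != v[1]}[kind](vals)
    memo[node] = out
    return out
```

Enumerating all input combinations this way recovers the truth table, i.e. the logic function of the recognized diagram.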

  19. The Critical Importance of Russell's Diagram

    NASA Astrophysics Data System (ADS)

    Gingerich, O.

    2013-04-01

    The idea of dwarf and giant stars, but not the nomenclature, was first established by Ejnar Hertzsprung in 1905; his first diagrams in support appeared in 1911. In 1913 Henry Norris Russell could demonstrate the effect far more strikingly because he had measured the parallaxes of many stars at Cambridge, and could plot absolute magnitude against spectral type for many points. The general concept of dwarf and giant stars was essential in the galactic structure work of Harlow Shapley, Russell's first graduate student. In order to calibrate the period-luminosity relation of Cepheid variables, he was obliged to fall back on statistical parallax using only 11 Cepheids, a very sparse sample. Here the insight provided by the Russell diagram became critical. The presence of yellow K giant stars in globular clusters credentialed his calibration of the period-luminosity relation by showing that the calibrated luminosity of the Cepheids was comparable to the luminosity of the K giants. It is well known that in 1920 Shapley did not believe in the cosmological distances of Heber Curtis' spiral nebulae. It is not so well known that in 1920 Curtis' plot of the period-luminosity relation suggests that he didn't believe it was a physical relation, and also that he failed to appreciate the significance of the Russell diagram for understanding the large size of the Milky Way.

  20. Placing the Forces on Free-Body Diagrams.

    ERIC Educational Resources Information Center

    Sperry, Willard

    1994-01-01

    Discusses the problem of drawing free-body diagrams to analyze the conditions of static equilibrium. Presents a method based on the correct placement of the normal force on the body. Includes diagrams. (MVL)

  1. 75 FR 61512 - Outer Continental Shelf Official Protraction Diagrams

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-05

    ... Outer Continental Shelf Official Protraction Diagrams (OPDs) located within Atlantic Ocean areas, with... Bureau of Ocean Energy Management, Regulation and Enforcement Outer Continental Shelf Official Protraction Diagrams AGENCY: Bureau of Ocean Energy Management, Regulation and Enforcement, Interior....

  2. Massive basketball diagram for a thermal scalar field theory

    NASA Astrophysics Data System (ADS)

    Andersen, Jens O.; Braaten, Eric; Strickland, Michael

    2000-08-01

    The ``basketball diagram'' is a three-loop vacuum diagram for a scalar field theory that cannot be expressed in terms of one-loop diagrams. We calculate this diagram for a massive scalar field at nonzero temperature, reducing it to expressions involving three-dimensional integrals that can be easily evaluated numerically. We use this result to calculate the free energy for a massive scalar field with a φ⁴ interaction to three-loop order.

  3. Failure environment analysis tool applications

    NASA Technical Reports Server (NTRS)

    Pack, Ginger L.; Wadsworth, David B.

    1994-01-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within it the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure; or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures, in light of reduced capability. FEAT also is useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior, by providing an automated environment in which to conduct 'what-if' evaluation. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.
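
The forward half of such a 'what-if' evaluation, asking what effects a given failure might have on the system, reduces to reachability over a cause-and-effect digraph. The sketch below is an illustration of that idea, not FEAT's actual model or data format:

```python
from collections import deque

def propagate(effects, failed):
    """What-if sketch: given a cause->effects digraph and an initial set of
    failed items, return every downstream effect reachable from them via
    breadth-first traversal (forward failure propagation)."""
    seen = set(failed)
    queue = deque(failed)
    while queue:
        node = queue.popleft()
        for nxt in effects.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

Running the same traversal on the reversed digraph answers the converse question: what candidate causes could explain an observed failure.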

  4. Failure environment analysis tool applications

    NASA Technical Reports Server (NTRS)

    Pack, Ginger L.; Wadsworth, David B.

    1993-01-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within it the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure; or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures, in light of reduced capability. FEAT also is useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior, by providing an automated environment in which to conduct 'what-if' evaluation. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.

  5. The Classroom as Rhizome: New Strategies for Diagramming Knotted Interactions

    ERIC Educational Resources Information Center

    de Freitas, Elizabeth

    2012-01-01

    This article calls attention to the unexamined role of diagrams in educational research and offers examples of alternative diagramming practices or tools that shed light on classroom interaction as a rhizomatic process. Drawing extensively on the work of Latour, Deleuze and Guattari, and Chatelet, this article explores the power of diagramming as…

  6. Science Visual Literacy: Learners' Perceptions and Knowledge of Diagrams

    ERIC Educational Resources Information Center

    McTigue, Erin M.; Flowers, Amanda C.

    2011-01-01

    Constructing meaning from science texts relies not only on comprehending the words but also the diagrams and other graphics. The goal of this study was to explore elementary students' perceptions of science diagrams and their skills related to diagram interpretation. 30 students, ranging from second grade through middle school, completed a diagram…

  7. Differential Cognitive and Affective Responses to Flow Diagrams in Science

    ERIC Educational Resources Information Center

    Holliday, William G.; And Others

    1977-01-01

    Describes a study in which tenth-grade biology students who were low verbal performers scored significantly higher on achievement tests when provided with picture-word diagrams of biological concepts than when provided with block-word diagrams. Students and teachers also preferred picture-word diagrams as indicated by a questionnaire. (MLH)

  8. Students' Learning Activities While Studying Biological Process Diagrams

    ERIC Educational Resources Information Center

    Kragten, Marco; Admiraal, Wilfried; Rijlaarsdam, Gert

    2015-01-01

    Process diagrams describe how a system functions (e.g. photosynthesis) and are an important type of representation in Biology education. In the present study, we examined students' learning activities while studying process diagrams, related to their resulting comprehension of these diagrams. Each student completed three learning tasks. Verbal…

  9. Beneficial and Adverse Effects of Electro-acupuncture Assessed in the Canine Chronic Atrio-ventricular Block Model Having Severe Hypertension and Chronic Heart Failure.

    PubMed

    Cao, Xin; Lu, Shengfeng; Ohara, Hiroshi; Nakamura, Yuji; Izumi-Nakaseko, Hiroko; Ando, Kentaro; Zhu, Bingmei; Xu, Bin; Sugiyama, Atsushi

    2015-01-01

    Regarding the effects of electro-acupuncture on severe hypertension, we assessed its acute cardiovascular consequences in 4 chronic atrioventricular block dogs having severe hypertension and chronic heart failure. The electro-acupuncture, consisting of 2 mA at 2 Hz frequency, was carried out for 30 min at Renying (ST-9) and Taichong (LR-3) every other day. Seven sessions were performed within 2 weeks. In the 1st and 7th sessions, the animals were anesthetized with pentobarbital to analyze the effects of the electro-acupuncture on cardiovascular variables. No significant change was detected in any of the basal control values of the cardiohemodynamic or electrophysiological variables between the 1st and 7th sessions. During the 1st session, electro-acupuncture produced a peak increase in mean blood pressure of 8.7% at 35 min (p < 0.05), whereas during the 7th session the peak increase was 6.5% at 35 min (p = 0.06). There was no significant change in the cardiac output, total peripheral resistance, the product of heart rate and systolic blood pressure (= double product) reflecting myocardial oxygen consumption, QRS width or QT interval during the electrical stimulation in the 1st or 7th session. The results suggest that electro-acupuncture may not exert lethal adverse effects other than the vasopressor response, but that it can decrease the treatment-induced sympathetic response, including the vasopressor reaction and tachycardia. Since electro-acupuncture may have some potential to induce hypertensive crisis at the beginning, clinicians have to pay attention to its use in patients with hypertension.

  10. Usefulness of Combining Galectin-3 and BIVA Assessments in Predicting Short- and Long-Term Events in Patients Admitted for Acute Heart Failure

    PubMed Central

    De Berardinis, Benedetta; Magrini, Laura; Zampini, Giorgio; Zancla, Benedetta; Salerno, Gerardo; Cardelli, Patrizia; Di Stasio, Enrico; Gaggin, Hanna K.; Belcher, Arianna; Parry, Blair A.; Nagurney, John T.; Januzzi, James L.; Di Somma, Salvatore

    2014-01-01

    Introduction. Acute heart failure (AHF) is associated with a higher risk of rehospitalization and death. Galectin-3 (GAL3) is elevated in AHF patients and is an indicator for predicting short-term mortality. Total body water assessment using bioimpedance vector analysis (BIVA) is able to identify mortality risk within AHF patients. The aim of this study was to evaluate the short- and long-term predictive value of GAL3, BIVA, and the combination of both in AHF patients in the Emergency Department (ED). Methods. 205 ED patients with AHF were evaluated by testing for B-type natriuretic peptide (BNP) and GAL3. The primary endpoint was death and rehospitalization at 30, 60, 90, and 180 days and 12 and 18 months. AHF patients were evaluated at the moment of ED arrival with clinical judgment and GAL3 and BIVA measurement. Results. GAL3 level was significantly higher in patients >71 years old and with eGFR < 30 cc/min. The area under the curve (AUC) of GAL3 + BIVA, GAL3, and BIVA for death and rehospitalization, both when considered in total and when considered serially for the follow-up period, showed that the combination has a better prognostic value. The Kaplan-Meier survival curve for GAL3 values >17.8 ng/mL shows a significant survival difference. At multivariate Cox regression analysis GAL3 is an independent variable to predict death + rehospitalization, with a value of 32.24 ng/mL at 30 days (P < 0.005). Conclusion. In patients admitted for AHF, an early assessment of GAL3 and BIVA seems to be useful in identifying patients at high risk for death and rehospitalization at short and long term. Combining the biomarker and the device could be of great utility since they monitor the severity of two pathophysiologically different mechanisms: heart fibrosis and fluid overload. PMID:25101304

  11. The Space Station Freedom Reliability and Maintainability Assessment Tool

    NASA Technical Reports Server (NTRS)

    Blumentritt, Will; Doran, Linda; Sample, Keith

    1993-01-01

    The Reliability and Maintainability Assessment Tool is a stochastic, event-oriented simulation model that has been developed to analyze the functional reliability, availability, and maintainability characteristics of the Space Station Freedom. This tool simulates failures and performs corrective and preventive maintenance tasks, utilizing user-specified maintenance resources, including crewmembers and/or robotics, and accommodates the growth of the station. The model dynamically interfaces with minimal cut sets derived from reliability block diagrams to assess functional status and to determine queuing priorities.
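
The minimal-cut-set interface described above can be illustrated with two standard reliability identities: the system is functionally down exactly when every component of at least one cut set has failed, and a rare-event upper bound on system unreliability is the sum over cut sets of the product of component failure probabilities. This is a textbook sketch, not the tool's code:

```python
def system_failed(cut_sets, failed):
    """Functional status from minimal cut sets: the system is down iff all
    components of at least one cut set are in the failed set."""
    return any(set(cs) <= set(failed) for cs in cut_sets)

def unreliability_bound(cut_sets, q):
    """Rare-event upper bound on system unreliability: sum over cut sets of
    the product of component failure probabilities q[c] (assumes
    statistically independent component failures)."""
    total = 0.0
    for cs in cut_sets:
        p = 1.0
        for c in cs:
            p *= q[c]          # probability that this whole cut set fails
        total += p
    return min(total, 1.0)
```

In a simulation like the one described, `system_failed` would be re-evaluated after each simulated failure or repair event to update functional status.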

  12. The Space Station Freedom Reliability and Maintainability Assessment Tool

    NASA Astrophysics Data System (ADS)

    Blumentritt, Will; Doran, Linda; Sample, Keith

    The Reliability and Maintainability Assessment Tool is a stochastic, event-oriented simulation model that has been developed to analyze the functional reliability, availability, and maintainability characteristics of the Space Station Freedom. This tool simulates failures and performs corrective and preventive maintenance tasks, utilizing user-specified maintenance resources, including crewmembers and/or robotics, and accommodates the growth of the station. The model dynamically interfaces with minimal cut sets derived from reliability block diagrams to assess functional status and to determine queuing priorities.

  13. State-transition diagrams for biologists.

    PubMed

    Bersini, Hugues; Klatzmann, David; Six, Adrien; Thomas-Vaslin, Véronique

    2012-01-01

    It is clearly in the tradition of biologists to conceptualize the dynamical evolution of biological systems in terms of state-transitions of biological objects. This paper is mainly concerned with (but obviously not limited to) the immunological branch of biology and shows how the adoption of UML (Unified Modeling Language) state-transition diagrams can ease the modeling, understanding, coding, manipulation, and documentation of population-based immune software models, generally defined as a set of ordinary differential equations (ODE) describing the evolution in time of populations of various biological objects. Moreover, that same UML adoption naturally entails a far from negligible representational economy, since one graphical item of the diagram might have to be repeated in various places of the mathematical model. First, the main graphical elements of the UML state-transition diagram and how they can be mapped onto a corresponding ODE mathematical model are presented. Then, two already published immune models of thymocyte behavior and time evolution in the thymus, the first one originally conceived as an ODE population-based model and the second one as an agent-based one, are refactored and expressed in state-transition form so as to make them much easier to understand and their respective code easier to access, modify, and run. As an illustrative proof, for any immunologist, it should be possible to understand faithfully enough what the two software models are supposed to reproduce and how they execute with no need to plunge into the Java or Fortran lines.
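
The mapping from diagram to ODE system can be sketched directly: each transition arrow moves population from a source state to a target state at a given rate, contributing -rate*source to the source derivative and +rate*source to the target derivative. The toy integrator below is in the spirit of that mapping, not the paper's actual models:

```python
def integrate(populations, transitions, dt=0.01, steps=1000):
    """Euler-integrate the ODEs implied by a state-transition diagram.

    `transitions` is a list of (source, target, rate): each arrow drains
    the source population at `rate` per unit time and feeds the target.
    A target of None represents an exit arrow (e.g. cell death).
    """
    state = dict(populations)
    for _ in range(steps):
        deriv = {k: 0.0 for k in state}
        for src, dst, rate in transitions:
            flow = rate * state[src]
            deriv[src] -= flow
            if dst is not None:
                deriv[dst] += flow
        for k in state:
            state[k] += dt * deriv[k]   # forward-Euler step
    return state
```

For example, the single arrow A -> B at rate 1.0 reproduces simple exponential decay of A into B, with the total population conserved.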

  14. State-Transition Diagrams for Biologists

    PubMed Central

    Bersini, Hugues; Klatzmann, David; Six, Adrien; Thomas-Vaslin, Véronique

    2012-01-01

    It is clearly in the tradition of biologists to conceptualize the dynamical evolution of biological systems in terms of state-transitions of biological objects. This paper is mainly concerned with (but obviously not limited to) the immunological branch of biology and shows how the adoption of UML (Unified Modeling Language) state-transition diagrams can ease the modeling, understanding, coding, manipulation, and documentation of population-based immune software models, generally defined as a set of ordinary differential equations (ODE) describing the evolution in time of populations of various biological objects. Moreover, that same UML adoption naturally entails a far from negligible representational economy, since one graphical item of the diagram might have to be repeated in various places of the mathematical model. First, the main graphical elements of the UML state-transition diagram and how they can be mapped onto a corresponding ODE mathematical model are presented. Then, two already published immune models of thymocyte behavior and time evolution in the thymus, the first one originally conceived as an ODE population-based model and the second one as an agent-based one, are refactored and expressed in state-transition form so as to make them much easier to understand and their respective code easier to access, modify, and run. As an illustrative proof, for any immunologist, it should be possible to understand faithfully enough what the two software models are supposed to reproduce and how they execute with no need to plunge into the Java or Fortran lines. PMID:22844438

  16. Ten proposed rules of numerical diagrams

    NASA Astrophysics Data System (ADS)

    Court, Arnold

    Diagrams have been used for 3 centuries to present numerical information succinctly. Yet no complete, specific rules are available for their preparation, similar to the rules of grammar, syntax, and spelling of language. Some guidance is offered in the dozen books on graphics published in the United States in the past 40 years, and fragments appear in textbooks and manuals. But none of these is complete or consistent, and each offers at least one glaring contravention of the basic tenets of mathematics and logic.

  17. Magnetic phase diagram of epitaxial dysprosium

    NASA Astrophysics Data System (ADS)

    Tsui, F.; Flynn, C. P.

    1993-08-01

    We have determined the magnetic phase diagram of Dy as a function of epitaxial strain ɛ, applied field H, and temperature T. Y_x Lu_{1-x} alloys were employed as templates to clamp the films at selected strains. The separate roles of epitaxial clamping and strain are identified for the first time. There is a clearly defined transition as the strain is changed at low temperature from the clamped helical phase to the ferromagnetic phase. The transition is modeled by a linear coupling treatment of the magnetoelastic strains.

  18. On critical exponents without Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Sen, Kallol; Sinha, Aninda

    2016-11-01

    In order to achieve a better analytic handle on the modern conformal bootstrap program, we re-examine and extend the pioneering 1974 work of Polyakov, which was based on consistency between the operator product expansion and unitarity. As in the bootstrap approach, this method does not depend on evaluating Feynman diagrams. We show how this approach can be used to compute the anomalous dimensions of certain operators in the O(n) model at the Wilson-Fisher fixed point in 4-ε dimensions up to O(ε²).

  19. Phase diagram of degenerate exciton systems.

    PubMed

    Lai, C W; Zoch, J; Gossard, A C; Chemla, D S

    2004-01-23

    Degenerate exciton systems have been produced in quasi-two-dimensional confined areas in semiconductor coupled quantum well structures. We observed contractions of clouds containing tens of thousands of excitons within areas as small as (10 micron)² near 10 kelvin. The spatial and energy distributions of optically active excitons were determined by measuring photoluminescence as a function of temperature and laser excitation and were used as thermodynamic quantities to construct the phase diagram of the exciton system, which demonstrates the existence of distinct phases. Understanding the formation mechanisms of these degenerate exciton systems can open new opportunities for the realization of Bose-Einstein condensation in the solid state.

  20. Toward a phase diagram for stocks

    NASA Astrophysics Data System (ADS)

    Ivanova, K.

    1999-08-01

    A display of the tentatively basic parameters of stocks, i.e. the daily closing price and the daily transaction volume, is presented, eliminating the time variable between them. The “phase diagram” looks like a triangular region similar to the two-phase region of traffic diagrams. The data are taken for two companies (SGP and OXHP) which present different long-range correlations in the closing price value as examined by the linear Detrended Fluctuation Analysis (DFA) statistical method. Substructures are observed in the “phase diagram” due to changes in management policy, e.g. stock splits.
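
The DFA procedure named above can be sketched in plain Python: build the cumulative profile of mean-subtracted values, split it into non-overlapping windows, remove a least-squares linear trend from each window, and report the root-mean-square residual F(n). The scaling of F(n) with window size n gives the long-range correlation exponent:

```python
import math

def dfa_fluctuation(series, window):
    """Detrended fluctuation F(n) for one window size n (DFA with linear
    detrending, a plain-Python sketch of the standard method)."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)          # cumulative profile of deviations
    n_win = len(profile) // window   # discard any trailing remainder
    sq, count = 0.0, 0
    for w in range(n_win):
        seg = profile[w * window:(w + 1) * window]
        # least-squares line y = a*t + b over t = 0..window-1
        t_mean = (window - 1) / 2.0
        y_mean = sum(seg) / window
        num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(seg))
        den = sum((t - t_mean) ** 2 for t in range(window))
        a = num / den
        b = y_mean - a * t_mean
        for t, y in enumerate(seg):
            sq += (y - (a * t + b)) ** 2
            count += 1
    return math.sqrt(sq / count)
```

Fitting log F(n) against log n over a range of window sizes estimates the DFA exponent; uncorrelated noise gives a slope near 0.5, while persistent long-range correlations push it higher.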

  1. Shape Diagram of Vesicles in Poiseuille Flow

    NASA Astrophysics Data System (ADS)

    Coupier, Gwennou; Farutin, Alexander; Minetti, Christophe; Podgorski, Thomas; Misbah, Chaouqi

    2012-04-01

    Soft bodies flowing in a channel often exhibit parachutelike shapes usually attributed to an increase of hydrodynamic constraint (viscous stress and/or confinement). We show that the presence of a fluid membrane leads to the reverse phenomenon and build a phase diagram of shapes—which are classified as bullet, croissant, and parachute—in channels of varying aspect ratio. Unexpectedly, shapes are relatively wider in the narrowest direction of the channel. We highlight the role of flow patterns on the membrane in this response to the asymmetry of stress distribution.

  2. Domestic and foreign trends in the prevalence of heart failure and the necessity of next-generation artificial hearts: a survey by the Working Group on Establishment of Assessment Guidelines for Next-Generation Artificial Heart Systems.

    PubMed

    Tatsumi, Eisuke; Nakatani, Takeshi; Imachi, Kou; Umezu, Mitsuo; Kyo, Shun-Ei; Sase, Kazuhiro; Takatani, Setsuo; Matsuda, Hikaru

    2007-01-01

    A series of guidelines for development and assessment of next-generation medical devices has been drafted under an interagency collaborative project by the Ministry of Health, Labor and Welfare and the Ministry of Economy, Trade and Industry. The working group for assessment guidelines of next-generation artificial hearts reviewed the trend in the prevalence of heart failure and examined the potential usefulness of such devices in Japan and in other countries as a fundamental part of the process of establishing appropriate guidelines. At present, more than 23 million people suffer from heart failure in developed countries, including Japan. Although Japan currently has the lowest mortality from heart failure among those countries, the number of patients is gradually increasing as our lifestyle becomes more Westernized; the associated medical expenses are rapidly growing. The number of heart transplantations, however, is limited due to the overwhelming shortage of donor hearts, not only in Japan but worldwide. Meanwhile, clinical studies and surveys have revealed that the major causes of death in patients undergoing long-term use of ventricular assist devices (VADs) were infection, thrombosis, and mechanical failure, all of which are typical of VADs. It is therefore of urgent and universal necessity to develop next-generation artificial hearts that have excellent durability to provide at least 2 years of event-free operation with a superior quality of life and that can be used for destination therapy to save patients with irreversible heart failure. It is also very important to ensure that an environment that facilitates the development, testing, and approval evaluation processes of next-generation artificial hearts be established as soon as possible.

  3. The Mental Health Outcomes of Drought: A Systematic Review and Causal Process Diagram.

    PubMed

    Vins, Holly; Bell, Jesse; Saha, Shubhayu; Hess, Jeremy J

    2015-10-22

    Little is understood about the long term, indirect health consequences of drought (a period of abnormally dry weather). In particular, the implications of drought for mental health via pathways such as loss of livelihood, diminished social support, and rupture of place bonds have not been extensively studied, leaving a knowledge gap for practitioners and researchers alike. A systematic review of literature was performed to examine the mental health effects of drought. The systematic review results were synthesized to create a causal process diagram that illustrates the pathways linking drought effects to mental health outcomes. Eighty-two articles using a variety of methods in different contexts were gathered from the systematic review. The pathways in the causal process diagram with greatest support in the literature are those focusing on the economic and migratory effects of drought. The diagram highlights the complexity of the relationships between drought and mental health, including the multiple ways that factors can interact and lead to various outcomes. The systematic review and resulting causal process diagram can be used in both practice and theory, including prevention planning, public health programming, vulnerability and risk assessment, and research question guidance. The use of a causal process diagram provides a much needed avenue for integrating the findings of diverse research to further the understanding of the mental health implications of drought.

  4. The Mental Health Outcomes of Drought: A Systematic Review and Causal Process Diagram

    PubMed Central

    Vins, Holly; Bell, Jesse; Saha, Shubhayu; Hess, Jeremy J.

    2015-01-01

    Little is understood about the long term, indirect health consequences of drought (a period of abnormally dry weather). In particular, the implications of drought for mental health via pathways such as loss of livelihood, diminished social support, and rupture of place bonds have not been extensively studied, leaving a knowledge gap for practitioners and researchers alike. A systematic review of literature was performed to examine the mental health effects of drought. The systematic review results were synthesized to create a causal process diagram that illustrates the pathways linking drought effects to mental health outcomes. Eighty-two articles using a variety of methods in different contexts were gathered from the systematic review. The pathways in the causal process diagram with greatest support in the literature are those focusing on the economic and migratory effects of drought. The diagram highlights the complexity of the relationships between drought and mental health, including the multiple ways that factors can interact and lead to various outcomes. The systematic review and resulting causal process diagram can be used in both practice and theory, including prevention planning, public health programming, vulnerability and risk assessment, and research question guidance. The use of a causal process diagram provides a much needed avenue for integrating the findings of diverse research to further the understanding of the mental health implications of drought. PMID:26506367

  5. Revisiting the phase diagram of hard ellipsoids

    NASA Astrophysics Data System (ADS)

    Odriozola, Gerardo

    2012-04-01

    In this work, the well-known Frenkel-Mulder phase diagram of hard ellipsoids of revolution [D. Frenkel and B. M. Mulder, Mol. Phys. 55, 1171 (1985), 10.1080/00268978500101971] is revisited by means of replica exchange Monte Carlo simulations. The method provides good sampling of dense systems and so, solid phases can be accessed without the need of imposing a given structure. At high densities, we found plastic solids and fcc-like crystals for semi-spherical ellipsoids (prolates and oblates), and SM2 structures [P. Pfleiderer and T. Schilling, Phys. Rev. E 75, 020402 (2007)] for x : 1-prolates and 1 : x-oblates with x ≥ 3. The revised fluid-crystal and isotropic-nematic transitions reasonably agree with those presented in the Frenkel-Mulder diagram. An interesting result is that, for small system sizes (100 particles), we obtained 2:1- and 1.5:1-prolate equations of state without transitions, while some order is developed at large densities. Furthermore, the symmetric oblate cases are also reluctant to form ordered phases.

  6. Instability Regions in the Upper HR Diagram

    NASA Technical Reports Server (NTRS)

    deJager, Cornelis; Lobel, Alex; Nieuwenhuijzen, Hans; Stothers, Richard; Hansen, James E. (Technical Monitor)

    2001-01-01

    The following instability regions for blueward evolving supergiants are outlined and compared: (1) Areas in the Hertzsprung-Russell (HR) diagram where stars are dynamically unstable. (2) Areas where the effective acceleration in the upper part of the photospheres is negative, hence directed outward. (3) Areas where the sonic points of the stellar wind (where wind velocity = sound velocity) are situated inside the photospheres, at a level deeper than τ_Ross = 0.01. We compare the results with the positions of actual stars in the HR diagram and we find evidence that the recent strong contraction of the yellow hypergiant HR8752 was initiated in a period during which g_eff < 0, whereupon the star became dynamically unstable. The instability and extreme shells around IRC+10420 are suggested to be related to three factors: g_eff < 0; the sonic point is situated inside the photosphere; and the star is dynamically unstable.

  7. Recent Results in Ring-Diagram Analysis

    NASA Astrophysics Data System (ADS)

    Rabello-Soares, M. C.

    2013-12-01

    The ring-diagram technique was developed by Frank Hill 25 years ago and matured quickly during the late 1990s. It is nowadays one of the most commonly used techniques in local helioseismology. The method consists of the power spectral analysis of solar acoustic oscillations over small regions (2° to 30°) of the solar surface. The power spectrum resembles a set of trumpets nested inside each other; at a given frequency it looks like a ring, hence the technique's name. It provides information on the horizontal flow field and the thermodynamic structure in the layers immediately below the photosphere. With data regularly provided by MDI, GONG, and more recently HMI, many important results have been achieved. In recent years, these results include estimates of the meridional circulation and its evolution with the solar cycle; flows associated with active regions, as well as flow divergence and vorticity; and the thermal structure beneath and around active regions. Much progress is expected with the data now provided by HMI's high-spatial-resolution observations and high duty cycle. There are two data processing pipelines (GONG and HMI) providing free access to the data and the results of the ring-diagram analysis. Here we discuss the most recent results and improvements in the technique, as well as the many challenges that still remain.

  8. Critical point analysis of phase envelope diagram

    SciTech Connect

    Soetikno, Darmadi; Siagian, Ucok W. R.; Kusdiantara, Rudy Puspita, Dila Sidarto, Kuntjoro A. Soewono, Edy; Gunawan, Agus Y.

    2014-03-24

    A phase diagram, or phase envelope, is a relation between temperature and pressure that shows the conditions of equilibrium between the different phases of chemical compounds, mixtures of compounds, and solutions. The phase diagram is an important topic in chemical thermodynamics and hydrocarbon reservoir engineering. It is very useful for process simulation, hydrocarbon reactor design, and petroleum engineering studies. It is constructed from the bubble line, the dew line, and the critical point. The bubble line and dew line are composed of bubble points and dew points, respectively. The bubble point is the first point at which gas forms when a liquid is heated; conversely, the dew point is the first point at which liquid forms when a gas is cooled. The critical point is the point where the properties of the gas and liquid phases, such as temperature and pressure, become equal. The critical point is very useful in fuel processing and the dissolution of certain chemicals. In this paper, we derive the critical point analytically and compare it with numerical calculations based on the Peng-Robinson equation using the Newton-Raphson method. As case studies, several hydrocarbon mixtures are simulated using Matlab.
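As a minimal sketch of the Newton-Raphson side of this kind of calculation (not the authors' Matlab code), the following solves the Peng-Robinson equation of state for the vapor molar volume of a pure component; the methane critical constants are standard textbook values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def peng_robinson_volume(T, P, Tc, Pc, omega, v0=None, tol=1e-10, max_iter=100):
    """Molar volume [m^3/mol] from the Peng-Robinson EOS via Newton-Raphson.

    Solves f(v) = P_PR(v) - P = 0. Starting from the ideal-gas volume
    converges to the vapor root.
    """
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - math.sqrt(T / Tc)))**2
    aa = a * alpha  # temperature-dependent attraction term

    v = v0 if v0 is not None else R * T / P  # ideal-gas initial guess
    for _ in range(max_iter):
        denom = v * v + 2 * b * v - b * b
        f = R * T / (v - b) - aa / denom - P
        df = -R * T / (v - b)**2 + aa * (2 * v + 2 * b) / denom**2
        step = f / df
        v -= step
        if abs(step) < tol * abs(v):
            return v
    raise RuntimeError("Newton-Raphson did not converge")

# Methane vapor at 300 K and 1 bar (Tc = 190.56 K, Pc = 4.599 MPa, omega = 0.011).
v = peng_robinson_volume(300.0, 1.0e5, 190.56, 4.599e6, 0.011)
```

At these conditions the result sits slightly below the ideal-gas volume RT/P, as expected for a weakly attractive gas.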

  9. Critical point analysis of phase envelope diagram

    NASA Astrophysics Data System (ADS)

    Soetikno, Darmadi; Kusdiantara, Rudy; Puspita, Dila; Sidarto, Kuntjoro A.; Siagian, Ucok W. R.; Soewono, Edy; Gunawan, Agus Y.

    2014-03-01

    A phase diagram, or phase envelope, is a relation between temperature and pressure that shows the conditions of equilibrium between the different phases of chemical compounds, mixtures of compounds, and solutions. The phase diagram is an important topic in chemical thermodynamics and hydrocarbon reservoir engineering. It is very useful for process simulation, hydrocarbon reactor design, and petroleum engineering studies. It is constructed from the bubble line, the dew line, and the critical point. The bubble line and dew line are composed of bubble points and dew points, respectively. The bubble point is the first point at which gas forms when a liquid is heated; conversely, the dew point is the first point at which liquid forms when a gas is cooled. The critical point is the point where the properties of the gas and liquid phases, such as temperature and pressure, become equal. The critical point is very useful in fuel processing and the dissolution of certain chemicals. In this paper, we derive the critical point analytically and compare it with numerical calculations based on the Peng-Robinson equation using the Newton-Raphson method. As case studies, several hydrocarbon mixtures are simulated using Matlab.

  10. Automated D/3 to Visio Analog Diagrams

    2000-08-10

    ADVAD1 reads an ASCII file containing the D/3 DCS MDL input for analog points for a D/3 continuous database. It uses the information in the files to create a series of Visio files representing the structure of each analog chain, one drawing per Visio file. The actual drawing function is performed by Visio (requires Visio version 4.5+). The user can configure the program to select which fields in the database are shown on the diagram and how the information is to be presented. This gives a visual representation of the structure of the analog chains, showing selected fields in a consistent manner. Updating documentation can be done easily, and the automated approach eliminates human error in the drafting process. The program can also create the drawings far faster than a human operator can, producing approximately 270 typical diagrams in about 8 minutes on a Pentium II 400 MHz PC. The program allows for multiple option sets to be saved to provide different settings (i.e., different fields, different field presentations, and/or different diagram layouts) for various scenarios or facilities on one workstation. Option sets may be exported from the Windows registry to allow duplication of settings on another workstation.

  11. Revisiting the phase diagram of hard ellipsoids.

    PubMed

    Odriozola, Gerardo

    2012-04-01

    In this work, the well-known Frenkel-Mulder phase diagram of hard ellipsoids of revolution [D. Frenkel and B. M. Mulder, Mol. Phys. 55, 1171 (1985)] is revisited by means of replica exchange Monte Carlo simulations. The method provides good sampling of dense systems and so, solid phases can be accessed without the need of imposing a given structure. At high densities, we found plastic solids and fcc-like crystals for semi-spherical ellipsoids (prolates and oblates), and SM2 structures [P. Pfleiderer and T. Schilling, Phys. Rev. E 75, 020402 (2007)] for x : 1-prolates and 1 : x-oblates with x ≥ 3. The revised fluid-crystal and isotropic-nematic transitions reasonably agree with those presented in the Frenkel-Mulder diagram. An interesting result is that, for small system sizes (100 particles), we obtained 2:1- and 1.5:1-prolate equations of state without transitions, while some order is developed at large densities. Furthermore, the symmetric oblate cases are also reluctant to form ordered phases.

  12. Phase diagrams of disordered Weyl semimetals

    NASA Astrophysics Data System (ADS)

    Shapourian, Hassan; Hughes, Taylor L.

    2016-02-01

    Weyl semimetals are gapless quasitopological materials with a set of isolated nodal points forming their Fermi surface. They manifest their quasitopological character in a series of topological electromagnetic responses including the anomalous Hall effect. Here, we study the effect of disorder on Weyl semimetals while monitoring both their nodal/semimetallic and topological properties through computations of the localization length and the Hall conductivity. We examine three different lattice tight-binding models which realize the Weyl semimetal in part of their phase diagram and look for universal features that are common to all of the models, as well as interesting distinguishing features of each model. We present detailed phase diagrams of these models for large system sizes, and we find that weak disorder preserves the nodal points up to the diffusive limit but does affect the Hall conductivity. We show that the trend of the Hall conductivity is consistent with an effective picture in which disorder causes the Weyl nodes to move within the Brillouin zone along a specific direction that depends deterministically on the properties of the model and the phases neighboring the Weyl semimetal phase. We also uncover an unusual (nonquantized) anomalous Hall insulator phase which can only exist in the presence of disorder.

  13. Identifying Liquid-Gas System Misconceptions and Addressing Them Using a Laboratory Exercise on Pressure-Temperature Diagrams of a Mixed Gas Involving Liquid-Vapor Equilibrium

    ERIC Educational Resources Information Center

    Yoshikawa, Masahiro; Koga, Nobuyoshi

    2016-01-01

    This study focuses on students' understandings of a liquid-gas system with liquid-vapor equilibrium in a closed system using a pressure-temperature ("P-T") diagram. By administrating three assessment questions concerning the "P-T" diagrams of liquid-gas systems to students at the beginning of undergraduate general chemistry…

  14. Effects of Student-Generated Diagrams versus Student-Generated Summaries on Conceptual Understanding of Causal and Dynamic Knowledge in Plate Tectonics.

    ERIC Educational Resources Information Center

    Gobert, Janice D.; Clement, John J.

    1999-01-01

    Grade five students' (n=58) conceptual understanding of plate tectonics was measured by analysis of student-generated summaries and diagrams, and by posttest assessment of both the spatial/static and causal/dynamic aspects of the domain. The diagram group outperformed the summary and text-only groups on the posttest measures. Discusses the effects…

  15. Assessment of a primary care-based telemonitoring intervention for home care patients with heart failure and chronic lung disease. The TELBIL study

    PubMed Central

    2011-01-01

    Background Telemonitoring technology offers one of the most promising alternatives for the provision of health care services at the patient's home. The primary aim of this study is to evaluate the impact of a primary care-based telemonitoring intervention on the frequency of hospital admissions. Methods/design A primary care-based randomised controlled trial will be carried out to assess the impact of a telemonitoring intervention aimed at home care patients with heart failure (HF) and/or chronic lung disease (CLD). The results will be compared with those obtained with standard health care practice. The duration of the study will be one year. Sixty patients will be recruited for the study. In-home patients, diagnosed with HF and/or CLD, aged 14 or above and with two or more hospital admissions in the previous year will be eligible. For the intervention group, telemonitoring will consist of daily patient self-measurements of respiratory rate, heart rate, blood pressure, oxygen saturation, weight and body temperature. Additionally, the patients will complete a qualitative symptom questionnaire daily using the telemonitoring system. Routine telephone contacts will be conducted every fortnight and additional telephone contacts will be carried out if the data received at the primary care centre are out of the established limits. The control group will receive usual care. The primary outcome measure is the number of hospital admissions due to any cause that occurred in a period of 12 months post-randomisation. The secondary outcome measures are: duration of hospital stay, hospital admissions due to HF or CLD, mortality rate, use of health care resources, quality of life, cost-effectiveness, compliance and patient and health care professional satisfaction with the new technology. Discussion The results of this study will shed some light on the effects of telemonitoring for the follow-up and management of chronic patients from a primary care setting. The study may

  16. Acute kidney failure

    MedlinePlus

    Kidney failure; Renal failure; Renal failure - acute; ARF; Kidney injury - acute ... There are many possible causes of kidney damage. They include: ... cholesterol (cholesterol emboli) Decreased blood flow due to very ...

  17. What Is Heart Failure?

    MedlinePlus

    ... page from the NHLBI on Twitter. What Is Heart Failure? Heart failure is a condition in which the heart can' ... force. Some people have both problems. The term "heart failure" doesn't mean that your heart has stopped ...

  18. Urea distribution in renal failure

    PubMed Central

    Blackmore, D. J.; Elder, W. J.; Bowden, C. H.

    1963-01-01

    An assessment of intracellular urea removed during haemodialysis has been made from urea extraction and plasma urea estimations. An apparent wide variation in the movement of intracellular urea in patients with acute renal failure from obstetric and traumatic causes and with chronic renal failure is reported. A method for the estimation of red cell water urea is presented. In two patients with chronic renal failure the red cell urea level was much higher than would have been expected from the plasma urea level before dialysis. In two obstetric patients there was no such discrepancy. The conclusion is drawn that research should be directed to variations of intracellular metabolism in renal failure before a more rational approach can be made to its management. PMID:16811009

  19. Diagrams: A Visual Survey of Graphs, Maps, Charts and Diagrams for the Graphic Designer.

    ERIC Educational Resources Information Center

    Lockwood, Arthur

    Since the ultimate success of any diagram rests in its clarity, it is important that the designer select a method of presentation which will achieve this aim. He should be aware of the various ways in which statistics can be shown diagrammatically, how information can be incorporated in maps, and how events can be plotted in chart or graph form.…

  20. The Diagram as Story: Unfolding the Event-Structure of the Mathematical Diagram

    ERIC Educational Resources Information Center

    de Freitas, Elizabeth

    2012-01-01

    This paper explores the role of narrative in decoding diagrams. I focus on two fundamental facets of narrative: (1) the recounting of causally related sequences of events, and (2) the positioning of the narrator through point-of-view and voice. In the first two sections of the paper I discuss philosophical and semiotic frameworks for making sense…

  1. Multi-rater Agreement in the Assessment of Anterior Cruciate Ligament Reconstruction Failure. A Radiographic and Video Analysis of the MARS Cohort

    PubMed Central

    Matava, Matthew J.; Arciero, Robert A.; Baumgarten, Keith M.; Carey, James L.; DeBerardino, Thomas M.; Hame, Sharon L.; Hannafin, Jo A.; Miller, Bruce S.; Nissen, Carl W.; Taft, Timothy N.; Wolf, Brian R.; Wright, Rick W.

    2015-01-01

    Background ACL reconstruction failure occurs in up to 10% of cases. Technical errors are considered the most common cause of graft failure despite the absence of validated studies. There are limited data regarding the agreement among orthopedic surgeons in terms of the etiology of primary ACL reconstruction failure and the accuracy of graft tunnel placement. Purpose The purpose of this study is to test the hypothesis that experienced knee surgeons have a high level of inter-observer reliability in assessing the etiology of primary ACL reconstruction failure, anatomical graft characteristics, and tunnel placement. Methods Twenty cases of revision ACL reconstruction were randomly selected from the MARS database. Each case included the patient's history, standardized radiographs, and a concise 30-second arthroscopic video taken at the time of revision demonstrating the graft remnant and the location of the tunnel apertures. Ten MARS surgeons not involved with the primary surgery reviewed all 20 cases. Each surgeon completed a two-part questionnaire dealing with the surgeon's training and practice as well as the placement of the femoral and tibial tunnels, the condition of the primary graft, and the surgeon's opinion as to the etiology of graft failure. Inter-rater agreement was determined for each question with the kappa coefficient and the prevalence adjusted bias adjusted kappa (PABAK). Results The 10 reviewers were in practice an average of 14 years. All performed at least 25 ACL reconstructions per year and 9 were fellowship-trained in sports medicine. There was wide variability in agreement among knee experts as to the specific etiology of ACL graft failure. When specifically asked about technical error as the cause for failure, inter-observer agreement was only slight (prevalence adjusted bias adjusted kappa [PABAK]: 0.26). There was fair overall agreement on ideal femoral tunnel placement (PABAK: 0.55), but only
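The kappa and PABAK statistics used above are simple to compute. A minimal sketch for two raters follows; the binary codings are hypothetical illustration data, not the MARS ratings:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                    # observed agreement
    cats = set(a) | set(b)
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

def pabak(a, b, k=2):
    """Prevalence- and bias-adjusted kappa for k categories: (k*p_o - 1)/(k - 1)."""
    p_o = sum(x == y for x, y in zip(a, b)) / len(a)
    return (k * p_o - 1) / (k - 1)

# Hypothetical binary codings (1 = technical error, 0 = other cause) for 10 cases.
rater1 = [1, 1, 1, 1, 1, 0, 0, 0, 1, 1]
rater2 = [1, 1, 1, 1, 1, 0, 0, 1, 1, 0]
```

With 8 of 10 agreements, PABAK is 0.6 regardless of how the categories are distributed, while Cohen's kappa is pulled down by the uneven prevalence of the "technical error" label, which is exactly the contrast the PABAK statistic is designed to expose.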

  2. Heart failure - medicines

    MedlinePlus

    CHF - medicines; Congestive heart failure - medicines; Cardiomyopathy - medicines; HF - medicines ... You will need to take most of your heart failure medicines every day. Some medicines are taken ...

  3. A Markov Model for Assessing the Reliability of a Digital Feedwater Control System

    SciTech Connect

    Chu,T.L.; Yue, M.; Martinez-Guridi, G.; Lehner, J.

    2009-02-11

    A Markov approach has been selected to represent and quantify the reliability model of a digital feedwater control system (DFWCS). The system state, i.e., whether the system fails or not, is determined by the status of the components, which can be characterized by component failure modes. Starting from the system state with no component failure, the possible transitions out of it are the failure modes of all components in the system. Each additional component failure mode defines a different system state that may or may not be a system failure state. The Markov transition diagram is developed by strictly following the sequences of component failures (i.e., failure sequences), because different orders of the same set of failures may affect the system in completely different ways. The formulation and quantification of the Markov model, together with the proposed FMEA (Failure Modes and Effects Analysis) approach and the development of the supporting automated FMEA tool, are considered the three major elements of a generic conceptual framework under which the reliability of digital systems can be assessed.
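To illustrate the flavor of such a model (a toy example, not the DFWCS model itself), the sketch below propagates a discrete-time Markov chain for a redundant two-controller system in which the order of the two failures happens not to matter; the per-hour failure probability is an assumed value:

```python
def step(dist, P):
    """One time step of a Markov chain: row vector times transition matrix."""
    return [sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(P[0]))]

# States: 0 = both controllers OK, 1 = one failed, 2 = system failed (absorbing).
lam = 1e-4  # assumed per-hour failure probability of a single controller
P = [
    [1 - 2 * lam, 2 * lam, 0.0],  # either of the two controllers fails first
    [0.0, 1 - lam,        lam],   # the remaining controller fails -> system failure
    [0.0, 0.0,            1.0],   # absorbing system-failure state
]

dist = [1.0, 0.0, 0.0]      # start with no component failures
for _ in range(8760):       # one year of hourly steps
    dist = step(dist, P)
p_fail_year = dist[2]       # probability the system has failed within a year
```

In the real model each component failure mode would spawn its own successor state, and states reached by different failure orders would be kept distinct, which is precisely why the transition diagram must follow failure sequences.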

  4. Component failure data handbook

    SciTech Connect

    Gentillon, C.D.

    1991-04-01

    This report presents generic component failure rates that are used in reliability and risk studies of commercial nuclear power plants. The rates are computed using plant-specific data from published probabilistic risk assessments supplemented by selected other sources. Each data source is described. For rates with four or more separate estimates among the sources, plots show the data that are combined. The method for combining data from different sources is presented. The resulting aggregated rates are listed with upper bounds that reflect the variability observed in each rate across the nuclear power plant industry. Thus, the rates are generic. Both per hour and per demand rates are included. They may be used for screening in risk assessments or for forming distributions to be updated with plant-specific data.
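As a trivial illustration of aggregating plant-specific estimates into a generic rate with an upper bound (the handbook's actual combination method is more involved; the numbers below are hypothetical):

```python
import statistics

def aggregate_rate(estimates):
    """Combine per-hour failure-rate estimates from several sources.

    Illustrative only: here the generic rate is the geometric mean of the
    source estimates and the upper bound is the largest source estimate,
    standing in for the variability bound described in the report.
    """
    return statistics.geometric_mean(estimates), max(estimates)

# Hypothetical pump failure-to-run estimates (per hour) from four PRAs.
sources = [3e-5, 1e-5, 5e-5, 2e-5]
rate, upper = aggregate_rate(sources)
```

The geometric mean is a common choice for rates spread over orders of magnitude, since it averages in log space.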

  5. Basic primitives for molecular diagram sketching

    PubMed Central

    2010-01-01

    A collection of primitive operations for molecular diagram sketching has been developed. These primitives compose a concise set of operations which can be used to construct publication-quality 2D coordinates for molecular structures using a bare minimum of input bandwidth. The input requirements for each primitive consist of a small number of discrete choices, which means that these primitives can be used to form the basis of a user interface which does not require an accurate pointing device. This is particularly relevant to software designed for contemporary mobile platforms. The reduction of input bandwidth is accomplished by using algorithmic methods for anticipating probable geometries during the sketching process, and by intelligent use of template grafting. The algorithms and their uses are described in detail. PMID:20923555

  6. Phase diagrams of bosonic ABn chains

    NASA Astrophysics Data System (ADS)

    Cruz, G. J.; Franco, R.; Silva-Valencia, J.

    2016-04-01

    The AB_{N-1} chain is a system consisting of a repeating unit cell with N sites, with an energy difference of λ between the A and B sites. We considered bosons in these special lattices and took into account the kinetic energy, the local two-body interaction, and the inhomogeneous local energy in the Hamiltonian. We found charge density wave (CDW), superfluid, and Mott insulator phases, and constructed the phase diagram for N = 2 and 3 in the thermodynamic limit. The system exhibited insulator phases for densities ρ = α/N, with α being an integer. We obtained that superfluid regions separate the insulator phases for densities larger than one. For any N value, we found that for integer densities ρ, the system exhibits ρ + 1 insulator phases: a Mott insulator phase and ρ CDW phases. For non-integer densities larger than one, several CDW phases appear.

  7. Understanding starch gelatinization: The phase diagram approach.

    PubMed

    Carlstedt, Jonas; Wojtasz, Joanna; Fyhr, Peter; Kocherbitov, Vitaly

    2015-09-20

    By constructing a detailed phase diagram for the potato starch-water system based on data from optical microscopy, synchrotron X-ray scattering, and differential scanning calorimetry, we show that gelatinization can be interpreted in analogy with a eutectic transition. The phase rule explains why the temperature of the gelatinization transition (G) is independent of water content. Furthermore, the melting (M1) endotherm observed in DSC represents a liquidus line; the temperature of this event increases with increasing starch concentration. Both the lamellar spacing and the inter-helix distance were observed to decrease with increasing starch content for starch concentrations between approximately 65 wt% and 75 wt%, while the inter-helix distance continued decreasing upon further dehydration. Understanding starch gelatinization has been a longstanding challenge; the novel approach presented here interprets this phenomenon from a phase-equilibria perspective.

  8. The wiring diagram for plant G signaling.

    PubMed

    Colaneri, Alejandro C; Jones, Alan M

    2014-12-01

    Like electronic circuits, the modular arrangement of cell-signaling networks determines how inputs produce outputs. Animal heterotrimeric guanine nucleotide binding proteins (G-proteins) operate as switches in the circuits that signal between extracellular agonists and intracellular effectors. There is still no biochemical evidence for a receptor or its agonist in the plant G-protein pathways. Plant G-proteins deviate in many important ways from the animal paradigm. This review covers important discoveries from the last two years that illuminate these differences, and ends by describing alternative wiring diagrams for the plant signaling circuits regulated by G-proteins. We propose that plant G-proteins are integrated into the signaling circuits as variable resistors rather than switches, controlling the flux of information in response to the cell's metabolic state. PMID:25282586

  9. Phase diagram of chirally imbalanced QCD matter

    SciTech Connect

    Chernodub, M. N.; Nedelin, A. S.

    2011-05-15

    We compute the QCD phase diagram in the plane of the chiral chemical potential and temperature using the linear sigma model coupled to quarks and to the Polyakov loop. The chiral chemical potential accounts for effects of imbalanced chirality due to QCD sphaleron transitions which may emerge in heavy-ion collisions. We found three effects caused by the chiral chemical potential: the imbalanced chirality (i) tightens the link between deconfinement and chiral phase transitions; (ii) lowers the common critical temperature; (iii) strengthens the order of the phase transition by converting the crossover into the strong first order phase transition passing via the second order end point. Since the fermionic determinant with the chiral chemical potential has no sign problem, the chirally imbalanced QCD matter can be studied in numerical lattice simulations.

  10. Phase diagram of a Schelling segregation model

    NASA Astrophysics Data System (ADS)

    Gauvin, L.; Vannimenus, J.; Nadal, J.-P.

    2009-07-01

    The collective behavior in a variant of Schelling’s segregation model is characterized with methods borrowed from statistical physics, in a context where their relevance was not conspicuous. A measure of segregation based on cluster geometry is defined, and several quantities analogous to those used to describe physical lattice models at equilibrium are introduced. This physical approach allows us to distinguish quantitatively several regimes and to characterize the transitions between them, leading to the construction of a phase diagram. Some of the transitions evoke empirical sudden ethnic turnovers. We also establish links with ‘spin-1’ models in physics. Our approach provides generic tools to analyze the dynamics of other socio-economic systems.
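
    A minimal sketch of the segregation dynamics described above, using the mean fraction of like-type neighbors as a simple segregation measure. Grid size, tolerance, and vacancy fraction are illustrative assumptions, not the paper's parameters:

```python
import random

def neighbors(i, j, n):
    # 8-neighborhood with periodic boundaries
    return [((i + di) % n, (j + dj) % n)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)]

def unhappy(grid, i, j, n, tol):
    # an agent is unhappy if fewer than `tol` of its occupied neighbors match its type
    me = grid[i][j]
    occ = [grid[a][b] for a, b in neighbors(i, j, n) if grid[a][b] != 0]
    if not occ:
        return False
    return sum(1 for t in occ if t == me) / len(occ) < tol

def step(grid, n, tol, rng):
    # move every unhappy agent to a random vacancy; return how many moved
    empties = [(i, j) for i in range(n) for j in range(n) if grid[i][j] == 0]
    movers = [(i, j) for i in range(n) for j in range(n)
              if grid[i][j] != 0 and unhappy(grid, i, j, n, tol)]
    rng.shuffle(movers)
    for (i, j) in movers:
        k = rng.randrange(len(empties))
        a, b = empties[k]
        grid[a][b], grid[i][j] = grid[i][j], 0
        empties[k] = (i, j)          # the vacated cell becomes the new vacancy
    return len(movers)

def segregation(grid, n):
    # mean fraction of like-type occupied neighbors, a simple segregation index
    vals = []
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 0:
                continue
            occ = [grid[a][b] for a, b in neighbors(i, j, n) if grid[a][b] != 0]
            if occ:
                vals.append(sum(1 for t in occ if t == grid[i][j]) / len(occ))
    return sum(vals) / len(vals)

rng = random.Random(1)
n, tol = 30, 0.5
cells = [1] * 400 + [2] * 400 + [0] * (n * n - 800)
rng.shuffle(cells)
grid = [cells[i * n:(i + 1) * n] for i in range(n)]
before = segregation(grid, n)
for _ in range(60):
    if step(grid, n, tol, rng) == 0:
        break
after = segregation(grid, n)
print(round(before, 2), round(after, 2))
```

    Even with a mild tolerance of 0.5, the segregation index rises well above its initial mixed-state value, the hallmark of Schelling's result.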

  11. Band diagrams of layered plasmonic metamaterials

    NASA Astrophysics Data System (ADS)

    Al Shakhs, Mohammed H.; Ott, Peter; Chau, Kenneth J.

    2014-11-01

    We introduce a method to map the band diagrams, or equipotential contours (EPCs), of any layered plasmonic metamaterial using a general expression for the Poynting vector in a lossy layered medium of finite extent under plane-wave illumination. Unlike conventional methods to obtain band diagrams by solving the Helmholtz equation using the Floquet-Bloch theorem (an approach restricted to infinite, periodic, lossless media), our method adopts a bottom-up philosophy based on spatial-frequency decomposition of the electric and magnetic fields (an approach applicable to finite, lossy media). Equipotential contours are used to visualize phase and group velocities in a wide range of layered plasmonic systems, including the basic building block of a thin metallic layer and more complex multi-layers with unique optical properties such as negative phase velocity, super-resolution imaging, canalization, and far-field imaging. We show that a thin metallic layer can mimic a left-handed electromagnetic response at the surface plasmon resonance and that stacks of metal and dielectric layers can do the same provided that the dielectric layer is sufficiently thin. We also use EPCs to estimate resolution limits of both Pendry's silver slab lens and the Veselago lens and show that the image location and lateral image resolution of metal-dielectric layered flat lenses can be described (and tailored) by the concavity and spectral reach of the dominant band in their EPCs. Homogenization methods for describing the effective optical properties of various layered systems are validated by the extent to which they accurately mimic features in their EPCs.

  12. Expression of Superparamagnetic Particles on FORC Diagrams

    NASA Astrophysics Data System (ADS)

    Hirt, A. M.; Kumari, M.; Crippa, F.; Petri-Fink, A.

    2015-12-01

    Identification of superparamagnetic (SP) particles in natural materials provides information on processes that lead to the new formation or dissolution of iron oxides. SP particles express themselves on first-order reversal curve (FORC) diagrams as a distribution centered near the origin of the diagram. Pike et al. (2001, GJI, 145, 721) demonstrated that thermal relaxation produces an upward shift in the FORC distribution, and attributed this to a pause encountered at each reversal field. In this study we examine the relationship between this upward shift and particle size for two sets of synthetic iron oxide nanoparticles. One set of coated magnetite particles has well-constrained particle sizes of 9, 16, and 20 nm in diameter. A second set from the FeraSpin™ Series, consisting of FeraSpinXS, M, and XL, was evaluated. Rock magnetic experiments indicate that the first set of samples is exclusively magnetite, whereas the FeraSpin samples contain predominantly magnetite with some degree of oxidation. Samples from both sets show that the upward shift of the FORC distribution at the origin increases with decreasing particle size. The amount of shift in the FeraSpin series is smaller than in the samples from the first set. This is attributed to the effect of interaction, which counteracts the thermal relaxation behavior of the SP particles. The FeraSpin series also shows a broader FORC distribution on the vertical axis that appears to be related to non-saturation of the hysteresis curve at the maximum applied field. This non-saturation behavior can be due to spins of very fine particles or to oxidation to hematite. AC susceptibility at low temperature indicates that particle interaction may affect the effective magnetic particle size. Our results suggest that the FORC distribution in pure SP particle systems provides information on the particle size distribution or oxidation, which can be further evaluated with low-temperature techniques.
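
    A FORC distribution is obtained from a family of first-order reversal curves M(H_a, H_b) via the mixed second derivative ρ = −½ ∂²M/∂H_a∂H_b. A minimal numerical sketch of that step; the analytic test surface M = H_a·H_b is an assumption chosen only because its mixed derivative is exactly 1:

```python
def forc_distribution(M, dHa, dHb):
    """rho = -0.5 * d2M/(dHa dHb) by central differences.
    M is a 2D list indexed [ia][ib]; border entries stay None."""
    na, nb = len(M), len(M[0])
    rho = [[None] * nb for _ in range(na)]
    for i in range(1, na - 1):
        for j in range(1, nb - 1):
            d2 = (M[i + 1][j + 1] - M[i + 1][j - 1]
                  - M[i - 1][j + 1] + M[i - 1][j - 1]) / (4 * dHa * dHb)
            rho[i][j] = -0.5 * d2
    return rho

# check on M(Ha, Hb) = Ha * Hb, whose mixed derivative is exactly 1 -> rho = -0.5
dH = 0.1
Ha = [i * dH for i in range(10)]
Hb = [j * dH for j in range(10)]
M = [[a * b for b in Hb] for a in Ha]
rho = forc_distribution(M, dH, dH)
print(round(rho[5][5], 6))  # → -0.5
```

    On measured curves the same stencil is applied to the gridded magnetization data, usually after local polynomial smoothing to tame measurement noise.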

  13. [Assessment of chronic glucose metabolism disorders coexisting with respiratory failure in non-critical ill patients hospitalized with lower respiratory tract infections].

    PubMed

    Sobocińska, Magdalena Barbara; Loba, Jerzy

    2015-01-01

    Lungs are a target organ in chronic hyperglycemia, but their large reserves cause these changes to follow a subclinical course. Given the results of other researchers indicating a reduced active surface for gas exchange and pulmonary capillary damage, it can be assumed that diabetes and other hyperglycemic states diminish these reserves and impair the effectiveness of respiratory gas exchange during pneumonia. It is therefore plausible to observe coexistence of glucose metabolism disorders and respiratory failure in patients hospitalized with lower respiratory tract infection. An observational study was conducted on 130 patients hospitalized with bacteriologically confirmed pneumonia: 63 patients suffering from chronic glucose metabolism disorders (group A) and 67 randomly selected control patients (group B) were observed for laboratory and clinical findings. There was no significant difference in the prevalence of acute respiratory failure, although a slightly greater number of patients in the study group were diagnosed with acute respiratory failure. There was a significantly greater number of patients with previously confirmed chronic respiratory failure using long-term oxygen therapy in group A (p = 0.029). Group B patients with an average blood glucose level > 108 mg/dl had a significantly lower partial pressure of oxygen (PaO2) (glc ≤ 108: 58.6 +/- 9.8; glc > 108: 51.7 +/- 11.1; p = 0.042). There was a statistically significant negative correlation between the average blood glucose level and PaO2 in the control group (p = 0.0152) and a significant inverse association between the average blood glucose level and the partial pressure of oxygen in control-group patients without COPD (p = 0.049). Respiratory failure is frequent in patients hospitalized with pneumonia. In patients without chronic glucose metabolism disorders, the oxygen tension decreases as the blood glucose level rises. The association is stronger in patients without COPD.

  14. Time-temperature-transformation diagrams with more than one nose

    NASA Technical Reports Server (NTRS)

    Weinberg, Michael C.

    1992-01-01

    The structures of time-temperature-transformation diagrams of glasses which crystallize by combined homogeneous and heterogeneous crystallization mechanisms are examined. Consideration is given to the factors which might produce more than one extremum in such diagrams. Specific nucleation and growth models are used, and the influence of the parameters which appear in the nucleation and growth rate expressions upon the structure of the diagrams is evaluated.
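
    A toy illustration of how two nucleation mechanisms peaking at different temperatures can produce a TTT diagram with more than one nose, using JMAK kinetics X = 1 − exp(−(π/3) I u³ t⁴) for 3D growth. All rate functions and parameter values here are invented for illustration, not taken from the paper:

```python
import math

def ttt_time(T, X=1e-6):
    # Illustrative (made-up) Gaussian rate functions: homogeneous and
    # heterogeneous nucleation rates peaking at different temperatures.
    I_hom = 1.0e6 * math.exp(-((T - 710.0) / 40.0) ** 2)
    I_het = 1.0e3 * math.exp(-((T - 910.0) / 40.0) ** 2)
    u = 1.0e-4 * math.exp(-((T - 850.0) / 120.0) ** 2)   # crystal growth rate
    # JMAK kinetics for 3D growth: X = 1 - exp(-(pi/3) I u^3 t^4),
    # solved for the time t at which the transformed fraction reaches X
    rate = (math.pi / 3.0) * (I_hom + I_het) * u ** 3
    return (math.log(1.0 / (1.0 - X)) / rate) ** 0.25

Ts = [500.0 + 5.0 * k for k in range(101)]            # 500..1000 K
ts = [ttt_time(T) for T in Ts]
noses = [Ts[k] for k in range(1, 100)
         if ts[k] < ts[k - 1] and ts[k] < ts[k + 1]]  # local minima = noses
print(noses)
```

    Because the two nucleation rates dominate in different temperature windows while the growth rate peaks in between, the transformation-time curve develops two local minima, i.e., two noses.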

  15. Massive basketball diagram for a thermal scalar field theory

    SciTech Connect

    Andersen, Jens O.; Braaten, Eric; Strickland, Michael

    2000-08-15

    The ''basketball diagram'' is a three-loop vacuum diagram for a scalar field theory that cannot be expressed in terms of one-loop diagrams. We calculate this diagram for a massive scalar field at nonzero temperature, reducing it to expressions involving three-dimensional integrals that can be easily evaluated numerically. We use this result to calculate the free energy for a massive scalar field with a {phi}{sup 4} interaction to three-loop order. (c) 2000 The American Physical Society.

  16. Visualization design and verification of Ada tasking using timing diagrams

    NASA Technical Reports Server (NTRS)

    Vidale, R. F.; Szulewski, P. A.; Weiss, J. B.

    1986-01-01

    The use of timing diagrams is recommended in the design and testing of multi-task Ada programs. By displaying the task states vs. time, timing diagrams can portray the simultaneous threads of data flow and control which characterize tasking programs. This description of the system's dynamic behavior from conception to testing is a necessary adjunct to other graphical techniques, such as structure charts, which essentially give a static view of the system. A series of steps is recommended which incorporates timing diagrams into the design process. Finally, a description is provided of a prototype Ada Execution Analyzer (AEA) which automates the production of timing diagrams from VAX/Ada debugger output.

  17. 53. PHOTOCOPY OF DRAWING AMMONIA LEACHING PLANT FLOW DIAGRAM, REPRESENTING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    53. PHOTOCOPY OF DRAWING AMMONIA LEACHING PLANT FLOW DIAGRAM, REPRESENTING ONE COMPLETE CYCLE - Kennecott Copper Corporation, On Copper River & Northwestern Railroad, Kennicott, Valdez-Cordova Census Area, AK

  18. 54. PHOTOCOPY OF DRAWING AMMONIA LEACHING PLANT FLOW DIAGRAM, REPRESENTING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    54. PHOTOCOPY OF DRAWING AMMONIA LEACHING PLANT FLOW DIAGRAM, REPRESENTING ONE COMPLETE CYCLE - Kennecott Copper Corporation, On Copper River & Northwestern Railroad, Kennicott, Valdez-Cordova Census Area, AK

  19. 55. PHOTOCOPY OF DRAWING AMMONIA LEACHING PLANT FLOW DIAGRAM, REPRESENTING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    55. PHOTOCOPY OF DRAWING AMMONIA LEACHING PLANT FLOW DIAGRAM, REPRESENTING ONE COMPLETE CYCLE - Kennecott Copper Corporation, On Copper River & Northwestern Railroad, Kennicott, Valdez-Cordova Census Area, AK

  20. Thermodynamic Venn diagrams: Sorting out forces, fluxes, and Legendre transforms

    NASA Astrophysics Data System (ADS)

    Kerr, W. C.; Macosko, J. C.

    2011-09-01

    We show how to use a Venn diagram to illuminate the relations among the different thermodynamic potentials, forces, and fluxes of a simple system. A single diagram shows all of the thermodynamic potentials obtainable by Legendre transformations starting from the internal energy as the fundamental potential. From the diagram, we can also read off the Maxwell relations deduced from each of these potentials. We construct a second Venn diagram that shows the analogous information for the Massieu functions, obtained by Legendre transformations starting from the entropy as the fundamental thermodynamic function.
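
    The Legendre-transform structure the diagram encodes can be written out explicitly: starting from dU = T dS − p dV, each transform swaps one conjugate variable pair, and each resulting potential yields a Maxwell relation that can be read off the diagram:

```latex
\begin{align*}
dU &= T\,dS - p\,dV, &
\left(\frac{\partial T}{\partial V}\right)_{\!S} &= -\left(\frac{\partial p}{\partial S}\right)_{\!V},\\
F = U - TS:\quad dF &= -S\,dT - p\,dV, &
\left(\frac{\partial S}{\partial V}\right)_{\!T} &= \left(\frac{\partial p}{\partial T}\right)_{\!V},\\
H = U + pV:\quad dH &= T\,dS + V\,dp, &
\left(\frac{\partial T}{\partial p}\right)_{\!S} &= \left(\frac{\partial V}{\partial S}\right)_{\!p},\\
G = H - TS:\quad dG &= -S\,dT + V\,dp, &
\left(\frac{\partial S}{\partial p}\right)_{\!T} &= -\left(\frac{\partial V}{\partial T}\right)_{\!p}.
\end{align*}
```

    The Massieu functions mentioned in the abstract arise the same way, but with the entropy S(U, V) rather than U(S, V) as the starting function.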

  1. Advanced Heart Failure

    MedlinePlus

    Advanced Heart Failure. Updated Oct 8, 2015; content last reviewed 04/06/2015. When heart failure (HF) ...

  2. Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.

    1999-01-01

    A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms, and several options are available to degrade the material properties after failure. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data except in structural applications where interlaminar stresses are important, which may cause failure mechanisms such as debonding or delamination.
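
    The ply-level check-and-degrade loop at the heart of such an analysis can be sketched compactly. This sketch uses the maximum-stress criterion (a sibling of the maximum-strain and Hashin criteria named in the abstract) with a simple ply-discount degradation; the allowables are assumed, illustrative carbon/epoxy-like values, not COMET data:

```python
def max_stress_failure(sigma1, sigma2, tau12, Xt, Xc, Yt, Yc, S):
    """Maximum-stress failure criterion for a single lamina (illustrative).
    Returns the failure mode triggered first, or None if no allowable is exceeded."""
    checks = {
        "fiber tension":      sigma1 / Xt if sigma1 > 0 else 0.0,
        "fiber compression": -sigma1 / Xc if sigma1 < 0 else 0.0,
        "matrix tension":     sigma2 / Yt if sigma2 > 0 else 0.0,
        "matrix compression": -sigma2 / Yc if sigma2 < 0 else 0.0,
        "shear":              abs(tau12) / S,
    }
    mode, ratio = max(checks.items(), key=lambda kv: kv[1])
    return mode if ratio >= 1.0 else None

def degrade(props, mode, factor=0.01):
    """Ply-discount degradation: knock down the stiffness tied to the failure mode."""
    out = dict(props)
    if mode and mode.startswith("fiber"):
        out["E1"] *= factor
    elif mode and mode.startswith("matrix"):
        out["E2"] *= factor
    elif mode == "shear":
        out["G12"] *= factor
    return out

# ply stresses and allowables in MPa (assumed, illustrative only)
mode = max_stress_failure(sigma1=900.0, sigma2=60.0, tau12=30.0,
                          Xt=1500.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=70.0)
props = degrade({"E1": 140e3, "E2": 10e3, "G12": 5e3}, mode)
print(mode, props["E2"])
```

    In a full progressive analysis this check runs inside the load-stepping loop: after each degradation the stresses are redistributed and re-checked until no new ply fails or the structure can carry no more load.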

  3. Oak Ridge K-25 Site Technology Logic Diagram. Volume 2, Technology Logic Diagrams

    SciTech Connect

    Fellows, R.L.

    1993-02-26

    The Oak Ridge K-25 Technology Logic Diagram (TLD), a decision support tool for the K-25 Site, was developed to provide a planning document that relates environmental restoration and waste management problems at the Oak Ridge K-25 Site to potential technologies that can remediate these problems. The TLD technique identifies the research necessary to develop these technologies to a state that allows for technology transfer and application to waste management, remedial action, and decontamination and decommissioning activities. The TLD consists of four separate volumes: Vol. 1, Vol. 2, Vol. 3A, and Vol. 3B. Volume 1 provides introductory and overview information about the TLD. This volume, Volume 2, contains logic diagrams with an index. Volume 3 has been divided into two separate volumes to facilitate handling and use.

  4. Steam generator tube failures

    SciTech Connect

    MacDonald, P.E.; Shah, V.N.; Ward, L.W.; Ellison, P.G.

    1996-04-01

    A review and summary of the available information on steam generator tubing failures and the impact of these failures on plant safety is presented. The following topics are covered: pressurized water reactor (PWR), Canadian deuterium uranium (CANDU) reactor, and Russian water moderated, water cooled energy reactor (VVER) steam generator degradation, PWR steam generator tube ruptures, the thermal-hydraulic response of a PWR plant with a faulted steam generator, the risk significance of steam generator tube rupture accidents, tubing inspection requirements and fitness-for-service criteria in various countries, and defect detection reliability and sizing accuracy. A significant number of steam generator tubes are defective and are removed from service or repaired each year. This widespread damage has been caused by many diverse degradation mechanisms, some of which are difficult to detect and predict. In addition, spontaneous tube ruptures have occurred at the rate of about one every 2 years over the last 20 years, and incipient tube ruptures (tube failures usually identified with leak detection monitors just before rupture) have been occurring at the rate of about one per year. These ruptures have caused complex plant transients which have not always been easy for the reactor operators to control. Our analysis shows that if more than 15 tubes rupture during a main steam line break, the system response could lead to core melting. Although spontaneous and induced steam generator tube ruptures are small contributors to the total core damage frequency calculated in probabilistic risk assessments, they are risk significant because the radionuclides are likely to bypass the reactor containment building. The frequency of steam generator tube ruptures can be significantly reduced through appropriate and timely inspections and repairs or removal from service.

  5. Heuristic Diagrams as a Tool to Teach History of Science

    ERIC Educational Resources Information Center

    Chamizo, Jose A.

    2012-01-01

    The graphic organizer called here a heuristic diagram, an improvement on Gowin's Vee heuristic, is proposed as a tool to teach the history of science. Heuristic diagrams have the purpose of helping students (or teachers, or researchers) to understand their own research, considering that asking questions and problem solving are central to scientific activity. The…

  6. 30 CFR 256.8 - Leasing maps and diagrams.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Leasing maps and diagrams. 256.8 Section 256.8 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR OFFSHORE LEASING OF SULPHUR OR..., General § 256.8 Leasing maps and diagrams. (a) Any area of the OCS which has been appropriately platted...

  7. Persistence Diagrams of High-Resolution Temporal Rainfall

    NASA Astrophysics Data System (ADS)

    Fernández Méndez, F.; Carsteanu, A. A.

    2015-12-01

    This study applies Topological Data Analysis (TDA), generating persistence diagrams to uncover patterns in high-resolution temporal rainfall-intensity data from Iowa City (IIHR, University of Iowa). Persistence diagrams are a way to identify essential cycles in state-space representations of the data.
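
    For a 1D intensity series, the 0-dimensional persistence diagram of the sublevel-set filtration can be computed with a single union-find sweep (the "elder rule": when two components merge, the younger one dies). A minimal sketch on a toy series, not the Iowa City data:

```python
def persistence_0d(x):
    """0-dimensional sublevel-set persistence of a 1D sequence (elder rule).
    Returns (birth, death) pairs; the global minimum gets death = max(x)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])  # add points by increasing value
    parent = [-1] * n          # -1 means not yet added to the filtration
    birth = [0.0] * n          # birth value of the component rooted here

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = []
    for i in order:
        parent[i] = i
        birth[i] = x[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:   # elder rule: younger component dies
                    ri, rj = rj, ri
                if birth[rj] < x[i]:        # skip zero-persistence pairs
                    pairs.append((birth[rj], x[i]))
                parent[rj] = ri
    pairs.append((birth[find(order[0])], max(x)))  # essential class
    return sorted(pairs)

# toy "rainfall" trace with two dips
print(persistence_0d([5, 1, 4, 0, 3, 2, 6]))  # → [(0, 6), (1, 4), (2, 3)]
```

    Each off-diagonal point records a dip in the series: its depth is the birth value and the level at which it merges into an older dip is the death value, so long-lived points far from the diagonal flag the dominant minima.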

  8. Pourbaix ("E"-pH-M) Diagrams in Three Dimensions

    ERIC Educational Resources Information Center

    Pesterfield, Lester L.; Maddox, Jeremy B.; Crocker, Michael S.; Schweitzer, George K.

    2012-01-01

    "E"-pH (Pourbaix) diagrams provide an important graphical link between the thermodynamic calculations of potential, pH, equilibrium constant, concentration, and changes in Gibbs energy and the experimentally observed behavior of species in aqueous solutions. The utility of "E"-pH diagrams is extended with the introduction of an additional…

  9. An Introductory Idea for Teaching Two-Component Phase Diagrams

    ERIC Educational Resources Information Center

    Peckham, Gavin D.; McNaught, Ian J.

    2011-01-01

    The teaching of two-component phase diagrams has attracted little attention in this "Journal," and it is hoped that this article will make a useful contribution. Current physical chemistry textbooks describe two-component phase diagrams adequately, but do so in a piecemeal fashion one section at a time; first solid-liquid equilibria, then…

  10. Improving Students' Diagram Comprehension with Classroom Instruction

    ERIC Educational Resources Information Center

    Cromley, Jennifer G.; Perez, Tony C.; Fitzhugh, Shannon L.; Newcombe, Nora S.; Wills, Theodore W.; Tanaka, Jacqueline C.

    2013-01-01

    The authors tested whether students can be taught to better understand conventional representations in diagrams, photographs, and other visual representations in science textbooks. The authors developed a teacher-delivered, workbook-and-discussion-based classroom instructional method called Conventions of Diagrams (COD). The authors trained 1…

  11. QCD Phase Diagram Using Dyson-Schwinger Equations

    SciTech Connect

    Liu Yuxin; Qin Sixue; Chang Lei; Roberts, Craig D.

    2011-05-24

    We describe briefly the Dyson-Schwinger equation approach of QCD and the study of the QCD phase diagram in this approach. The phase diagram in terms of the temperature and chemical potential, and that in the space of coupling strength and current-quark mass are given.

  12. Diagram, Gesture, Agency: Theorizing Embodiment in the Mathematics Classroom

    ERIC Educational Resources Information Center

    de Freitas, Elizabeth; Sinclair, Nathalie

    2012-01-01

    In this paper, we use the work of philosopher Gilles Chatelet to rethink the gesture/diagram relationship and to explore the ways mathematical agency is constituted through it. We argue for a fundamental philosophical shift to better conceptualize the relationship between gesture and diagram, and suggest that such an approach might open up new…

  13. Adding Value to Force Diagrams: Representing Relative Force Magnitudes

    ERIC Educational Resources Information Center

    Wendel, Paul

    2011-01-01

    Nearly all physics instructors recognize the instructional value of force diagrams, and this journal has published several collections of exercises to improve student skill in this area. Yet some instructors worry that too few students perceive the conceptual and problem-solving utility of force diagrams, and over recent years a rich variety of…

  14. Argument Diagramming and Critical Thinking in Introductory Philosophy

    ERIC Educational Resources Information Center

    Harrell, Maralee

    2011-01-01

    In a multi-study naturalistic quasi-experiment involving 269 students in a semester-long introductory philosophy course, we investigated the effect of teaching argument diagramming (AD) on students' scores on argument analysis tasks. An argument diagram is a visual representation of the content and structure of an argument. In each study, all of…

  15. Using a Spreadsheet To Explore Melting, Dissolving and Phase Diagrams.

    ERIC Educational Resources Information Center

    Goodwin, Alan

    2002-01-01

    Compares phase diagrams relating to the solubilities and melting points of various substances in textbooks with those generated by a spreadsheet using data from the literature. Argues that differences between the diagrams give rise to new chemical insights. (Author/MM)

  16. Diagramming Word Problems: A Strategic Approach for Instruction

    ERIC Educational Resources Information Center

    van Garderen, Delinda; Scheuermann, Amy M.

    2015-01-01

    While often recommended as a strategy to use in order to solve word problems, drawing a diagram is a complex process that requires a good depth of understanding. Many middle school students with learning disabilities (LD) often struggle to use diagrams in an effective and efficient manner. This article presents information for teaching middle…

  17. Water, Water Everywhere: Phase Diagrams of Ordinary Water Substance

    ERIC Educational Resources Information Center

    Glasser, L.

    2004-01-01

    The full phase diagram of water in the form of a graphical representation of the three-dimensional (3D) PVT diagram using authentic data is presented. An interesting controversy regarding the phase behavior of water was the much-touted proposal of a solid phase of water, polywater, supposedly stable under atmospheric conditions.

  18. Diagram This Headline in One Minute, if You Can

    ERIC Educational Resources Information Center

    Landecker, Heidi

    2009-01-01

    Say "sentence diagramming" to people of a certain age, and one gets different reactions. Say it to most college students, and one gets a blank look. But not from the 24 students in Lucy Ferriss's "Constructing Thought," a half-credit course in the English department at Trinity College. They know how to diagram a sentence--and they are passionate…

  19. Symbol-and-Arrow Diagrams in Teaching Pharmacokinetics.

    ERIC Educational Resources Information Center

    Hayton, William L.

    1990-01-01

    Symbol-and-arrow diagrams are helpful adjuncts to equations derived from pharmacokinetic models. Both show relationships among dependent and independent variables. Diagrams show only qualitative relationships, but clearly show which variables are dependent and which are independent, helping students understand complex but important functional…
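
    As a concrete instance of the dependent/independent distinction the abstract emphasizes, the one-compartment IV-bolus model makes the concentration C(t) the dependent variable driven by the independent quantities dose, volume of distribution V, and elimination constant k (the numbers below are illustrative assumptions):

```python
import math

def concentration(t, dose, V, k):
    """One-compartment IV-bolus model: C(t) = (dose/V) * exp(-k*t).
    dose, V, and k are independent variables; C is the dependent variable."""
    return dose / V * math.exp(-k * t)

half_life = math.log(2) / 0.1                                    # k = 0.1 per hour
print(round(concentration(0.0, dose=500.0, V=50.0, k=0.1), 2))   # → 10.0
print(round(concentration(half_life, 500.0, 50.0, 0.1), 2))      # → 5.0
```

    A symbol-and-arrow diagram for this model would draw arrows from dose, V, and k into C, making the direction of dependence explicit where the bare equation leaves it implicit.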

  20. Phase diagram for inertial granular flows.

    PubMed

    DeGiuli, E; McElwaine, J N; Wyart, M

    2016-07-01

    Flows of hard granular materials depend strongly on the interparticle friction coefficient μ_{p} and on the inertial number I, which characterizes proximity to the jamming transition where flow stops. Guided by numerical simulations, we derive the phase diagram of dense inertial flow of spherical particles, finding three regimes for 10^{-4}≲I≲10^{-1}: frictionless, frictional sliding, and rolling. These are distinguished by the dominant means of energy dissipation, changing from collisional to sliding friction, and back to collisional, as μ_{p} increases from zero at constant I. The three regimes differ in their kinetics and rheology; in particular, the velocity fluctuations and the stress ratio both display nonmonotonic behavior with μ_{p}, corresponding to transitions between the three regimes of flow. We rationalize the phase boundaries between these regimes, show that energy balance yields scaling relations between microscopic properties in each of them, and derive the strain scale at which particles lose memory of their velocity. For the frictional sliding regime most relevant experimentally, we find for I≥10^{-2.5} that the growth of the macroscopic friction μ(I) with I is induced by an increase of collisional dissipation. This implies that in that range μ(I)-μ(0)∼I^{1-2b}, where b≈0.2 is an exponent that characterizes both the dimensionless velocity fluctuations L∼I^{-b} and the density of sliding contacts χ∼I^{b}. PMID:27575203

  1. Diagram of Cell to Cell Communication

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Diagram depicts the importance of cell-cell communication as central to the understanding of cancer growth and progression, the focus of the NASA bioreactor demonstration system (BDS-05) investigation. Microgravity studies will allow us to unravel the signaling and communication between these cells with the host and potential development of therapies for the treatment of cancer metastasis. The NASA Bioreactor provides a low turbulence culture environment which promotes the formation of large, three-dimensional cell clusters. Due to their high level of cellular organization and specialization, samples constructed in the bioreactor more closely resemble the original tumor or tissue found in the body. The Bioreactor is rotated to provide gentle mixing of fresh and spent nutrient without inducing shear forces that would damage the cells. The work is sponsored by NASA's Office of Biological and Physical Research. The bioreactor is managed by the Biotechnology Cell Science Program at NASA's Johnson Space Center (JSC). NASA-sponsored bioreactor research has been instrumental in helping scientists to better understand normal and cancerous tissue development. In cooperation with the medical community, the bioreactor design is being used to prepare better models of human colon, prostate, breast and ovarian tumors. Cartilage, bone marrow, heart muscle, skeletal muscle, pancreatic islet cells, liver and kidney are just a few of the normal tissues being cultured in rotating bioreactors by investigators. Credit: Emory University.

  2. Phase diagram for inertial granular flows

    NASA Astrophysics Data System (ADS)

    DeGiuli, E.; McElwaine, J. N.; Wyart, M.

    2016-07-01

    Flows of hard granular materials depend strongly on the interparticle friction coefficient μ_p and on the inertial number I, which characterizes proximity to the jamming transition where flow stops. Guided by numerical simulations, we derive the phase diagram of dense inertial flow of spherical particles, finding three regimes for 10^{-4} ≲ I ≲ 10^{-1}: frictionless, frictional sliding, and rolling. These are distinguished by the dominant means of energy dissipation, changing from collisional to sliding friction, and back to collisional, as μ_p increases from zero at constant I. The three regimes differ in their kinetics and rheology; in particular, the velocity fluctuations and the stress ratio both display nonmonotonic behavior with μ_p, corresponding to transitions between the three regimes of flow. We rationalize the phase boundaries between these regimes, show that energy balance yields scaling relations between microscopic properties in each of them, and derive the strain scale at which particles lose memory of their velocity. For the frictional sliding regime most relevant experimentally, we find for I ≥ 10^{-2.5} that the growth of the macroscopic friction μ(I) with I is induced by an increase of collisional dissipation. This implies that in that range μ(I) − μ(0) ∼ I^{1-2b}, where b ≈ 0.2 is an exponent that characterizes both the dimensionless velocity fluctuations L ∼ I^{-b} and the density of sliding contacts χ ∼ I^{b}.

  3. Calculation of Gallium-metal-Arsenic phase diagrams

    NASA Technical Reports Server (NTRS)

    Scofield, J. D.; Davison, J. E.; Ray, A. E.; Smith, S. R.

    1991-01-01

    Electrical contacts and metallization to GaAs solar cells must survive at high temperatures for several minutes under specific mission scenarios. Which metallizations or alloy systems are able to withstand extreme thermal excursions with minimum degradation to solar cell performance can be predicted from properly calculated temperature-constitution phase diagrams. A method for calculating a ternary diagram and its three constituent binary phase diagrams is briefly outlined, and ternary phase diagrams for Ga-As-X alloy systems are presented. Free energy functions of the liquid and solid phases are approximated by regular solution theory. Phase diagrams calculated using this method are presented for the Ga-As-Ge and Ga-As-Ag systems.
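
    The simplest limit of such a calculation is the ideal-solution liquidus: equating chemical potentials of the pure solid and the melt gives ln x_i = −(ΔH_f,i/R)(1/T − 1/T_m,i) for each component, and a eutectic point where the two liquidus branches satisfy x_A + x_B = 1. A sketch using round, assumed melting data (not the Ga-As-X values of the paper):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def x_liq(T, Tm, dHf):
    """Ideal-solution liquidus: mole fraction of a component in the melt
    in equilibrium with its pure solid at temperature T (T <= Tm)."""
    return math.exp(-dHf / R * (1.0 / T - 1.0 / Tm))

def eutectic(TmA, dHA, TmB, dHB):
    # eutectic temperature: x_A + x_B = 1; the sum rises with T, so bisect on T
    lo, hi = 1.0, min(TmA, TmB)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if x_liq(mid, TmA, dHA) + x_liq(mid, TmB, dHB) > 1.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# assumed round numbers: Tm in K, heat of fusion in J/mol
T_e = eutectic(TmA=933.0, dHA=10700.0, TmB=1235.0, dHB=11300.0)
print(round(T_e, 1))
```

    Replacing the ideal activities with regular-solution activity coefficients, as the abstract describes, changes x_liq but leaves this solve-for-intersection structure intact.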

  4. Particle–hole ring diagrams for fermions in two dimensions

    SciTech Connect

    Kaiser, N.

    2014-11-15

    The set of particle–hole ring diagrams for a many-fermion system in two dimensions is studied. The complex-valued polarization function is derived in detail and shown to be expressible in terms of square-root functions. For a contact interaction, the perturbative contributions to the energy per particle Ē(k_f) are calculated in closed analytical form from third up to twelfth order. The resummation of the particle–hole ring diagrams to all orders is studied and a pronounced dependence on the dimensionless coupling parameter α is found. There is a substantial difference between the complete ring sum with all exchange-type diagrams included and the standard resummation of the leading n-ring diagrams only. The spin factor S_n(g) associated with the nth-order ring diagrams is derived for arbitrary spin degeneracy g.

  5. VennMaster: Area-proportional Euler diagrams for functional GO analysis of microarrays

    PubMed Central

    Kestler, Hans A; Müller, André; Kraus, Johann M; Buchholz, Malte; Gress, Thomas M; Liu, Hongfang; Kane, David W; Zeeberg, Barry R; Weinstein, John N

    2008-01-01

    Background Microarray experiments generate vast amounts of data. The functional context of differentially expressed genes can be assessed by querying the Gene Ontology (GO) database via GoMiner. Directed acyclic graph representations, which are used to depict GO categories enriched with differentially expressed genes, are difficult to interpret and, depending on the particular analysis, may not be well suited for formulating new hypotheses. Additional graphical methods are therefore needed to augment the GO graphical representation. Results We present an alternative visualization approach, area-proportional Euler diagrams, showing set relationships with semi-quantitative size information in a single diagram to support biological hypothesis formulation. The cardinalities of sets and intersection sets are represented by area-proportional Euler diagrams and their corresponding graphical (circular or polygonal) intersection areas. Optimally proportional representations are obtained using swarm and evolutionary optimization algorithms. Conclusion VennMaster's area-proportional Euler diagrams effectively structure and visualize the results of a GO analysis by indicating to what extent flagged genes are shared by different categories. In addition to reducing the complexity of the output, the visualizations facilitate generation of novel hypotheses from the analysis of seemingly unrelated categories that share differentially expressed genes. PMID:18230172
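
    The core geometric step in an area-proportional two-set Euler diagram can be sketched directly: choose radii so each circle's area equals its set cardinality, then solve for the center distance whose lens (intersection) area equals |A ∩ B|. The set sizes below are illustrative assumptions, and this is a sketch of the principle, not VennMaster's polygonal optimizer:

```python
import math

def lens_area(d, r1, r2):
    """Intersection area of two circles with radii r1, r2 at center distance d."""
    if d >= r1 + r2:
        return 0.0                          # disjoint
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2   # smaller circle fully contained
    a1 = math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return r1 * r1 * a1 + r2 * r2 * a2 - tri

def two_set_layout(nA, nB, nAB):
    """Radii and center distance for an area-proportional two-set diagram."""
    r1 = math.sqrt(nA / math.pi)            # area(circle A) = |A|
    r2 = math.sqrt(nB / math.pi)
    lo, hi = abs(r1 - r2), r1 + r2
    for _ in range(100):                    # bisection: lens area = |A ∩ B|
        mid = 0.5 * (lo + hi)
        if lens_area(mid, r1, r2) > nAB:
            lo = mid                        # too much overlap -> move apart
        else:
            hi = mid
    return r1, r2, 0.5 * (lo + hi)

r1, r2, d = two_set_layout(nA=100.0, nB=60.0, nAB=25.0)
print(round(lens_area(d, r1, r2), 3))  # → 25.0
```

    For three or more sets no exact circle layout exists in general, which is why tools of this kind fall back to polygons and stochastic optimization, as the abstract describes.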

  6. The coupling of thermochemistry and phase diagrams for group III-V semiconductor systems. Final report

    SciTech Connect

    Anderson, T.J.

    1998-07-21

    The project was directed at linking the thermochemical properties of III-V compound semiconductor systems with the reported phase diagrams. The solid-liquid phase equilibrium problem was formulated, and three approaches to calculating the reduced standard state chemical potential were identified and values were calculated. In addition, thermochemical values for critical properties were measured using solid state electrochemical techniques. These values, along with the standard state chemical potentials and other available thermochemical and phase diagram data, were combined with a critical assessment of selected III-V systems. This work culminated in a comprehensive assessment of all the III-V binary systems. A novel aspect of the experimental part of this project was the demonstration of the use of a liquid encapsulate to measure component activities by a solid state emf technique in liquid III-V systems that exhibit high vapor pressures at the measurement temperature.

  7. The potential failure of Monte Nuovo at Ischia Island (Southern Italy): numerical assessment of a likely induced tsunami and its effects on a densely inhabited area

    NASA Astrophysics Data System (ADS)

    Zaniboni, F.; Pagnoni, G.; Tinti, S.; Della Seta, M.; Fredi, P.; Marotta, E.; Orsi, G.

    2013-11-01

    Ischia is the emergent top of a large volcanic complex that rises more than 1,000 m above the sea floor, at the north-western end of the Gulf of Naples. Caldera resurgence in the central part of the island has resulted in the formation of differentially displaced blocks, among which Mt. Epomeo (787 m a.s.l.) is the most uplifted. Deformation and slope instability have been recognised as common features induced by a block resurgence mechanism that causes uplift and favours gravitational loading and flank failure. The Monte Nuovo block, a topographic high on the north-western flank of Mt. Epomeo, has recently been interpreted as a block affected by deep-seated gravitational slope deformation. This block may undergo a catastrophic failure in the case of renewal of magmatic activity. This paper investigates the potential failure of the Monte Nuovo block as a rockslide-debris avalanche, the consequent tsunami generation and wave propagation, and discusses the catastrophic effects of such an event. The mobilization-prone volume has been estimated at about 160×10⁶ m³ and would move from a maximum elevation of 400 m a.s.l. The landslide itself would sweep away a densely populated territory as large as 3.5 km². The highest waves generated by the tsunami, on which this paper is mainly focussed, would hit the northern and western shores of Ischia. However, the high coast would prevent inundation and limit devastation to beaches, harbours and surrounding areas. Most of the tsunami energy would head towards the north-east, hitting the Campania coast. Severe inundation would affect an area of up to 20 km² around the mouth of the Volturno river, including the urban area of Castel Volturno. In contrast, less energy would travel towards the south, and the Gulf of Naples would be perturbed by long persisting waves of limited damaging potential.

  8. Effects of Three Diagram Instruction Methods on Transfer of Diagram Comprehension Skills: The Critical Role of Inference While Learning

    ERIC Educational Resources Information Center

    Cromley, Jennifer G.; Bergey, Bradley W.; Fitzhugh, Shannon; Newcombe, Nora; Wills, Theodore W.; Shipley, Thomas F.; Tanaka, Jacqueline C.

    2013-01-01

    Can students be taught to better comprehend the diagrams in their textbooks? Can such teaching transfer to uninstructed diagrams in the same domain or even in a new domain? What methods work best for these goals? Building on previous research showing positive results compared to control groups in both laboratory studies and short-term…

  9. The ruthenium-yttrium system: An experimental calorimetric study with a phase diagram optimization

    SciTech Connect

    Selhaoui, N.; Bouirden, L.; Charles, J.; Gachon, J.C.; Kleppa, O.J.

    1998-07-01

    After an experimental determination of the standard enthalpies of formation of Ru0.67Y0.33 and Ru0.286Y0.714, the Ru-Y system was numerically assessed with the help of the NANCYUN software to check the consistency between the experimental results and the phase diagram proposed in the literature.

  10. Analysis of sucker rod and sinkerbar failures

    SciTech Connect

    Waggoner, J.R.; Buchheit, R.G.

    1992-12-31

    This paper presents results from a study to analyze the performance and failures of the sucker rod/sinkerbar string used in beam-pumping operations through metallography, structural finite element analysis, and detailed failure data collection. Metallography demonstrated that the microstructure of the steel bar stock needs to be considered. Current specifications based on tensile strength, or yield strength, may not be appropriate, since failure occurs because of fatigue and not yielding. Finite element analysis of the threaded connection identifies stress and fatigue concentrations and quantitatively assesses the performance and failure of coupling designs under a variety of loading conditions. Subcritical fractures observed in the metallography are also suggested by the calculated stress distribution in the threaded coupling. The failure data illustrate both the magnitude and frequency of failures and categorize their suspected causes. This failure information alone can reduce failures by indicating specific problem areas. These results are expected to yield improved choice of metal bar stock, thread design, and make-up practices, which can reduce sucker rod failures.

  12. Plasma Glutamine Concentrations in Liver Failure

    PubMed Central

    Helling, Gunnel; Wahlin, Staffan; Smedberg, Marie; Pettersson, Linn; Tjäder, Inga; Norberg, Åke; Rooyackers, Olav; Wernerman, Jan

    2016-01-01

    Background: Higher than normal plasma glutamine concentration at admission to an intensive care unit is associated with an unfavorable outcome. Very high plasma glutamine levels are sometimes seen in both acute and chronic liver failure. We aimed to systematically explore the relation between different types of liver failure and plasma glutamine concentrations. Methods: Four different groups of patients were studied: chronic liver failure (n = 40), acute on chronic liver failure (n = 20), acute fulminant liver failure (n = 20), and post-hepatectomy liver failure (n = 20). Child-Pugh and Model for End-stage Liver Disease (MELD) scores were assessed as indices of liver function. All groups except the chronic liver failure group were followed longitudinally during hospitalisation. Outcomes were recorded up to 48 months after study inclusion. Results: All groups had individuals with very high plasma glutamine concentrations. In the total group of patients (n = 100), severity of liver failure correlated significantly with plasma glutamine concentration, but the correlation was not strong. Conclusion: Liver failure, regardless of severity and course of illness, may be associated with a high plasma glutamine concentration. Further studies are needed to understand whether high glutamine levels should be regarded as a biomarker or as a contributor to symptomatology in liver failure. PMID:26938452

  13. Oak Ridge National Laboratory Technology Logic Diagram. Volume 2, Technology Logic Diagram: Part B, Remedial Action

    SciTech Connect

    Not Available

    1993-09-01

    The Oak Ridge National Laboratory Technology Logic Diagram (TLD) was developed to provide a decision support tool that relates environmental restoration (ER) and waste management (WM) problems at Oak Ridge National Laboratory (ORNL) to potential technologies that can remediate these problems. The TLD identifies the research, development, demonstration, testing, and evaluation needed to develop these technologies to a state that allows technology transfer and application to decontamination and decommissioning (D&D), remedial action (RA), and WM activities. The TLD consists of three fundamentally separate volumes: Vol. 1 (Technology Evaluation), Vol. 2 (Technology Logic Diagram), and Vol. 3 (Technology Evaluation Data Sheets). Part A of Vols. 1 and 2 focuses on D&D. Part B of Vols. 1 and 2 focuses on the RA of contaminated facilities. Part C of Vols. 1 and 2 focuses on WM. Each part of Vol. 1 contains an overview of the TLD, an explanation of the program-specific responsibilities, a review of identified technologies, and the rankings of remedial technologies. Volume 2 (Pts. A, B, and C) contains the logic linkages among EM goals, environmental problems, and the various technologies that have the potential to solve these problems. Volume 3 (Pts. A, B, and C) contains the TLD data sheets. Remedial action is the focus of Vol. 2, Pt. B, which has been divided into the three necessary subelements of the RA: characterization, RA, and robotics and automation. Each of these sections addresses general ORNL problems, which are then broken down by problem area/constituents and linked to potential remedial technologies. The diagrams also contain summary information about a technology's status, its science and technology needs, and its implementation needs.

  14. The high-z quasar Hubble Diagram

    SciTech Connect

    Melia, Fulvio

    2014-01-01

    Two recent discoveries have made it possible for us to begin using high-z quasars as standard candles to construct a Hubble Diagram (HD) at z > 6. These are (1) the recognition from reverberation mapping that a relationship exists between the optical/UV luminosity and the distance of line-emitting gas from the central ionizing source. Thus, together with a measurement of the velocity of the line-emitting gas, e.g., via the width of BLR lines, such as Mg II, a single observation can therefore in principle provide a determination of the black hole's mass; and (2) the identification of quasar ULAS J1120+0641 at z = 7.085, which has significantly extended the redshift range of these sources, providing essential leverage when fitting theoretical luminosity distances to the data. In this paper, we use the observed fluxes and Mg II line-widths of these sources to show that one may reasonably test the predicted high-z distance versus redshift relationship, and we assemble a sample of 20 currently available high-z quasars for this exercise. We find a good match between theory and observations, suggesting that a more complete, high-quality survey may indeed eventually produce an HD to complement the highly-detailed study already underway (e.g., with Type Ia SNe, GRBs, and cosmic chronometers) at lower redshifts. With the modest sample we have here, we show that the R_h = ct Universe and ΛCDM both fit the data quite well, though the smaller number of free parameters in the former produces a more favorable outcome when we calculate likelihoods using the Akaike, Kullback, and Bayes Information Criteria. These three statistical tools result in similar probabilities, indicating that the R_h = ct Universe is more likely than ΛCDM to be correct, by a ratio of about 85% to 15%.
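
The model-comparison step described here, converting information-criterion values into relative model probabilities, follows a standard recipe. The sketch below uses the textbook Akaike weight construction; the parameter counts and likelihood values are hypothetical placeholders, not the paper's fitted numbers.

```python
import math

def aic(n_params, max_log_likelihood):
    """Akaike Information Criterion: AIC = 2k - 2 ln L_max."""
    return 2.0 * n_params - 2.0 * max_log_likelihood

def akaike_weights(aics):
    """Relative model probabilities from AIC differences:
    w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2),
    where Delta_i = AIC_i - min(AIC)."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical example: two models reach the same best-fit log-likelihood,
# but one has fewer free parameters (cf. R_h = ct vs. ΛCDM in the abstract);
# the simpler model then receives the larger Akaike weight.
aic_simple = aic(n_params=1, max_log_likelihood=-10.0)
aic_complex = aic(n_params=3, max_log_likelihood=-10.0)
w_simple, w_complex = akaike_weights([aic_simple, aic_complex])
```

With equal fit quality, the weight ratio is controlled entirely by the 2k parameter penalty, which is the mechanism behind the abstract's preference for the model with fewer free parameters.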

  15. Energetic studies and phase diagram of thioxanthene.

    PubMed

    Freitas, Vera L S; Monte, Manuel J S; Santos, Luís M N B F; Gomes, José R B; Ribeiro da Silva, Maria D M C

    2009-11-19

    The molecular stability of thioxanthene, a key species from which very important compounds with industrial relevance are derived, has been studied by a combination of several experimental techniques and computational approaches. The standard (p degrees = 0.1 MPa) molar enthalpy of formation of crystalline thioxanthene (117.4 +/- 4.1 kJ x mol(-1)) was determined from the experimental standard molar energy of combustion, in oxygen, measured by rotating-bomb combustion calorimetry at T = 298.15 K. The enthalpy of sublimation was determined by a direct method, using the vacuum drop microcalorimetric technique, and also by an indirect method, using a static apparatus, where the vapor pressures at different temperatures were measured. The latter technique was used for both crystalline and undercooled liquid samples, and the phase diagram of thioxanthene near the triple point was obtained (triple point coordinates T = 402.71 K and p = 144.7 Pa). From the two methods, a mean value for the standard (p degrees = 0.1 MPa) molar enthalpy of sublimation, at T = 298.15 K (101.3 +/- 0.8 kJ x mol(-1)), was derived. From the latter value and from the enthalpy of formation of the solid, the standard (p degrees = 0.1 MPa) enthalpy of formation of gaseous thioxanthene was calculated as 218.7 +/- 4.2 kJ x mol(-1). Standard ab initio molecular orbital calculations were performed using the G3(MP2)//B3LYP composite procedure and several homodesmotic reactions in order to derive the standard molar enthalpy of formation of thioxanthene. The ab initio results are in excellent agreement with the experimental data. PMID:19821598
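
The final value in this abstract follows from a simple Hess cycle: the gas-phase formation enthalpy is the crystal formation enthalpy plus the sublimation enthalpy. The sketch below checks the arithmetic, assuming (as is conventional) that the two uncertainties are independent and combine in quadrature.

```python
import math

# Values quoted in the abstract, in kJ/mol
dfH_cr, u_cr = 117.4, 4.1    # Δf H°(cr) from rotating-bomb combustion calorimetry
dsubH, u_sub = 101.3, 0.8    # Δsub H°, mean of the two sublimation methods

# Hess cycle: Δf H°(g) = Δf H°(cr) + Δsub H°
dfH_gas = dfH_cr + dsubH               # 218.7 kJ/mol, as reported

# Assumed independent uncertainties, combined in quadrature
u_gas = math.sqrt(u_cr**2 + u_sub**2)  # ≈ 4.2 kJ/mol, matching the quoted ±4.2
```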

  17. The Dy-Zn phase diagram

    NASA Astrophysics Data System (ADS)

    Saccone, A.; Cardinale, A. M.; Delfino, S.; Ferro, R.

    2003-03-01

    The dysprosium-zinc phase diagram has been investigated over its entire composition range by using differential thermal analysis (DTA), metallographic analysis, X-ray powder diffraction, and electron probe microanalysis (EPMA). Seven intermetallic phases have been found and their structures confirmed. DyZn, DyZn2, Dy13Zn58, and Dy2Zn17 melt congruently at 1095 °C, 1050 °C, 930 °C, and 930 °C, respectively. DyZn3, Dy3Zn11, and DyZn12 form through peritectic reactions at 895 °C, about 900 °C, and 685 °C, respectively. Four eutectic reactions occur at 850 °C and 30.0 at pct Zn (between (Dy) and DyZn), 990 °C and 60.0 at pct Zn (between DyZn and DyZn2), 885 °C and 76.0 at pct Zn (between DyZn3 and Dy3Zn11), and 875 °C and 85.0 at pct Zn (involving Dy13Zn58 and Dy2Zn17). The Dy-rich end presents a catatectic equilibrium; a degenerate invariant effect has been found in the Zn-rich region. The phase equilibria of the Dy-Zn alloys are discussed and compared with those of the other known RE-Zn systems (RE = rare earth metal) in view of the regular change in the relative stabilities of the phases across the lanthanide series.

  19. Dynamic phase diagram of soft nanocolloids.

    PubMed

    Gupta, Sudipta; Camargo, Manuel; Stellbrink, Jörg; Allgaier, Jürgen; Radulescu, Aurel; Lindner, Peter; Zaccarelli, Emanuela; Likos, Christos N; Richter, Dieter

    2015-09-01

    We present a comprehensive experimental and theoretical study covering micro-, meso- and macroscopic length and time scales, which enables us to establish a generalized view in terms of structure-property relationship and equilibrium dynamics of soft colloids. We introduce a new, tunable block copolymer model system, which allows us to vary the aggregation number, and consequently its softness, by changing the solvophobic-to-solvophilic block ratio (m : n) over two orders of magnitude. Based on a simple and general coarse-grained model of the colloidal interaction potential, we verify the significance of the interaction length σ_int governing both structural and dynamic properties. We put forward a quantitative comparison between theory and experiment without adjustable parameters, covering a broad range of experimental polymer volume fractions (0.001 ≤ ϕ ≤ 0.5) and regimes from ultra-soft star-like to hard sphere-like particles, that finally results in the dynamic phase diagram of soft colloids. In particular, we find throughout the concentration domain a strong correlation between mesoscopic diffusion and macroscopic viscosity, irrespective of softness, manifested in data collapse on master curves using the interaction length σ_int as the only relevant parameter. A clear reentrance in the glass transition at high aggregation numbers is found, recovering the predicted hard-sphere (HS) value in the hard-sphere like limit. Finally, the excellent agreement between our new experimental systems with different but already established model systems shows the relevance of block copolymer micelles as a versatile realization of soft colloids and the general validity of a coarse-grained approach for the description of the structure and dynamics of soft colloids. PMID:26219628

  20. Superconducting phase diagram of itinerant antiferromagnets

    NASA Astrophysics Data System (ADS)

    Rømer, A. T.; Eremin, I.; Hirschfeld, P. J.; Andersen, B. M.

    2016-05-01

    We study the phase diagram of the Hubbard model in the weak-coupling limit for coexisting spin-density-wave order and spin-fluctuation-mediated superconductivity. Both longitudinal and transverse spin fluctuations contribute significantly to the effective interaction potential, which creates Cooper pairs of the quasiparticles of the antiferromagnetic metallic state. We find a dominant d_{x²-y²}-wave solution in both electron- and hole-doped cases. In the quasi-spin-triplet channel, the longitudinal fluctuations give rise to an effective attraction supporting a p-wave gap, but are overcome by repulsive contributions from the transverse fluctuations which disfavor p-wave pairing compared to d_{x²-y²}. The subleading pair instability is found to be in the g-wave channel, but complex admixtures of d and g are not energetically favored since their nodal structures coincide. Inclusion of interband pairing, in which each fermion in the Cooper pair belongs to a different spin-density-wave band, is considered for a range of electron dopings in the regime of well-developed magnetic order. We demonstrate that these interband pairing gaps, which are nonzero in the magnetic state, must have the same parity under inversion as the normal intraband gaps. The self-consistent solution to the full system of five coupled gap equations gives intraband and interband pairing gaps of d_{x²-y²} structure and similar gap magnitude. In conclusion, the d_{x²-y²} gap dominates for both hole and electron doping inside the spin-density-wave phase.

  1. Contraceptive failure in China.

    PubMed

    Wang, Duolao

    2002-09-01

    This study examines patterns and differentials of contraceptive failure rates by method and characteristics of users, using the Chinese Two-per-Thousand Fertility Survey data. The results show that contraceptive failure rates for modern methods including sterilization are some of the highest in the world. The first year failure rates are 4.2% for male sterilization, 0.7% for female sterilization, 10.3% for IUD, 14.5% for pill, and 19.0% for condom. There are also some differentials in contraceptive failure rates by users' sociodemographic and fertility characteristics. Contraceptive failure rate declines with women's age for all reversible methods. Rural women have higher sterilization, IUD, and condom contraceptive failure rates than urban women. Women with two or more children have a higher failure rate for sterilization methods but have lower failure rates for other methods.

  2. In Support of Failure

    ERIC Educational Resources Information Center

    Carr, Allison

    2013-01-01

    In this essay, I propose a concerted effort to begin devising a theory and pedagogy of failure. I review the discourse of failure in Western culture as well as in composition pedagogy, ultimately suggesting that failure is not simply a judgement or indication of rank but is a relational, affect-bearing concept with tremendous relevance to…

  3. Ammonia tank failure

    SciTech Connect

    Sweat, M.E.

    1983-04-01

    An ammonia tank failure at Hawkeye Chemical of Clinton, Iowa is discussed. The tank was a double-wall, 27,000 metric-ton tank built in 1968 and commissioned in December 1969. The paper presented covers the cause of the failure, repair, and procedural changes made to prevent recurrence of the failure. (JMT)

  4. Penguin-like diagrams from the standard model

    NASA Astrophysics Data System (ADS)

    Ping, Chia Swee

    2015-04-01

    The Standard Model is highly successful in describing the interactions of leptons and quarks. There are, however, rare processes that involve higher order effects in electroweak interactions. One specific class of processes is the penguin-like diagram, which involves the neutral change of quark flavours accompanied by the emission of a gluon (gluon penguin), a photon (photon penguin), a gluon and a photon (gluon-photon penguin), a Z-boson (Z penguin), or a Higgs-boson (Higgs penguin). Such diagrams do not arise at the tree level in the Standard Model. They are, however, induced by one-loop effects. In this paper, we present an exact calculation of the penguin diagram vertices in the 't Hooft-Feynman gauge. Renormalization of the vertex is effected by a prescription by Chia and Chong which gives an expression for the counter term identical to that obtained by employing the Ward-Takahashi identity. The on-shell vertex functions for the penguin diagram vertices are obtained. The various penguin diagram vertex functions are related to one another via the Ward-Takahashi identity. From these, a set of relations is obtained connecting the vertex form factors of various penguin diagrams. Explicit expressions for the gluon-photon penguin vertex form factors are obtained, and their contributions to the flavor changing processes estimated.

  6. Automating the layout of network diagrams with specified visual organization

    SciTech Connect

    Kosak, C.; Marks, J.; Shieber, S.

    1994-03-01

    Network diagrams are a familiar graphic form that can express many different kinds of information. The problem of automating network-diagram layout has therefore received much attention. Previous research on network-diagram layout has focused on the problem of aesthetically optimal layout, using such criteria as the number of link crossings, the sum of all link lengths, and total diagram area. In this paper we propose a restatement of the network-diagram layout problem in which layout-aesthetic concerns are subordinated to perceptual-organization concerns. We present a notation for describing the visual organization of a network diagram. This notation is used in reformulating the layout task as a constrained-optimization problem in which constraints are derived from a visual-organization specification and optimality criteria are derived from layout-aesthetic considerations. Two new heuristic algorithms are presented for this version of the layout problem: one algorithm uses a rule-based strategy for computing a layout; the other is a massively parallel genetic algorithm. We demonstrate the capabilities of the two algorithms by testing them on a variety of network-diagram layout problems. 30 refs.
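
The layout-aesthetic optimization this abstract describes can be illustrated on a toy scale. The sketch below is not the paper's rule-based or genetic algorithm; it is a minimal hill-climbing baseline over one common aesthetic criterion (total squared edge length), with hypothetical function names, to show the "layout as constrained optimization" framing.

```python
import itertools
import random

def layout_cost(pos, edges):
    """Total squared Euclidean edge length -- one common layout-aesthetic criterion."""
    return sum((pos[a][0] - pos[b][0]) ** 2 + (pos[a][1] - pos[b][1]) ** 2
               for a, b in edges)

def greedy_layout(nodes, edges, grid, sweeps=20, seed=0):
    """Place each node in its own grid cell, then greedily swap node pairs
    as long as the cost keeps dropping (hill climbing, far simpler than the
    paper's rule-based strategy or massively parallel genetic algorithm)."""
    rng = random.Random(seed)
    cells = list(grid)
    rng.shuffle(cells)
    pos = dict(zip(nodes, cells))
    best = layout_cost(pos, edges)
    for _ in range(sweeps):
        improved = False
        for a, b in itertools.combinations(nodes, 2):
            pos[a], pos[b] = pos[b], pos[a]        # tentatively swap
            c = layout_cost(pos, edges)
            if c < best:
                best, improved = c, True           # keep the improvement
            else:
                pos[a], pos[b] = pos[b], pos[a]    # revert
        if not improved:
            break
    return pos, best

# Hypothetical 4-cycle placed on a 2x2 grid; the optimum puts cycle
# neighbours in adjacent cells, so every edge has squared length 1.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
grid = [(x, y) for x in range(2) for y in range(2)]
pos, cost = greedy_layout(nodes, edges, grid)
```

Visual-organization constraints of the kind the paper proposes (e.g. "these nodes must share a row") would enter as restrictions on which cells or swaps are admissible, turning the search into the constrained-optimization problem the abstract describes.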

  7. Node, Node-Link, and Node-Link-Group Diagrams: An Evaluation.

    PubMed

    Saket, Bahador; Simonetto, Paolo; Kobourov, Stephen; Börner, Katy

    2014-12-01

    Effectively showing the relationships between objects in a dataset is one of the main tasks in information visualization. Typically there is a well-defined notion of distance between pairs of objects, and traditional approaches such as principal component analysis or multi-dimensional scaling are used to place the objects as points in 2D space, so that similar objects are close to each other. In another typical setting, the dataset is visualized as a network graph, where related nodes are connected by links. More recently, datasets are also visualized as maps, where in addition to nodes and links, there is an explicit representation of groups and clusters. We consider these three techniques, characterized by a progressive increase of the amount of encoded information: node diagrams, node-link diagrams, and node-link-group diagrams. We assess these three types of diagrams with a controlled experiment that covers nine different tasks falling broadly in three categories: node-based tasks, network-based tasks, and group-based tasks. Our findings indicate that adding links, or links and group representations, does not negatively impact performance (time and accuracy) of node-based tasks. Similarly, adding group representations does not negatively impact the performance of network-based tasks. Node-link-group diagrams outperform the others on group-based tasks. These conclusions contradict results in other studies, in similar but subtly different settings. Taken together, however, such results can have significant implications for the design of standard and domain-specific visualization tools. PMID:26356937

  8. Z⁰ → ggg via the AVV triangle diagram

    SciTech Connect

    Lee, S.; Su, W.

    1988-07-01

    In the standard model, Z⁰ decays into three gluons via the VVVV box diagram and the AVV triangle diagram in lowest-order perturbation theory. We calculated the latter contribution and found it comparable to the former. We also found that the dominant contribution to Z⁰ → ggg via the AVV diagram comes from the longitudinal component of the Z⁰. This fact can be used to reduce the background from Z⁰ → qq̄g.

  9. [Medico-legal assessment of selected cases of perinatal complications resulting in death of the woman during childbirth. Medical error or therapeutic failure?].

    PubMed

    Chowaniec, Małgorzata; Chowaniec, Czesław; Jabłoński, Christian; Nowak, Agnieszka

    2005-01-01

    Medico-legal assessment of therapeutic management in cases of perinatal complications, especially those resulting in the death of the woman during childbirth, is usually very difficult. The authors investigated medical documentation, supported by the results of autopsies, of cases chosen from the casuistry of the Forensic Medicine Department, Medical University of Silesia, Katowice. Considering the limits of professional liability and legal responsibility of physicians, close attention was paid to standard therapeutic management and to increased risk in treatment, particularly with regard to typical health-related complications. The presented cases of deaths of women during childbirth contribute a further opinion to the broad discussion on medical errors, as well as an attempt to standardise the distinction between a medical error and a therapeutic failure occurring within the accepted risk of the undertaken treatment.

  10. Moving toward comprehensive acute heart failure risk assessment in the emergency department: the importance of self-care and shared decision making.

    PubMed

    Collins, Sean P; Storrow, Alan B

    2013-08-01

Nearly 700,000 emergency department (ED) visits were due to acute heart failure (AHF) in 2009. Most visits result in a hospital admission and account for the largest proportion of a projected $70 billion to be spent on heart failure care by 2030. ED-based risk prediction tools in AHF rarely impact disposition decision making. This is a major factor contributing to the 80% admission rate for ED patients with AHF, which has remained unchanged over the last several years. Self-care behaviors such as symptom monitoring, medication taking, dietary adherence, and exercise have been associated with decreased hospital readmissions, yet self-care remains largely unaddressed in ED patients with AHF and thus represents a significant lost opportunity to improve patient care and decrease ED visits and hospitalizations. Furthermore, shared decision making encourages collaborative interaction between patients, caregivers, and providers to drive a care path based on mutual agreement. The observation that “difficult decisions now will simplify difficult decisions later” has particular relevance to the ED, given this is the venue for many such issues. We hypothesize that patients as complex and heterogeneous as ED patients with AHF may need both an objective evaluation of physiologic risk and an evaluation of barriers to ideal self-care, along with strategies to overcome these barriers. Combining physician gestalt, physiologic risk prediction instruments, an evaluation of self-care, and an information exchange between patient and provider using shared decision making may provide the critical inertia necessary to discharge patients home after a brief ED evaluation. PMID:24159563

  11. Assessment of the Relationship between Galectin-3 and Ejection Fraction and Functional Capacity in the Patients with Compensated Systolic Heart Failure

    PubMed Central

    Atabakhshian, Roya; Kazerouni, Faranak; Raygan, Fariba; Amirrasouli, Hushang; Rahimipour, Ali; Shakeri, Nezhat

    2014-01-01

Background: Galectin-3 is a soluble β-galactoside-binding lectin released by activated cardiac macrophages. Galectin-3 has been proposed for diagnosis and prognosis of HF patients. Objectives: The present study aimed to investigate the relationship between galectin-3 as a biomarker and ejection fraction and functional capacity in patients with compensated systolic heart failure. Patients and Methods: In this study, serum levels of Galectin-3 were measured in 76 patients with compensated heart failure with New York Heart Association class I–IV and left ventricular ejection fraction < 45%. Galectin-3 was measured by an ELISA kit. Besides, echocardiography was used to evaluate left ventricular ejection fraction. Additionally, functional capacity was determined based on the patients’ ability to perform a set of activities. Finally, the data were analyzed using t-test, Kruskal-Wallis, one-way ANOVA, and chi-square tests. P < 0.05 was considered statistically significant. Results: The patients’ age ranged from 45 to 75 years, with a mean age of 63.85 ± 9 years. In addition, 57.9% of the patients were male. The results revealed no significant correlation between Galectin-3 and age, body mass index, and estimated glomerular filtration rate. Also, no significant correlation was observed between Galectin-3 levels and left ventricular ejection fraction (P = 0.166) and functional capacity (P = 0.420). Yet, a significant difference was found between males and females regarding the mean of Galectin-3 (P = 0.039). Conclusions: The study results suggested that Galectin-3 could not be used as a marker of disease progression in patients under treatment, which could probably be the result of medication use in these patients. PMID:25614856

  13. The failure of earthquake failure models

    USGS Publications Warehouse

    Gomberg, J.

    2001-01-01

In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate-state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.

  14. The neodymium-gold phase diagram

    SciTech Connect

    Saccone, A.; Maccio, D.; Delfino, S.; Ferro, R.

    1999-05-01

The Nd-Au phase diagram was studied in the 0 to 100 at. pct Au composition range by differential thermal analysis (DTA), X-ray diffraction (XRD), optical microscopy (LOM), scanning electron microscopy (SEM), and electron probe microanalysis (EPMA). Six intermetallic phases were identified, the crystallographic structures were determined or confirmed, and the melting behavior was determined, as follows: Nd₂Au, orthorhombic oP12-Co₂Si type, peritectic decomposition at 810 °C; NdAu, R.T. form, orthorhombic oP8-FeB type, H.T. forms, orthorhombic oC8-CrB type and, at a higher temperature, cubic cP2-CsCl type, melting point 1470 °C; Nd₃Au₄, trigonal hR42-Pu₃Pd₄ type, peritectic decomposition at 1250 °C; Nd₁₇Au₃₆, tetragonal tP106-Nd₁₇Au₃₆ type, melting point 1170 °C; Nd₁₄Au₅₁, hexagonal hP65-Gd₁₄Ag₅₁ type, melting point 1210 °C; and NdAu₆, monoclinic mC28-PrAu₆ type, peritectic decomposition at 875 °C. Four eutectic reactions were found, respectively, at 19.0 at. pct Au and 655 °C, at 63.0 at. pct Au and 1080 °C, at 72.0 at. pct Au and 1050 °C, and, finally, at 91.0 at. pct Au and 795 °C. A catatectic decomposition of the (βNd) phase, at 825 °C and ~1 at. pct Au, was also found. The results are briefly discussed and compared to those for the other rare earth-gold (R-Au) systems. A short discussion of the general alloying behavior of the coinage metals (Cu, Ag, and Au) with the rare-earth metals is finally presented.

  15. Cosmological test with the QSO Hubble diagram

    NASA Astrophysics Data System (ADS)

    López-Corredoira, M.; Melia, F.; Lusso, E.; Risaliti, G.

    2016-03-01

A Hubble diagram (HD) has recently been constructed in the redshift range 0 ≲ z ≲ 6.5 using a nonlinear relation between the ultraviolet (UV) and X-ray luminosities of quasi-stellar objects (QSOs). The Type Ia Supernovae (SN) HD has already provided a high-precision test of cosmological models, but the fact that the QSO distribution extends well beyond the supernova range (z ≲ 1.8) in principle provides us with an important complementary diagnostic whose significantly greater leverage in z can impose tighter constraints on the distance versus redshift relationship. In this paper, we therefore perform an independent test of nine different cosmological models, among which six are expanding, while three are static. Many of these are disfavored by other kinds of observations (including the aforementioned Type Ia SNe). We wish to examine whether the QSO HD confirms or rejects these earlier conclusions. We find that four of these models (Einstein-de Sitter, the Milne universe, the static universe with simple tired light and the static universe with plasma tired light) are excluded at the > 99% C.L. The quasi-steady state model is excluded at > 95% C.L. The remaining four models (ΛCDM/wCDM, the R_h = ct universe, the Friedmann open universe and a static universe with a linear Hubble law) all pass the test. However, only ΛCDM/wCDM and R_h = ct also pass the Alcock-Paczyński (AP) test. The optimized parameters in ΛCDM/wCDM are Ωm = 0.20 (+0.24/−0.20) and wde = −1.2 (+1.6/−∞) (the dark energy equation of state). Combined with the AP test, these values become Ωm = 0.38 (+0.20/−0.19) and wde = −0.28 (+0.52/−0.40). But whereas this optimization of parameters in ΛCDM/wCDM creates some tension with their concordance values, the R_h = ct universe has the advantage of fitting the QSO and AP data without any free parameters.
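    Tests of this kind compare each model's predicted luminosity distance against the HD. A minimal sketch of how two of the models above diverge in their predictions, assuming illustrative parameters (H0 = 70 km/s/Mpc, Ωm = 0.3 for flat ΛCDM) and not reproducing the paper's fitting procedure:

    ```python
    import math

    C_KM_S = 299792.458   # speed of light, km/s
    H0 = 70.0             # Hubble constant, km/s/Mpc (illustrative)
    OMEGA_M = 0.3         # matter density in flat LCDM (illustrative)

    def dl_lcdm(z, n=2000):
        """Luminosity distance [Mpc] in flat LCDM, trapezoidal integration
        of the comoving distance integral D_C = (c/H0) * int dz'/E(z')."""
        def inv_e(zp):
            return 1.0 / math.sqrt(OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M))
        h = z / n
        s = 0.5 * (inv_e(0.0) + inv_e(z)) + sum(inv_e(i * h) for i in range(1, n))
        return (1 + z) * (C_KM_S / H0) * s * h

    def dl_milne(z):
        """Luminosity distance [Mpc] in the empty (Milne) universe:
        D_L = (c/2H0) * (1+z) * [(1+z) - 1/(1+z)], in closed form."""
        return (C_KM_S / (2 * H0)) * (1 + z) * ((1 + z) - 1.0 / (1 + z))

    for z in (0.1, 1.0, 6.0):
        print(f"z={z}: LCDM {dl_lcdm(z):.0f} Mpc, Milne {dl_milne(z):.0f} Mpc")
    ```

    The two predictions agree at low redshift (both reduce to cz/H0) and separate increasingly at the high redshifts the QSO sample reaches, which is why the extra leverage in z tightens the constraints.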

  16. Ferrian Ilmenites: Investigating the Magnetic Phase Diagram

    NASA Astrophysics Data System (ADS)

    Lagroix, F.

    2007-12-01

    The main objective of this study is to investigate the magnetic phase changes within the hematite-ilmenite solid solution, yFeTiO3·(1-y)·Fe2O3. Two sets of synthetic ferrian ilmenites of y-values equal to 0.7, 0.8, 0.9, and 1.0 were available for this study. As currently drawn, the magnetic phase diagram, proposed by Ishikawa et al. [1985, J. Phys. Soc. Jpn. v.54, 312-325], predicts for increasing y values (0.5

  17. Quantitative Integration of Ndt with Probabilistic Fracture Mechanics for the Assessment of Fracture Risk in Pipelines

    NASA Astrophysics Data System (ADS)

    Kurz, J. H.; Cioclov, D.; Dobmann, G.; Boller, C.

    2010-02-01

In the context of the probabilistic paradigm of fracture risk assessment in structural components, a computer simulation rationale is presented that is based on the integration of quantitative non-destructive inspection and probabilistic fracture mechanics. In this study, failure under static loading is assessed in the format known as the Failure Assessment Diagram (FAD). The fracture risk is evaluated in probabilistic terms. The probabilistic pattern superposed over the deterministic one is implemented via Monte Carlo sampling. The probabilistic fracture simulation yields a more informative analysis in terms of probability of failure. The ability to simulate the influence of the quality and reliability of non-destructive inspection (NDI) is an important feature of this approach. It is achieved by algorithmically integrating probabilistic FAD analysis and the Probability of Detection (POD). The POD information can only be applied in a probabilistic analysis and leads to a refinement of the assessment. By this means, the decrease in probability of failure when POD-characterized NDI is applied can be quantified. Therefore, this procedure can be used as a tool for inspection-based lifetime concepts. In this paper, results of sensitivity analyses are presented with the aim of outlining, in terms of non-failure probabilities, the benefits of applying NDI of various qualities in comparison with the situation when NDI is lacking. This enables better substantiation of both component reliability management and the cost-effectiveness of NDI timing.
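    The Monte Carlo FAD rationale can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the standard R6/BS 7910 Option 1 assessment curve, illustrative lognormal scatter for the stress and toughness ratios, and a simplified cut-off at Lr = 1.0; a real analysis would also fold in POD-weighted flaw sizes.

    ```python
    import math
    import random

    def fad_option1(lr):
        """R6/BS 7910 Option 1 failure assessment curve Kr = f(Lr).
        Points with Kr above the curve (or Lr beyond the cut-off) fail."""
        if lr > 1.0:  # illustrative plastic-collapse cut-off
            return 0.0
        return (1 - 0.14 * lr ** 2) * (0.3 + 0.7 * math.exp(-0.65 * lr ** 6))

    def prob_of_failure(n=100_000, seed=1):
        """Monte Carlo estimate of P(assessment point lies outside the FAD)."""
        rng = random.Random(seed)
        failures = 0
        for _ in range(n):
            # Illustrative lognormal scatter in the two assessment ratios
            kr = rng.lognormvariate(math.log(0.5), 0.2)   # K_I / K_mat
            lr = rng.lognormvariate(math.log(0.6), 0.15)  # sigma_ref / sigma_y
            if kr > fad_option1(lr):
                failures += 1
        return failures / n

    print(f"estimated probability of failure: {prob_of_failure():.4%}")
    ```

    Integrating POD would, per sample, weight each flaw by its probability of remaining undetected after inspection, which is what drives the reduction in failure probability the paper quantifies.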

  18. 24 CFR 902.62 - Failure to submit data.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false Failure to submit data. 902.62... DEVELOPMENT PUBLIC HOUSING ASSESSMENT SYSTEM PHAS Scoring § 902.62 Failure to submit data. (a) Failure to submit data by due date. (1) If a PHA without a finding of good cause by HUD does not submit its...

  19. 24 CFR 902.62 - Failure to submit data.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false Failure to submit data. 902.62... DEVELOPMENT PUBLIC HOUSING ASSESSMENT SYSTEM PHAS Scoring § 902.62 Failure to submit data. (a) Failure to submit data by due date. (1) If a PHA without a finding of good cause by HUD does not submit its...

  20. 24 CFR 902.62 - Failure to submit data.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false Failure to submit data. 902.62... DEVELOPMENT PUBLIC HOUSING ASSESSMENT SYSTEM PHAS Scoring § 902.62 Failure to submit data. (a) Failure to submit data by due date. (1) If a PHA without a finding of good cause by HUD does not submit its...

  1. 24 CFR 902.62 - Failure to submit data.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Failure to submit data. 902.62... DEVELOPMENT PUBLIC HOUSING ASSESSMENT SYSTEM PHAS Scoring § 902.62 Failure to submit data. (a) Failure to submit data by due date. (1) If a PHA without a finding of good cause by HUD does not submit its...

  2. Modeling Yin-Yang balance in tai-chi diagram with a melting-freezing rotating device part 3 — The contemporary tai-chi diagram, the yuan-chi diagram and the Fu Xi's eight trigrams

    NASA Astrophysics Data System (ADS)

    Lin, Sui; Chen, Tzu-Fang

    2002-11-01

The physical model describing the Yin-Yang balance in the tai-chi diagram via the melting and freezing processes in a rotating device, presented in parts 1 and 2, is further developed for the contemporary tai-chi diagram and the yuan-chi diagram. The contemporary tai-chi diagram shown in Fig. 1 is a simplified form of the ancient tai-chi diagram presented in Reference [2]. Two semi-circles form the interface curve between the yin and yang in the contemporary tai-chi diagram. Knowing the location of the interface between the yin and yang in the contemporary tai-chi diagram, the requirement for the simulation model is to find the condition that matches the interface location. The simplification changes not only the structure but also the physical insight of the ancient tai-chi diagram, which will be described in the present study. The yuan-chi diagram shown in Fig. 2 is the combination of Master Chen’s tai-chi diagram presented in References [1,2] and the contemporary tai-chi diagram. The formulation of the yuan-chi diagram is similar to that of the contemporary tai-chi diagram. Fu Xi’s eight trigrams present three levels of yin-yang relation that follow naturally from the contemporary tai-chi diagram, which will be described in the last part of this study.

  3. The Fermion Representation of Quantum Toroidal Algebra on 3D Young Diagrams

    NASA Astrophysics Data System (ADS)

    Cai, Li-Qiang; Wang, Li-Fang; Wu, Ke; Yang, Jie

    2014-07-01

    We develop an equivalence between the diagonal slices and the perpendicular slices of 3D Young diagrams via Maya diagrams. Furthermore, we construct the fermion representation of quantum toroidal algebra on the 3D Young diagrams perpendicularly sliced.

  4. 37. PHOTOGRAPHY OF ORIGINAL PLAN (MINNEAPOLIS CITY ENGINEER) GENERAL DIAGRAM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    37. PHOTOGRAPHY OF ORIGINAL PLAN (MINNEAPOLIS CITY ENGINEER) GENERAL DIAGRAM OF STEEL ARCH BRIDGE (4 x 5 negative) - Steel Arch Bridge, Hennepin Avenue spanning west channel of Mississippi River, Minneapolis, Hennepin County, MN

  5. Flame Deflector Section, Elevation, Water Supply Flow Diagram, Exploded ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Flame Deflector - Section, Elevation, Water Supply Flow Diagram, Exploded Deflector Manifolds, and Interior Perspective - Marshall Space Flight Center, F-1 Engine Static Test Stand, On Route 565 between Huntsville and Decatur, Huntsville, Madison County, AL

  6. Influence diagrams as oil spill decision science tools

    EPA Science Inventory

    Making inferences on risks to ecosystem services (ES) from ecological crises can be more reliably handled using decision science tools. Influence diagrams (IDs) are probabilistic networks that explicitly represent the decisions related to a problem and evidence of their influence...

  7. Exercises in Drawing and Utilizing Free-Body Diagrams.

    ERIC Educational Resources Information Center

    Fisher, Kurt

    1999-01-01

    Finds that students taking algebra-based introductory physics have difficulty with one- and two-body problems in particle mechanics. Provides graded exercises for drawing and utilizing free-body diagrams. (CCM)

  8. 129. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LIGHTING DIAGRAM. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

129. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LIGHTING DIAGRAM. Sheet 10 of 11 (#3283) - Huntington Beach Municipal Pier, Pacific Coast Highway at Main Street, Huntington Beach, Orange County, CA

  9. 30 CFR 256.8 - Leasing maps and diagrams.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... INTERIOR OFFSHORE LEASING OF SULPHUR OR OIL AND GAS IN THE OUTER CONTINENTAL SHELF Outer Continental Shelf Oil, Gas, and Sulphur Management, General § 256.8 Leasing maps and diagrams. (a) Any area of the...

  10. Penguin diagram dominance in radiative weak decays of bottom baryons

    SciTech Connect

    Kohara, Yoji

    2005-05-01

    Radiative weak decays of antitriplet bottom baryons are studied under the assumption of penguin diagram dominance and flavor-SU(3) (or SU(2)) symmetry. Relations among decay rates of various decay modes are derived.

  11. Heuristic Diagrams as a Tool to Teach History of Science

    NASA Astrophysics Data System (ADS)

    Chamizo, José A.

    2012-05-01

The graphic organizer called here the heuristic diagram, an improvement of Gowin's Vee heuristic, is proposed as a tool to teach the history of science. Heuristic diagrams have the purpose of helping students (or teachers, or researchers) to understand their own research, considering that questions and problem-solving are central to scientific activity. The left side, originally related in Gowin's Vee to philosophies, theories, models, laws or regularities, now agrees with Toulmin's concepts (language, models as representation techniques and application procedures). Mexican science teachers without experience in science education research used the heuristic diagram to learn about the history of chemistry, considering also on the left side two different historical times: past and present. Through a semantic differential scale, teachers' attitudes toward the heuristic diagram were evaluated and its usefulness was demonstrated.

  12. 22. Power plant engine pipingcompressed air piping diagram and sections, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    22. Power plant engine piping-compressed air piping diagram and sections, sheet 81 of 130 - Naval Air Station Fallon, Power Plant, 800 Complex, off Carson Road near intersection of Pasture & Berney Roads, Fallon, Churchill County, NV

  13. 21. Power plant engine fuel oil piping diagrams, sheet 83 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. Power plant engine fuel oil piping diagrams, sheet 83 of 130 - Naval Air Station Fallon, Power Plant, 800 Complex, off Carson Road near intersection of Pasture & Berney Roads, Fallon, Churchill County, NV

  14. 8. Photocopy of top half of an 1855 organizational diagram ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. Photocopy of top half of an 1855 organizational diagram of the New York and Erie Railroad. Original in the collections of the Library of Congress. - Erie Railway, New Jersey, New York, Pennsylvania, Deposit, Broome County, NY

  15. 9. Photocopy of bottom half of an 1855 organizational diagram ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Photocopy of bottom half of an 1855 organizational diagram of the New York and Erie Railroad. Original in the collections of the Library of Congress. - Erie Railway, New Jersey, New York, Pennsylvania, Deposit, Broome County, NY

  16. A spinor technique in symbolic Feynman diagram calculation

    SciTech Connect

    A. Pang; C. Ji

    1994-02-17

The authors present a recursive diagrammatic method for evaluating tree-level Feynman diagrams involving multiple fermions which interact through gauge bosons (gluons or photons). Based on this method, a package called COMPUTE, which can generate and calculate all the possible Feynman diagrams for exclusive processes in perturbative QCD, has been developed (available in both Mathematica and Maple). As an example, a calculation of the nucleon Compton scattering amplitude is given.

  17. Latest Results Found with Ring-Diagram Analysis

    NASA Astrophysics Data System (ADS)

    Baldner, C. S.; Basu, S.; Bogart, R. S.; Burtseva, O.; González Hernández, I.; Haber, D.; Hill, F.; Howe, R.; Jain, K.; Komm, R. W.; Rabello-Soares, M. C.; Tripathy, S.

    2013-10-01

    Ring-diagram analysis is a helioseismic tool useful for studying the near-surface layers of the Sun. It has been employed to study near-surface shear, meridional circulation, flows around sunspots, and thermal structure beneath active regions. We review recent results obtained using ring-diagram analysis, state some of the more important outstanding difficulties in the technique, and point out several extensions to the technique that are just now beginning to bear fruit.

  18. Analysing Vee Diagram Reflections to Explore Pre-Service Science Teachers' Understanding the Nature of Science in Biology

    ERIC Educational Resources Information Center

    Savran-Gencer, Ayse

    2014-01-01

Vee diagrams have been a metacognitive tool to help in learning the nature and structure of knowledge by reflecting on the scientific process and making knowledge much more explicit to learners during practical work. This study aimed to assess pre-service science teachers' understanding of some aspects of the nature of science (NOS) by analyzing their reflections…

  19. Laboratory assessment of anti-thrombotic therapy in heart failure, atrial fibrillation and coronary artery disease: insights using thrombelastography and a micro-titre plate assay of thrombogenesis and fibrinolysis.

    PubMed

    Lau, Y C; Xiong, Q; Ranjit, P; Lip, G Y H; Blann, A D

    2016-08-01

    As heart failure, coronary artery disease and atrial fibrillation all bring a risk of thrombosis, anti-thrombotic therapy is recommended. Despite such treatment, major cardiovascular events such as myocardial infarction and stroke still occur, implying inadequate suppression of thrombus formation. Accordingly, identification of patients whose haemostasis remains unimpaired by treatment is valuable. We compared indices for assessing thrombogenesis and fibrinolysis by two different techniques in patients on different anti-thrombotic agents, i.e. aspirin or warfarin. We determined fibrin clot formation and fibrinolysis by a microplate assay and thromboelastography, and platelet marker soluble P selectin in 181 patients with acute or chronic heart failure, coronary artery disease who were taking either aspirin or warfarin. Five thromboelastograph indices and four microplate assay indices were different on aspirin versus warfarin (p < 0.05). In multivariate regression analysis, only microplate assay indices rate of clot formation and rate of clot dissolution were independently related to aspirin or warfarin use (p ≤ 0.001). Five microplate assay indices, but no thrombelastograph index, were different (p < 0.001) in aspirin users. Three microplate assay indices were different (p ≤ 0.002) in warfarin users. The microplate assay indices of lag time and rate of clot formation were abnormal in chronic heart failure patients on aspirin, suggesting increased risk of thrombosis despite anti-platelet use. Soluble P selectin was lower in patients on aspirin (p = 0.0175) but failed to correlate with any other index of haemostasis. The microplate assay shows promise as a tool for dissecting thrombogenesis and fibrinolysis in cardiovascular disease, and the impact of antithrombotic therapy. Prospective studies are required to determine a role in predicting thrombotic risk. PMID:26942726

  20. Endorectal MRI assessment of local relapse after surgery for prostate cancer: A model to define treatment field guidelines for adjuvant radiotherapy in patients at high risk for local failure

    SciTech Connect

    Miralbell, Raymond . E-mail: Raymond.Miralbell@hcuge.ch; Vees, Hansjoerg; Lozano, Joan; Khan, Haleem; Molla, Meritxell; Hidalgo, Alberto; Linero, Dolors; Rouzaud, Michel

    2007-02-01

Purpose: To assess the role of endorectal magnetic resonance imaging (MRI) in defining local relapse after radical prostatectomy for prostate cancer to help to reassess the clinical target volume (CTV) for adjuvant postprostatectomy radiotherapy. Methods and Materials: Sixty patients undergoing an endorectal MRI before salvage radiotherapy were selected. Spatial coordinates of the relapses were assessed using two reference points: the inferior border of the pubic symphysis (point 1) and the urethro-vesical anastomosis (point 2). Every lesion on MRI was delineated on the planning computed tomography and center of mass coordinates were plotted in two separate diagrams (along the x, y, and z axes) with the urethro-vesical anastomosis as the coordinate origin. An 'ideal' CTV was constructed, centered at a point defined by the mathematical means of each of the three coordinates with dimensions defined as twice 2 standard deviations in each of the three axes. The dosimetric impact of the new CTV definition was evaluated in six adjuvantly treated patients. Results: The ideal CTV center of mass was located at coordinates 0 (x), -5 (y), and -3 (z) mm with SDs of 6 (x), 6 (y), and 9 (z) mm, respectively. The CTV size was 24 (x) × 24 (y) × 36 (z) mm. Significant rectal sparing was observed with the new CTV. Conclusions: A CTV with an approximately cylindrical shape (~4 × 3 cm) centered 5 mm posterior and 3 mm inferior to the urethro-vesical anastomosis was defined. Such a CTV may reduce the irradiation of normal nontarget tissue in the pelvis, potentially improving treatment tolerance.

  1. Students' Learning Activities While Studying Biological Process Diagrams

    NASA Astrophysics Data System (ADS)

    Kragten, Marco; Admiraal, Wilfried; Rijlaarsdam, Gert

    2015-08-01

    Process diagrams describe how a system functions (e.g. photosynthesis) and are an important type of representation in Biology education. In the present study, we examined students' learning activities while studying process diagrams, related to their resulting comprehension of these diagrams. Each student completed three learning tasks. Verbal data and eye-tracking data were collected as indications of students' learning activities. For the verbal data, we applied a fine-grained coding scheme to optimally describe students' learning activities. For the eye-tracking data, we used fixation time and transitions between areas of interest in the process diagrams as indices of learning activities. Various learning activities while studying process diagrams were found that distinguished between more and less successful students. Results showed that between-student variance in comprehension score was highly predicted by meaning making of the process arrows (80%) and fixation time in the main area (65%). Students employed successful learning activities consistently across learning tasks. Furthermore, compared to unsuccessful students, successful students used a more coherent approach of interrelated learning activities for comprehending process diagrams.

  2. Voronoi diagram and spatial clustering in the presence of obstacles

    NASA Astrophysics Data System (ADS)

    Wang, Zuocheng; Xue, Lixia; Li, Yongshu; Wang, Linlin; Zhang, Xuewang

    2005-11-01

Clustering in spatial data mining is to group similar objects based on their distance, connectivity, or their relative density in space. Clustering algorithms typically use the Euclidean distance. In the real world, however, there exist many physical obstacles such as rivers, lakes and highways, and their presence may affect the result of clustering substantially. In this paper, we study the problem of clustering in the presence of obstacles and propose spatial clustering by Voronoi distance in the Voronoi diagram (Thiessen polygon). The Voronoi diagram has a lateral spatial adjacency character; based on it, we can express the lateral spatial adjacency relation conveniently and solve the problems arising in spatial clustering in the presence of obstacles. The method has three steps. First, build the Voronoi diagram in the presence of obstacles. Second, define the Voronoi distance: given two spatial objects Pi and Pj, the Voronoi distance is defined as the minimum number of object Voronoi regions between Pi and Pj in the Voronoi diagram. Third, we propose the Following-Obstacle-Algorithm (FOA). FOA includes three steps: the initializing step, the querying step and the pruning step. By FOA, we can obtain the Voronoi distance between any two objects. With the Voronoi diagram and the FOA, spatial clustering in the presence of obstacles can be accomplished conveniently and more precisely. We conduct various performance studies to show that the method is both efficient and effective.
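    Under one natural reading, the Voronoi distance reduces to a shortest-path count over the region-adjacency graph of the diagram. A minimal sketch of that reading, assuming the adjacency graph is given as input (e.g. precomputed from the obstacle-aware Voronoi diagram); this is not the paper's FOA implementation:

    ```python
    from collections import deque

    def voronoi_distance(adjacency, source, target):
        """Breadth-first search over Voronoi region adjacency: the distance
        between two objects is the minimum number of region boundaries
        crossed between their generator regions. Returns -1 if unreachable
        (obstacles may disconnect the adjacency graph)."""
        if source == target:
            return 0
        seen = {source}
        queue = deque([(source, 0)])
        while queue:
            region, dist = queue.popleft()
            for nbr in adjacency.get(region, ()):
                if nbr == target:
                    return dist + 1
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, dist + 1))
        return -1

    # Hypothetical 5-region diagram; an obstacle removes the (1, 3) adjacency,
    # so objects 1 and 5 are farther apart than plain Euclidean distance suggests.
    adj = {1: [2], 2: [1, 3, 4], 3: [2, 5], 4: [2], 5: [3]}
    print(voronoi_distance(adj, 1, 5))
    ```

    A clustering algorithm can then substitute this obstacle-aware distance for the Euclidean distance when grouping objects.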

  3. “Not the ‘Grim Reaper Service’”: An Assessment of Provider Knowledge, Attitudes, and Perceptions Regarding Palliative Care Referral Barriers in Heart Failure

    PubMed Central

    Kavalieratos, Dio; Mitchell, Emma M.; Carey, Timothy S.; Dev, Sandesh; Biddle, Andrea K.; Reeve, Bryce B.; Abernethy, Amy P.; Weinberger, Morris

    2014-01-01

    Background Although similar to cancer patients regarding symptom burden and prognosis, patients with heart failure (HF) tend to receive palliative care far less frequently. We sought to explore factors perceived by cardiology, primary care, and palliative care providers to impede palliative care referral for HF patients. Methods and Results We conducted semistructured interviews regarding (1) perceived needs of patients with advanced HF; (2) knowledge, attitudes, and experiences with specialist palliative care; (3) perceived indications for and optimal timing of palliative care referral in HF; and (4) perceived barriers to palliative care referral. Two investigators analyzed data using template analysis, a qualitative technique. We interviewed 18 physician, nurse practitioner, and physician assistant providers from 3 specialties: cardiology, primary care, and palliative care. Providers had limited knowledge regarding what palliative care is, and how it can complement traditional HF therapy to decrease HF‐related suffering. Interviews identified several potential barriers: the unpredictable course of HF; lack of clear referral triggers across the HF trajectory; and ambiguity regarding what differentiates standard HF therapy from palliative care. Nevertheless, providers expressed interest in integrating palliative care into traditional HF care, but were unsure of how to initiate collaboration. Conclusions Palliative care referral for HF patients may be suboptimal due to limited provider knowledge and misperceptions of palliative care as a service reserved for those near death. These factors represent potentially modifiable targets for provider education, which may help to improve palliative care referral for HF patients with unresolved disease‐related burden. PMID:24385453

  4. The development of a risk of failure evaluation tool for small dams in Mzingwane Catchment, Zimbabwe

    NASA Astrophysics Data System (ADS)

    Mufute, N. L.; Senzanje, A.; Kaseke, E.

    Small dams in Mzingwane Catchment in southern Zimbabwe are mostly in poor physical condition, mainly due to a lack of resources for repair and maintenance. Most of these dams are likely to fail, thereby adversely affecting water availability and livelihoods in the area. To assist those involved in the maintenance, repair and rehabilitation of small dams in resource-poor and data-sparse areas such as Mzingwane Catchment, a non-probabilistic but numerical risk-of-failure evaluation tool was developed. The tool helps to systematically and objectively classify the risk of failure of small dams, and hence assists in ranking the dams to prioritise and attend to first. This is important where resources are limited. The tool makes use of factors such as seepage, erosion and others that are traditionally used to assess the condition of dams. In developing the tool, the physical condition of 44 dams (1 medium-sized and 43 small) was assessed, and the factors were identified and listed according to guidelines for the design and maintenance of small dams. The description of the extent to which the factors affect the physical condition of small dams was then standardised, guided mainly by standards-based and risk-based approaches to dam safety evaluation. Cause-effect diagrams were used to determine the stage at which each factor contributes to dam failure. Weights were then allocated to each factor depending on its stage or level in the process of causing dam failure. Scores were allocated to each factor based on its description and weight. Small-dams design and maintenance guidelines were also used to guide the ranking and weighting of the factors. The tool was used to classify 10 dams. The risk of failure was low for one dam, moderate for one, high for four and very high for four dams, two of which had already failed. It was concluded that the tool could be used to rank the risk of failure of small dams in semi-arid areas. The tool needs to be

  5. Tensile failure criteria for fiber composite materials

    NASA Technical Reports Server (NTRS)

    Rosen, B. W.; Zweben, C. H.

    1972-01-01

    The analysis provides insight into the failure mechanics of these materials and defines criteria which serve as tools for preliminary design material selection and for material reliability assessment. The model incorporates both dispersed and propagation type failures and includes the influence of material heterogeneity. The important effects of localized matrix damage and post-failure matrix shear stress transfer are included in the treatment. The model is used to evaluate the influence of key parameters on the failure of several commonly used fiber-matrix systems. Analyses of three possible failure modes were developed. These modes are the fiber break propagation mode, the cumulative group fracture mode, and the weakest link mode. Application of the new model to composite material systems has indicated several results which require attention in the development of reliable structural composites. Prominent among these are the size effect and the influence of fiber strength variability.

  6. 30 CFR 1218.41 - Assessments for failure to submit payment of same amount as Form ONRR-2014 or bill document or to...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... same amount as Form ONRR-2014 or bill document or to provide adequate information. 1218.41 Section 1218... bill document or to provide adequate information. (a) The ONRR may assess an amount not to exceed $250... Form ONRR-2014, Form ONRR-4430, or a bill document, unless ONRR has authorized the difference in...

  7. How to Recognize Success and Failure: Practical Assessment of an Evolving, First-Semester Laboratory Program Using Simple, Outcome-Based Tools

    ERIC Educational Resources Information Center

    Gron, Liz U.; Bradley, Shelly B.; McKenzie, Jennifer R.; Shinn, Sara E.; Teague, M. Warfield

    2013-01-01

    This paper presents the use of simple, outcome-based assessment tools to design and evaluate the first semester of a new introductory laboratory program created to teach green analytical chemistry using environmental samples. This general chemistry laboratory program, like many introductory courses, has a wide array of stakeholders within and…

  8. Duralumin - Defects and Failures

    NASA Technical Reports Server (NTRS)

    Nelson, WM

    1927-01-01

    It is proposed in this paper to identify some of the defects and failures in duralumin most frequently encountered by the aircraft industry, with a view to indicating their importance. The defects and failures in duralumin may be classified into the following groups: 1) defects produced during manufacture; 2) defects produced during fabrication; 3) corrosion and erosion; and 4) fatigue failures. Only the first two will be covered in this report.

  9. Nitrile/Buna N Material Failure Assessment for an O-Ring used on the Gaseous Hydrogen Flow Control Valve (FCV) of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Wingard, Doug

    2006-01-01

    After the rollout of Space Shuttle Discovery in April 2005 in preparation for return-to-flight, the Orbiter (OV-103) failed the helium signature leak test in the gaseous hydrogen (GH2) system. Leakage was attributed to the Flow Control Valve (FCV) in Main Engine 3. The FCV determined to be the source of the leak for OV-103 is designated LV-58. The nitrile/Buna N rubber O-ring seal was removed from LV-58, and failure analysis indicated radial cracks providing leak paths in one quadrant. Cracks were eventually found in 6 of 9 FCV O-rings among the three Shuttle Orbiters, though none were as severe as those for LV-58, OV-103. Testing by EM10 at MSFC on all 9 FCV O-rings included laser dimensional measurements, Shore A hardness, and properties from a dynamic mechanical analyzer (DMA) and an Instron tensile machine. The following test data were obtained on the cracked quadrant of the LV-58, OV-103 O-ring: (1) the estimated compression set was only 9.5%, compared to none for the rest of the O-ring; (2) Shore A hardness for the O.D. was higher by almost 4 durometer points than for the rest of the O-ring; and (3) DMA data showed that the storage/elastic modulus E was almost 25% lower than for the rest of the O-ring. Of the 8 FCV O-rings tested on an Instron, 4 yielded tensile strengths below the MIL spec requirement of 1350 psi, a likely influence of rubber cracking. Comparisons were made between values of modulus determined by DMA (elastic) and Instron (Young's). Each nitrile/Buna N O-ring used in the FCV conforms to the MIL-P-25732C specification. A number of such O-rings taken from shelf storage at MSFC and Kennedy Space Center (KSC) were used to generate a reference curve of DMA glass transition temperature (Tg) vs. shelf storage time ranging from 8 to 26 years. A similar reference curve of TGA onset temperature (of rubber weight loss) vs. shelf storage time was also generated. The DMA and TGA data for the used FCV O-rings were compared to the reference

  10. The participatory vulnerability scoping diagram - deliberative risk ranking for community water systems

    USGS Publications Warehouse

    Howe, Peter D.; Yarnal, Brent; Coletti, Alex; Wood, Nathan J.

    2013-01-01

    Natural hazards and climate change present growing challenges to community water system (CWS) managers, who are increasingly turning to vulnerability assessments to identify, prioritize, and adapt to risks. Effectively assessing CWS vulnerability requires information and participation from various sources, one of which is stakeholders. In this article, we present a deliberative risk-ranking methodology, the participatory vulnerability scoping diagram (P-VSD), which allows rapid assessment and integration of multiple stakeholder perspectives of vulnerability. This technique is based on methods of deliberative risk evaluation and the vulnerability scoping diagram. The goal of the methodology is to engage CWS managers and stakeholders collectively to provide qualitative contextual risk rankings as a first step in a vulnerability assessment. We conduct an initial assessment using a case study of CWS in two U.S. counties, sites with broadly similar exposures but differences in population, land use, and other social sensitivity factors. Results demonstrate that CWS managers and stakeholders in the two case study communities all share the belief that their CWS are vulnerable to hazards but differ in how this vulnerability manifests itself in terms of the exposure, sensitivity, and adaptive capacity of the system.

  11. Diagnostic accuracy of refractometer and Brix refractometer to assess failure of passive transfer in calves: protocol for a systematic review and meta-analysis.

    PubMed

    Buczinski, S; Fecteau, G; Chigerwe, M; Vandeweerd, J M

    2016-06-01

    Calves are highly dependent on colostrum (and antibody) intake because they are born agammaglobulinemic. The transfer of passive immunity in calves can be assessed directly by measuring immunoglobulin G (IgG) or indirectly by refractometry or Brix refractometry. The latter are easier to perform routinely in the field. This paper presents a protocol for a systematic review and meta-analysis to assess the diagnostic accuracy of refractometry or Brix refractometry versus IgG measurement as the reference standard test. With this review protocol we aim to report refractometer and Brix refractometer accuracy in terms of sensitivity and specificity, and to quantify the impact of any study characteristic on test accuracy. PMID:27427188

  12. Quantitative risk assessment system (QRAS)

    NASA Technical Reports Server (NTRS)

    Weinstock, Robert M (Inventor); Smidts, Carol S (Inventor); Mosleh, Ali (Inventor); Chang, Yung-Hsien (Inventor); Swaminathan, Sankaran (Inventor); Groen, Francisco J (Inventor); Tan, Zhibin (Inventor)

    2001-01-01

    A quantitative risk assessment system (QRAS) builds a risk model of a system for which risk of failure is being assessed, then analyzes the risk of the system corresponding to the risk model. The QRAS performs sensitivity analysis of the risk model by altering fundamental components and quantifications built into the risk model, then re-analyzes the risk of the system using the modifications. More particularly, the risk model is built by building a hierarchy, creating a mission timeline, quantifying failure modes, and building/editing event sequence diagrams. Multiplicities, dependencies, and redundancies of the system are included in the risk model. For analysis runs, a fixed baseline is first constructed and stored. This baseline contains the lowest level scenarios, preserved in event tree structure. The analysis runs, at any level of the hierarchy and below, access this baseline for risk quantitative computation as well as ranking of particular risks. A standalone Tool Box capability exists, allowing the user to store application programs within QRAS.

  13. Applications of Phase Diagrams in Metallurgy and Ceramics: Proceedings of a Workshop Held at the National Bureau of Standards, Gaithersburg, Maryland, January 10-12, 1977. Volumes 1 [and] 2.

    ERIC Educational Resources Information Center

    Carter, G. C., Ed.

    This document is a special National Bureau of Standards publication on a Workshop on Applications of Phase Diagrams in Metallurgy and Ceramics. The purposes of the Workshop were: (1) to assess the current national and international status of phase diagram determinations and evaluations for alloys, ceramics and semiconductors; (2) to determine the…

  14. Generation of Constant Life Diagram under Elevated Temperature Ratcheting of 316LN Stainless Steel

    NASA Astrophysics Data System (ADS)

    Sarkar, Aritra; Nagesha, A.; Sandhya, R.; Mathew, M. D.

    2016-04-01

    Combined influence of mean stress and stress amplitude on the cyclic life under elevated temperature (823-923 K) ratcheting of 316LN austenitic stainless steel is discussed. Constant life Haigh diagrams have been generated, using different combinations of stress amplitude and mean stress. In the plastic domain, the allowable stress was found to increase or decrease with mean stress depending on the temperature and combination of mean stress - stress amplitude employed. Strong influence of dynamic strain aging (DSA) was found at 823 K which affected the mode of deformation of the material in comparison with 923 K. Failure mode expressed through a fracture mechanism map was found to change from fatigue to necking depending on the test temperature as well as combinations of mean stress and stress amplitude. Occurrence of DSA at 823 K proved to be beneficial by way of extending the safe zone of operation to higher R-ratios in comparison with 923 K.

  15. Renal failure in burn patients: a review

    PubMed Central

    Emara, S.S.; Alzaylai, A.A.

    2013-01-01

    Summary Burn care providers are usually challenged by multiple complications during the management of acute burns. One of the most common complications worldwide is renal failure. This article reviews the various aspects of renal failure management in burn patients. Two different types of renal failures develop in these patients. The different aetiological factors, incidence, suspected prognosis, ways of diagnosing, as well as prevention methods, and the most accepted treatment modalities are all discussed. A good understanding and an effective assessment of the problem help to reduce both morbidity and mortality in burn management. PMID:23966893

  16. Analysis of sucker rod and sinkerbar failures

    SciTech Connect

    Waggoner, J.R.; Buchheit, R.G.

    1993-03-01

    This report presents results of a study of the performance and failures of the sucker rod/sinkerbar string used in beam-pumping operations, through metallography, finite element analysis, and failure data collection. Metallography showed that the microstructure of the steel bar stock needs to be considered to improve the fatigue resistance of sucker rod strings. The current specification based on tensile strength, or yield strength, may not be appropriate, since failure occurs because of fatigue and not yielding, and tensile strength is not always a good measure of fatigue resistance. Finite element analysis of the threaded connection quantitatively assesses the coupling designs under various loading conditions. Subcritical fractures observed in the metallography are also consistent with the calculated stress distribution in the threaded coupling. Failure data illustrate both the magnitude and frequency of failures, and categorize the suspected cause of each failure. Application of the results in each of these project areas is expected to yield improved choices of metal bar stock, thread design, and make-up practices, which can significantly reduce the frequency of sucker rod failures. Sucker rod failures today are not inherent in the process, but can be minimized through the application of new technology and observation of common-sense practices.

  17. Physical modelling of failure in composites.

    PubMed

    Talreja, Ramesh

    2016-07-13

    Structural integrity of composite materials is governed by failure mechanisms that initiate at the scale of the microstructure. The local stress fields evolve with the progression of the failure mechanisms. Within the full span from initiation to criticality of the failure mechanisms, the governing length scales in a fibre-reinforced composite change from the fibre size to the characteristic fibre-architecture sizes, and eventually to a structural size, depending on the composite configuration and structural geometry as well as the imposed loading environment. Thus, a physical modelling of failure in composites must necessarily be of multi-scale nature, although not always with the same hierarchy for each failure mode. With this background, the paper examines the currently available main composite failure theories to assess their ability to capture the essential features of failure. A case is made for an alternative in the form of physical modelling and its skeleton is constructed based on physical observations and systematic analysis of the basic failure modes and associated stress fields and energy balances. This article is part of the themed issue 'Multiscale modelling of the structural integrity of composite materials'. PMID:27242307

  18. High Concordance Between Mental Stress–Induced and Adenosine-Induced Myocardial Ischemia Assessed Using SPECT in Heart Failure Patients: Hemodynamic and Biomarker Correlates

    PubMed Central

    Wawrzyniak, Andrew J.; Dilsizian, Vasken; Krantz, David S.; Harris, Kristie M.; Smith, Mark F.; Shankovich, Anthony; Whittaker, Kerry S.; Rodriguez, Gabriel A.; Gottdiener, John; Li, Shuying; Kop, Willem; Gottlieb, Stephen S.

    2016-01-01

    Mental stress can trigger myocardial ischemia, but the prevalence of mental stress–induced ischemia in congestive heart failure (CHF) patients is unknown. We characterized mental stress–induced and adenosine-induced changes in myocardial perfusion and neurohormonal activation in CHF patients with reduced left-ventricular function using SPECT to precisely quantify segment-level myocardial perfusion. Methods Thirty-four coronary artery disease patients (mean age ± SD, 62 ± 10 y) with CHF longer than 3 mo and ejection fraction less than 40% underwent both adenosine and mental stress myocardial perfusion SPECT on consecutive days. Mental stress consisted of anger recall (anger-provoking speech) followed by subtraction of serial sevens. The presence and extent of myocardial ischemia was quantified using the conventional 17-segment model. Results Sixty-eight percent of patients had 1 ischemic segment or more during mental stress and 81% during adenosine. On segment-by-segment analysis, perfusion with mental stress and adenosine were highly correlated. No significant differences were found between any 2 time points for B-type natriuretic peptide, tumor necrosis factor-α, IL-1b, troponin, vascular endothelin growth factor, IL-17a, matrix metallopeptidase-9, or C-reactive protein. However, endothelin-1 and IL-6 increased, and IL-10 decreased, between the stressor and 30 min after stress. Left-ventricular end diastolic dimension was 179 ± 65 mL at rest and increased to 217 ± 71 after mental stress and 229 ± 86 after adenosine (P < 0.01 for both). Resting end systolic volume was 129 ± 60 mL at rest and increased to 158 ± 66 after mental stress (P < 0.05) and 171 ± 87 after adenosine (P < 0.07), with no significant differences between adenosine and mental stress. Ejection fraction was 30 ± 12 at baseline, 29 ± 11 with mental stress, and 28 ± 10 with adenosine (P = not significant). Conclusion There was high concordance between ischemic perfusion defects induced

  19. Universal jamming phase diagram in the hard-sphere limit.

    PubMed

    Haxton, Thomas K; Schmiedeberg, Michael; Liu, Andrea J

    2011-03-01

    We present a new formulation of the jamming phase diagram for a class of glass-forming fluids consisting of spheres interacting via finite-ranged repulsions at temperature T, packing fraction ϕ or pressure p, and applied shear stress Σ. We argue that the natural choice of axes for the phase diagram are the dimensionless quantities T/pσ³, pσ³/ε, and Σ/p, where T is the temperature, p is the pressure, Σ is the stress, σ is the sphere diameter, ε is the interaction energy scale, and m is the sphere mass. We demonstrate that the phase diagram is universal at low pσ³/ε; at low pressure, observables such as the relaxation time are insensitive to details of the interaction potential and collapse onto the values for hard spheres, provided the observables are nondimensionalized by the pressure. We determine the shape of the jamming surface in the jamming phase diagram, organize previous results in relation to the jamming phase diagram, and discuss the significance of various limits.
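    The nondimensionalization at the heart of this abstract is simple to make concrete: a dimensional state (T, p, Σ, σ, ε) collapses onto the three proposed axes T/pσ³, pσ³/ε and Σ/p. A minimal sketch (illustrative helper, with k_B absorbed into T; not from the paper's code):

    ```python
    def jamming_coordinates(T, p, shear_stress, sphere_diameter, epsilon):
        """Map a dimensional state onto the three dimensionless axes of the
        jamming phase diagram: (T / p sigma^3, p sigma^3 / epsilon, Sigma / p).
        Temperature T is taken in energy units (k_B = 1)."""
        vol = sphere_diameter ** 3
        return (T / (p * vol), p * vol / epsilon, shear_stress / p)

    # Example state: T = 1, p = 2, Sigma = 0.5, sigma = 1, epsilon = 4
    print(jamming_coordinates(1.0, 2.0, 0.5, 1.0, 4.0))  # (0.5, 0.5, 0.25)
    ```

    In these coordinates the hard-sphere limit is the low-pσ³/ε region, where suitably pressure-nondimensionalized observables collapse regardless of the interaction potential.
    
    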

  20. Ampoule Failure System

    NASA Technical Reports Server (NTRS)

    Watring, Dale A. (Inventor); Johnson, Martin L. (Inventor)

    1996-01-01

    An ampoule failure system for use in material processing furnaces comprising a containment cartridge and an ampoule failure sensor. The containment cartridge contains an ampoule of toxic material therein and is positioned within a furnace for processing. An ampoule failure probe is positioned in the containment cartridge adjacent the ampoule for detecting a potential harmful release of toxic material therefrom during processing. The failure probe is spaced a predetermined distance from the ampoule and is chemically chosen so as to undergo a timely chemical reaction with the toxic material upon the harmful release thereof. The ampoule failure system further comprises a data acquisition system which is positioned externally of the furnace and is electrically connected to the ampoule failure probe so as to form a communicating electrical circuit. The data acquisition system includes an automatic shutdown device for shutting down the furnace upon the harmful release of toxic material. It also includes a resistance measuring device for measuring the resistance of the failure probe during processing. The chemical reaction causes a step increase in resistance of the failure probe whereupon the automatic shutdown device will responsively shut down the furnace.

  1. Immune mediated liver failure

    PubMed Central

    Wang, Xiaojing; Ning, Qin

    2014-01-01

    Liver failure is a clinical syndrome of various etiologies, manifesting as jaundice, encephalopathy, coagulopathy and circulatory dysfunction, which result in subsequent multiorgan failure. Clinically, liver failure is classified into four categories: acute, subacute, acute-on-chronic and chronic liver failure. Massive hepatocyte death is considered to be the core event in the development of liver failure, which occurs when the extent of hepatocyte death is beyond the liver regenerative capacity. Direct damage and immune-mediated liver injury are two major factors involved in this process. Increasing evidence has suggested the essential role of immune-mediated liver injury in the pathogenesis of liver failure. Here, we review the evolved concepts concerning the mechanisms of immune-mediated liver injury in liver failure from human and animal studies. Both innate and adaptive immunity, especially the interaction of various immune cells and molecules as well as death receptor signaling system are discussed. In addition, we highlight the concept of “immune coagulation”, which has been shown to be related to the disease progression and liver injury exacerbation in HBV related acute-on-chronic liver failure. PMID:26417328

  2. Physicists and failure

    NASA Astrophysics Data System (ADS)

    Raheem, Ruby; Zilch, Mortimer

    2015-08-01

    In reply to a post on the physicsworld.com blog about the central role of failure in physics (“Success, failure and women in physics”, 10 June, http://ow.ly/O8kvR; see also “Failing better,” Editorial, July p15).

  3. A Hertzsprung-Russell-like Diagram for Solar/Stellar Flares and Corona: Emission Measure versus Temperature Diagram

    NASA Astrophysics Data System (ADS)

    Shibata, Kazunari; Yokoyama, Takaaki

    2002-09-01

    In our previous paper, we presented a theory to explain the observed universal correlation between the emission measure (EM = n²V) and temperature (T) for solar/stellar flares on the basis of the magnetic reconnection model with heat conduction and chromospheric evaporation. Here n is the electron density and V is the volume. By extending our theory to general situations, we examined the EM-T diagram in detail and found the following properties: (1) The universal correlation sequence (``main-sequence flares'') with EM ~ T^17/2 corresponds to the case of constant heating flux or, equivalently, the case of constant magnetic field strength in the reconnection model. (2) The EM-T diagram has a forbidden region, in which the gas pressure of flares exceeds the magnetic pressure. (3) There is a coronal branch with EM ~ T^15/2 for T < 10^7 K and EM ~ T^13/2 for T > 10^7 K. This branch is situated on the left side of the main-sequence flares in the EM-T diagram. (4) There is another forbidden region determined by the length of the flare loop; the lower limit of the flare loop length is 10^7 cm. Small flares near this limit correspond to nanoflares observed by the Solar and Heliospheric Observatory EUV Imaging Telescope. (5) We can plot the flare evolution track on the EM-T diagram. A flare evolves from the coronal branch to the main-sequence flares, then eventually returns to the coronal branch. These properties of the EM-T diagram are similar to those of the H-R diagram for stars, and thus we propose that the EM-T diagram is quite useful for estimating the physical quantities (loop length, heating flux, magnetic field strength, total energy, and so on) of flares and coronae when there are no spatially resolved imaging observations.
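    The branches of the EM-T diagram are power laws, so positions along a branch follow from a single reference point. A minimal sketch of that scaling (the reference values in the example are placeholders, not figures from the paper):

    ```python
    def em_on_branch(T, T_ref, EM_ref, slope):
        """Emission measure along a power-law branch of the EM-T diagram:
        EM / EM_ref = (T / T_ref) ** slope.  The abstract's slopes are
        17/2 for the flare 'main sequence' and 15/2 (T < 1e7 K) or
        13/2 (T > 1e7 K) for the coronal branch."""
        return EM_ref * (T / T_ref) ** slope

    MAIN_SEQUENCE_SLOPE = 17 / 2
    # Doubling T on the main sequence multiplies EM by 2**8.5 (~362):
    ratio = em_on_branch(2.0, 1.0, 1.0, MAIN_SEQUENCE_SLOPE)
    ```

    Plotting flares and coronal sources against such branches on a log-log EM-T plane is what lets the diagram act, as the authors argue, like an H-R diagram for flares.
    
    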

  4. Generalized energy failure criterion.

    PubMed

    Qu, R T; Zhang, Z J; Zhang, P; Liu, Z Q; Zhang, Z F

    2016-01-01

    Discovering a generalized criterion that can predict the mechanical failure of various different structural materials is one of ultimate goals for scientists in both material and mechanics communities. Since the first study on the failure criterion of materials by Galileo, about three centuries have passed. Now we eventually find the "generalized energy criterion", as presented here, which appears to be one universal law for various different kinds of materials. The validity of the energy criterion for quantitatively predicting the failure is experimentally confirmed using a metallic glass. The generalized energy criterion reveals the competition and interaction between shear and cleavage, the two fundamental inherent failure mechanisms, and thus provides new physical insights into the failure prediction of materials and structural components. PMID:26996781

  5. Acute Decompensated Heart Failure

    PubMed Central

    Joseph, Susan M.; Cedars, Ari M.; Ewald, Gregory A.; Geltman, Edward M.; Mann, Douglas L.

    2009-01-01

    Hospitalizations for acute decompensated heart failure are increasing in the United States. Moreover, the prevalence of heart failure is increasing consequent to an increased number of older individuals, as well as to improvement in therapies for coronary artery disease and sudden cardiac death that have enabled patients to live longer with cardiovascular disease. The main treatment goals in the hospitalized patient with heart failure are to restore euvolemia and to minimize adverse events. Common in-hospital treatments include intravenous diuretics, vasodilators, and inotropic agents. Novel pharmaceutical agents have shown promise in the treatment of acute decompensated heart failure and may simplify the treatment and reduce the morbidity associated with the disease. This review summarizes the contemporary management of patients with acute decompensated heart failure. PMID:20069075

  7. Liquid/liquid metal extraction: Phase diagram topology resulting from molecular interactions between extractant, ion, oil and water

    NASA Astrophysics Data System (ADS)

    Bauer, C.; Bauduin, P.; Dufrêche, J. F.; Zemb, T.; Diat, O.

    2012-11-01

    We consider the class of surfactants called "extractants", which interact specifically with certain cations and are used in liquid-liquid separation processes. We review here features of water-poor reverse micelles in water/oil/extractant systems as determined by combined structural studies, including small-angle scattering techniques on an absolute scale. Origins of instabilities, liquid-liquid separation, and emulsification failure are detected. Phase diagrams contain the same multi-phase domains as classical microemulsions, but unusual features appear due to the high spontaneous curvature directed towards the polar cores of the aggregates, as well as the rigidity of the film made by the extracting molecules.

  8. Fast Formal Analysis of Requirements via "Topoi Diagrams"

    NASA Technical Reports Server (NTRS)

    Menzies, Tim; Powell, John; Houle, Michael E.; Kelly, John C. (Technical Monitor)

    2001-01-01

    Early testing of requirements can decrease the cost of removing errors in software projects. However, unless done carefully, that testing process can significantly add to the cost of requirements analysis. We show here that requirements expressed as topoi diagrams can be built and tested cheaply using our SP2 algorithm: the formal temporal properties of a large class of topoi can be proven very quickly, in time nearly linear in the number of nodes and edges in the diagram. There are two limitations to our approach. Firstly, topoi diagrams cannot express certain complex concepts such as iteration and sub-routine calls. Hence, our approach is more useful for requirements engineering than for traditional model checking domains. Secondly, our approach is better for exploring the temporal occurrence of properties than the temporal ordering of properties. Within these restrictions, we can express a useful range of concepts currently seen in requirements engineering, and a wide range of interesting temporal properties.

  9. Interpretation of the Hubble diagram in a nonhomogeneous universe

    NASA Astrophysics Data System (ADS)

    Fleury, Pierre; Dupuy, Hélène; Uzan, Jean-Philippe

    2013-06-01

    In the standard cosmological framework, the Hubble diagram is interpreted by assuming that the light emitted by standard candles propagates in a spatially homogeneous and isotropic spacetime. However, the light from “point sources”—such as supernovae—probes the Universe on scales where the homogeneity principle is no longer valid. Inhomogeneities are expected to induce a bias and a dispersion of the Hubble diagram. This is investigated by considering a Swiss-cheese cosmological model, which (1) is an exact solution of the Einstein field equations, (2) is strongly inhomogeneous on small scales, but (3) has the same expansion history as a strictly homogeneous and isotropic universe. By simulating Hubble diagrams in such models, we quantify the influence of inhomogeneities on the measurement of the cosmological parameters. Though significant in general, the effects reduce drastically for a universe dominated by the cosmological constant.
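    As an illustrative sketch (not taken from the paper), the Hubble diagram in the homogeneous reference model the authors compare against is built from the distance modulus of a flat FLRW universe. The cosmological parameter values below are hypothetical placeholders:

    ```python
    import math

    def luminosity_distance_mpc(z, h0=70.0, omega_m=0.3, n=1000):
        """Luminosity distance (Mpc) in a flat FLRW model, via simple
        trapezoidal integration of the comoving-distance integral."""
        c = 299792.458  # speed of light, km/s
        omega_l = 1.0 - omega_m  # flatness assumption

        def inv_e(zp):
            # 1/E(z) with E(z) = sqrt(Omega_m (1+z)^3 + Omega_Lambda)
            return 1.0 / math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)

        dz = z / n
        integral = 0.5 * (inv_e(0.0) + inv_e(z))
        for i in range(1, n):
            integral += inv_e(i * dz)
        integral *= dz
        return (1.0 + z) * (c / h0) * integral

    def distance_modulus(z, **kw):
        """mu = 5 log10(d_L / 10 pc), with d_L in Mpc."""
        d_l = luminosity_distance_mpc(z, **kw)
        return 5.0 * math.log10(d_l * 1e6 / 10.0)

    mu = distance_modulus(0.5)  # roughly 42.3 mag for these parameters
    ```

    The bias and dispersion discussed in the abstract would appear as scatter of simulated supernova moduli around this smooth curve.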

  10. Generation of Finite Life Distributional Goodman Diagrams for Reliability Prediction

    NASA Technical Reports Server (NTRS)

    Kececioglu, D.; Guerrieri, W. N.

    1971-01-01

    The methodology of developing finite life distributional Goodman diagrams and surfaces is described for presenting allowable combinations of alternating stress and mean stress to the design engineer. The combined stress condition is that of an alternating bending stress and a constant shear stress. The finite life Goodman diagrams and surfaces are created from strength distributions developed at various ratios of alternating to mean stress at particular cycle life values. The conclusions indicate that the Von Mises-Hencky ellipse, for cycle life values above 1000 cycles, is an adequate model of the finite life Goodman diagram. In addition, suggestions are made which reduce the number of experimental data points required in a fatigue data acquisition program.
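    As a minimal sketch of the elliptical criterion mentioned above (the Von Mises-Hencky ellipse as a finite-life Goodman boundary), a combined stress state can be checked against an ellipse in alternating/mean stress space. The strength values here are hypothetical, not from the report:

    ```python
    def goodman_ellipse_margin(sigma_a, sigma_m, s_f, s_u):
        """Elliptical (Von Mises-Hencky) criterion value
        (sigma_a/s_f)^2 + (sigma_m/s_u)^2, where s_f is the fatigue
        strength at the chosen cycle life and s_u the ultimate strength.
        Values <= 1 lie inside the finite-life Goodman boundary."""
        return (sigma_a / s_f) ** 2 + (sigma_m / s_u) ** 2

    # Hypothetical strengths (MPa) at a given cycle life
    s_f, s_u = 300.0, 800.0
    inside = goodman_ellipse_margin(150.0, 400.0, s_f, s_u)   # 0.5, safe
    outside = goodman_ellipse_margin(280.0, 700.0, s_f, s_u)  # > 1, unsafe
    ```

    A distributional version would replace the deterministic strengths with fitted strength distributions at each life value, as the report describes.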

  11. Phase diagram studies on the Na-Mo-O system

    NASA Astrophysics Data System (ADS)

    Gnanasekaran, T.; Mahendran, K. H.; Kutty, K. V. G.; Mathews, C. K.

    1989-06-01

    The phase diagram of the Na-Mo-O ternary system is of interest in interpreting the behaviour of structural materials in the sodium circuits of fast breeder reactors and sodium-filled heat pipes. Experiments involving heating of sodium oxide with molybdenum metal under vacuum, selective removal of oxygen from polymolybdates by reducing them under hydrogen and confirmation of the coexistence of various phase mixtures were conducted in the temperature range of 673 to 923 K. Phase fields involving molybdenum metal, dioxide of molybdenum and ternary compounds were derived from these results. The ternary phase diagram of the Na-Mo-O system was constructed and isothermal cross sections of the phase diagram are presented.

  12. How to Draw Energy Level Diagrams in Excitonic Solar Cells.

    PubMed

    Zhu, X-Y

    2014-07-01

    Emerging photovoltaic devices based on molecular and nanomaterials are mostly excitonic in nature. The initial absorption of a photon in these materials creates an exciton that can subsequently dissociate in each material or at their interfaces to give charge carriers. Any attempt at mechanistic understanding of excitonic solar cells must start with drawing energy level diagrams. This seemingly elementary exercise, which is described in textbooks for inorganic solar cells, has turned out to be a difficult subject in the literature. The problem stems from conceptual confusion of single-particle energy with quasi-particle energy and the misleading practice of mixing the two on the same energy level diagram. Here, I discuss how to draw physically accurate energy diagrams in excitonic solar cells using only single-particle energies (ionization potentials and electron affinities) of both ground and optically excited states. I will briefly discuss current understanding of the electronic energy landscape responsible for efficient charge separation in excitonic solar cells.

  13. O-C diagrams and period changes in stellar systems

    NASA Astrophysics Data System (ADS)

    Liska, J.; Skarka, M.

    2015-02-01

    Based on the visual inspection of all O-C diagrams available in the O-C gateway managed by the Variable stars and exoplanet section of the Czech astronomical society, we present an overview of possible shapes of O-C diagrams together with a discussion of the effects causing such dependences. The nature of these effects is discussed for various types of periodic variables. We also give short remarks on the interesting eclipsing systems BV Dra and BW Dra, which form a visual pair and show antiparallel changes of their O-C diagrams. In addition, we comment on period changes of UU Cam, and argue that it probably shows a long-term Light Time Effect (LiTE) rather than a sudden period change. Effects observable only in ultra-precise, quasi-continual measurements gathered by the Kepler satellite are discussed at the end of this contribution.
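    For readers unfamiliar with the construction, an O-C ("observed minus calculated") residual compares observed times of extrema against a linear ephemeris. A minimal sketch, with synthetic timings rather than data from the paper:

    ```python
    def o_minus_c(observed_times, epoch, period):
        """O-C residuals: each observed time of maximum/minimum light
        minus the time predicted by the linear ephemeris
        T(E) = epoch + period * E, with E the nearest integer cycle."""
        residuals = []
        for t in observed_times:
            cycle = round((t - epoch) / period)
            residuals.append(t - (epoch + cycle * period))
        return residuals

    # A constant period yields a flat O-C diagram ...
    flat = o_minus_c([0.0, 2.0, 4.0, 6.0], epoch=0.0, period=2.0)
    # ... while a slowly lengthening period produces a rising trend
    trend = o_minus_c([0.0, 2.01, 4.02], epoch=0.0, period=2.0)
    ```

    Parabolic O-C shapes indicate steady period change, while the sinusoidal shapes mentioned for LiTE arise from light-travel-time variations in a binary orbit.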

  14. UML activity diagrams in requirements specification of logic controllers

    NASA Astrophysics Data System (ADS)

    Grobelna, Iwona; Grobelny, Michał

    2015-12-01

    Logic controller specification can be prepared using various techniques. One of them is the widely understood and user-friendly UML language and its activity diagrams. Using formal methods during the design phase increases the assurance that the implemented system meets the project requirements. In our approach we use the model checking technique to formally verify a specification against user-defined behavioral requirements. The properties are usually defined as temporal logic formulas. In the paper we propose to use UML activity diagrams in requirements definition and then to formalize them as temporal logic formulas. As a result, UML activity diagrams can be used both for logic controller specification and for requirements definition, which simplifies the specification and verification process.
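    The kind of behavioral requirement the abstract mentions is typically a temporal logic formula such as the response pattern G(trigger -> F response). As an illustrative sketch (the paper's actual formalization and tooling are not shown here), such a property can be checked over a finite execution trace:

    ```python
    def response_holds(trace, trigger, response):
        """Check the response property G(trigger -> F response) on a
        finite trace (a list of sets of state labels): every step
        containing `trigger` must be followed, at that step or later,
        by a step containing `response`."""
        for i, step in enumerate(trace):
            if trigger in step:
                if not any(response in later for later in trace[i:]):
                    return False
        return True

    # Hypothetical controller trace: a request is eventually granted
    ok = response_holds(
        [{"start"}, {"req"}, {"busy"}, {"grant"}, {"end"}], "req", "grant")
    # A trace where the request is never granted violates the property
    bad = response_holds([{"start"}, {"req"}, {"busy"}], "req", "grant")
    ```

    A real model checker evaluates such formulas over all reachable behaviors of the controller model, not a single trace.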

  15. Assessing Educational Processes Using Total-Quality-Management Measurement Tools.

    ERIC Educational Resources Information Center

    Macchia, Peter, Jr.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) assessment tools in educational settings highlights and gives examples of fishbone diagrams, or cause and effect charts; Pareto diagrams; control charts; histograms and check sheets; scatter diagrams; and flowcharts. Variation and quality are discussed in terms of continuous process…

  16. The Spectrum of Renal Allograft Failure

    PubMed Central

    Chand, Sourabh; Atkinson, David; Collins, Clare; Briggs, David; Ball, Simon; Sharif, Adnan; Skordilis, Kassiani; Vydianath, Bindu; Neil, Desley; Borrows, Richard

    2016-01-01

    Background Causes of “true” late kidney allograft failure remain unclear, as study selection bias and limited follow-up risk incomplete representation of the spectrum. Methods We evaluated all unselected graft failures from 2008–2014 (n = 171; 0–36 years post-transplantation) by contemporary classification of indication biopsies “proximate” to failure, DSA assessment, and clinical and biochemical data. Results The spectrum of graft failure changed markedly depending on the timing of allograft failure. Failures within the first year were most commonly attributed to technical failure and acute rejection (with T-cell mediated rejection [TCMR] dominating antibody-mediated rejection [ABMR]). Failures beyond a year were increasingly dominated by ABMR and ‘interstitial fibrosis with tubular atrophy’ without rejection, infection or recurrent disease (“IFTA”). Cases of IFTA associated with inflammation in non-scarred areas (compared with no inflammation or inflammation solely within scarred regions) were more commonly associated with episodes of prior rejection, late rejection and nonadherence, pointing to an alloimmune aetiology. Nonadherence and late rejection were common in ABMR and TCMR, particularly Acute Active ABMR. Acute Active ABMR and nonadherence were associated with younger age, faster functional decline, and less hyalinosis on biopsy. Chronic and Chronic Active ABMR were more commonly associated with Class II DSA. C1q-binding DSA, detected in 33% of ABMR episodes, were associated with shorter time to graft failure. Most non-biopsied patients were DSA-negative (16/21; 76.1%). Finally, twelve losses to recurrent disease were seen (16%). Conclusion These data from an unselected population identify IFTA alongside ABMR as a very important cause of true late graft failure, with nonadherence-associated TCMR as a phenomenon in some patients. It highlights clinical and immunological characteristics of ABMR subgroups, and should inform clinical practice and

  17. UML activity diagram swimlanes in logic controller design

    NASA Astrophysics Data System (ADS)

    Grobelny, Michał; Grobelna, Iwona

    2015-12-01

    Logic controller behavior can be specified using various techniques, including UML activity diagrams and control Petri nets. Each technique has its advantages and disadvantages. Application of both specification types in one project allows to take benefits from both of them. Additional elements of UML models make it possible to divide a specification into some parts, considered from other point of view (logic controller, user or system). The paper introduces an idea to use UML activity diagrams with swimlanes to increase the understandability of design models.

  18. Strain-Temperature-Transformation (STT) Diagram for Soft Solids

    NASA Astrophysics Data System (ADS)

    Li, Shoubo; Xiong, Wentao; Wang, Xiaorong

    Soft materials comprise a variety of physical states that are easily deformed by shear strains or thermal fluctuations. They include suspensions, colloids, polymers, foams, gels, liquid crystals, and a number of biological materials. In this contribution, a generalized strain-temperature-transformation (STT) diagram for many soft materials is presented in which the physical states encountered are related to the strain and temperature changes. The boundary defined for the solid-to-liquid transformation in the STT diagram displays a surprising Z-shaped curve. We discuss this feature with respect to the physical nature of the materials.

  19. Theoretical phase diagrams for solid H2

    SciTech Connect

    Surh, M.P.; Runge, K.J.

    1993-07-01

    Possible phase diagrams for solid molecular para-hydrogen in the 0-200 GPa pressure regime are constructed on the basis of ab initio calculations. Structures for the broken symmetry phase (BSP) and H-A phase have recently been proposed under the assumption that the molecules are centered on sites of a hexagonal close-packed lattice with the ideal c/a ratio, i.e., only molecular orientational and electronic changes are allowed. Symmetry considerations then dictate the simplest phase diagrams consistent with experimental observations, although the possibility of additional transitions cannot be ruled out. A simple model is introduced to describe the BSP and H-A transitions.

  20. Size Dependent Phase Diagrams of Nickel-Carbon Nanoparticles

    NASA Astrophysics Data System (ADS)

    Magnin, Y.; Zappelli, A.; Amara, H.; Ducastelle, F.; Bichara, C.

    2015-11-01

    The carbon rich phase diagrams of nickel-carbon nanoparticles, relevant to catalysis and catalytic chemical vapor deposition synthesis of carbon nanotubes, are calculated for system sizes up to about 3 nm (807 Ni atoms). A tight binding model for interatomic interactions drives the grand canonical Monte Carlo simulations used to locate solid, core shell and liquid stability domains, as a function of size, temperature, and carbon chemical potential or concentration. Melting is favored by carbon incorporation from the nanoparticle surface, resulting in a strong relative lowering of the eutectic temperature and a phase diagram topology different from the bulk one. This should lead to a better understanding of the nanotube growth mechanisms.
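    The grand canonical Monte Carlo moves referenced above exchange carbon atoms with a reservoir at fixed chemical potential. As a hedged sketch of the standard insertion acceptance rule (the paper's tight-binding energy model and simulation code are not reproduced here), with all arguments in reduced, hypothetical units:

    ```python
    import math

    def gcmc_insertion_acceptance(volume, n, beta, mu, delta_e, thermal_wavelength):
        """Metropolis acceptance probability for inserting one particle
        in grand canonical Monte Carlo:
            min(1, V / (Lambda^3 (N + 1)) * exp(beta * (mu - dE)))
        where dE is the energy change of the trial insertion."""
        factor = volume / (thermal_wavelength ** 3 * (n + 1))
        return min(1.0, factor * math.exp(beta * (mu - delta_e)))

    # A favorable chemical potential makes insertion near-certain ...
    easy = gcmc_insertion_acceptance(1000.0, 99, 1.0, 10.0, 0.0, 1.0)
    # ... while a large energy penalty makes it exceedingly rare
    hard = gcmc_insertion_acceptance(1000.0, 99, 1.0, 0.0, 50.0, 1.0)
    ```

    Sweeping the chemical potential at each temperature and nanoparticle size, as the abstract describes, traces out the carbon-concentration axis of the size-dependent phase diagram.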