CPM and PERT in Library Management.
ERIC Educational Resources Information Center
Main, Linda
1989-01-01
Discusses two techniques of systems analysis--Critical Path Method (CPM) and Program Evaluation Review Techniques (PERT)--and their place in library management. An overview of CPM and PERT charting procedures is provided. (11 references) (Author/MES)
Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.
ERIC Educational Resources Information Center
Bose, Anindya
The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…
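To make the charting procedure concrete, the sketch below runs a classical CPM forward and backward pass over a toy activity network; the activity names and durations are illustrative placeholders, not taken from either of the cited works.

```python
# Minimal CPM forward/backward pass on a toy activity network.
# Activities, durations, and precedence are hypothetical examples.
durations = {"A": 3, "B": 5, "C": 2, "D": 4, "E": 6}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

order = ["A", "B", "C", "D", "E"]           # a topological order of the activities

earliest_finish = {}
for act in order:                            # forward pass: earliest start/finish
    earliest_start = max((earliest_finish[p] for p in predecessors[act]), default=0)
    earliest_finish[act] = earliest_start + durations[act]

project_length = max(earliest_finish.values())

latest_start = {}
latest_finish = {act: project_length for act in order}
for act in reversed(order):                  # backward pass: latest start/finish
    latest_start[act] = latest_finish[act] - durations[act]
    for p in predecessors[act]:
        latest_finish[p] = min(latest_finish[p], latest_start[act])

critical = [a for a in order if earliest_finish[a] - durations[a] == latest_start[a]]
print("project length:", project_length)     # 18 for this toy network
print("critical path:", critical)            # ['A', 'B', 'D', 'E']
```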
Using the MCNP Taylor series perturbation feature (efficiently) for shielding problems
NASA Astrophysics Data System (ADS)
Favorite, Jeffrey
2017-09-01
The Taylor series or differential operator perturbation method, implemented in MCNP and invoked using the PERT card, can be used for efficient parameter studies in shielding problems. This paper shows how only two PERT cards are needed to generate an entire parameter study, including statistical uncertainty estimates (an additional three PERT cards can be used to give exact statistical uncertainties). One realistic example problem involves a detailed helium-3 neutron detector model and its efficiency as a function of the density of its high-density polyethylene moderator. The MCNP differential operator perturbation capability is extremely accurate for this problem. A second problem involves the density of the polyethylene reflector of the BeRP ball and is an example of first-order sensitivity analysis using the PERT capability. A third problem is an analytic verification of the PERT capability.
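As a rough illustration of how a Taylor-series (differential operator) perturbation result becomes a parameter study, the sketch below expands a detector response around a reference polyethylene density using first- and second-order coefficients. The coefficient values and density range are invented for the example; they are not MCNP output, and the PERT card syntax itself is not reproduced here.

```python
import numpy as np

# Hypothetical reference response and Taylor coefficients (not actual MCNP output).
rho0 = 0.95        # reference moderator density, g/cm^3
R0 = 1.00          # tally at the reference density (normalized)
u1 = 0.80          # first-order coefficient dR/drho, per (g/cm^3)
u2 = -0.30         # second-order coefficient (1/2) d2R/drho2, per (g/cm^3)^2

def response(rho):
    """Second-order Taylor estimate of the tally as a function of density."""
    d = rho - rho0
    return R0 + u1 * d + u2 * d**2

for rho in np.linspace(0.85, 1.05, 5):
    print(f"rho = {rho:.3f} g/cm^3 -> estimated response {response(rho):.4f}")
```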
A heuristic method for consumable resource allocation in multi-class dynamic PERT networks
NASA Astrophysics Data System (ADS)
Yaghoubi, Saeed; Noori, Siamak; Mazdeh, Mohammad Mahdavi
2013-06-01
This investigation presents a heuristic method for the consumable resource allocation problem in multi-class dynamic Project Evaluation and Review Technique (PERT) networks, where new projects from different classes (types) arrive at the system according to independent Poisson processes with different arrival rates. Each activity of a project is performed at a dedicated service station located in a node of the network, with exponentially distributed service times that depend on its class. Each project arrives at the first service station and continues its routing according to the precedence network of its class. Such a system can be represented as a queueing network in which the queue discipline is first come, first served. In the presented method, the multi-class system is decomposed into several single-class dynamic PERT networks, and each class is considered separately as a minisystem. In modeling the single-class dynamic PERT network, we use a Markov process and a multi-objective model investigated by Azaron and Tavakkoli-Moghaddam in 2007. Then, after obtaining the resources allocated to the service stations in every minisystem, the final resources allocated to activities are calculated by the proposed method.
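The queueing framing in this abstract can be illustrated with a toy decomposition: per-class Poisson arrival rates at one shared station and the resulting M/M/1 utilization and mean sojourn time. The rates below are invented, and the sketch ignores the precedence structure and resource allocation that the paper actually optimizes.

```python
# Toy illustration of the multi-class queueing view (hypothetical rates).
arrival_rates = {"class_1": 0.4, "class_2": 0.3}    # projects per day, per class
service_rate = 1.2                                   # activities per day at one station

total_arrival = sum(arrival_rates.values())
utilization = total_arrival / service_rate           # rho for the aggregated M/M/1 station
assert utilization < 1, "station must be stable"

mean_sojourn = 1.0 / (service_rate - total_arrival)  # mean time an activity spends at the station
print(f"utilization = {utilization:.2f}, mean sojourn = {mean_sojourn:.2f} days")
```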
NASA Astrophysics Data System (ADS)
Dolez, Patricia
The research carried out in this doctoral project led to the development of a method for measuring AC losses intended for the study of high-critical-temperature superconductors. In choosing the principles of this method, we drew on earlier work on conventional superconductors in order to offer an alternative to the electrical technique, which at the start of this thesis suffered from problems related to the variation of the measurement result with the position of the voltage contacts on the sample surface, and in order to measure AC losses under conditions simulating the reality of future industrial applications of superconducting tapes: in particular, this method uses the calorimetric technique combined with a simultaneous, in situ calibration. The validity of the method was verified theoretically and experimentally: on the one hand, measurements were made on Bi-2223 samples coated with silver or a silver-gold alloy and compared with the theoretical predictions given by Norris, indicating the mainly hysteretic nature of the AC losses in our samples; on the other hand, an electrical measurement was performed in situ, and its results correspond perfectly to those given by our calorimetric method. We also compared the current and frequency dependence of the AC losses of a sample before and after it was damaged. These measurements appear to indicate a relation between the exponent of the power law modeling the dependence of the losses on current and the longitudinal inhomogeneities of the critical current induced by the damage. Moreover, the frequency dependence shows that at the large transverse fractures created by the damage in the superconducting core, the current divides locally roughly equally between the few grains of superconducting material that remain attached to the core-sheath interface and the silver-alloy coating. The interest of a calorimetric method compared with the electrical technique, which is faster, more sensitive, and now reliable, lies in the possibility of measuring AC losses in complex environments reproducing the situation present, for example, in a power transmission cable or in a transformer. In particular, the superposition of a DC current on the usual AC current allowed us to observe experimentally, for the first time to our knowledge, a particular behavior of the AC losses as a function of the DC current value described theoretically by LeBlanc. From this we could deduce the presence of a Meissner screening current of 16 A, which allows us to determine the conditions under which a reduction of the AC loss level could be obtained by applying a DC current, a phenomenon known as the "Clem valley".
PERT and CPM: A Comparison with Implications for Education.
ERIC Educational Resources Information Center
Ragan, Stephen W.
Two systematic ways of analyzing and planning the components of a program or project, both used extensively by industry and government, are discussed in this paper. The methods are Program Evaluation and Review Technique (PERT) Networks and Critical Path Method (CPM) Arrow Diagrams. The purposes of this paper are (1) to explore the need for…
Optimizing Department of Defense Acquisition Development Test and Evaluation Scheduling
2015-06-01
Acronyms: CPM, Critical Path Method; DOD, Department of Defense; DT&E, development test and evaluation; EMD, engineering and manufacturing development; GAMS…these, including the Program Evaluation Review Technique (PERT), the Critical Path Method (CPM), and the resource-constrained project-scheduling problem (RCPSP). These are of particular interest to this thesis, as the current scheduling method uses elements of PERT/CPM, and the test
Galmer, Andrew; Weinberg, Ido; Giri, Jay; Jaff, Michael; Weinberg, Mitchell
2017-09-01
Pulmonary embolism response teams (PERTs) are multidisciplinary response teams aimed at delivering a range of diagnostic and therapeutic modalities to patients with pulmonary embolism. These teams have gained traction on a national scale. However, despite sharing a common goal, individual PERT programs are quite individualized, varying in their methods of operation, team structures, and practice patterns. The tendency of such response teams is to become intensely structured, algorithmic, and inflexible. However, in their current form, PERT programs are quite the opposite. They are being creatively customized to meet the needs of the individual institution based on available resources, skills, personnel, and institutional goals. After a review of the essential core elements needed to create and operate a PERT in any form, this article discusses the more flexible feature development of the nascent PERT team, including team planning, member composition, operational structure, benchmarking, market analysis, and rudimentary financial operations. Copyright © 2017 Elsevier Inc. All rights reserved.
Levin, Roy J
2009-09-01
The post-ejaculation refractory time (PERT), the period after a single ejaculation when further erections and ejaculations are inhibited, has been studied and well documented in male rats. Since its first attribution in men by Masters and Johnson, and its inaccurate delineation in their graphic sexual response model in 1966, it has been infrequently studied, and scant attention has been paid to any such possible activity in women after female ejaculation. To critically review our current knowledge about PERT in rats and humans, and to describe shortcomings and errors in previous publications and propose corrections. Review of published literature. Identifying evidence-based data to support authority-based facts. The review exposes the extremely limited evidence-based data on which our knowledge of PERT rests. The paucity of data for most aspects of human PERT is remarkable; even the generally accepted statement that the duration of PERT increases with age has no published supporting data. Despite numerous studies in rats, the mechanisms and site(s) of the activity are poorly understood. Dopaminergic and adrenergic pathways are thought to shorten PERT, whereas serotonergic pathways lengthen its duration. Raising brain serotonin levels in men using SSRIs helps reduce early or premature ejaculation. Rats have an absolute PERT (aPERT), during which erection and ejaculation are inhibited, and a relative PERT (rPERT), during which a stronger or novel stimulus can still elicit them; whether such phases exist in men is unexamined. Apart from possible depressed activity in the amygdala and penile dorsal nerve, and the rejection of prolactin as a major factor in PERT, little or no significant advance in understanding human male PERT has occurred. No evidence-based data on women's PERT after female ejaculation exist. New investigations in young and older men utilizing brain imaging and electromagnetic tomography are priority studies to accomplish.
Theoretical study on the photoabsorption in the Herzberg I band system of the O2 molecule
NASA Astrophysics Data System (ADS)
Takegami, Ryuta; Yabushita, Satoshi
2005-01-01
The Herzberg I band system of the oxygen molecule is electric-dipole forbidden and its absorption strength has been explained by intensity-borrowing models which include the spin-orbit (SO) and L-uncoupling (RO) interactions as perturbations. We employed three different levels of theoretical models to evaluate these two interactions, and obtained the rotational and vibronic absorption strengths using the ab initio method. The first model calculates the transition moments induced by the SO interaction variationally with the spin-orbit configuration interaction method, and uses first-order perturbation theory for the RO interaction; it is called SOCI. The second is based on first-order perturbation theory for both the SO and RO interactions, and is called Pert(Full). The last is a limited version of Pert(Full), in that the first-order perturbation wavefunction for the initial and final state is represented by only one dominant basis, namely the 1³Πg and B³Σu⁻ states, respectively, as originally used by England et al. [Can. J. Phys. 74 (1996) 185]; it is called Pert(England). The vibronic oscillator strengths calculated by these three models were in good agreement with the experimental values. As for the integrated rotational line strengths, the SOCI and Pert(Full) models reproduced the experimental results very well; however, the Pert(England) model did not give satisfactory results. Since the Pert(England) model takes only the 1³Πg and B³Σu⁻ states into consideration, it cannot contain the complicated configuration interactions with highly excited states induced by the SO and RO interactions, which play an important role in calculating the delicate integrated rotational line strength. This result suggests that the configuration interaction with highly excited states due to such perturbations cannot be neglected in the case of very weak absorption band systems.
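The "intensity borrowing" language used here corresponds to ordinary first-order perturbation theory for the wavefunction mixing; a generic form (a textbook relation, not the specific basis truncations of the three models above) is:

```latex
% First-order mixing of zeroth-order states |k> into the nominally forbidden
% state |0> through a perturbation H' (here the SO or L-uncoupling operator),
% and the resulting effective transition moment to the final state |f>:
|\tilde{0}\rangle = |0\rangle + \sum_{k\neq 0}
    \frac{\langle k | H' | 0 \rangle}{E_0 - E_k}\,|k\rangle,
\qquad
\mu_{\mathrm{eff}} \approx \sum_{k\neq 0}
    \frac{\langle f | \mu | k \rangle\,\langle k | H' | 0 \rangle}{E_0 - E_k}.
```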
Low Phase Noise Fiber Optics Links for Space Applications
2005-07-13
[Figure captions only: active link at 874.2 MHz with an integrated photo-oscillator, for optical losses of 17, 18, 20, 23, and 25 dB.]
Barnes, Geoffrey; Giri, Jay; Courtney, D Mark; Naydenov, Soophia; Wood, Todd; Rosovsky, Rachel; Rosenfield, Kenneth; Kabrhel, Christopher
2017-08-01
Pulmonary embolism response teams (PERT) are developing rapidly to operationalize multi-disciplinary care for acute pulmonary embolism patients. Our objective is to describe the core components of PERT necessary for newly developing programs. An online organizational survey of active National PERT™ Consortium members was performed between April and June 2016. Analysis, including descriptive statistics and Kruskal-Wallis tests, was performed on centers self-reporting a fully operational PERT program. The survey response rate was 80%. Of the 31 institutions that responded (71% academic), 19 had fully functioning PERT programs. These programs were run by steering committees (17/19, 89%) more often than individual physicians (2/19, 11%). Most PERT programs involved 3-5 different specialties (14/19, 74%), which did not vary based on hospital size or academic affiliation. Of programs using multidisciplinary discussions, these occurred via phone or conference call (12/18, 67%), with a minority of these utilizing 'virtual meeting' software (2/12, 17%). Guidelines for appropriate activations were provided at 16/19 (84%) hospitals. Most PERT programs offered around-the-clock catheter-based or surgical care (17/19, 89%). Outpatient follow up usually occurred in personal physician clinics (15/19, 79%) or dedicated PERT clinics (9/19, 47%), which were only available at academic institutions. PERT programs can be implemented, with similar structures, at small and large, community and academic medical centers. While all PERT programs incorporate team-based multi-disciplinary care into their core structure, several different models exist with varying personnel and resource utilization. Understanding how different PERT programs impact clinical care remains to be investigated.
Kabrhel, Christopher
2017-03-01
Pulmonary embolism response teams (PERTs) have recently been developed to streamline care for patients with life-threatening pulmonary embolism (PE). PERTs are unique among rapid response teams, in that they bring together a multidisciplinary team of specialists to care for a single disease for which there are novel treatments but few comparative data to guide treatment. The PERT model describes a process that includes activation of the team; real-time, multidisciplinary consultation; communication of treatment recommendations; mobilization of resources; and collection of research data. Interventional radiologists, along with cardiologists, emergency physicians, hematologists, pulmonary/critical care physicians, and surgeons, are core members of most PERTs. Bringing together such a wide array of experts leverages the expertise and strengths of each specialty. However, it can also lead to challenges that threaten team cohesion and cooperation. The purpose of this article is to discuss ways to integrate multiple specialists, with diverse perspectives and skills, into a cohesive PERT. The authors will discuss the purpose of forming a PERT, strengths of different PERT specialties, strategies to leverage these strengths to optimize participation and cooperation across team members, as well as unresolved challenges.
Fan, Xiao-Yong; Lü, Guo-Zhen; Wu, Li-Na; Chen, Jing-Hua; Xu, Wen-Qing; Zhao, Chun-Nü; Guo, Sheng-Qi
2006-12-01
Current regulations and recommendations proposed for the production of vaccines in continuous cell lines of any origin demand that these be free of exogenous viruses, particularly retroviruses. The recently developed ultra-sensitive product-enhanced reverse transcriptase (PERT) assay can detect the minute amount of reverse transcriptase (RTase) present in a single retroviral particle and is 10^6 times more sensitive than conventional RTase assays. However, coincidental with this increase in sensitivity is an increase in false-positive reactions derived from contaminating cellular DNA polymerases, which are known to have RTase-like activities. Our aims were to develop a modified single-tube one-step PERT (mSTOS-PERT) assay that significantly decreases the level of false-positive reactions, and to evaluate the mSTOS-PERT assay for sensitivity and specificity. Ampliwax™ was used to compartmentalize the reverse transcription (RT) and PCR steps in the same micro-tube with greater efficiency and reproducibility, while maintaining high sensitivity. The DNA amplification products were separated by 2% agarose gel electrophoresis and then analyzed by non-isotopic Southern blot hybridization. A wide variety of cell lines used in the production of biologicals were tested to validate the improved mSTOS-PERT assay. The detection limit of the mSTOS-PERT assay was at least 10^-9 U when using AMV-RTase as a positive control. Furthermore, including heparin in the RT step can completely eliminate the false-positive PERT signals produced by cellular polymerases, such as DNA-dependent DNA polymerases alpha and gamma released by cell death. Most mammalian cells (MRC-5, Vero, WISH, 2BS, RK-13, MDCK, etc.) are PERT-negative in cell supernatants. Some PERT-positive signals in cell lysates were found to be introduced by cellular DNA polymerases and could be inhibited specifically by heparin. Chick cells derived from either chick embryo fibroblasts (CEF) or allantoic fluid from SPF embryonated eggs, the murine hybridoma cell line SP2/0, etc., contained authentic RTase activities, which could not be inactivated by heparin. The improved mSTOS-PERT assay described here may distinguish genuine RTase activity from cellular polymerases with high sensitivity and specificity, and is rapid and easy to perform for screening for possible contamination with minute amounts of retroviruses in the cell substrates used in vaccine production.
Bartels, Rosalie H; Bourdon, Céline; Potani, Isabel; Mhango, Brian; van den Brink, Deborah A; Mponda, John S; Muller Kobold, Anneke C; Bandsma, Robert H; Boele van Hensbroek, Michael; Voskuijl, Wieger P
2017-11-01
To assess the benefits of pancreatic enzyme replacement therapy (PERT) in children with complicated severe acute malnutrition. We conducted a randomized, controlled trial in 90 children aged 6-60 months with complicated severe acute malnutrition at the Queen Elizabeth Central Hospital in Malawi. All children received standard care; the intervention group also received PERT for 28 days. Children treated with PERT for 28 days did not gain more weight than controls (13.7 ± 9.0% in controls vs 15.3 ± 11.3% in PERT; P = .56). Exocrine pancreatic insufficiency was present in 83.1% of patients on admission, and fecal elastase-1 levels increased during hospitalization, mostly in children with nonedematous severe acute malnutrition (P < .01). Although the study was not powered to detect differences in mortality, mortality was significantly lower in the intervention group treated with pancreatic enzymes (18.6% vs 37.8%; P < .05). Children who died had low fecal fatty acid split ratios at admission. Exocrine pancreatic insufficiency was not improved by PERT, but children receiving PERT were more likely to be discharged with every passing day (P = .02) compared with controls. PERT does not improve weight gain in severely malnourished children but does increase the rate of hospital discharge. Mortality was lower in patients on PERT, a finding that needs to be investigated in a larger cohort with stratification for edematous and nonedematous malnutrition. Mortality in severe acute malnutrition is associated with markers of poor digestive function. ISRCTN.com: 57423639. Copyright © 2017 Elsevier Inc. All rights reserved.
Health Program Implementation through PERT: Administrative and Educational Uses.
ERIC Educational Resources Information Center
Arnold, Mary F.; And Others
The main advantage of the Program Evaluation and Review Technique (PERT) is the provision of a graphic model of activities with estimates of the time, resources, personnel, and facilities necessary to accomplish a sequence of interdependent activities, as in program implementation. A PERT model can also improve communication between persons and…
NASA Astrophysics Data System (ADS)
Kholil, Muhammad; Nurul Alfa, Bonitasari; Hariadi, Madjumsyah
2018-04-01
Network planning is one of the management techniques used to plan and control the implementation of a project by showing the relationships between activities. The objective of this research is to develop network planning for a house construction project at CV. XYZ and to determine the role of network planning in increasing time efficiency so that the optimal project completion period can be obtained. This research uses a descriptive method, with data collected through direct observation at the company, interviews, and a literature study. The result of this research is an optimal time plan for the project work. Based on the results, it can be concluded that using both methods to schedule the house construction project has a very significant effect on the project completion time. With the CPM (Critical Path Method), the company can complete the project in 131 days; the PERT (Program Evaluation Review and Technique) method takes 136 days. The PERT calculation gives Z = -0.66, or 0.2546 from the normal distribution table, and a completion probability of 74.54%. This means that the likelihood that the house construction project activities can be completed on time is quite high. Without either method, project completion takes 173 days. Thus, using the CPM method, the company can save up to 42 days and gains time efficiency by using network planning.
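A small sketch of the probability calculation reported above, using the standard PERT relations (three-point activity estimates, then a normal approximation along the critical path). The durations below are placeholders chosen so that Z is about 0.66, roughly matching the quoted probability; sign conventions for Z vary between texts.

```python
import math

def pert_expected(optimistic, most_likely, pessimistic):
    """Classical PERT three-point estimate of an activity's expected duration."""
    return (optimistic + 4 * most_likely + pessimistic) / 6.0

def pert_variance(optimistic, pessimistic):
    """Classical PERT variance estimate for one activity."""
    return ((pessimistic - optimistic) / 6.0) ** 2

def completion_probability(target, expected, std_dev):
    """P(project finishes by `target`) under the normal approximation."""
    z = (target - expected) / std_dev
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Placeholder critical-path totals (not the study's data).
expected_total = 136.0   # days, PERT expected completion
std_dev_total = 6.0      # days, sqrt of summed critical-path variances
target = 140.0           # days, scheduled completion date

print(completion_probability(target, expected_total, std_dev_total))  # ~0.75
```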
PERT and CPM: Workshop Material.
ERIC Educational Resources Information Center
Burroughs Corp., Detroit, MI.
This is a workbook containing problems in PERT (program evaluation review technique). It is intended to be used in a workshop or classroom to train management personnel in the basic methodology and capability of PERT. This material is not adequate in depth to create an expert in these techniques, but it is felt that the material is adequate to…
Dependence of PERT endpoint on endogenous lipase activity.
Gao, Wen-Yi; Mulberg, Andrew E
2014-11-01
To clarify and to understand the potential for misinterpretation of changes in fecal fat quantitation during pancreatic enzyme replacement therapy (PERT) trials for treatment of exocrine pancreatic insufficiency. Analysis of clinical trials submitted to the U.S. Food and Drug Administration (FDA) for approval of PERT that enrolled 123 cystic fibrosis adult and pediatric patients treated with Creon, Pertzye, Ultresa, and Zenpep. The CFA% defines lipase activity as the percentage of the "Total Daily Dietary Fat Intake" substrate that is converted. PERT trials performed to date have modified the definition to conversion of the "Shared Daily Fat Intake," generating a "Partial CFA" for the exogenous lipase: the higher the activity of coexisting endogenous lipase, the lower the measured "Partial CFA" of the exogenous enzyme. This review shows that "Partial CFA" is not CFA. Enrollment of patients with low HPLA during treatment may improve the interpretability of the "Partial CFA" measured in PERT trials.
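For readers unfamiliar with the endpoint, the coefficient of fat absorption is a simple ratio of dietary fat retained; a minimal sketch (with invented intake and excretion values) is shown below. The "Partial CFA" issue raised in the abstract is that this ratio no longer isolates the exogenous enzyme when endogenous lipase also converts part of the fat.

```python
def coefficient_of_fat_absorption(fat_ingested_g, fat_excreted_g):
    """CFA%: percentage of ingested dietary fat that was absorbed."""
    return 100.0 * (fat_ingested_g - fat_excreted_g) / fat_ingested_g

# Hypothetical 72-hour collection values (illustrative only).
print(coefficient_of_fat_absorption(fat_ingested_g=300.0, fat_excreted_g=45.0))  # 85.0
```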
Structured patient education: the X-PERT Programme.
Deakin, Trudi; Whitham, Claire
2009-09-01
The X-PERT Programme seeks to develop the knowledge, skills and confidence in diabetes treatment for health-care professionals and diabetes self-management. The programme trains health-care professionals to deliver the six-week structured patient education programme to people with diabetes. Over 850 health-care professionals have attended the X-PERT 'Train the Trainer' course and audit results document improved job satisfaction and competence in diabetes treatment and management. National audit statistics for X-PERT implementation to people with diabetes illustrate excellent attendance rates, improved diabetes control, reduced weight, blood pressure, cholesterol and waist circumference and more confidence in self-managing diabetes that has impacted positively on quality of life.
ERIC Educational Resources Information Center
McKee, Robert L.; Ridley, Kathryn J.
To establish a college in 100 days presented an opportunity to test the value of programed organizational procedures using the program performance evaluation and review technique (PERT) under actual operational conditions, not in a simulated theoretical situation. Through the aid of the PERT planning system, it was determined that there were nine…
de la Iglesia-García, Daniel; Huang, Wei; Szatmary, Peter; Baston-Rey, Iria; Gonzalez-Lopez, Jaime; Prada-Ramallal, Guillermo; Mukherjee, Rajarshi; Nunes, Quentin M; Domínguez-Muñoz, J Enrique; Sutton, Robert
2017-08-01
The benefits of pancreatic enzyme replacement therapy (PERT) in chronic pancreatitis (CP) are inadequately defined. We have undertaken a systematic review and meta-analysis of randomised controlled trials of PERT to determine the efficacy of PERT in exocrine pancreatic insufficiency (EPI) from CP. Major databases were searched from 1966 to 2015 inclusive. The primary outcome was coefficient of fat absorption (CFA). Effects of PERT versus baseline and versus placebo, and of different doses, formulations and schedules were determined. A total of 17 studies (511 patients with CP) were included and assessed qualitatively (Jadad score). Quantitative data were synthesised from 14 studies. PERT improved CFA compared with baseline (83.7±6.0 vs 63.1±15.0, p<0.00001; I²=89%) and placebo (83.2±5.5 vs 67.4±7.0, p=0.0001; I²=86%). PERT improved coefficient of nitrogen absorption, reduced faecal fat excretion, faecal nitrogen excretion, faecal weight and abdominal pain, without significant adverse events. Follow-up studies demonstrated that PERT increased serum nutritional parameters, improved GI symptoms and quality of life without significant adverse events. High-dose or enteric-coated enzymes showed a trend to greater effectiveness than low-dose or non-coated comparisons, respectively. Subgroup, sensitivity and meta-regression analyses revealed that sample size, CP diagnostic criteria, study design and enzyme dose contributed to heterogeneity; data on health inequalities were lacking. PERT is indicated to correct EPI and malnutrition in CP and may be improved by higher doses, enteric coating, administration during food and acid suppression. Further studies are required to determine optimal regimens, the impact of health inequalities and long-term effects on nutrition. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
PERTS: A Prototyping Environment for Real-Time Systems
NASA Technical Reports Server (NTRS)
Liu, Jane W. S.; Lin, Kwei-Jay; Liu, C. L.
1991-01-01
We discuss an ongoing project to build a Prototyping Environment for Real-Time Systems, called PERTS. PERTS is a unique prototyping environment in that it has (1) tools and performance models for the analysis and evaluation of real-time prototype systems, (2) building blocks for flexible real-time programs and the support system software, (3) basic building blocks of distributed and intelligent real time applications, and (4) an execution environment. PERTS will make the recent and future theoretical advances in real-time system design and engineering readily usable to practitioners. In particular, it will provide an environment for the use and evaluation of new design approaches, for experimentation with alternative system building blocks and for the analysis and performance profiling of prototype real-time systems.
Method for the Study of Category III Airborne Procedure Reliability
DOT National Transportation Integrated Search
1973-03-01
A method for the study of Category 3 airborne-procedure reliability is presented. The method, based on PERT concepts, is considered to have utility at the outset of a procedure-design cycle and during the early accumulation of actual performance data...
An assessment of PERT as a technique for schedule planning and control
NASA Technical Reports Server (NTRS)
Sibbers, C. W.
1982-01-01
The PERT technique including the types of reports which can be computer generated using the NASA/LaRC PPARS System is described. An assessment is made of the effectiveness of PERT on various types of efforts as well as for specific purposes, namely, schedule planning, schedule analysis, schedule control, monitoring contractor schedule performance, and management reporting. This assessment is based primarily on the author's knowledge of the usage of PERT by NASA/LaRC personnel since the early 1960's. Both strengths and weaknesses of the technique for various applications are discussed. It is intended to serve as a reference guide for personnel performing project planning and control functions and technical personnel whose responsibilities either include schedule planning and control or require a general knowledge of the subject.
Smith, Ross C; Smith, Sarah F; Wilson, Jeremy; Pearce, Callum; Wray, Nick; Vo, Ruth; Chen, John; Ooi, Chee Y; Oliver, Mark; Katz, Tamarah; Turner, Richard; Nikfarjam, Mehrdad; Rayner, Christopher; Horowitz, Michael; Holtmann, Gerald; Talley, Nick; Windsor, John; Pirola, Ron; Neale, Rachel
2016-01-01
Because of increasing awareness of variations in the use of pancreatic exocrine replacement therapy, the Australasian Pancreatic Club decided it was timely to re-review the literature and create new Australasian guidelines for the management of pancreatic exocrine insufficiency (PEI). A working party of expert clinicians was convened and initially determined that by dividing the types of presentation into three categories for the likelihood of PEI (definite, possible and unlikely) they were able to consider the difficulties of diagnosing PEI and relate these to the value of treatment for each diagnostic category. Recent studies confirm that patients with chronic pancreatitis receive similar benefit from pancreatic exocrine replacement therapy (PERT) to that established in children with cystic fibrosis. Severe acute pancreatitis is frequently followed by PEI and PERT should be considered for these patients because of their nutritional requirements. Evidence is also becoming stronger for the benefits of PERT in patients with unresectable pancreatic cancer. However there is as yet no clear guide to help identify those patients in the 'unlikely' PEI group who would benefit from PERT. For example, patients with coeliac disease, diabetes mellitus, irritable bowel syndrome and weight loss in the elderly may occasionally be given a trial of PERT, but determining its effectiveness will be difficult. The starting dose of PERT should be from 25,000-40,000 IU lipase taken with food. This may need to be titrated up and there may be a need for proton pump inhibitors in some patients to improve efficacy. Copyright © 2016 IAP and EPC. Published by Elsevier India Pvt Ltd. All rights reserved.
Synthesis of Carbonate-Based Micro/Nanoscale Particles With Controlled Morphology and Mineralogy
2013-04-01
patterns were obtained using a Panalytical X’Pert Pro diffractometer using iron-filtered cobalt radiation, and analyzed using Panalytical X’Pert…develop composites by hydrothermal recrystallization of metastable phases. Subject terms: aragonite, calcite, calcium carbonate, dopant, mineralogy.
Network-Based Management Procedures.
ERIC Educational Resources Information Center
Buckner, Allen L.
Network-based management procedures serve as valuable aids in organizational management, achievement of objectives, problem solving, and decisionmaking. Network techniques especially applicable to educational management systems are the program evaluation and review technique (PERT) and the critical path method (CPM). Other network charting…
Risk Management for Weapon Systems Acquisition: A Decision Support System
1985-02-28
includes the program evaluation and review technique (PERT) for network analysis, the PMRM for quantifying risk, an optimization package for generating…Despite the inclusion of uncertainty in time, PERT can at best be considered a tool for quantifying risk with regard to the time element only. Moreover
Dispositif de mesure de pertes dans les conducteurs supraconducteurs utilisés en régime variable
NASA Astrophysics Data System (ADS)
Le Naour, S.; Lacaze, A.; Laumond, Y.
1998-01-01
A thermometric apparatus to measure AC losses in superconductor wires for 50 Hz applications is described. The method consists of isolating the sample from the helium bath via a thermal resistance. The dissipated power is determined by two thermometers located on either side of the thermal resistance, and the measurement is calibrated using an ohmic heater. The measurement accuracy is 10% for losses in excess of 2 mW/m.
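The calibration step described here amounts to converting the measured temperature difference across the thermal resistance into a dissipated power via the ohmic-heater response; a minimal sketch with invented readings, assuming the response is linear:

```python
def loss_from_delta_t(delta_t_sample_K, delta_t_cal_K, heater_power_W, sample_length_m):
    """Infer AC loss per unit length from the temperature rise, using a linear
    calibration obtained with a known ohmic-heater power (assumes linearity)."""
    power_W = heater_power_W * (delta_t_sample_K / delta_t_cal_K)
    return power_W / sample_length_m

# Hypothetical readings (illustrative only): a 5 mW heater raises delta T by 2.0 mK;
# the energized sample raises it by 1.6 mK over a 0.5 m length.
print(loss_from_delta_t(1.6e-3, 2.0e-3, 5e-3, 0.5))  # 8 mW/m
```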
A Time-Cost Management System for use in Educational Planning.
ERIC Educational Resources Information Center
McIsaac, Donald N., Jr.; and Others
Prepared specifically for the Denver Public Schools, this manual nevertheless provides some of the basic understanding required for the proper execution of educational planning based upon PERT/CPM techniques. The theory of PERT/CPM and the fundamental processes involved therein are elucidated in the first part of the manual while the operating…
NASA Astrophysics Data System (ADS)
Strodel, Paul; Tavan, Paul
2002-09-01
We present a revised multi-reference configuration interaction (MRCI) algorithm for balanced and efficient calculation of electronic excitations in molecules. The revision takes up an earlier method, which had been designed for flexible, state-specific, and individual selection (IS) of MRCI expansions, included perturbational corrections (PERT), and used the spin-coupled hole-particle formalism of Tavan and Schulten (1980) for matrix-element evaluation. It removes the deficiencies of this method by introducing tree structures, which code the CI bases and allow us to efficiently exploit the sparseness of the Hamiltonian matrices. The algorithmic complexity is shown to be optimal for IS/MRCI applications. The revised IS/MRCI/PERT module is combined with the effective valence shell Hamiltonian OM2 suggested by Weber and Thiel (2000). This coupling serves the purpose of making excited state surfaces of organic dye molecules accessible to relatively cheap and sufficiently precise descriptions.
NASA Astrophysics Data System (ADS)
Louis, Ognel Pierre
The goal of this study is to develop a tool for estimating the level of risk of loss of vigor of forest stands in the Gounamitz region of northwestern New Brunswick using forest inventory data and remote sensing data. To this end, a 100 m x 100 m marteloscope and 20 sampling plots were delineated. Within them, the level of risk of loss of vigor was determined for trees with a DBH greater than or equal to 9 cm. To characterize the risk of loss of vigor of the trees, their spatial positions were recorded with a GPS, taking stem defects into account. To carry out this work, the vegetation and texture indices and the spectral bands of the airborne image were extracted and treated as independent variables. The level of risk of loss of vigor obtained by tree species from the forest inventories was treated as the dependent variable. To obtain the area of the forest stands of the study region, a supervised classification of the images using the maximum likelihood algorithm was performed. The level of risk of loss of vigor by tree type was then estimated with neural networks, using a multilayer perceptron. This neural network model is composed of 11 neurons in the input layer, corresponding to the independent variables, 35 neurons in the hidden layer, and 4 neurons in the output layer. Prediction with the neural networks produces a confusion matrix that yields quantitative estimation measures, notably an overall classification rate of 91.7% for predicting the risk of loss of vigor of the softwood stand and 89.7% for the hardwood stand. Evaluation of the neural networks' performance gives an overall MSE (mean square error) of 0.04 and an overall RMSE (root mean square error) of 0.20 for the hardwood stand. For the softwood stand, an overall MSE of 0.05 and an overall RMSE of 0.22 were obtained. To validate the results, the predicted level of risk of loss of vigor was compared with the reference risk of loss of vigor. The results give a coefficient of determination of 0.98 for the hardwood stand and 0.93 for the softwood stand.
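A minimal sketch of the kind of 11-35-4 multilayer perceptron described above, using scikit-learn on synthetic data; the feature values, class labels, and training settings are placeholders, not the study's data, so the reported accuracy only illustrates the API.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))            # 11 inputs: vegetation/texture indices, spectral bands
y = rng.integers(0, 4, size=200)          # 4 output classes: vigor-loss risk levels (synthetic)

# One hidden layer of 35 neurons, mirroring the 11-35-4 architecture in the abstract.
model = MLPClassifier(hidden_layer_sizes=(35,), max_iter=2000, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```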
Integrate Evaluation into the Planning Process.
ERIC Educational Resources Information Center
Camp, William
1985-01-01
In an attempt to correct for limitations in the Program Evaluation and Review Technique-Critical Path Method (PERT-CPM), the Graphical Evaluation and Review Technique (GERT) has been developed. This management tool allows for evaluation during the facilities' development process. Two figures and two references are provided. (DCS)
QUANTITATIVE DECISION TOOLS AND MANAGEMENT DEVELOPMENT PROGRAMS.
ERIC Educational Resources Information Center
Byars, Lloyd L.; Nunn, Geoffrey E.
This article outlined the current status of quantitative methods and operations research (OR), sketched the strengths of training efforts and isolated weaknesses, and formulated workable criteria for evaluating the success of operations research training programs. A survey of 105 companies revealed that PERT, inventory control theory and linear…
Vis-A-Plan /visualize a plan/ management technique provides performance-time scale
NASA Technical Reports Server (NTRS)
Ranck, N. H.
1967-01-01
Vis-A-Plan is a bar-charting technique for representing and evaluating project activities on a performance-time basis. This rectilinear method presents the logic diagram of a project as a series of horizontal time bars. It may be used supplementary to PERT or independently.
PERT/CPM and Supplementary Analytical Techniques. An Analysis of Aerospace Usage
1978-09-01
of a number of new…The rapid pace of technological progress in the last 75 years has spawned the development of a number of very interesting managerial tools, and one of…support of the overall effort. At one time, use of PERT was mandatory on all major DOD acquisition contracts. Since that time, the use of
Calvo-Lerma, Joaquim; Hulst, Jessie M; Asseiceira, Inês; Claes, Ine; Garriga, Maria; Colombo, Carla; Fornés, Victoria; Woodcock, Sandra; Martins, Tiago; Boon, Mieke; Ruperto, Mar; Walet, Sylvia; Speziali, Chiara; Witters, Peter; Masip, Etna; Barreto, Celeste; de Boeck, Kris; Ribes-Koninckx, Carmen
2017-07-01
The new European guidelines have established the most up-to-date recommendations on nutrition and pancreatic enzyme replacement therapy (PERT) in CF. In the context of the MyCyFAPP project - a European study in children with CF aimed at developing specific tools for improving self-management - the objective of the current study was to assess nutritional status, daily energy and macronutrient intake, and PERT dosing with reference to these new guidelines. Cross-sectional study in paediatric patients with CF from 6 European centres. SD-scores for weight-for-age (WFA), height-for-age (HFA) and body mass index-for-age (BMI) were obtained. Through a specific 4-day food and enzyme-dose record, energy and macronutrient intake and PERT use (LU/g lipids) were automatically calculated by the MyCyFAPP system. Comparisons were made using linear regression models. The lowest quartiles for BMI and HFA were between 0 and -1 SD in all the centres with no significant differences, and 33.5% of the patients had an SD-score <0 for all three parameters. The minimum energy intake recommendation was not reached by 40% of the children, and mean nutrient intake values were 14%, 51% and 34% of the total energy for protein, carbohydrates and lipids, respectively. When assessed per centre, reported PERT doses were in the recommended range in only 13.8% to 46.6% of the patients; from 5.6% up to 82.7% of children were above the recommended doses and 3.3% to 75% were below. Among the 6 centres, large variability and inconsistency with the new guidelines on nutrition and PERT use were found. Our findings document the lack of a general criterion for adjusting PERT and suggest the potential benefit of educational and self-management tools to ensure adherence to therapies, both for clinical staff and families. They will be taken into account when developing these new tools during the next stages of the MyCyFAPP Project. Copyright © 2017 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
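The dosing unit used in the guidelines and in this study, lipase units per gram of dietary fat, is a straightforward ratio. The sketch below (with invented meal values) shows how a food-and-enzyme record could compute it; it is not implied to be the MyCyFAPP system's actual code.

```python
def lipase_units_per_gram_fat(capsule_lipase_units, capsules_taken, meal_fat_grams):
    """PERT dose expressed as lipase units (LU) per gram of dietary lipid."""
    return capsule_lipase_units * capsules_taken / meal_fat_grams

# Hypothetical meal: two 10,000-LU capsules with 20 g of fat -> 1000 LU/g.
print(lipase_units_per_gram_fat(10_000, 2, 20.0))
```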
Simulating Mission Command for Planning and Analysis
2015-06-01
mission plan. Subject terms: mission planning, CPM, PERT, simulation, DES, Simkit, triangle distribution, critical path…Battalion Task Force; CO, Company; CPM, Critical Path Method; DES, Discrete Event Simulation; FA BAT, Field Artillery Battalion; FEL, Future Event List; FIST…management tools that can be utilized to find the critical path in military projects. These are the Critical Path Method (CPM) and the Program Evaluation and
NASA Astrophysics Data System (ADS)
Matuszak, Zbigniew; Bartosz, Michał; Barta, Dalibor
2016-09-01
The article characterizes two network methods: the critical path method (CPM) and the program evaluation and review technique (PERT). Using the example of an international furniture company's product, it exemplifies the use of these methods for transporting cargo (furniture elements). Moreover, the study presents diagrams for transporting the cargo from the individual component producers to the final destination, the showroom. The calculations were based on transporting the furniture elements in small commercial vehicles.
A PERT/CPM of the Computer Assisted Completion of The Ministry September Report. Research Report.
ERIC Educational Resources Information Center
Feeney, J. D.
Using two statistical analysis techniques (the Program Evaluation and Review Technique and the Critical Path Method), this study analyzed procedures for compiling the required yearly report of the Metropolitan Separate School Board (Catholic) of Toronto, Canada. The computer-assisted analysis organized the process of completing the report more…
Arundale, Amelia J H; Cummer, Kathleen; Capin, Jacob J; Zarzycki, Ryan; Snyder-Mackler, Lynn
2017-10-01
Athletes often are cleared to return to activities 6 months after anterior cruciate ligament (ACL) reconstruction; however, knee function measures continue to improve up to 2 years after surgery. Interventions beyond standard care may facilitate successful return to preinjury activities and improve functional outcomes. Perturbation training has been used in nonoperative ACL injury and preoperative ACL reconstruction rehabilitation, but has not been examined in postoperative ACL reconstruction rehabilitation, specifically return to sport rehabilitation. The purpose of this study was to determine whether there were differences at 1 and 2 years after ACL reconstruction between the male SAP (strengthening, agility, and secondary prevention) and SAP+PERT (SAP protocol with the addition of perturbation training) groups with respect to (1) quadriceps strength and single-legged hop limb symmetry; (2) patient-reported knee outcome scores; (3) the proportion who achieve self-reported normal knee function; and (4) the time from surgery to passing return to sport criteria. Forty men who had completed ACL reconstruction rehabilitation and met enrollment criteria (3-9 months after ACL reconstruction, > 80% quadriceps strength limb symmetry, no pain, full ROM, minimal effusion) were randomized into the SAP or SAP+PERT groups of the Anterior Cruciate Ligament-Specialised Post-Operative Return to Sports trial (ACL-SPORTS), a single-blind randomized clinical study of secondary prevention and return to sport. Quadriceps strength, single-legged hopping, the International Knee Documentation Committee (IKDC) 2000 subjective knee form, Knee Injury and Osteoarthritis Outcome Score (KOOS)-sports and recreation, and KOOS-quality-of-life subscales were collected 1 and 2 years after surgery by investigators blind to group. Athletes were categorized as having normal or abnormal knee function at each time point based on IKDC score, and the time until athletes passed strict return-to-sport criteria was also recorded. T-tests, chi square tests, and analyses of variance were used to identify differences between the treatment groups over time. There were no differences between groups for quadriceps symmetry (1 year: SAP = 101% ± 14%, SAP+PERT = 101% ± 14%; 2 years: SAP = 103% ± 11%, SAP+PERT = 98% ± 14%; mean differences between groups at 1 year: 0.4 [-9.0 to 9.8], 2 years = 4.5 [-4.3 to 13.1]; mean difference between 1 and 2 years: SAP = -1.0 [-8.6 to 6.6], SAP+PERT = 3.0 [-4.3 to 10.3], p = 0.45) or single-legged hop test limb symmetry. There were no clinically meaningful differences for any patient-reported outcome measures. There was no difference in the proportion of athletes in each group who achieved normal knee function at 1 year (SAP 14 of 19, SAP+PERT 18 of 20, odds ratio 0.31 [0.5-19.0]; p = 0.18); however, the SAP+PERT group had fewer athletes with normal knee function at 2 years (SAP 17 of 17, SAP+PERT 14 of 19, p = 0.03). There were no differences between groups in the time to pass return to sport criteria (SAP = 325 ± 199 days, SAP+PERT = 233 ± 77 days; mean difference 92 [-9 to 192], p = 0.09). This randomized trial found few differences between an ACL rehabilitation program consisting of strengthening, agility, and secondary prevention and one consisting of those elements as well as perturbation training. 
In the absence of clinically meaningful differences between groups in knee function and self-reported outcomes measures, the results indicate that perturbation training may not contribute additional benefit to the strengthening, agility, and secondary prevention base of the ACL-SPORTS training program. Level II, therapeutic study.
PERTS: A Prototyping Environment for Real-Time Systems
NASA Technical Reports Server (NTRS)
Liu, Jane W. S.; Lin, Kwei-Jay; Liu, C. L.
1993-01-01
PERTS is a prototyping environment for real-time systems. It is being built incrementally and will contain basic building blocks of operating systems for time-critical applications, tools, and performance models for the analysis, evaluation and measurement of real-time systems and a simulation/emulation environment. It is designed to support the use and evaluation of new design approaches, experimentations with alternative system building blocks, and the analysis and performance profiling of prototype real-time systems.
Radiation Diffusion:. AN Overview of Physical and Numerical Concepts
NASA Astrophysics Data System (ADS)
Graziani, Frank
2005-12-01
An overview of the physical and mathematical foundations of radiation transport is given. Emphasis is placed on how the diffusion approximation and its transport corrections arise. An overview of the numerical handling of radiation diffusion coupled to matter is also given. Discussions center on partial temperature and grey methods, with comments concerning fully implicit methods. In addition, finite difference, finite element and Pert representations of the div-grad operator are also discussed.
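For reference, the standard cell-centered finite-difference form of the div-grad (diffusion) operator referred to above, in one dimension with a harmonically averaged face diffusion coefficient, is (generic notation, not tied to any particular code):

```latex
% Cell-centered 1D discretization of d/dx ( D dE/dx ) on a uniform mesh of width h:
\left[\nabla\cdot\left(D\,\nabla E\right)\right]_i \approx
\frac{1}{h^2}\Bigl[ D_{i+1/2}\,(E_{i+1}-E_i) - D_{i-1/2}\,(E_i-E_{i-1}) \Bigr],
\qquad
D_{i\pm 1/2} = \frac{2\,D_i\,D_{i\pm 1}}{D_i + D_{i\pm 1}}.
```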
NASA Astrophysics Data System (ADS)
Dinh, Thanh Vu; Cabon, Béatrice; Daoud, Nahla; Chilo, Jean
1992-11-01
This paper presents a simple and efficient method for calculating the propagation parameters of a line (here, a microstrip line) and the magnetic fields it generates, by simulating an original equivalent circuit with an electrical nodal simulator (SPICE). The losses in a normal conducting line (due to DC resistance and to the skin effect) and also in a superconducting line can be investigated. This allows the electromagnetic solutions to be integrated into CAD software.
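The per-unit-length equivalent circuit alluded to here is the usual RLGC telegrapher model; the conductor-loss part can be summarized by the textbook relations below, where the sqrt(f) term stands for the skin-effect contribution. These are generic relations, not the authors' SPICE netlist.

```latex
% Per-unit-length series impedance and shunt admittance, skin-effect resistance,
% and the resulting propagation constant and characteristic impedance:
Z(\omega) = R(\omega) + j\omega L, \qquad Y(\omega) = G + j\omega C,
\qquad
R(f) \approx R_{\mathrm{dc}} + k_s\sqrt{f},
\qquad
\gamma = \sqrt{Z Y}, \quad Z_0 = \sqrt{Z/Y}.
```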
ERIC Educational Resources Information Center
Farnsworth, Clayton
The principal of the Southern Nevada Vocational-Technical Center at Las Vegas, Nevada, briefly outlines its development and function. The facility cost approximately 3 million dollars and was built on 390 acres of land purchased from the Federal government. The PERT method was used in planning. Instructional facilities, including those for auto…
Davey, James A; Chica, Roberto A
2014-05-01
Multistate computational protein design (MSD) with backbone ensembles approximating conformational flexibility can predict higher quality sequences than single-state design with a single fixed backbone. However, it is currently unclear what characteristics of backbone ensembles are required for the accurate prediction of protein sequence stability. In this study, we aimed to improve the accuracy of protein stability predictions made with MSD by using a variety of backbone ensembles to recapitulate the experimentally measured stability of 85 Streptococcal protein G domain β1 sequences. Ensembles tested here include an NMR ensemble as well as those generated by molecular dynamics (MD) simulations, by Backrub motions, and by PertMin, a new method that we developed involving the perturbation of atomic coordinates followed by energy minimization. MSD with the PertMin ensembles resulted in the most accurate predictions by providing the highest number of stable sequences in the top 25, and by correctly binning sequences as stable or unstable with the highest success rate (≈90%) and the lowest number of false positives. The performance of PertMin ensembles is due to the fact that their members closely resemble the input crystal structure and have low potential energy. Conversely, the NMR ensemble as well as those generated by MD simulations at 500 or 1000 K reduced prediction accuracy due to their low structural similarity to the crystal structure. The ensembles tested herein thus represent on- or off-target models of the native protein fold and could be used in future studies to design for desired properties other than stability. Copyright © 2013 Wiley Periodicals, Inc.
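A toy version of the PertMin idea described above (random perturbation of atomic coordinates followed by energy minimization) can be sketched as follows. The "energy" is a simple harmonic restraint toward the reference geometry and all parameters are invented, so this only illustrates the perturb-then-minimize loop, not the authors' actual protocol or force field.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
ref = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])  # toy "crystal structure"

def energy(flat_coords, k=100.0):
    """Harmonic restraint toward the reference geometry (stand-in for a force field)."""
    coords = flat_coords.reshape(ref.shape)
    return 0.5 * k * np.sum((coords - ref) ** 2)

ensemble = []
for _ in range(5):                          # build a small PertMin-style ensemble
    perturbed = ref + rng.normal(scale=0.1, size=ref.shape)         # perturb atomic coordinates
    result = minimize(energy, perturbed.ravel(), method="L-BFGS-B")  # then energy-minimize
    ensemble.append(result.x.reshape(ref.shape))
    # With this toy restraint every member relaxes back to ref; with a real force
    # field, members settle into distinct minima near the crystal structure.

print("ensemble members:", len(ensemble))
```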
Dispositif de mesure calorimétrique des pertes dans les condensateurs de puissance
NASA Astrophysics Data System (ADS)
Seguin, B.; Gosse, J. P.
1997-02-01
A calorimetric technique is used to measure the power losses in capacitors. The power dissipated in the component is measured as the difference between the heating powers delivered by a temperature regulation when the capacitor is energized and when it is not. The original feature of the apparatus lies in the use of isothermal calorimetry and in the measurement of an electrical power, in contradistinction to previous, unsatisfactory attempts based on the measurement of a temperature increase. The result is an improvement in the accuracy and sensitivity of the apparatus, which can be used to determine the equivalent series resistance of capacitors having very low losses. Measurements performed on a polypropylene capacitor under a sinusoidal applied voltage allowed the ohmic losses to be separated from the dielectric ones and their variations with temperature to be studied.
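The equivalent series resistance mentioned at the end of this abstract follows directly from the measured dissipated power and the applied RMS current; a minimal sketch with invented values:

```python
def equivalent_series_resistance(power_loss_W, current_rms_A):
    """ESR of a capacitor from calorimetrically measured losses: P = ESR * I_rms^2."""
    return power_loss_W / current_rms_A ** 2

# Hypothetical example: 40 mW dissipated at 5 A RMS -> 1.6 milliohm.
print(equivalent_series_resistance(0.040, 5.0))
```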
Less common etiologies of exocrine pancreatic insufficiency
Singh, Vikesh K; Haupt, Mark E; Geller, David E; Hall, Jerry A; Quintana Diez, Pedro M
2017-01-01
Exocrine pancreatic insufficiency (EPI), an important cause of maldigestion and malabsorption, results from primary pancreatic diseases or secondarily impaired exocrine pancreatic function. Besides cystic fibrosis and chronic pancreatitis, the most common etiologies of EPI, other causes of EPI include unresectable pancreatic cancer, metabolic diseases (diabetes); impaired hormonal stimulation of exocrine pancreatic secretion by cholecystokinin (CCK); celiac or inflammatory bowel disease (IBD) due to loss of intestinal brush border proteins; and gastrointestinal surgery (asynchrony between motor and secretory functions, impaired enteropancreatic feedback, and inadequate mixing of pancreatic secretions with food). This paper reviews such conditions that have less straightforward associations with EPI and examines the role of pancreatic enzyme replacement therapy (PERT). Relevant literature was identified by database searches. Most patients with inoperable pancreatic cancer develop EPI (66%-92%). EPI occurs in patients with type 1 (26%-57%) or type 2 diabetes (20%-36%) and is typically mild to moderate; by definition, all patients with type 3c (pancreatogenic) diabetes have EPI. EPI occurs in untreated celiac disease (4%-80%), but typically resolves on a gluten-free diet. EPI manifests in patients with IBD (14%-74%) and up to 100% of gastrointestinal surgery patients (47%-100%; dependent on surgical site). With the paucity of published studies on PERT use for these conditions, recommendations for or against PERT use remain ambiguous. The authors conclude that there is an urgent need to conduct robust clinical studies to understand the validity and nature of associations between EPI and medical conditions beyond those with proven mechanisms, and examine the potential role for PERT. PMID:29093615
System analysis for technology transfer readiness assessment of horticultural postharvest
NASA Astrophysics Data System (ADS)
Hayuningtyas, M.; Djatna, T.
2018-04-01
Postharvest technologies are becoming abundant, but only a few are applicable and useful to the wider community. This problem calls for a technology-transfer readiness assessment approach. The proposed system assesses the readiness of a technology on levels 1-9 and minimizes the time required for technology transfer at every level, so that the time required by a technology selected through this process can be kept to a minimum. The problem was solved by using the Relief method to rank postharvest technologies by weighting feasible criteria at each level, and PERT (Program Evaluation Review Technique) to build the schedule. The ranking of postharvest technologies in the field of horticulture shows that the selected technology is able to pass level 7. That technology can then be developed up to pilot scale, and the time required for technological readiness is minimized with PERT, with an optimistic time of 7.9 years. Readiness level 9 indicates that the technology has been tested under actual conditions, together with an estimated production price compared to competitors. This system can be used to determine the readiness of a technology innovation that is derived from agricultural raw materials and passes certain stages.
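As an illustration of the PERT scheduling step described above, the sketch below computes the classical three-point (beta-PERT) expected duration and variance for each readiness level and sums them along one path of levels. The level names and the optimistic/most-likely/pessimistic durations are hypothetical placeholders, not values from the study.

```python
import math

# Hypothetical (optimistic, most likely, pessimistic) durations in years
# for advancing through a few technology readiness levels.
levels = {
    "TRL 7 -> 8": (1.5, 2.0, 3.0),
    "TRL 8 -> 9": (2.0, 2.5, 4.0),
}

def pert_estimate(o, m, p):
    """Classical PERT expected time and variance for one activity."""
    expected = (o + 4 * m + p) / 6.0
    variance = ((p - o) / 6.0) ** 2
    return expected, variance

total_expected = 0.0
total_variance = 0.0
for name, (o, m, p) in levels.items():
    e, v = pert_estimate(o, m, p)
    total_expected += e
    total_variance += v   # independent activities on one path
    print(f"{name}: expected {e:.2f} y, std dev {math.sqrt(v):.2f} y")

print(f"Path total: {total_expected:.2f} y "
      f"(std dev {math.sqrt(total_variance):.2f} y)")
```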
Process Engineering with the Evolutionary Spiral Process Model. Version 01.00.06
1994-01-01
program. Process Definition and Modeling Guidebook (SPC-92041-CMC): provides methods for defining and documenting processes so they can be analyzed, modified...and Program Evaluation and Review Technique (PERT) support the activity of developing a project schedule. A variety of automated tools, such as...keep the organization from becoming disoriented during the improvement program (Curtis, Kellner, and Over 1992). Analyzing and documenting how
NASA Astrophysics Data System (ADS)
Lepage, Martin
1998-12-01
This thesis is presented to the Faculty of Medicine of the Université de Sherbrooke for the degree of Ph.D. in Radiobiology. It contains experimental results recorded with a high-resolution electron spectrometer. These results concern the formation of electron resonances in the condensed phase and the different channels for their decay. First, we present measurements of vibrational excitations of oxygen diluted in an argon matrix for incident electron energies of 1 to 20 eV. The results suggest that the lifetime of the oxygen resonances is modified by the density of electron states in the conduction band of argon. We also present electron energy-loss spectra of the tetrahydrofuran (THF) and acetone molecules. In both cases, the energy position of the losses associated with vibrational excitations is in excellent agreement with results found in the literature. The excitation functions of these modes reveal the presence of several new electron resonances. We compare the resonances of THF with those of the cyclopentane molecule in the gas phase. We propose a common origin for the resonances, which implies that they are not necessarily attributed to the excitation of the unpaired electrons of the oxygen in THF. We propose a new method based on electron energy-loss spectroscopy to detect the production of neutral fragments that remain inside a thin film condensed at low temperature. This method is based on the detection of the electronic excitations of the neutral product. We present results on the production of CO in a methanol film. The CO production rate as a function of incident electron energy is calibrated in terms of a total electron scattering cross-section. The results indicate a linear increase of the CO production rate with film thickness and with the electron dose incident on the film. These experimental data fit a simple model in which a single electron causes fragmentation of the molecule without reaction with neighbouring molecules. The mechanism proposed for the unimolecular fragmentation of methanol is the formation of resonances that decay into an excited electronic state. We suggest the combined action of a hole in a core orbital of methanol and of two electrons in the first unoccupied orbital to explain the complete dehydrogenation of methanol for electron energies between 8 and 18 eV. For higher energies, fragmentation via ionization of the molecule has already been suggested. The electronic-state detection method offers an alternative to the detection of vibrational excitations, since electron energy-loss spectra are congested in that energy region for polyatomic molecules.
Saito, Tomotaka; Hirano, Kenji; Isayama, Hiroyuki; Nakai, Yousuke; Saito, Kei; Umefune, Gyotane; Akiyama, Dai; Watanabe, Takeo; Takagi, Kaoru; Hamada, Tsuyoshi; Takahara, Naminatsu; Uchino, Rie; Mizuno, Suguru; Kogure, Hirofumi; Matsubara, Saburo; Yamamoto, Natsuyo; Tada, Minoru; Koike, Kazuhiko
2017-03-01
Although patients with pancreatic cancer (PC) are prone to exocrine pancreatic insufficiency, there is little evidence about pancreatic enzyme replacement therapy (PERT) in patients with PC, especially those receiving chemotherapy. This is a prospective consecutive observational study of PERT in patients with unresectable PC. We prospectively enrolled patients receiving chemotherapy for unresectable PC from April 2012 to February 2014 and prescribed oral pancrelipase at 48,000 lipase units per meal (pancrelipase group). The N-benzoyl-tyrosyl para-aminobenzoic acid test was performed at baseline. Patients receiving chemotherapy before April 2012 were retrospectively studied as a historical cohort. Data on the nutritional markers at baseline and 16 weeks were extracted, and serial changes, defined as the ratio of markers at 16 weeks/baseline, were compared between the 2 groups. A total of 91 patients (46 in the pancrelipase group and 45 in the historical cohort) were analyzed. The N-benzoyl-tyrosyl para-aminobenzoic acid test was low in 94% of the pancrelipase group. Serial change in the pancrelipase group versus the historical cohort was 1.01 versus 0.95 in body mass index (P < 0.001) and 1.03 versus 0.97 in serum albumin (P = 0.131). The rate of exocrine pancreatic insufficiency in unresectable PC was high, and PERT can potentially improve the nutritional status during chemotherapy.
Perl Tools for Automating Satellite Ground Systems
NASA Technical Reports Server (NTRS)
McLean, David; Haar, Therese; McDonald, James
2000-01-01
The freeware scripting language Perl offers many opportunities for automating satellite ground systems for new satellites as well as older, in situ systems. This paper describes a toolkit that has evolved from the experiences gained by using Perl to automate the ground system for the Compton Gamma Ray Observatory (CGRO) and to automate some of the elements in the Earth Observing System Data and Operations System (EDOS) ground system at Goddard Space Flight Center (GSFC). CGRO is an older ground system that was forced to automate because of funding cuts: three 8-hour shifts were cut back to one 8-hour shift, 7 days per week. EDOS supports a new mission called Terra, launched in December 1999, that requires distribution and tracking of mission-critical reports throughout the world. Both of these ground systems use Perl scripts to process data and display it on the Internet, as well as scripts to coordinate many of the other systems that make these ground systems work as a coherent whole. Another task, called the Automated Multimodal Trend Analysis System (AMTAS), is looking at technology for isolation and recovery of spacecraft problems. This effort has led to prototypes that seek to evaluate various tools and technology that meet at least some of the AMTAS goals. The tools, experiences, and lessons learned by implementing these systems are described here.
Lambeaux autofermants pour le traitement des brulures electriques du scalp par haut voltage
Hafidi, J.; El Mazouz, S.; El Mejatti, H.; Fejjal, N.; Gharib, N.E.; Abbassi, A.; Belmahi, A.M.
2011-01-01
Summary High-voltage electrical burns cause major tissue damage both immediately and in the days following the accident, owing to the considerable heat released by the Joule effect and to progressive microvascular thrombosis. Scalp defects secondary to these burns require flap coverage, given the destruction of the underlying periosteum and calvarium. From June 1997 to June 2008, 15 patients were treated for scalp defects secondary to high-voltage electrical burns, 8 to 11 cm in diameter and located in the tonsural (vertex) region. These patients were operated on within the first week after the accident. Medium-sized scalp defects secondary to these burns can be reliably covered per primam with multiple local axial flaps. We report the experience of the Plastic Surgery Department of the Ibn-Sina University Hospital, Rabat, Morocco, in the management and care of these burns. PMID:22262963
Sen. Whitehouse, Sheldon [D-RI
2012-03-29
Senate - 03/29/2012 Read twice and referred to the Committee on Health, Education, Labor, and Pensions.
NASA Technical Reports Server (NTRS)
Heneghan, C.
1999-01-01
The traditional centralized planning and scheduling of complex, fast-moving projects are value-added activities. However, centralized scheduling has some severe deficiencies that have plagued managers since the Polaris project, when PERT analysis was invented.
Spano, Frank; Donovan, Jeff C.
2015-01-01
Abstract Objective To provide family physicians with basic information for understanding the epidemiology, pathogenesis, histology, and clinical approach to the diagnosis of alopecia areata. Sources of information A PubMed search was conducted for relevant articles on the pathogenesis, diagnosis, and prognosis of alopecia areata. Main message Alopecia areata is a form of autoimmune hair loss with a lifetime prevalence of approximately 2%. A personal or family history of concomitant autoimmune disorders, such as vitiligo or thyroid disease, may be noted in a small subset of patients. The diagnosis can often be made clinically, based on the characteristic nonscarring, circular hair loss, accompanied by peripheral "exclamation mark" hairs in those in the early stages of the condition. Diagnosis of more complex cases or unusual presentations can be facilitated by biopsy and histological examination. The prognosis varies widely, and poor outcomes are associated with early age of onset, extensive loss, the ophiasis variant, nail changes, family history, or concomitant autoimmune disorders. Conclusion Alopecia areata is an autoimmune form of hair loss seen periodically in primary care. Family physicians are well placed to identify alopecia areata, determine the severity of disease, and establish the appropriate differential diagnosis. Furthermore, they are able to counsel their patients about the clinical course of the disease as well as the overall prognosis according to patient subtype.
Stabilité et pertes des conducteurs pour régime alternatif
NASA Astrophysics Data System (ADS)
Estop, P.; Lacaze, A.
1994-04-01
Recent progress on low-T_c superconductors usable at industrial frequencies makes it possible to envisage a new and innovative application: the current limiter. The design of such a device entails a cryogenic cost due to the energy dissipated in the superconducting wires. In order to reduce the losses sufficiently, fine wires with submicronic filaments must be used. To meet the need for infallible protection during a quench, a new type of conductor has been developed, tested, and validated. The main technical aspects concerning the losses of low-T_c superconductors are reviewed in this paper. Les récents progrès obtenus sur les conducteurs supraconducteurs basse température critique utilisables aux fréquences industrielles permettent d'entrevoir une application nouvelle et innovante comme le limiteur de courant. De la conception d'un tel système résulte un coût cryogénique lié à l'énergie dissipée dans les brins supraconducteurs. La réduction des pertes nécessite l'utilisation de brins suffisamment fins et disposant de filaments submicromiques. Concernant le besoin d'obtenir une protection infaillible au moment de la transition, un conducteur d'un concept nouveau a été élaboré, testé et validé. L'article passe en revue les principaux aspects techniques liés à la stabilité et aux pertes dans les conducteurs supraconducteurs basse température critique.
Inventory-transportation integrated optimization for maintenance spare parts of high-speed trains
Wang, Jiaxi; Wang, Huasheng; Wang, Zhongkai; Li, Jian; Lin, Ruixi; Xiao, Jie; Wu, Jianping
2017-01-01
This paper presents a 0–1 programming model aimed at obtaining the optimal inventory policy and transportation mode for maintenance spare parts of high-speed trains. To obtain the model parameters for occasionally-replaced spare parts, a demand estimation method based on the maintenance strategies of China's high-speed railway system is proposed. In addition, we analyse the shortage time using PERT, and then calculate the unit-time shortage cost from the viewpoint of train operation revenue. Finally, a real-world case study from Shanghai Depot is conducted to demonstrate our method. Computational results offer effective and efficient decision support for inventory managers. PMID:28472097
Briefing Number 3 to Space Station Operations Task Force Oversight Committee
NASA Technical Reports Server (NTRS)
Lyman, Peter; Shelley, Carl
1987-01-01
This document reviews certain issues in relation to the operation of the Space Station Freedom. The document is in outline format and includes organizational hierarchy charts, PERT charts, and decision charts.
Hunting Spinning Asteroids with the Faulkes Telescopes
NASA Astrophysics Data System (ADS)
Miles, Richard
2008-08-01
The Faulkes telescopes are proving a dab hand at allowing schools and amateurs to do real science. The author discusses the latest Faulkes research project, and his record-breaking discovery that was part of it.
Slipping during side-step cutting: anticipatory effects and familiarization.
Oliveira, Anderson Souza Castelo; Silva, Priscila Brito; Lund, Morten Enemark; Farina, Dario; Kersting, Uwe Gustav
2014-04-01
The aim of the present study was to verify whether the expectation of perturbations while performing side-step cutting manoeuvres influences lower limb EMG activity, heel kinematics and ground reaction forces. Eighteen healthy men performed two sets of 90° side-step cutting manoeuvres. In the first set, 10 unperturbed trials (Base) were performed while stepping over a moveable force platform. In the second set, subjects were informed about the random possibility of perturbations to balance throughout 32 trials, of which eight were perturbed (Pert, 10 cm translation triggered at initial contact), and the others were "catch" trials (Catch). Center of mass velocity (CoMVEL), heel acceleration (HAC), ground reaction forces (GRF) and surface electromyography (EMG) from lower limb and trunk muscles were recorded for each trial. Surface EMG was analyzed prior to initial contact (PRE), during load acceptance (LA) and propulsion (PRP) periods of the stance phase. In addition, hamstrings-quadriceps co-contraction ratios (CCR) were calculated for these time-windows. The results showed no changes in CoMVEL, HAC, peak GRF and surface EMG PRE among conditions. However, during LA, there were increases in tibialis anterior EMG (30-50%) concomitant with reduced EMG for quadriceps muscles, gluteus and rectus abdominis for Catch and Pert conditions (15-40%). In addition, quadriceps EMG was still reduced during PRP (p<.05). Consequently, CCR was greater for Catch and Pert in comparison to Base (p<.05). These results suggest that muscle activity is modulated in anticipation of potential instability of the lower limb joints, ensuring safety to complete the task. Copyright © 2014. Published by Elsevier B.V.
Les inconvénients de perdre du poids
Bosomworth, N. John
2012-01-01
Abstract Objective To explore the reasons why long-term weight loss fails most of the time and to evaluate the consequences of various weight trajectories, including stability, loss, and gain. Sources of information Studies evaluating weight parameters in the population are mostly observational. Level I evidence has been published evaluating the influence of weight interventions on mortality and quality of life. Main message Only a small percentage of people who wish to lose weight succeed in doing so durably. Mortality is lowest in people in the high-normal weight and overweight categories. The safest weight trajectory is weight stability with optimization of physical and metabolic fitness. Mortality has been shown to be lower in people with obesity-related comorbidities if they lose weight. Health-related quality of life is also better in obese people who lose weight. However, weight loss in an otherwise healthy obese person is associated with increased mortality. Conclusion Weight loss is advisable only in people with obesity-related comorbidities. Healthy obese people who want to lose weight should be informed that there may be risks in doing so. A strategy that results in a stable body mass index with optimized physical and metabolic fitness, regardless of weight, is the safest weight-related intervention option.
HRP Integrated Research Plan Analysis
NASA Technical Reports Server (NTRS)
Elliott, Todd
2009-01-01
The charts, which constitute the totality of this document, present tasks, their durations, their start and finish dates, and subtasks. Also presented are PERT charts that display the beginning, external milestones, and end points for the tasks and subtasks.
Problems of Global Networks of Gravitational Detectors
NASA Astrophysics Data System (ADS)
Kuchik, E. K.; Rudenko, V. N.
We describe the network of gravitational wave detectors which now exists in the world: Stanford-Louisiana-Perth-Geneva-Moscow. A computer simulation of a gravitational wave detection is performed. Proposals for the creation of a global observational gravitational wave service are made.
1994-06-23
were studied as-cast and after annealing for four days at 1000 °C...measured in the present work and calculated for the...H. Eschrig, MGP Research Group "Electron Systems," Technical University Dresden, D-01062 Dresden, Germany: magnetic and specific-heat studies of U2T2X...University, Kazan 420 008, Russia: the phase transition in the continual random n-component Potts model is studied by the renormalization group method.
Data Management & Decision Making. Technical Report No. 14.
ERIC Educational Resources Information Center
Speedie, Stuart M.; Sanders, Susan
"Data Management and Decision Making" is a set of instructional materials designed to teach practicing and potential educational administrators about the uses of operations research in educational administration. It consists of five units--"Operations Research in Education,""PERT/CPM: A Planning and Analysis…
Gait mechanics and tibiofemoral loading in men of the ACL-SPORTS randomized control trial.
Capin, Jacob J; Khandha, Ashutosh; Zarzycki, Ryan; Arundale, Amelia J H; Ziegler, Melissa L; Manal, Kurt; Buchanan, Thomas S; Snyder-Mackler, Lynn
2018-03-25
The risk for post-traumatic osteoarthritis is elevated after anterior cruciate ligament reconstruction (ACLR), and may be especially high among individuals with aberrant walking mechanics, such as medial tibiofemoral joint underloading 6 months postoperatively. Rehabilitation training programs have been proposed as one strategy to address aberrant gait mechanics. We developed the anterior cruciate ligament specialized post-operative return-to-sports (ACL-SPORTS) randomized control trial to test the effect of 10 post-operative training sessions consisting of strength, agility, plyometric, and secondary prevention exercises (SAPP) or SAPP plus perturbation (SAPP + PERT) training on gait mechanics after ACLR. A total of 40 male athletes (age 23 ± 7 years) after primary ACLR were randomized to SAPP or SAPP + PERT training and tested at three distinct, post-operative time points: 1) after impairment resolution (Pre-training); 2) following 10 training sessions (Post-training); and 3) 2 years after ACLR. Knee kinematic and kinetic variables as well as muscle and joint contact forces were calculated via inverse dynamics and a validated electromyography-informed musculoskeletal model. There were no significant improvements from Pre-training to Post-training in either intervention group. Smaller peak knee flexion angles, extension moments, extensor muscle forces, medial compartment contact forces, and tibiofemoral contact forces were present across group and time; however, the magnitude of interlimb differences was generally smaller and likely not meaningful 2 years postoperatively. Neither SAPP nor SAPP + PERT training appears effective at altering gait mechanics in men in the short-term; however, meaningful gait asymmetries mostly resolved between post-training and 2 years after ACLR regardless of intervention group. © 2018 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.
PERT - What Possible Value for Mobilization.
1982-04-16
NASA Astrophysics Data System (ADS)
Pickard, William F.
2004-10-01
The classical PERT inverse statistics problem requires estimation of the mean, $\bar{m}$, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, $\bar{m}$ and s are computed using exact formulae.
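For context, the classical PERT approximations referred to above, and the exact beta-distribution moments they stand in for, can be written as follows. This is a standard summary, not a reproduction of the paper's formulae, assuming a beta distribution supported on [a, b] with shape parameters α, β > 1:

```latex
% Classical PERT approximations from the three estimates a, m, b:
\[ \bar{m} \approx \frac{a + 4m + b}{6}, \qquad s \approx \frac{b - a}{6} \]
% Exact mode, mean and variance of a beta distribution on [a, b]:
\[ m = a + (b - a)\,\frac{\alpha - 1}{\alpha + \beta - 2}, \qquad
   \bar{m} = a + (b - a)\,\frac{\alpha}{\alpha + \beta}, \qquad
   s^{2} = (b - a)^{2}\,\frac{\alpha \beta}{(\alpha + \beta)^{2}(\alpha + \beta + 1)} \]
```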
Oliveira, Rita de Cássia Silva de; Brito, Marcus Vinicius Henriques; Ribeiro, Rubens Fernando Gonçalves; Oliveira, Leonam Oliver Durval; Monteiro, Andrew Moraes; Brandão, Fernando Mateus Viegas; Cavalcante, Lainy Carollyne da Costa; Gouveia, Eduardo Henrique Herbster; Henriques, Higor Yuri Bezerra
2017-03-01
To evaluate the effects of tramadol hydrochloride associated with remote ischemic perconditioning on oxidative stress. Twenty-five male Wistar rats underwent right nephrectomy and were distributed into five groups: Sham group (S); Ischemia/Reperfusion group (I/R) with 30 minutes of renal ischemia; Remote ischemic perconditioning group (Per) with three cycles of 10 minutes of I/R performed during kidney ischemia; Tramadol group (T) treated with tramadol hydrochloride (40 mg/kg); and remote ischemic perconditioning + Tramadol group (Per+T) with both treatments. Oxidative stress was assessed after 24 hours of reperfusion. Statistical differences were observed in MDA levels between the I/R group and all other groups (p<0.01); in addition, there was a difference between the Tramadol group and the Sham, Per and Per+T groups (p<0.05), both in plasma and renal tissue. Remote ischemic perconditioning was more effective in reducing renal ischemia-reperfusion injury than administration of tramadol or the association of both treatments.
The Computer in Educational Decision Making. An Introduction and Guide for School Administrators.
ERIC Educational Resources Information Center
Sanders, Susan; And Others
This text provides educational administrators with a working knowledge of the problem-solving techniques of PERT (Program Evaluation and Review Technique), Linear Programming, Queueing Theory, and Simulation. The text includes an introduction to decision-making and operations research, four chapters consisting of in-depth explanations of each…
Systems Management for Force Modernization Equipment.
1982-04-15
PERT Planning for Physical Educational Facilities.
ERIC Educational Resources Information Center
Moriarty, R. J.
1973-01-01
Because of the high degree of interest in education and physical education in Canada, there has been a phenomenal growth in physical education facilities. Physical educators must become facility specialists in order to contribute to the planning, procurement, and utilization of the new complexes that are being developed. Among the most difficult…
GREMEX- GODDARD RESEARCH AND ENGINEERING MANAGEMENT EXERCISE SIMULATION SYSTEM
NASA Technical Reports Server (NTRS)
Vaccaro, M. J.
1994-01-01
GREMEX is a man-machine management simulation game of a research and development project. It can be used to depict a project from just after the development of the project plan through the final construction phase. The GREMEX computer programs are basically a program evaluation and review technique (PERT) reporting system. In the usual PERT program, the operator inputs each month the amount of work performed on each activity and the computer does the bookkeeping to determine the expected completion date of the project. GREMEX automatically assumes that all activities due to be worked in the current month will be worked. GREMEX predicts new durations (and costs) each month based on management actions taken by the players and the contractor's abilities. Each activity is assigned the usual cost and duration estimates but must also be assigned three parameters that relate to the probability that the time estimate is correct, the probability that the cost estimate is correct, and the probability of technical success. Management actions usually can be expected to change these probabilities. For example, use of overtime or double shifts in research and development work will decrease duration and increase cost by known proportions and will also decrease the probability of technical success due to an increase in the likelihood of accidents or mistakes. This re-estimation of future events and assignment of probability factors gives life to the model. GREMEX is not a production job for project management. GREMEX is a game that can be used to train management personnel in the administration of research and development type projects. GREMEX poses no 'best way' to manage a project. The emphasis of GREMEX is to expose participants to many of the factors involved in decision making when managing a project in a government research and development environment. A management team can win the game by surpassing cost, schedule, and technical performance goals established when the simulation began. The serious management experimenter can use GREMEX to explore the results of management methods they could not risk in real life. GREMEX can operate with any research and development type project with up to 15 subcontractors and produces reports simulating monthly or quarterly updates of the project PERT network. Included with the program is a data deck for simulation of a fictitious spacecraft project. Instructions for substituting other projects are also included. GREMEX is written in FORTRAN IV for execution in the batch mode and has been implemented on an IBM 360 with a central memory requirement of approximately 350K (decimal) of 8 bit bytes. The GREMEX system was developed in 1973.
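To make the mechanics described above concrete, the sketch below shows one way a management action such as authorizing overtime could re-estimate an activity's duration, cost, and probability of technical success. The activity fields and the scaling factors are illustrative assumptions, not values or code taken from GREMEX.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Activity:
    name: str
    duration_months: float   # current estimated duration
    cost_k: float            # current estimated cost (thousands)
    p_schedule: float        # probability the time estimate is correct
    p_cost: float            # probability the cost estimate is correct
    p_technical: float       # probability of technical success

def apply_overtime(act: Activity) -> Activity:
    """Hypothetical re-estimation rule for an 'overtime' management action:
    shorter duration, higher cost, slightly lower technical success."""
    return replace(
        act,
        duration_months=act.duration_months * 0.85,  # assumed 15% faster
        cost_k=act.cost_k * 1.20,                    # assumed 20% costlier
        p_technical=act.p_technical * 0.95,          # assumed 5% riskier
    )

spacecraft_test = Activity("thermal-vacuum test", 6.0, 450.0, 0.7, 0.8, 0.9)
print(apply_overtime(spacecraft_test))
```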
The US EPA ToxCast program is using in vitro high-throughput screening assays to profile the bioactivity of environmental chemicals, with the ultimate goal of predicting in vivo toxicity. We hypothesize that in modeling toxicity it will be more constructive to understand the pert...
Decisions That Affect Outcomes in the Distant Future.
1979-12-01
cost-benefit analysis is a special case of the results derived in this research. In Chapter 3, the assumptions underlying willingness to pay are...(p°, m°, z°). This information would generally come from experts or econometric studies. 5.5 Consistency Conditions: There are two ways in which we
Les supraconducteurs en courant alternatif
NASA Astrophysics Data System (ADS)
Lacaze, A.; Laumond, Y.
1991-02-01
Since 1983, when the very first AC wire became available, the understanding of the electromagnetic phenomena governing the stability and losses of multifilamentary superconductors in AC use has improved considerably. Improvements in the manufacturing process have made it possible to manufacture, on an industrial scale, wires comprising up to one million filaments of 140 nm in diameter, with AC loss and stability performance unequalled to date. Les premiers brins supraconducteurs utilisables en courants alternatifs sont apparus en 1983. Depuis, des progrès importants ont été réalisés sur le plan de la compréhension des phénomènes électromagnétiques commandant les pertes et la stabilité dans des brins multifilamentaires à filaments ultrafins. L'amélioration des performances et des procédés de fabrication nous permet aujourd'hui de présenter des brins, fabriqués à l'échelle industrielle, comprenant jusqu'à près d'un million de filaments de 140 nm de diamètre, avec des niveaux de pertes et de stabilité inégalés à ce jour.
Diagnosis and management of pancreatic exocrine insufficiency.
Nikfarjam, Mehrdad; Wilson, Jeremy S; Smith, Ross C
2017-08-21
In 2015, the Australasian Pancreatic Club (APC) published the Australasian guidelines for the management of pancreatic exocrine insufficiency (http://pancreas.org.au/2016/01/pancreatic-exocrine-insufficiency-guidelines). Pancreatic exocrine insufficiency (PEI) occurs when normal digestion cannot be sustained due to insufficient pancreatic digestive enzyme activity. This may be related to a breakdown, at any point, in the pancreatic digestive chain: pancreatic stimulation; synthesis, release or transportation of pancreatic enzymes; or synchronisation of secretions to mix with ingested food. Main recommendations: The guidelines provide advice on diagnosis and management of PEI, noting the following: A high prevalence of PEI is seen in certain diseases and conditions, such as cystic fibrosis, acute and chronic pancreatitis, pancreatic cancer and pancreatic surgery. The main symptoms of PEI are steatorrhoea or diarrhoea, abdominal pain, bloating and weight loss. These symptoms are non-specific and often go undetected and untreated. PEI diagnosis is predominantly based on clinical findings and the presence of underlying disease. The likelihood of PEI in suspected patients has been categorised into three groups: definite, possible and unlikely. If left untreated, PEI may lead to complications related to fat malabsorption and malnutrition, and have an impact on quality of life. Pancreatic enzyme replacement therapy (PERT) remains the mainstay of PEI treatment with the recommended adult initial enzyme dose being 25 000-40 000 units of lipase per meal, titrating up to a maximum of 75 000-80 000 units of lipase per meal. Adjunct acid-suppressing therapy may be useful when patients still experience symptoms of PEI on high dose PERT. Nutritional management by an experienced dietitian is essential. Changes in management as a result of these guidelines: These are the first guidelines to classify PEI as being definite, possible or unlikely, and provide a diagnostic algorithm to facilitate the early diagnosis of PEI and appropriate use of PERT.
McGarry, Meghan E; Neuhaus, John M; Nielson, Dennis W; Burchard, Esteban; Ly, Ngoc P
2017-12-01
Hispanic patients with cystic fibrosis (CF) have decreased life expectancy compared to non-Hispanic white patients. Pulmonary function is a main predictor of life expectancy in CF. Ethnic differences in pulmonary function in CF have been understudied. The objective was to compare longitudinal pulmonary function between Hispanic and non-Hispanic white patients with CF. This cohort study of 15,018 patients aged 6-25 years in the CF Foundation Patient Registry from 2008 to 2013 compared FEV1 percent predicted and longitudinal change in FEV1 percent predicted between Hispanic and non-Hispanic white patients. We used linear mixed effects models with patient-specific slopes and intercepts, adjusting for 14 demographic and clinical variables. We did sub-analyses by CFTR class, F508del copies, and PERT use. Hispanic patients had lower FEV1 percent predicted (79.9%) compared with non-Hispanic white patients (85.6%) (-5.8%, 95%CI -6.7% to -4.8%, P < 0.001); however, there was no difference in FEV1 decline over time. Patients on PERT had a larger difference between Hispanic and non-Hispanic white patients in FEV1 percent predicted than patients not on PERT (-6.0% vs -4.1%, P = 0.02). The ethnic difference in FEV1 percent predicted was not statistically significant between CFTR classes (Class I-III: -6.1%, Class IV-V: -5.9%, Unclassified: -5.7%, P > 0.05) or between F508del copies (None: -7.6%, Heterozygotes: -5.6%, Homozygotes: -5.3%, P > 0.05). Disparities in pulmonary function exist in Hispanic patients with CF early in life and then persist without improving or worsening over time. It is valuable to investigate the factors contributing to pulmonary function in Hispanic patients with CF. © 2017 Wiley Periodicals, Inc.
New global electron density observations from GPS-RO in the D- and E-Region ionosphere
NASA Astrophysics Data System (ADS)
Wu, Dong L.
2018-06-01
A novel retrieval technique is developed for electron density (Ne) in the D- and E-region (80-120 km) using the high-quality 50-Hz GPS radio occultation (GPS-RO) phase measurements. The new algorithm assumes a slow, linear variation in the F-region background when the GPS-RO passes through the D- and E-region, and extracts the Ne profiles at 80-130 km from the phase advance signal caused by Ne. Unlike the conventional Abel function, the new approach produces a sharp Ne weighting function in the lower ionosphere, and the Ne retrievals are in good agreement with the IRI (International Reference Ionosphere) model in terms of monthly maps, zonal means and diurnal variations. The daytime GPS-RO Ne profiles can be well characterized by the α-Chapman function of three parameters (NmE, hmE and H), showing that the bottom of the E-region is deepening and sharpening towards the summer pole. At high latitudes the monthly GPS-RO Ne maps at 80-120 km reveal clear enhancement in the auroral zones, more prominent at night, as a result of energetic electron precipitation (EEP) from the outer radiation belt. The D-/E-region auroral Ne is strongly correlated with Kp on a daily basis. The new Ne data allow further comprehensive analyses of the sporadic E (Es) phenomena in connection with the background Ne in the E-region. The layered (2-10 km) and fluctuating (<2 km) Es components, namely Ne_Layer and Ne_Pert, are extracted with respect to the background Ne_Region on a profile-by-profile basis. The Ne_Layer component has a strong but highly-refined peak at ∼105 km, with an amplitude smaller than Ne_Region approximately by an order of magnitude. The Ne_Pert component, which was studied extensively in the past, is ∼2 orders of magnitude weaker than Ne_Layer. Both Ne_Layer and Ne_Pert are subject to significant diurnal and semidiurnal variations, showing downward progression with local time in amplitude. The 11-year solar cycle dominates the Ne interannual variations, showing larger Ne_Region and Ne_Layer but smaller Ne_Pert amplitudes in the solar maximum years. Enhanced Ne profiles are often observed in the polar winter, showing good correlation with solar proton events (SPEs) and geomagnetic activity. The new methodology offers great potential for retrieving low Ne in the D-region, where radio propagation and communication blackouts can occur due to enhanced ionization. For space weather applications it is recommended for GPS-RO operations to raise the top of high-rate data acquisition to ∼140 km in the future.
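The α-Chapman parameterization mentioned above can be written compactly in its standard form; the short sketch below evaluates it for an illustrative set of E-region parameters (the numerical values are placeholders, not retrieval results from this study).

```python
import numpy as np

def chapman_alpha(h_km, NmE, hmE_km, H_km):
    """Standard alpha-Chapman layer: Ne(h) = NmE * exp(0.5*(1 - z - exp(-z))),
    with reduced height z = (h - hmE) / H."""
    z = (h_km - hmE_km) / H_km
    return NmE * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# Illustrative daytime E-region parameters (placeholders).
heights = np.arange(80.0, 131.0, 5.0)                            # km
ne = chapman_alpha(heights, NmE=1.0e11, hmE_km=105.0, H_km=8.0)  # el/m^3

for h, n in zip(heights, ne):
    print(f"{h:6.1f} km  Ne = {n:.3e} m^-3")
```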
Modélisation des phénomènes électromagnétiques dans les matériaux supraconducteurs
NASA Astrophysics Data System (ADS)
Maslouh, M.; Bouillault, F.
1998-03-01
This paper describes a numerical method to determine the losses in a bulk superconductor subjected to a transverse magnetic field or carrying a transport current. The model, based on the Bean critical state, brings out the hysteretic character of the phenomena. Cet article présente une méthode de calcul permettant la détermination des pertes dans un matériau massif supraconducteur soumis à un champ magnétique transversal ou parcouru par un courant de transport. Le modèle, basé sur celui de l'état critique de Bean, met en évidence le caractère hystérétique des phénomènes.
Applying the TOC Project Management to Operation and Maintenance Scheduling of a Research Vessel
NASA Astrophysics Data System (ADS)
Manti, M. Firdausi; Fujimoto, Hideo; Chen, Lian-Yi
Marine research vessels and their systems are major assets in marine resources development. Since the running costs of such a ship are very high, it is necessary to reduce the total cost through efficient scheduling of operation and maintenance. To shorten the project period and make it efficient, we applied the TOC (Theory of Constraints) project management method, an approach developed by Dr. Eli Goldratt. It challenges traditional approaches to project management and is expected to become the most important improvement in project management since the development of PERT and critical path methodologies. As a case study, we present a marine geology research project for operations, together with repairing-dock projects for the maintenance of vessels.
Seismic waves in 3-D: from mantle asymmetries to reliable seismic hazard assessment
NASA Astrophysics Data System (ADS)
Panza, Giuliano F.; Romanelli, Fabio
2014-10-01
A global cross-section of the Earth parallel to the tectonic equator (TE) path, the great circle representing the equator of net lithosphere rotation, shows a difference in shear wave velocities between the western and eastern flanks of the three major oceanic rift basins. The low-velocity layer in the upper asthenosphere, at a depth range of 120 to 200 km, is assumed to represent the decoupling between the lithosphere and the underlying mantle. Along the TE-perturbed (TE-pert) path, a ubiquitous LVZ, about 1,000-km-wide and 100-km-thick, occurs in the asthenosphere. The existence of the TE-pert is a necessary prerequisite for the existence of a continuous global flow within the Earth. Ground-shaking scenarios were constructed using a scenario-based method for seismic hazard analysis (NDSHA), using realistic and duly validated synthetic time series, and generating a data bank of several thousands of seismograms that account for source, propagation, and site effects. Accordingly, with basic self-organized criticality concepts, NDSHA permits the integration of available information provided by the most updated seismological, geological, geophysical, and geotechnical databases for the site of interest, as well as advanced physical modeling techniques, to provide a reliable and robust background for the development of a design basis for cultural heritage and civil infrastructures. Estimates of seismic hazard obtained using the NDSHA and standard probabilistic approaches are compared for the Italian territory, and a case-study is discussed. In order to enable a reliable estimation of the ground motion response to an earthquake, three-dimensional velocity models have to be considered, resulting in a new, very efficient, analytical procedure for computing the broadband seismic wave-field in a 3-D anelastic Earth model.
After the Fall: The Use of Surplus Capacity in an Academic Library Automation System.
ERIC Educational Resources Information Center
Wright, A. J.
The possible uses of excess central processing unit capacity in an integrated academic library automation system discussed in this draft proposal include (1) in-house services such as word processing, electronic mail, management decision support using PERT/CPM techniques, and control of physical plant operation; (2) public services such as the…
THE EDUCATIONAL INSTITUTION AS A SYSTEM--A PROPOSED GENERALIZED PROCEDURE FOR ANALYSIS.
ERIC Educational Resources Information Center
REISMAN, ARNOLD; TAFT, MARTIN I.
A unified approach to the analysis and synthesis of the functions and operations in educational institutions is presented. Systems analysis techniques used in other areas, such as CRAFT, PERT, CERBS, and operations research, are suggested as potentially adaptable for use in higher education. The major objective of a school is to allocate available…
Assessment of the Florida College and Career Readiness Initiative: Year 2 Report
ERIC Educational Resources Information Center
Mokher, Christine; Jacobson, Lou
2014-01-01
The Florida College and Career Readiness Initiative is a statewide policy that mandates college placement testing of 11th-graders who meet high school graduation criteria but are unlikely to meet college readiness criteria. Students who score below college-ready on the Postsecondary Education Readiness Test (PERT) are required to take math and…
ERIC Educational Resources Information Center
Hanson, Janet
2017-01-01
This study explored the relationship between school level and the psychosocial construct of an academic mindset, operationalized on the Likert-style Project for Education Research That Scales (PERTS) instrument, which is widely used in testing academic mindset interventions at the classroom level. Analyses were conducted using existing school…
Statistical PERT: An Improved Subnetwork Analysis Procedure
1975-11-01
Persistance de la veine cave supérieure gauche: à propos d'un cas
Abidi, Kamel; Jellouli, Manel; Hammi, Yousra; Gargah, Tahar
2015-01-01
Persistent left superior vena cava (PLSVC) is a rare and benign congenital malformation. It is often asymptomatic, and in the majority of cases its discovery is incidental. We report the case of a child in whom this anomaly was discovered following a loss of consciousness. S.M., aged 9 years, with no notable medical history, was admitted for a brief loss of consciousness without abnormal tonic or clonic movements. The physical examination on admission was normal. The electrocardiogram showed no abnormalities. The chest X-ray showed a double contour of the left middle arch. Holter rhythm monitoring showed signs of vagal hyperreactivity. Transthoracic echocardiography (TTE) revealed a marked dilatation of the coronary sinus and ruled out structural heart disease. Cardiac MR angiography confirmed the diagnosis of PLSVC. The thoracic aorta was otherwise normal in its various segments. PMID:26664537
Adaptive Modeling and Real-Time Simulation
1984-01-01
"Artificial Intelligence", Vol. 13, pp. 27-39 (1980). Describes circumscription, which is just the assumption that everything that is known to have a particular...Artificial Intelligence, Truth Maintenance, Planning, Resolution, Modeling, World Models...represents a marriage of (1) the procedural-network planning technology developed in artificial intelligence with (2) the PERT/CPM technology developed in
2005-07-13
du froid: l’électronique refroidie, la supraconductivité et la cryogénie. Les supraconducteurs à haute température (HTS) permettent la conception de...récepteurs à très faible bruit, des filtres compacts et sans pertes. Les supraconducteurs à basse température (LTS) qui sont utilisés dans les circuits
On the Implementation of Iterative Detection in Real-World MIMO Wireless Systems
2003-12-01
multiple-input multiple-output (MIMO) systems allow a remarkable exploitation of the spectrum compared with traditional single-antenna systems...known pilot symbol vectors causes a negligible loss of throughput compared with the hypothetical case of perfect channel knowledge...useful design guidelines for iterative systems, it does not provide any fundamental understanding as to how the design of the detector can improve the
Additional of polyethylene glycol on the preparation of LaPO4:Eu3+ phosphor
NASA Astrophysics Data System (ADS)
Panatarani, Camellia; Joni, I. Made
2013-09-01
A solution-phase method was used to synthesize nanocrystalline LaPO4:Eu3+. Polyethylene glycol (PEG) of varying molecular weight (MW) was added to allow an exothermic reaction and obtain highly crystalline LaPO4:Eu3+. The X-ray diffraction pattern of the as-prepared LaPO4 was obtained using an X'pert PANalytical diffractometer with CuKα radiation (λ = 1.5406 Å), and the photoluminescence spectra were obtained using a Perkin Elmer LS55 fluorescence spectrometer. The addition of PEG of various MW to the LaPO4:Eu3+ precursor solution affected the crystal structure and luminescent properties. Higher-MW PEG depressed the luminescence spectra. The emission originating from the 5D0-7F4 transition vanished upon addition of PEG with MW of 500,000 and 2,000,000.
A Technical History of the SEI
2017-01-01
service-oriented architecture concepts by leading a team of technical experts from several Air Force financial management programs of record in...the application of computing, that trend reversed dramatically in the 1970s for a variety of reasons, including the difficulty the DoD was ex...implement it in programming languages other than Ada. The lessons learned from the testbed experiments were incorporated into a comprehensive guide
Psychological Analyses of Courageous Performance in Military Personnel
1990-08-01
Hartigan, Erin H.; Axe, Michael J.; Snyder-Mackler, Lynn
2013-01-01
STUDY DESIGN Randomized clinical trial. OBJECTIVES Determine effective interventions for improving readiness to return to sports post-operatively in patients with complete, unilateral, anterior cruciate ligament (ACL) rupture who do not compensate well after the injury (noncopers). Specifically, we compared the effects of 2 preoperative interventions on quadriceps strength and functional outcomes. BACKGROUND The percentage of athletes who return to sports after ACL reconstruction varies considerably, possibly due to differential responses after acute ACL rupture and different management. Prognostic data for noncopers following ACL reconstruction is absent in the literature. METHODS Forty noncopers were randomly assigned to receive either progressive quadriceps strength-training exercises (STR group) or perturbation training in conjunction with strength-training exercises (PERT group) for 10 preoperative rehabilitation sessions. Postoperative rehabilitation was similar between groups. Data on quadriceps strength indices [(involved limb/uninvolved limb force) ×100], 4 hop score indices, and 2 self-report questionnaires were collected preoperatively and 3, 6, and 12 months postoperatively. Mann-Whitney U tests were used to compare functional differences between the groups. Chi-square tests were used to compare frequencies of passing functional criteria and reasons for differences in performance between groups postoperatively. RESULTS Functional outcomes were not different between groups, except a greater number of patients in the PERT group achieved global rating scores (current knee function expressed as a percentage of overall knee function prior to injury) necessary to pass return-to-sports criteria 6 and 12 months after surgery. Mean scores for each functional outcome met return-to-sports criteria 6 and 12 months postoperatively. Frequency counts of individual data, however, indicated that 5% of noncopers passed RTS criteria at 3, 48% at 6, and 78% at 12 months after surgery. CONCLUSION Functional outcomes suggest that a subgroup of noncopers require additional supervised rehabilitation to pass stringent criteria to return to sports. LEVEL OF EVIDENCE Therapy, level 2b. PMID:20195019
New Global Electron Density Observations from GPS-RO in the D- and E-Region Ionosphere
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2017-01-01
A novel retrieval technique is developed for electron density (N(sub e)) in the D- and E-region (80-120 km) using the high-quality 50-Hz GPS radio occultation (GPS-RO) phase measurements. The new algorithm assumes a slow, linear variation in the F-region background when the GPS-RO passes through the D- and E-region, and extracts the N(sub e) profiles at 80-130 km from the phase advance signal caused by N(sub e). Unlike the conventional Abel function, the new approach produces a sharp N(sub e) weighting function in the lower ionosphere, and the N(sub e) retrievals are in good agreement with the IRI (International Reference Ionosphere) model in terms of monthly maps, zonal means and diurnal variations. The daytime GPS-RO N(sub e) profiles can be well characterized by the alpha-Chapman function of three parameters (N(sub mE), h(sub mE) and H), showing that the bottom of the E-region is deepening and sharpening towards the summer pole. At high latitudes the monthly GPS-RO N(sub e) maps at 80-120 km reveal clear enhancement in the auroral zones, more prominent at night, as a result of energetic electron precipitation (EEP) from the outer radiation belt. The D-/E-region auroral N(sub e) is strongly correlated with K(sub p) on a daily basis. The new N(sub e) data allow further comprehensive analyses of the sporadic E (E(sub s)) phenomena in connection with the background N(sub e) in the E-region. The layered (2-10 km) and fluctuating (less than 2 km) E(sub s) components, namely N(sub e_Layer) and N(sub e_Pert), are extracted with respect to the background N(sub e_Region) on a profile-by-profile basis. The N(sub e_Layer) component has a strong but highly-refined peak at approximately 105 km, with an amplitude smaller than N(sub e_Region) approximately by an order of magnitude. The N(sub e_Pert) component, which was studied extensively in the past, is approximately 2 orders of magnitude weaker than N(sub e_Layer). Both N(sub e_Layer) and N(sub e_Pert) are subject to significant diurnal and semidiurnal variations, showing downward progression with local time in amplitude. The 11-year solar cycle dominates the N(sub e) interannual variations, showing larger N(sub e_Region) and N(sub e_Layer) but smaller N(sub e_Pert) amplitudes in the solar maximum years. Enhanced N(sub e) profiles are often observed in the polar winter, showing good correlation with solar proton events (SPEs) and geomagnetic activity. The new methodology offers great potential for retrieving low N(sub e) in the D-region, where radio propagation and communication blackouts can occur due to enhanced ionization. For space weather applications it is recommended for GPS-RO operations to raise the top of high-rate data acquisition to approximately 140 km in the future.
Biological Effects of Short, High-Level Exposure to Gases: Ammonia
1980-05-01
...physiology, manifested either by increased or decreased ventilation minute volume, have been reported at concentrations over 150 ppm (104 mg/m3)...as have occurred generally have been attributed to acute pulmonary edema. Surviving patients with residual pulmonary dysfunction usually have had
Space age management for social problems
NASA Technical Reports Server (NTRS)
Levine, A. L.
1973-01-01
Attempts to apply space age management to social problems were plagued with difficulties. Recent experience in the State of Delaware and in New York City, however, indicate new possibilities. Project management as practiced in NASA was applied with promising results in programs dealing with housing and social services. Such applications are feasible, according to recent research, because project management utilizes social and behavioral approaches, as well as advanced management tools, such as PERT, to achieve results.
The Soviet Involvement in the Ogaden War
1980-02-01
Somali border with the only good airport in the Ogaden, to fight rightists in northwestern Ethiopia. Once the Somalis did invade, Moscow played for … of the International Studies Association … Planning and Organization Design, Stresa, Italy … Park Plaza Hotel, St. Louis, Missouri … "The Transport Properties of Dilute Gases in Applied Fields", 183 pp. … 1979. Werland, Robert G., "The US Navy in the Pacific, Past, Present
Instructional Systems Development: Conceptual Analysis and Comprehensive Bibliography
1976-02-01
design; Dr. Richard Braby and Ms. Karen Larm of TAEG for lending their expertise and libraries on media selection and instructor training; Dr. Robert … Vallance, T. R., Crawford, M. P. Identifying training needs and translating them into research requirements. In R. Glaser (Ed.), Training and education … systems design. Engineering Education, March 1969, 59(7), 861-865. Baker, B., Eris, R. L. An introduction to PERT/CPM. Homewood, IL: Richard D. Irwin
SOFTCOST - DEEP SPACE NETWORK SOFTWARE COST MODEL
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1994-01-01
The early-on estimation of required resources and a schedule for the development and maintenance of software is usually the least precise aspect of the software life cycle. However, it is desirable to make some sort of an orderly and rational attempt at estimation in order to plan and organize an implementation effort. The Software Cost Estimation Model program, SOFTCOST, was developed to provide a consistent automated resource and schedule model which is more formalized than the often used guesswork model based on experience, intuition, and luck. SOFTCOST was developed after the evaluation of a number of existing cost estimation programs indicated that there was a need for a cost estimation program with a wide range of application and adaptability to diverse kinds of software. SOFTCOST combines several software cost models found in the open literature into one comprehensive set of algorithms that compensate for nearly fifty implementation factors relative to size of the task, inherited baseline, organizational and system environment, and difficulty of the task. SOFTCOST produces mean and variance estimates of software size, implementation productivity, recommended staff level, probable duration, amount of computer resources required, and amount and cost of software documentation. Since the confidence level for a project using mean estimates is small, the user is given the opportunity to enter risk-biased values for effort, duration, and staffing, to achieve higher confidence levels. SOFTCOST then produces a PERT/CPM file with subtask efforts, durations, and precedences defined so as to produce the Work Breakdown Structure (WBS) and schedule having the asked-for overall effort and duration. The SOFTCOST program operates in an interactive environment prompting the user for all of the required input. The program builds the supporting PERT data base in a file for later report generation or revision. The PERT schedule and the WBS schedule may be printed and stored in a file for later use. The SOFTCOST program is written in Microsoft BASIC for interactive execution and has been implemented on an IBM PC-XT/AT operating MS-DOS 2.1 or higher with 256K bytes of memory. SOFTCOST was originally developed for the Zylog Z80 system running under CP/M in 1981. It was converted to run on the IBM PC XT/AT in 1986. SOFTCOST is a copyrighted work with all copyright vested in NASA.
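SOFTCOST's own algorithms are not reproduced in the abstract; the sketch below only illustrates the generic PERT-style three-point estimate (mean and variance per task, summed along a chain) that this family of models builds on. Task names and durations are hypothetical.

```python
# Generic PERT three-point estimation (illustrative only; not SOFTCOST's algorithm).
# Each task: (optimistic a, most likely m, pessimistic b) durations in weeks.
tasks = {
    "design":    (3.0, 5.0, 9.0),
    "implement": (6.0, 8.0, 14.0),
    "test":      (2.0, 4.0, 8.0),
}

def pert_estimate(a, m, b):
    mean = (a + 4.0 * m + b) / 6.0        # classic PERT expected duration
    var = ((b - a) / 6.0) ** 2            # classic PERT variance
    return mean, var

chain_mean = sum(pert_estimate(*t)[0] for t in tasks.values())
chain_var = sum(pert_estimate(*t)[1] for t in tasks.values())
print(f"expected duration {chain_mean:.1f} wk, std dev {chain_var ** 0.5:.1f} wk")
```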
Development of an On-Line Biological Detector
1976-07-31
… chemical (Kit 0510). This kit uses the enzymes glucose oxidase (Aspergillus niger) and peroxidase (horseradish). In the presence of glucose oxidase, D…
An Analysis of the Centaur Ground Processing System at the Kennedy Space Center/Cape Canaveral AFS.
1985-12-01
… The PERT Network … 2. The SLAM Model … F. Outline of the Paper … II. The Shuttle/Centaur G System … 9. AWAIT NODE … 10. FREE NODE … 11. ASSIGN NODE … 12. COLCT NODE
Human Engineering Procedures Guide
1981-09-01
… Sample Technical Order Functional Evaluation Form … Sample Test Participant History Record … TO functional evaluation; HFTEMAN; environmental and performance measurement equipment; system records review; test participant history record; interviews … local air flow in the range of 0 to 1000 ft/minute. This device is most useful for determining crew comfort conditions. g) Hygrometer or Psychrometer
Roeyen, Geert; Jansen, Miet; Ruyssinck, Laure; Chapelle, Thiery; Vanlander, Aude; Bracke, Bart; Hartman, Vera; Ysebaert, Dirk; Berrevoet, Frederik
2016-12-01
Recently, pancreaticogastrostomy (PG) has attracted renewed interest as a reconstruction technique after pancreaticoduodenectomy (PD), as it may imply a lower risk of clinical pancreatic fistula than reconstruction by pancreaticojejunostomy (PJ). We hypothesise that pancreatic exocrine insufficiency (PEI) is more common during clinical follow-up after PG than it is after PJ. This study compares the prevalence of PEI in patients undergoing PD for malignancy with reconstruction by PG versus reconstruction by PJ. PEI during the first year of follow-up was defined as the intake of pancreatic enzyme replacement therapy (PERT) within one year postoperatively and/or an abnormal exocrine function test. A total of 186 patients, having undergone surgery at two university hospitals, were included in the study. PEI during the first year postoperatively was present in 75.0% of the patients with PG, compared to 45.7% with PJ (p < 0.001). Intake of PERT within one year after surgery was found to be more prevalent in the PG group, i.e. 75.8% versus 38.5% (p < 0.001). There was a trend towards more disturbed exocrine function tests after PG (p = 0.061). PEI is more common with PG reconstruction than with PJ reconstruction after pancreaticoduodenectomy for malignancy. Copyright © 2016 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
A Descriptive Evaluation of Automated Software Cost-Estimation Models,
1986-10-01
(Version 1.03D) • PCOC (Version 7.01) • PRICE S • SLIM (Version 1.1) • SoftCost (Version 5.1) • SPQR/20 (Version 1.1) • WICOMO (Version 1.3). These … produce detailed GANTT and PERT charts. SPQR/20 is based on a cost model developed at ITT. In addition to cost, schedule, and staffing estimates, it … cases and test runs required, and the effectiveness of pre-test and test activities. SPQR/20 also predicts enhancement and maintenance activities.
Accumulated Effects of Work under Heat Stress
1980-04-01
Ben Gurion University of the Negev, Department of Biology, Department of Occupational … Beer Sheva, Israel. This report was submitted to the Senate of the Ben Gurion University of the Negev by A. Gertner as part of the … among steel and glass workers. A major part of our country is situated in the warm Negev. There are various industrial plants and settlements in this
Analytical Solutions for Predicting Underwater Explosion Gas Bubble Behaviour
2010-11-01
… gives the best predictions compared with fits to the experimental data. The incompressible-fluid model requires the use of a … coupling of the radial and migratory motions. The study shows that, compared with the experimental results, the reduction of the bubble radius … compared with fits to the experimental data. The incompressible-fluid model requires the use of an empirical energy-loss function
Semi-insulating GaN Substrates for High-frequency Device Fabrication
2008-06-18
of the undoped and iron-doped samples were probed by X-ray diffraction (XRD) measurements using a Philips X'pert MRD triple-axis diffracted-beam system … diode laser. The light emitted by the samples was dispersed by a Princeton/Acton Trivista 557 triple spectrometer fitted with an LN2-cooled OMA V InGaAs … point out that the relative intensity of all these bands decreases with increasing iron doping. This observation is consistent with the change in
Freedman, Steven D
2017-07-01
Patients with exocrine pancreatic insufficiency (EPI) have suboptimal secretion of pancreatic digestive enzymes and experience a range of clinical symptoms related to the malabsorption of fat. In patients with EPI unable to meet their nutritional requirements, enteral nutrition (EN) support is used to augment nutritional status. In addition to protein and carbohydrate, EN formulas contain fats as a calorie source, as well as vitamins and minerals to help prevent nutritional deficiencies related to malabsorption. Semielemental enteral nutrition formulas are advantageous as they contain hydrolyzed protein, shorter chain carbohydrates, and may contain medium chain triglycerides as a fat source. However, severely pancreatic insufficient patients may be unable to absorb complex long-chain triglycerides provided by EN formulas due to insufficient pancreatic lipase; replacement pancreatic enzyme products are recommended for these patients. Currently, none of the FDA-approved pancreatic enzyme replacement therapy (PERT) products are indicated for use in patients receiving enteral nutrition and administration of enzymes by mixing into enteral nutrition formula is not supported by guidelines as this route is associated with risks. RELiZORB (immobilized lipase) is a novel in-line digestive cartridge that has been designed to address the unmet need for PERT in patients receiving enteral nutrition. RELiZORB efficacy and compatibility with a range of commercially available polymeric and semielemental formulas with varying nutrient, caloric content, and triglyceride chain lengths have been demonstrated. In most formulas, RELiZORB efficiently hydrolyzed greater than 90% of fats within the formula into absorbable fatty acids and monoglycerides.
1999-11-01
slurry was made from mixing iron, guar gum, an enzyme and borax. The guar gum was Hercules Supercol™ food-grade fine (200-mesh size) powder. It was … Florida. The guar gum was mixed with water in batches in a stirred open-top tank to form 2 to 3% solutions. The guar gum solution was pumped first to a … holding tank, then into a truck-mounted batch mixing plant. A positive displacement pump controlled the feed rate of guar gum to the batch mixing plant
An Approach to Verifying Completeness and Consistency in a Rule-Based Expert System.
1982-08-01
… people with the same knowledge base by observing … While thorough testing is an essential part of verifying the consistency and completeness of a … physicians at Stanford's Oncology Day Care Center on the management of patients who are on experimental treatment protocols. These protocols serve to … for oncology protocol management. Proceedings of the 7th IJCAI, pp. 876-881, Vancouver, B.C., August 1981. van Melle, W., A Domain-Independent System
1986-08-01
is then applied in … ABSTRACT: … knowledge acquisition from those multiple sources for a specific design, for example, an expert system for … Genetic Explanations: For the concept of a genetic explanation (see above) to apply to the Gaither … Simulation Research Unit (Acock, 1985; Baker, 1983; Baker, 1985). MD'EX serves as an inner shell for applying Artificial Intelligence and Expert System
Homojunction silicon solar cells doping by ion implantation
NASA Astrophysics Data System (ADS)
Milési, Frédéric; Coig, Marianne; Lerat, Jean-François; Desrues, Thibaut; Le Perchec, Jérôme; Lanterne, Adeline; Lachal, Laurent; Mazen, Frédéric
2017-10-01
Production costs and energy efficiency are the main priorities for the photovoltaic (PV) industry (COP21 conclusions). To lower costs and increase efficiency, we are proposing to reduce the number of processing steps involved in the manufacture of N-type Passivated Rear Totally Diffused (PERT) silicon solar cells. Replacing the conventional thermal diffusion doping steps by ion implantation followed by thermal annealing allows reducing the number of steps from 7 to 3 while maintaining similar efficiency. This alternative approach was investigated in the present work. Beamline and plasma immersion ion implantation (BLII and PIII) methods were used to insert n-type (phosphorus) and p-type (boron) dopants into the Si substrate. With higher throughput and lower costs, PIII is a better candidate for the photovoltaic industry, compared to BL. However, the optimization of the plasma conditions is demanding and more complex than the beamline approach. Subsequent annealing was performed on selected samples to activate the dopants on both sides of the solar cell. Two annealing methods were investigated: soak and spike thermal annealing. The best performing solar cells, showing a PV efficiency of about 20%, were obtained using spike annealing with adapted ion implantation conditions.
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
1983-05-01
… which uses that prediction … We have been developing a general solution for a self-consistent set of metric or … analytical studies of the nonlinear response of reinforced concrete structures have … At present, multi-dimensional … quantities is available and the applications are limited in this respect. However, the entire development is "self-correcting" in the sense that … as
2003-11-01
… of defence on pre-existing equipment, personnel and doctrines, but on the contrary starts from an analysis of the threats and of the … personnel and doctrines. As will be seen later, this new approach to defence-system engineering, which is intended to be proactive rather than … are resolved under constraints of zero casualties, or at least of minimal losses, whose "acceptability" is essentially a factor of
Validation of an Acoustic Head Simulator for the Evaluation of Personal Hearing Protection Devices
2004-11-01
and covered with artificial skin. The cavities on each side allow the insertion of ear modules that reproduce the mechanisms of the … to the published specifications. These differences did not affect the insertion loss. After correction to account for the effects of the … an aluminium-filled epoxy head simulator covered with artificial skin. The head is supported by a flexible neck module attached to a
2005-05-01
simulated test … to obtain the transmission loss and reverberation diagrams for 18 elements (a source, a towed array and 16 buoys) … were recorded using a 1.5 GHz Pentium 4 processor. The test results indicate that the Bellhop program runs fast enough to provide the required acoustic … was determined that the Bellhop program will be fast enough for these clients. Future Plans: It is intended to integrate further enhancements that
An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines
Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John
2015-01-01
The excavation and production in underground mines are complicated processes which consist of many different operations. The process of underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face. The delay of a single operation will lead to a domino effect, thus delay the starting time for the next process and the completion time of the entire process. This paper presents a new approach to the process control for underground mining operations, e.g. drilling, bolting, mucking. This approach can estimate the working time and its probability for each operation more efficiently and objectively by improving the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). If the delay of the critical operation (which is on a critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign mucking machines new jobs to increase this amount at a maximum level by using a new mucking algorithm under external constraints. PMID:26062092
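The authors' improved PERT/CPM is not spelled out in the abstract; as a baseline for comparison, a minimal CPM forward/backward pass over a toy face cycle (hypothetical operation names and durations) looks like the sketch below, which identifies the operations whose delay propagates to the completion of the whole cycle.

```python
# Minimal CPM forward/backward pass over a toy face-cycle network
# (hypothetical operations and durations in hours; not the paper's improved method).
durations = {"drill": 3.0, "charge": 1.0, "blast": 0.5, "ventilate": 1.0,
             "scale": 0.5, "muck": 4.0, "bolt": 2.0}
preds = {"drill": [], "charge": ["drill"], "blast": ["charge"],
         "ventilate": ["blast"], "scale": ["blast"],
         "muck": ["ventilate", "scale"], "bolt": ["muck"]}

# Forward pass: earliest start/finish (dict order above is already topological).
es, ef = {}, {}
for op in durations:
    es[op] = max((ef[p] for p in preds[op]), default=0.0)
    ef[op] = es[op] + durations[op]

# Backward pass: latest start/finish, then slack.
makespan = max(ef.values())
succs = {op: [o for o, ps in preds.items() if op in ps] for op in durations}
lf, ls = {}, {}
for op in reversed(list(durations)):
    lf[op] = min((ls[s] for s in succs[op]), default=makespan)
    ls[op] = lf[op] - durations[op]

critical = [op for op in durations if abs(ls[op] - es[op]) < 1e-9]
print("makespan:", makespan, "critical operations:", critical)
```

In this toy network the "scale" operation carries slack while the remaining operations are critical, so any delay on them shifts the completion time of the entire face cycle.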
NASA Astrophysics Data System (ADS)
Saeed, R.; Shah, Asif
2010-03-01
The nonlinear propagation of ion acoustic waves in electron-positron-ion plasma comprising of Boltzmannian electrons, positrons, and relativistic thermal ions has been examined. The Korteweg-de Vries-Burger equation has been derived by reductive perturbation technique, and its shock like solution is determined analytically through tangent hyperbolic method. The effect of various plasma parameters on strength and structure of shock wave is investigated. The pert graphical view of the results has been presented for illustration. It is observed that strength and steepness of the shock wave enervate with an increase in the ion temperature, relativistic streaming factor, positron concentrations, electron temperature and they accrue with an increase in coefficient of kinematic viscosity. The convective, dispersive, and dissipative properties of the plasma are also discussed. It is determined that the electron temperature has remarkable influence on the propagation and structure of nonlinear wave in such relativistic plasmas. The numerical analysis has been done based on the typical numerical data from a pulsar magnetosphere.
NASA Astrophysics Data System (ADS)
Widyastuti; Fajarin, Rindang; Pratiwi, Vania Mitha; Kholid, Rifki Rachman; Habib, Abdulloh
2018-04-01
In this study, a RAM composite has been successfully synthesized by mixing BaM as the magnetic material and PANI as the conductive material. The BaM and PANI materials were prepared separately by the solid state method and by polymerization, respectively. To investigate the presence of the BaM phase and the magnetic properties of the as-prepared BaM, a PANalytical X'Pert XRD and a Dexing Magnet VSM 250 were employed. The inductance-capacitance-resistance technique was carried out to measure the electrical conductivity of the synthesized PANI. In order to further characterize the structural features of BaM and PANI, SEM-EDX FEI 850 and FTIR characterizations were conducted. The RAM composite was prepared by mixing the BaM and PANI powders with an ultrasonic cleaner. Afterwards, VNA (Vector Network Analyzer) characterization was carried out to determine the reflection loss value of the RAM by applying the mixed RAM composite and epoxy paint on an aluminum plate using a spray gun. Microscopic characterization was employed to investigate the distribution of RAM particles on the substrate. It was found that a reflection loss value as low as -27.153 dB was achieved when the 15 wt% BaM/PANI composite was applied at 100.6 µm thickness. In addition, the absorption of electromagnetic waves increases as the RAM composite content increases.
Synthesis and characterization of 2D graphene sheets from graphite powder
NASA Astrophysics Data System (ADS)
Patel, Rakesh V.; Patel, R. H.; Chaki, S. H.
2018-05-01
Graphene is a 2D material composed of a one-atom-thick hexagonal layer. This material has attracted great attention among the scientific community because of its high surface area, excellent mechanical properties and conductivity due to free electrons in the 2D lattice. There are various approaches to prepare graphene nanosheets, such as the top-down approach, where graphite exfoliation and nanotube unwrapping can be done. The bottom-up approach involves deposition of hydrocarbons through CVD, epitaxial methods, organic synthesis, etc. In the present study the top-down approach was used to prepare graphene. Graphite powder with particle sizes of around 20 µm to 150 µm was subjected to concentrated strong acid in the presence of a strong oxidizing agent in order to increase the d-spacing between layers, which leads to disruption of the crystal lattice, as confirmed by XRD (Philips X'pert). Raman spectra of the pristine powder and graphene oxide, taken with a Renishaw InVia microscope, revealed an increase in the D-band and a reduction in the G-band. These exfoliated sheets have oxygen-rich complexes at the surface of the layers, as characterized by the FTIR technique. The GO powder was ultrasonicated to prepare a stable suspension of graphene. The graphene layers were observed under TEM (Philips Tecnai 20) as two-dimensional sheets around 1 µm in size.
Urban Operations in the Year 2020 (Operations en zone urbaine en l’an 2020)
2003-04-01
… of an enemy, the emphasis then being placed on the corresponding phase of the concept. Thus, to defeat the enemy, one has generally … recommends that NATO develop capabilities for use in urban areas, concentrating on the essential needs identified in the … these include unmanned vehicles and non-lethal weapons, which make it possible to reduce casualties, as well as devices for the delivery of
Synthesis and characterization of nanocrystalline graphite from coconut shell with heating process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wachid, Frischa M.; Perkasa, Adhi Y.; Prasetya, Fandi A.
Graphite was synthesized and characterized from coconut shell by a heating process with varying temperature (400, 800 and 1000°C) and holding time (3 and 5 hours). After the heating process, the samples were characterized by X-ray diffraction (XRD), analyzed with the X'pert HighScore Plus software, Scanning Electron Microscopy-Energy Dispersive X-Ray (SEM-EDX) and Transmission Electron Microscopy-Energy Dispersive X-Ray (TEM-EDX). Graphite and lonsdaleite phases were identified by XRD. According to the EDX analysis, the sample heated at 1000°C had the highest carbon content. Amorphous carbon and nanocrystalline graphite were observed by SEM-EDX and TEM-EDX.
L'Assurance Vieillesse et Survivants (AVS)
Monod, Elisabeth; Girard, Philippe
2018-05-24
Series HR Seminar (Préparation à la retraite - 2009 - Preparing for retirement). Special situation of international civil servants: as long as they work for an International Organization, they are exempt from the Swiss AVS (for Swiss nationals, only from the moment they are affiliated with the pension fund); upon cessation of activity, loss of this status and consequently compulsory affiliation with the AVS.
Protease activity, localization and inhibition in the human hair follicle
Bhogal, R K; Mouser, P E; Higgins, C A; Turner, G A
2014-01-01
Synopsis Objective In humans, the process of hair shedding, referred to as exogen, is believed to occur independently of the other hair cycle phases. Although the actual mechanisms involved in hair shedding are not fully known, it has been hypothesized that the processes leading to the final step of hair shedding may be driven by proteases and/or protease inhibitor activity. In this study, we investigated the presence of proteases and protease activity in naturally shed human hairs and assessed enzyme inhibition activity of test materials. Methods We measured enzyme activity using a fluorescence-based assay and protein localization by indirect immunohistochemistry (IHC). We also developed an ex vivo skin model for measuring the force required to pull hair fibres from skin. Results Our data demonstrate the presence of protease activity in the tissue material surrounding club roots. We also demonstrated the localization of specific serine protease protein expression in human hair follicle by IHC. These data provide evidence demonstrating the presence of proteases around the hair club roots, which may play a role during exogen. We further tested the hypothesis that a novel protease inhibitor system (combination of Trichogen® and climbazole) could inhibit protease activity in hair fibre club root extracts collected from a range of ethnic groups (UK, Brazil, China, first-generation Mexicans in the USA, Thailand and Turkey) in both males and females. Furthermore, we demonstrated that this combination is capable of increasing the force required to remove hair in an ex vivo skin model system. Conclusion These studies indicate the presence of proteolytic activity in the tissue surrounding the human hair club root and show that it is possible to inhibit this activity with a combination of Trichogen® and climbazole. This technology may have potential to reduce excessive hair shedding. PMID:23992282
[Tissue expanders in the treatment of burn injuries].
Tourabi, K; Ribag, Y; Arrob, A; Moussaoui, A; Ihrai, H
2010-03-31
The authors present their protocol for skin expansion and report four cases collected in the burns unit of their hospital in Morocco. They describe their operative technique and the results obtained. Skin expansion remains the method of choice for covering extensive tissue loss and correcting burn sequelae, and the experience reported by the authors confirms the good results that can be obtained with this technique, including the aesthetic results.
L'Assurance Vieillesse et Survivants (AVS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monod, Elisabeth; Girard, Philippe
2009-12-09
Series HR Seminar (Préparation à la retraite - 2009 - Preparing for retirement). Special situation of international civil servants: as long as they work for an International Organization, they are exempt from the Swiss AVS (for Swiss nationals, only from the moment they are affiliated with the pension fund); upon cessation of activity, loss of this status and consequently compulsory affiliation with the AVS.
Annual Report on Electronics Research at The University of Texas at Austin.
1979-07-15
"Radiative Recombination to Form HD and HeH", Chemical Physics 28, pp. 441-446, 1978. B. Miller and M. Fink, "Effect of Finite …
Ní Chonchubhair, Hazel M; Bashir, Yasir; Dobson, Mark; Ryan, Barbara M; Duggan, Sinead N; Conlon, Kevin C
2018-02-24
Small intestinal bacterial overgrowth (SIBO) is a condition characterised by symptoms similar to pancreatic exocrine insufficiency (PEI) in chronic pancreatitis patients. SIBO is thought to complicate chronic pancreatitis in up to 92% of cases; however, studies are heterogeneous and protocols non-standardised. SIBO may be determined by measuring lung air-expiration of either hydrogen or methane which are by-products of small bowel bacterial fermentation of intraluminal substrates such as carbohydrates. We evaluated the prevalence of SIBO among a defined cohort of non-surgical chronic pancreatitics with mild to severe PEI compared with matched healthy controls. Thirty-five patients and 31 age-, gender- and smoking status-matched healthy controls were evaluated for SIBO by means of a fasting glucose hydrogen breath test (GHBT). The relationship between SIBO and clinical symptoms in chronic pancreatitis was evaluated. SIBO was present in 15% of chronic pancreatitis patients, while no healthy controls tested positive (P = 0.029). SIBO was more prevalent in those taking pancreatic enzyme replacement therapy (PERT) (P = 0.016), with proton pump inhibitor use (PPI) (P = 0.022) and in those with alcohol aetiology (P = 0.023). Patients with concurrent diabetes were more often SIBO-positive and this was statistically significant (P = 0.009). There were no statistically significant differences in reported symptoms between patients with and without SIBO, with the exception of 'weight loss', with patients reporting weight loss more likely to have SIBO (P = 0.047). The prevalence of SIBO in this study was almost 15% and consistent with other studies of SIBO in non-surgical chronic pancreatitis patients. These data support the testing of patients with clinically-relevant PEI unresolved by adequate doses of PERT, particularly in those patients with concurrent diabetes. SIBO can be easily diagnosed therefore allowing more specific and more targeted symptom treatment. Copyright © 2018. Published by Elsevier B.V.
Hartigan, Erin H; Axe, Michael J; Snyder-Mackler, Lynn
2010-03-01
Randomized clinical trial. Determine effective interventions for improving readiness to return to sports postoperatively in patients with complete, unilateral, anterior cruciate ligament (ACL) rupture who do not compensate well after the injury (noncopers). Specifically, we compared the effects of 2 preoperative interventions on quadriceps strength and functional outcomes. The percentage of athletes who return to sports after ACL reconstruction varies considerably, possibly due to differential responses after acute ACL rupture and different management. Prognostic data for noncopers following ACL reconstruction is absent in the literature. Forty noncopers were randomly assigned to receive either progressive quadriceps strength-training exercises (STR group) or perturbation training in conjunction with strength-training exercises (PERT group) for 10 preoperative rehabilitation sessions. Postoperative rehabilitation was similar between groups. Data on quadriceps strength indices [(involved limb/uninvolved limb force) x 100], 4 hop score indices, and 2 self-report questionnaires were collected preoperatively and 3, 6, and 12 months postoperatively. Mann-Whitney U tests were used to compare functional differences between the groups. Chi-square tests were used to compare frequencies of passing functional criteria and reasons for differences in performance between groups postoperatively. Functional outcomes were not different between groups, except a greater number of patients in the PERT group achieved global rating scores (current knee function expressed as a percentage of overall knee function prior to injury) necessary to pass return-to-sports criteria 6 and 12 months after surgery. Mean scores for each functional outcome met return-to-sports criteria 6 and 12 months postoperatively. Frequency counts of individual data, however, indicated that 5% of noncopers passed RTS criteria at 3, 48% at 6, and 78% at 12 months after surgery. Functional outcomes suggest that a subgroup of noncopers require additional supervised rehabilitation to pass stringent criteria to return to sports. Therapy, level 2b.
L'effet de p53 sur la radiosensibilité des cellules humaines normales et cancéreuses
NASA Astrophysics Data System (ADS)
Little, J. B.; Li, C. Y.; Nagasawa, H.; Huang, H.
1998-04-01
The radiosensitivity of normal human fibroblasts is p53 dependent and associated with the loss of cells from the cycling population as the result of an irreversible G1 arrest; cells lacking normal p53 function show no arrest and are more radioresistant. Under conditions in which the repair of potentially lethal radiation damage is facilitated, the fraction of cells arrested in G1 is reduced and survival is enhanced. The response of human tumor cells differs significantly. The radiation-induced G1 arrest is minimal or absent in p53+ tumor cells, and loss of normal p53 function has no consistent effect on their radiosensitivity. These results suggest that p53 status may not be a useful predictive marker for the response of human solid tumors to radiation therapy.
NASA Technical Reports Server (NTRS)
Redhed, D. D.; Tripp, L. L.; Kawaguchi, A. S.; Miller, R. E., Jr.
1973-01-01
The strategy of the IPAD implementation plan presented proposes a three-phase development of the IPAD system and technical modules, and the transfer of this capability from the development environment to the aerospace vehicle design environment. The system and technical module capabilities for each phase of development are described. The system and technical module programming languages are recommended, as well as the initial host computer system hardware and operating system. The cost of developing the IPAD technology is estimated. A schedule displaying the flowtime required for each development task is given. A PERT chart gives the developmental relationships of each of the tasks, and an estimate of the operational cost of the IPAD system is offered.
Les Protheses d'Expansion dans le Traitement des Sequelles de Brulures
Tourabi, K.; Ribag, Y.; Arrob, A.; Moussaoui, A.; Ihrai, H.
2010-01-01
Summary The authors present their protocol for skin expansion and report four cases collected in the burns unit of their hospital in Morocco. They describe their operative technique and the results obtained. Skin expansion remains the method of choice for covering extensive tissue loss and correcting burn sequelae, and the experience reported by the authors confirms the good results that can be obtained with this technique, including the aesthetic results. PMID:21991194
Synthesis and characterization of polycrystalline brownmillerite cobalt doped Ca2Fe2O5
NASA Astrophysics Data System (ADS)
Dhankhar, Suchita; Bhalerao, Gopal; Baskar, K.; Singh, Shubra
2016-05-01
Brownmillerite compounds with general formula A2BB'O5 (BB' = Mn, Al, Fe, Co) have attracted attention in wide range of applications such as in solid oxide fuel cell, oxygen separation membrane and photocatalysis. Brownmillerite compounds have unique structure with alternate layers of BO6 octahedral layers and BO4 tetrahedral layers. Presence of dopants like Co in place of Fe increases oxygen vacancies. In the present work we have synthesized polycrystalline Ca2Fe2O5 and Ca2Fe1-xCoxO5 (x = 0.01, 0.03) by citrate combustion route. The as prepared samples were characterized by XRD using PANalytical X'Pert System, DRS (Diffuse reflectance spectroscopy) and SEM (Scanning electron microscopy).
EVA/ORU model architecture using RAMCOST
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.; Park, Eui H.; Wang, Y. M.; Bretoi, R.
1990-01-01
A parametrically driven simulation model is presented in order to provide a detailed insight into the effects of various input parameters in the life testing of a modular space suit. The RAMCOST model employed is a user-oriented simulation model for studying the life-cycle costs of designs under conditions of uncertainty. The results obtained from the EVA simulated model are used to assess various mission life testing parameters such as the number of joint motions per EVA cycle time, part availability, and number of inspection requirements. RAMCOST first simulates EVA completion for NASA application using a probabilistic PERT-like network. With the mission time heuristically determined, RAMCOST then models different orbital replacement unit policies with special application to the astronaut's space suit functional designs.
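RAMCOST itself is not shown here; the minimal Monte Carlo sketch below, with a hypothetical PERT-like network of EVA activities and triangular durations in minutes, only illustrates how a completion-time distribution of the kind described above can be sampled.

```python
import random

def tri(a, m, b):
    """Triangular sample with minimum a, mode m, maximum b."""
    return random.triangular(a, b, m)              # stdlib signature is (low, high, mode)

def one_eva():
    # Hypothetical network: two parallel branches between egress and ingress.
    egress = tri(20, 25, 40)
    branch_a = tri(30, 45, 70) + tri(10, 15, 25)   # translate to worksite + ORU swap
    branch_b = tri(25, 35, 60)                     # worksite setup, in parallel
    ingress = tri(20, 25, 35)
    return egress + max(branch_a, branch_b) + ingress

samples = sorted(one_eva() for _ in range(10_000))
print("median:", round(samples[5_000]), "min;  95th percentile:", round(samples[9_500]), "min")
```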
NASA Astrophysics Data System (ADS)
Zhou, Hao-Jun; Yin, Yan-Peng; Fan, Xiao-Qiang; Li, Zheng-Hong; Pu, Yi-Kang
2016-06-01
A perturbation method is proposed to obtain the effective delayed neutron fraction β eff of a cylindrical highly enriched uranium reactor. Based on reactivity measurements with and without a sample at a specified position using the positive period technique, the reactor reactivity perturbation Δρ of the sample in β eff units is measured. Simulations of the perturbation experiments are performed using the MCNP program. The PERT card is used to provide the difference dk of effective neutron multiplication factors with and without the sample inside the reactor. Based on the relationship between the effective multiplication factor and the reactivity, the equation β eff = dk/Δρ is derived. In this paper, the reactivity perturbations of 13 metal samples at the designable position of the reactor are measured and calculated. The average β eff value of the reactor is given as 0.00645, and the standard uncertainty is 3.0%. Additionally, the perturbation experiments for β eff can be used to evaluate the reliabilities of the delayed neutron parameters. This work shows that the delayed neutron data of 235U and 238U from G.R. Keepin’s publication are more reliable than those from ENDF-B6.0, ENDF-B7.0, JENDL3.3 and CENDL2.2. Supported by Foundation of Key Laboratory of Neutron Physics, China Academy of Engineering Physics (2012AA01, 2014AA01), National Natural Science Foundation (11375158, 91326104)
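The relation β eff = dk/Δρ can be applied sample by sample and then averaged; the sketch below uses hypothetical dk and Δρ values chosen only to illustrate the arithmetic, not the paper's measured data.

```python
# beta_eff = dk / delta_rho, with delta_rho measured in units of beta_eff (dollars).
# Sample names and values are hypothetical, for illustration only.
samples = {   # sample: (dk from a PERT-card calculation, measured delta_rho in $)
    "sample_1": (6.6e-4, 0.102),
    "sample_2": (5.1e-4, 0.079),
    "sample_3": (2.9e-4, 0.045),
}
betas = [dk / drho for dk, drho in samples.values()]
beta_eff = sum(betas) / len(betas)
print(f"beta_eff = {beta_eff:.5f}")   # roughly 0.0065 for these made-up inputs
```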
Software cost/resource modeling: Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. J.
1980-01-01
A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Force Development Center, University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
Characterization of natural puya sand extract of Central Kalimantan by using X-Ray Diffraction
NASA Astrophysics Data System (ADS)
Suastika, K. G.; Karelius, K.; Sudyana, I. N.
2018-03-01
Zircon sand extraction in this study uses natural sand material from the Kereng Pangi village of Central Kalimantan, also known as Puya sand. There are only three ways to extract the Puya sand. The first is magnetic separation, the second is immersion in HCl, and the third is reaction with NaOH. In addition, a sample from each extraction step is analyzed with X-Ray Diffraction (XRD). Based on the quantitative analysis using the X'Pert HighScore Plus software, the samples are identified mostly as zircon (ZrSiO4) and silica (SiO2). Moreover, after the immersion process with HCl, the silica content goes down and the zircon content climbs to 74%. In the reaction process with NaOH, the zircon content increases further to 88%.
Therrien, Amelie; Bouchard, Simon; Sidani, Sacha; Bouin, Mickael
2016-01-01
Background. Patients with chronic pancreatitis (CP) exhibit numerous risk factors for the development of small intestinal bacterial overgrowth (SIBO). Objective. To determine the prevalence of SIBO in patients with CP. Methods. Prospective, single-centre case-control study conducted between January and September 2013. Inclusion criteria were age 18 to 75 years and clinical and radiological diagnosis of CP. Exclusion criteria included history of gastric, pancreatic, or intestinal surgery or significant clinical gastroparesis. SIBO was detected using a standard lactulose breath test (LBT). A healthy control group also underwent LBT. Results. Thirty-one patients and 40 controls were included. The patient group was significantly older (53.8 versus 38.7 years; P < 0.01). The proportion of positive LBTs was significantly higher in CP patients (38.7 versus 2.5%: P < 0.01). A trend toward a higher proportion of positive LBTs in women compared with men was observed (66.6 versus 27.3%; P = 0.056). The subgroups with positive and negative LBTs were comparable in demographic and clinical characteristics, use of opiates, pancreatic enzymes replacement therapy (PERT), and severity of symptoms. Conclusion. The prevalence of SIBO detected using LBT was high among patients with CP. There was no association between clinical features and the risk for SIBO.
Measurement of the human esophageal cancer in an early stage with Raman spectroscopy
NASA Astrophysics Data System (ADS)
Maeda, Yasuhiro; Ishigaki, Mika; Taketani, Akinori; Andriana, Bibin B.; Ishihara, Ryu; Sato, Hidetoshi
2014-02-01
Esophageal cancer has a tendency to spread to other parts of the body, and the surgical operation itself sometimes carries a high risk to vital function because many delicate organs lie near the esophagus, so esophageal cancer is a disease with high mortality. In order to achieve a higher five-year survival rate after treatment, diagnostic methods or techniques for detecting the cancer at an early stage and supporting therapy are required. In this study, we performed ex vivo experiments to obtain Raman spectra from normal and early-stage tumor (stage-0) human esophageal samples using Raman spectroscopy. The Raman spectra were collected with a homemade Raman spectrometer at a wavelength of 785 nm and a Raman probe with a 600-µm diameter. Principal component analysis (PCA) was performed after collection of the spectra to recognize which materials changed between the normal part and the cancerous part. After that, linear discriminant analysis (LDA) was performed to predict the tissue type. The result of PCA indicates that the tumor tissue is associated with a decrease in tryptophan concentration. Furthermore, we can predict the tissue type with 80% accuracy by LDA, whose model is built from the tryptophan bands.
Coal gasification systems engineering and analysis. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1980-01-01
Feasibility analyses and systems engineering studies for a 20,000 tons per day medium Btu (MBG) coal gasification plant to be built by TVA in Northern Alabama were conducted. Major objectives were as follows: (1) provide design and cost data to support the selection of a gasifier technology and other major plant design parameters, (2) provide design and cost data to support alternate product evaluation, (3) prepare a technology development plan to address areas of high technical risk, and (4) develop schedules, PERT charts, and a work breakdown structure to aid in preliminary project planning. Volume one contains a summary of gasification system characterizations. Five gasification technologies were selected for evaluation: Koppers-Totzek, Texaco, Lurgi Dry Ash, Slagging Lurgi, and Babcock and Wilcox. A summary of the trade studies and cost sensitivity analysis is included.
NASA Astrophysics Data System (ADS)
Saeed, R.; Shah, Asif; Noaman-Ul-Haq, Muhammad
2010-10-01
The nonlinear propagation of ion-acoustic solitons in relativistic electron-positron-ion plasma comprising of Boltzmannian electrons, positrons, and relativistic thermal ions has been examined. The Korteweg-de Vries equation has been derived by reductive perturbation technique. The effect of various plasma parameters on amplitude and structure of solitary wave is investigated. The pert graphical view of the results has been presented for illustration. It is observed that increase in the relativistic streaming factor causes the soliton amplitude to thrive and its width shrinks. The soliton amplitude and width decline as the ion to electron temperature ratio is increased. The increase in positron concentration results in reduction of soliton amplitude. The soliton amplitude enhances as the electron to positron temperature ratio is increased. Our results may have relevance in the understanding of astrophysical plasmas.
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2002-01-01
CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization is now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators and to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition. Decomposition produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, the choice of level of detail dependent on the predictions required. Although GOMS has proven useful in HCI, tools to support the construction of GOMS models have not yet come into general use.
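Apex's scheduler and templates are not reproduced here; the toy sketch below (hypothetical operator names and millisecond durations) only illustrates the interleaving idea: operators bound to different resources may overlap, while precedence constraints and serial use of each resource bound the total predicted time.

```python
# Toy resource-interleaved schedule in the spirit of CPM-GOMS (illustrative only).
ops = [
    # (name, resource, duration_ms, predecessors)
    ("perceive-target", "perception", 100, []),
    ("decide-move",     "cognition",   50, ["perceive-target"]),
    ("move-cursor",     "motor",      300, ["decide-move"]),
    ("perceive-button", "perception", 100, ["decide-move"]),
    ("decide-click",    "cognition",   50, ["perceive-button"]),
    ("click",           "motor",       80, ["move-cursor", "decide-click"]),
]

finish = {}
resource_free = {"perception": 0, "cognition": 0, "motor": 0}
for name, res, dur, preds in ops:                  # ops listed in dependency order
    start = max([resource_free[res]] + [finish[p] for p in preds])
    finish[name] = start + dur
    resource_free[res] = finish[name]              # each resource runs serially

print("predicted task time:", max(finish.values()), "ms")
```

In this toy schedule "perceive-button" overlaps with "move-cursor", which is exactly the kind of parallelism a PERT-chart view of the model makes visible.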
Lamarca, Angela; McCallum, Lynne; Nuttall, Christina; Barriuso, Jorge; Backen, Alison; Frizziero, Melissa; Leon, Rebecca; Mansoor, Was; McNamara, Mairéad G; Hubner, Richard A; Valle, Juan W
2018-06-20
Background Patients with advanced well-differentiated neuroendocrine tumours(Wd-NETs) are commonly treated with somatostatin analogues(SSAs). Some patients may develop SSA-related side effects such as pancreatic exocrine insufficiency(PEI). Methods In this single-institution, prospective, observational study, the frequency of SSA-induced PEI in 50 sequential patients with advanced Wd-NETs treated with SSAs was investigated. Toxicity was assessed monthly and faecal elastase-1 (FE1) and quality of life (QoL) were assessed 3-monthly. Results The median age was 65.8 years, 58% were male and the majority (92%) of patients had metastatic disease; patients received 4-weekly long acting octreotide (60%) or lanreotide (40%). Twelve patients (24%) developed SSA-related PEI after a median of 2.9 months from SSA initiation; FE1 was a reliable screening tool, especially in symptomatic patients (risk ratio 8.25 (95% confidence interval 1.15-59.01)). Most of these patients (11/12; 92%) required PERT. Other SSA-related adverse events (any grade) included flatulence (50%), abdominal pain (32%), diarrhoea (30%) and fatigue (20%). Development of PEI did not significantly worsen overall QoL, however gastrointestinal symptoms and diarrhoea were increased. Conclusion This study demonstrated that PEI occurs at a higher rate than previously reported; clinicians need to diagnose and treat this SSA-related adverse-event which occurs in 1 in 4 patients with Wd-NETs treated with SSAs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amato, Sandra F.
The Fermilab experiment E769 collected approximately 400 billion events using a 250 GeV/c pion, kaon and proton beam incident on targets of Al, Cu, Be and W. One measured the x_F and p_T^2 distributions of 232 ± 13.5 D* through the decay mode D0π, where D0 → Kππ0, without reconstructing the π0. Fitting the distributions to the forms A(1 - x_F)^n and B exp(-b p_T^2), respectively, the values n = 4.14 ± 0.31 ± 0.03 and b = 0.68 ± 0.06 ± 0.03 GeV^-2 were found. The dependence of the cross section on the atomic number was measured, and the fit to the curve A^α gave α = 1.06 ± 0.08 ± 0.01. These measurements are compared with another analysis, with those of other experiments and with predictions based on perturbative QCD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhankhar, Suchita; Baskar, K.; Singh, Shubra, E-mail: shubra6@gmail.com
2016-05-23
Brownmillerite compounds with general formula A{sub 2}BB'O{sub 5} (BB' = Mn, Al, Fe, Co) have attracted attention in wide range of applications such as in solid oxide fuel cell, oxygen separation membrane and photocatalysis. Brownmillerite compounds have unique structure with alternate layers of BO{sub 6} octahedral layers and BO{sub 4} tetrahedral layers. Presence of dopants like Co in place of Fe increases oxygen vacancies. In the present work we have synthesized polycrystalline Ca{sub 2}Fe{sub 2}O{sub 5} and Ca{sub 2}Fe{sub 1-x}Co{sub x}O{sub 5} (x = 0.01, 0.03) by citrate combustion route. The as prepared samples were characterized by XRD using PANalytical X'Pert System, DRS (Diffuse reflectance spectroscopy) and SEM (Scanning electron microscopy).
Localizing multiple X chromosome-linked retinitis pigmentosa loci using multilocus homogeneity tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ott, J.; Terwilliger, J.D.; Bhattacharya, S.
1990-01-01
Multilocus linkage analysis of 62 family pedigrees with X chromosome-linked retinitis pigmentosa (XLRP) was undertaken to determine the presence of possible multiple disease loci and to reliably estimate their map locations. Multilocus homogeneity tests furnish convincing evidence for the presence of two XLRP loci, the likelihood ratio being 6.4 × 10^9:1 in favor of two versus a single XLRP locus, and give accurate estimates of their map locations. In 60-75% of the families, the location of an XLRP gene was estimated at 1 centimorgan distal to OTC, and in 25-40% of the families, an XLRP locus was located halfway between DXS14 (p58-1) and DXZ1 (Xcen), with an estimated recombination fraction of 25% between the two XLRP loci. There is also good evidence for a third XLRP locus, midway between DXS28 (C7) and DXS164 (pERT87), supported by a likelihood ratio of 293:1 for three versus two XLRP loci.
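For readers more used to lod scores, the reported likelihood ratios translate directly via lod = log10(likelihood ratio); the short snippet below just performs that conversion for the two ratios quoted in the abstract.

```python
# Convert the reported likelihood ratios into lod-score differences (log10 scale).
import math

lr_two_vs_one = 6.4e9    # two XLRP loci versus one
lr_three_vs_two = 293    # three XLRP loci versus two

print(f"2 vs 1 loci: lod difference = {math.log10(lr_two_vs_one):.1f}")
print(f"3 vs 2 loci: lod difference = {math.log10(lr_three_vs_two):.1f}")
```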
Program Predicts Time Courses of Human/Computer Interactions
NASA Technical Reports Server (NTRS)
Vera, Alonso; Howes, Andrew
2005-01-01
CPM X is a computer program that predicts sequences of, and amounts of time taken by, routine actions performed by a skilled person performing a task. Unlike programs that simulate the interaction of the person with the task environment, CPM X predicts the time course of events as consequences of encoded constraints on human behavior. The constraints determine which cognitive and environmental processes can occur simultaneously and which have sequential dependencies. The input to CPM X comprises (1) a description of a task and strategy in a hierarchical description language and (2) a description of architectural constraints in the form of rules governing interactions of fundamental cognitive, perceptual, and motor operations. The output of CPM X is a Program Evaluation Review Technique (PERT) chart that presents a schedule of predicted cognitive, motor, and perceptual operators interacting with a task environment. The CPM X program allows direct, a priori prediction of skilled user performance on complex human-machine systems, providing a way to assess critical interfaces before they are deployed in mission contexts.
Tourabi, K.; Moussaoui, A.; Ribag, Y.; Ihrai, H.
2010-01-01
Summary The hypogastric flap is an axial skin flap vascularized by the superficial inferior epigastric pedicle. It is one of the effective options for covering the dorsal aspect of the hand and wrist. Four patients underwent coverage of the tissue defect, after excision of the scar plaque, with a hypogastric flap divided on day 21. The aesthetic and functional results were fairly satisfactory. This flap is a technique of choice in the treatment of burn sequelae of the hand with exposure of noble structures; it is easy to dissect, gives few complications, and requires later touch-up procedures to obtain an optimal result. PMID:21991214
Seys, Scott A; Sampedro, Fernando; Hedberg, Craig W
2015-09-01
Beef product recall data from 2005 through 2012 associated with Shiga toxin-producing Escherichia coli (STEC) O157 contamination were used to develop quantitative models to estimate the number of illnesses prevented by recalls. The number of illnesses prevented was based on the number of illnesses that occurred relative to the number of pounds consumed, then extrapolated to the number of pounds of recalled product recovered. A simulation using a Program Evaluation and Review Technique (PERT) probability distribution with illness-related recalls estimated 204 (95% credible interval, 117-333) prevented STEC O157 illnesses from 2005 through 2012. Recalls not associated with illnesses had more recalled product recovered and prevented an estimated 83 additional STEC O157 illnesses. Accounting for underdiagnosis resulted in an estimated total of 7500 STEC O157 illnesses prevented over 8 years. This study demonstrates that recalls, although reactive in nature, are an important tool for averting further exposure and illnesses.
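The abstract does not give the distribution parameters used in the simulation, so the following Python sketch only illustrates how a PERT (beta-PERT) distribution built from minimum, most-likely, and maximum values can drive a Monte Carlo estimate with a credible interval; all numeric inputs are placeholders, not the study's data.

```python
# Illustrative beta-PERT Monte Carlo; low/mode/high values are placeholders,
# not the parameters used in the recall study.
import numpy as np

def pert_samples(low, mode, high, size, rng):
    """Draw from a beta-PERT distribution (mean = (low + 4*mode + high) / 6)."""
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta = 1 + 4 * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(alpha, beta, size)

rng = np.random.default_rng(0)
illnesses = pert_samples(low=100, mode=200, high=350, size=100_000, rng=rng)
lo, hi = np.percentile(illnesses, [2.5, 97.5])
print(f"mean {illnesses.mean():.0f}, 95% interval ({lo:.0f}, {hi:.0f})")
```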
NASA Astrophysics Data System (ADS)
Rimbault, Benjamin
This master's thesis, presented as articles, aimed to study the hydraulic and thermal behaviour of nanofluid flow in a heated micro-channel. We first studied distilled water, then mixtures of copper oxide particles (29 nm in size) with distilled water at particle volume concentrations of 4.5%, 1.03% and 0.24% (CuO-H2O). Forced flow of the different fluids was produced by gear pumps in a closed loop comprising a rectangular-section micro-channel (e = 1.116 mm, l = 25.229 mm) heated on two parallel faces by electric cartridge heaters, two heat exchangers in series, and a magnetic flowmeter. To our knowledge, few studies on the flow of copper oxide-water nanofluids in a heated rectangular micro-channel are available in the literature, and this research serves as a contribution. First, a validation against the literature was performed for the case of water flow between heated parallel plates. Hydraulic tests were carried out at constant temperature for Reynolds numbers up to Re = 5000. Thermal tests up to Re = 2500 then consisted of a fixed temperature rise (20.5°C to 30.5°C) along the length of the micro-channel under steady-state conditions. The results showed an increase in pressure drop and friction coefficient for the nanofluids compared with water at the same flow rate. This pressure-drop increase was +70%, +25%, and +0 to 30% for the 4.50%, 1.03% and 0.24% concentrations, respectively. Regarding the laminar-to-turbulent transition, the behaviour suggested a similar critical value for water and for the different concentrations, with and without heating, at a critical Reynolds number of about Re ≈ 1000. We observed a slight increase in the convective heat transfer coefficient with mass flow rate for the low concentrations (1.03% and 0.24%), whereas the 4.5% concentration showed a clear decrease. Overall, the global energy performance, defined as the heat transferred divided by the pumping power, remains lower than that of water at the same Reynolds number and also at the same mass flow rate. Water appears to be the best solution in terms of global energy performance.
NASA Astrophysics Data System (ADS)
Trincă, Lucia Carmen; Fântânariu, Mircea; Solcan, Carmen; Trofin, Alina Elena; Burtan, Liviu; Acatrinei, Dumitru Mihai; Stanciu, Sergiu; Istrate, Bogdan; Munteanu, Corneliu
2015-10-01
Magnesium-based alloys, especially Mg-Ca alloys, are biocompatible substrates with mechanical properties similar to those of bone. Biodegradable Mg-Ca alloys provide sufficient mechanical strength in load-carrying applications, as opposed to biopolymers, and they also avoid the stress shielding and secondary surgery inherent with permanent metallic implant materials. The main issue facing a biodegradable Mg-Ca alloy is fast degradation in the aggressive physiological environment of the body. The alloy's corrosion is proportional to the dissolution of Mg in the body: the reaction with water generates magnesium hydroxide and hydrogen. Accelerated corrosion will lead to early loss of the alloy's mechanical integrity. The degradation rate of an alloy can be improved mainly by tailoring the composition and by carrying out surface treatments. This research focuses on the ability to adjust the degradation rate of Mg-Ca alloys by an original method and studies the biological activity of the resulting specimens. A new Mg-Ca alloy, with a Si gradient concentration from the surface to the interior of the material, was obtained. The surface morphology was investigated using scanning electron microscopy (VegaTescan LMH II, SE detector, 30 kV), X-ray diffraction (X'Pert equipment) and energy-dispersive X-ray analysis (Bruker EDS equipment). The in vivo degradation behavior, biological compatibility and activity of Mg-Ca alloys with and without the Si gradient concentration were studied with an implant model (subcutaneous and bony) in rats. The organism's response to the implants was characterized using radiological (plain X-rays and computed tomography), biochemical and histological methods of investigation. The results support that a Si gradient concentration can be used to control the degradation rate of Mg-Ca alloys, enhancing their biological activity in order to facilitate bone tissue repair.
Ahmed, Tahmeed; Shahid, Abu S. M. S. B.; Shahunja, K. M.; Bardhan, Pradip Kumar; Faruque, Abu Syeed Golam; Das, Sumon Kumar; Salam, Mohammed Abdus
2015-01-01
We aimed to evaluate sociodemographic, epidemiological, and clinical risk factors for pulmonary tuberculosis (PTB) in children presenting with severe acute malnutrition (SAM) and pneumonia. Children aged 0 to 59 months with SAM and radiologic pneumonia were studied in Bangladesh from April 2011 to July 2012. Children with confirmed PTB (by culture and/or Xpert MTB/RIF) (cases = 27) and without PTB (controls = 81; randomly selected from 378 children) were compared. The cases more often had a history of contact with an active PTB patient (P < .01) and exposure to cigarette smoke (P = .04) compared with the controls. In logistic regression analysis, after adjusting for potential confounders, the cases were independently associated with a working mother (P = .05) and a positive tuberculin skin test (TST; P = .02). Thus, pneumonia in SAM children is a common presentation of PTB, further highlighting the importance of using the simple TST and/or a history of contact with active TB patients in diagnosing PTB in such children, especially in resource-limited settings. PMID:27335971
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Sumit K., E-mail: sumit.sxc13@gmail.com; Singh, S. N., E-mail: snsphyru@gmail.com; Prasad, K., E-mail: k.prasad65@gmail.com
2016-05-06
Lead-free solid solutions (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaNbO3 (0 ≤ x ≤ 1.0) were prepared by the conventional ceramic fabrication technique. X-ray diffraction and Rietveld refinement analyses of these ceramics were carried out using the X'Pert HighScore Plus software to determine the crystal symmetry, space group and unit cell dimensions. Rietveld refinement revealed that NaNbO3, with orthorhombic structure, was completely diffused into the Ba0.06(Na1/2Bi1/2)0.94TiO3 lattice having rhombohedral-tetragonal symmetry. EDS and SEM studies were carried out in order to evaluate the quality and purity of the compounds. SEM images showed a change in grain shape with increasing NaNbO3 content. FTIR spectra confirmed the formation of the solid solution.
Etude de l'affaiblissement du comportement mecanique du pergelisol du au rechauffement climatique
NASA Astrophysics Data System (ADS)
Buteau, Sylvie
The climate warming predicted for the coming decades will have major impacts on permafrost that are so far very poorly documented. The aim of the present study is to assess these impacts on the mechanical properties of permafrost and its long-term stability. A new cone penetration test technique at a controlled strain rate was developed to characterize permafrost in place. These geotechnical tests and measurements of various physical properties were carried out on a permafrost mound during spring 2000. The development and use of a 1D geothermal model accounting for the temperature dependence of the mechanical behaviour made it possible to estimate that areas of warm permafrost would become unstable following a warming of about 5°C over one hundred years. Indeed, the mechanical strength of the permafrost would then decrease rapidly to 11.6 MPa, corresponding to a relative loss of 98% of the strength compared with a no-warming scenario.
Benjamin, David M; Pendrak, Robert F
2003-07-01
Clinical pharmacologists are all dedicated to improving the use of medications and decreasing medication errors and adverse drug reactions. However, quality improvement requires that some significant parameters of quality be categorized, measured, and tracked to provide benchmarks to which future data (performance) can be compared. One of the best ways to accumulate data on medication errors and adverse drug reactions is to look at medical malpractice data compiled by the insurance industry. Using data from PHICO insurance company, PHICO's Closed Claims Data, and PHICO's Event Reporting Trending System (PERTS), this article examines the significance and trends of the claims and events reported between 1996 and 1998. Those who misread history are doomed to repeat the mistakes of the past. From a quality improvement perspective, the categorization of the claims and events is useful for reengineering integrated medication delivery, particularly in a hospital setting, and for redesigning drug administration protocols on low therapeutic index medications and "high-risk" drugs. Demonstrable evidence of quality improvement is being required by state laws and by accreditation agencies. The state of Florida requires that quality improvement data be posted quarterly on the Web sites of the health care facilities. Other states have followed suit. The insurance industry is concerned with costs, and medication errors cost money. Even excluding costs of litigation, an adverse drug reaction may cost up to $2500 in hospital resources, and a preventable medication error may cost almost $4700. To monitor costs and assess risk, insurance companies want to know what errors are made and where the system has broken down, permitting the error to occur. Recording and evaluating reliable data on adverse drug events is the first step in improving the quality of pharmacotherapy and increasing patient safety. Cost savings and quality improvement evolve on parallel paths. The PHICO data provide an excellent opportunity to review information that typically would not be in the public domain. The events captured by PHICO are similar to the errors and "high-risk" drugs described in the literature, the U.S. Pharmacopeia's MedMARx Reporting System, and the Sentinel Event reporting system maintained by the Joint Commission for the Accreditation of Healthcare Organizations. The information in this report serves to alert clinicians to the possibility of adverse events when treating patients with the reported drugs, thus allowing for greater care in their use and closer monitoring. Moreover, when using high-risk drugs, patients should be well informed of known risks, dosage should be titrated slowly, and therapeutic drug monitoring and laboratory monitoring should be employed to optimize therapy and minimize adverse effects.
NASA Astrophysics Data System (ADS)
Vérinaud, Christophe
2000-11-01
In the field of high angular resolution in astronomy, the techniques of optical interferometry and adaptive optics are expanding rapidly. The main limitation of interferometry is atmospheric turbulence, which causes significant coherence losses that are detrimental to the sensitivity and accuracy of the measurements. Adaptive optics applied to interferometry will allow a considerable gain in sensitivity. The aim of this thesis is to study the influence of adaptive optics on interferometric measurements and its application to the Grand Interféromètre à Deux Télescopes (GI2T) located on Mont Calern in the south of France. Two main problems are studied theoretically, through analytical developments and numerical simulations: the first is the real-time control of the variation of the optical path differences, also called differential piston, induced by adaptive optics; the second important problem is the calibration of fringe contrast measurements in the case of partial correction. I restrict my study to the case of a multi-mode interferometer with short exposures, the main operating mode of the GI2T, also planned for the Very Large Telescope Interferometer installed at Cerro Paranal in Chile. I develop a method for calibrating the spatio-temporal coherence losses given the structure function of the corrected wavefronts. In particular, I show that it is possible to estimate, frequency by frequency, the power spectral density of the short-exposure images, a very useful method for increasing the coverage of the spatial frequency plane in the observation of extended objects. The last part of this thesis is devoted to the instrumental developments in which I took part. I developed a test bench for qualifying the curvature adaptive optics system intended for the GI2T, and I studied the optical implementation of two such systems in the beam combination table.
NASA Astrophysics Data System (ADS)
Pradhan, Moumita; Pradhan, Dinesh; Bandyopadhyay, G.
2010-10-01
Fuzzy systems have demonstrated their ability to solve different kinds of problems in various application domains, and there is increasing interest in applying fuzzy concepts to improve the tasks of any system. Here a case study of a thermal power plant is considered. The existing time estimates represent the time to complete the tasks; applying a fuzzy linear approach, it becomes clear that at each confidence level less time is needed to complete the tasks, and a shorter schedule in turn requires less cost. The objective of this paper is to show how a system becomes more efficient when a fuzzy linear approach is applied, by optimizing the time estimates so that all tasks are performed on appropriate schedules. For the case study, the optimistic time (to), pessimistic time (tp) and most likely time (tm) are the data collected from the thermal power plant. These time estimates are used to calculate the expected time (te), which represents the time to complete a particular task taking all eventualities into account. Using the project evaluation and review technique (PERT) and critical path method (CPM), the critical path duration (CPD) of the project is calculated; this indicates that, with a probability of fifty percent, the total tasks can be completed in fifty days. Using the critical path duration and the standard deviation of the critical path, the probability of completing the whole project within a given time can then be obtained from the normal distribution. Using the trapezoidal rule on the four time estimates (to, tm, tp, te), we can calculate the defuzzified value of the time estimates. For the fuzzy ranges, we consider four confidence levels, namely 0.4, 0.6, 0.8 and 1. From our study, it is seen that time estimates at confidence levels between 0.4 and 0.8 give better results than the other confidence levels.
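The classical PERT arithmetic the abstract relies on is compact enough to show directly. The sketch below uses hypothetical activities on a critical path, computes te = (to + 4 tm + tp) / 6 and the path variance, and evaluates a completion probability with a normal approximation; the activity numbers are invented for illustration, not plant data.

```python
# Hedged PERT/CPM sketch with hypothetical activities (to, tm, tp) in days.
from math import sqrt
from statistics import NormalDist

critical_path = [
    (4, 6, 10),   # activity A: optimistic, most likely, pessimistic
    (8, 12, 20),  # activity B
    (5, 7, 11),   # activity C
]

te = [(to + 4 * tm + tp) / 6 for to, tm, tp in critical_path]
var = [((tp - to) / 6) ** 2 for to, tm, tp in critical_path]

cpd = sum(te)             # critical-path duration (expected project length)
sigma = sqrt(sum(var))    # standard deviation of the critical path

# By construction, the probability of finishing within cpd itself is 0.5;
# as an example, evaluate the probability of finishing within cpd + 2 days.
p = NormalDist(mu=cpd, sigma=sigma).cdf(cpd + 2)
print(f"expected duration {cpd:.1f} d, sigma {sigma:.1f} d, P(T <= cpd+2 d) = {p:.2f}")
```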
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondoh, T.; Hayashi, K.; Matsumoto, T.
1995-10-09
We report two sisters in a family presenting manifestations of Wiskott-Aldrich syndrome (WAS), an X-linked immunodeficiency disorder. The elder sister had suffered from recurrent infections, small thrombocytopenic petechiae, purpura, and eczema for 7 years. The younger sister had the same manifestations as the elder sister's over a 2-year period, and died of intracranial bleeding at age 2 years. All the laboratory data of the two patients were compatible with WAS, although they were females. Sialophorin analysis with the selective radioactive labeling method for this protein revealed that, in the elder sister, a 115-kD band that should be specific for sialophorin was reduced in quantity, and instead an additional 135-kD fragment was present as the main band. Polymerase chain reaction (PCR) analysis of the sialophorin gene and single-strand conformation polymorphism (SSCP) analysis of the PCR product demonstrated that there was no detectable size change nor electrophoretic mobility change in the DNA from either patient. The results indicated that their sialophorin gene structure might be normal. Studies of the mother-daughter transmission of the X chromosome, using a pERT84-MaeIII polymorphic marker mapped at Xp21 and an HPRT gene polymorphism at Xq26, suggested that each sister had inherited a different X chromosome from the mother. Two explanations are plausible for the occurrence of WAS in our patients: the WAS in these patients is attributable to an autosomal gene mutation which may regulate sialophorin gene expression through the WAS gene, or, alternatively, the condition in this family is an autosomal recessive disorder etiologically separate from the X-linked WAS. 17 refs., 6 figs., 1 tab.
NASA Astrophysics Data System (ADS)
Mahmoudi, Soulmaz; Gholizadeh, Ahmad
2018-06-01
In this work, Y3-xSrxFe5-xZrxO12 (0.0 ≤ x ≤ 0.7) samples were synthesized by the citrate precursor method at 1050 °C. The structural and magnetic properties of Y3-xSrxFe5-xZrxO12 were studied using the X-ray diffraction technique, scanning electron microscopy, transmission electron microscopy, Fourier transform infrared spectroscopy and vibrating sample magnetometry. XRD analysis using the X'Pert package shows a pure garnet phase with cubic structure (space group Ia-3d); the impurity phase SrZrO3 is observed when x exceeds 0.6. Rietveld refinement using the FullProf program shows lattice volume expansion with increasing degree of Sr/Zr substitution. The crystallite sizes remain constant in the range x = 0.0-0.5 and then increase. The different morphologies observed in SEM micrographs of the samples can be related to different values of the microstrain in the samples. The hysteresis loops of the samples reveal superparamagnetic behaviour. Also, the drop in coercivity with increasing substitution mainly originates from a reduction in the magneto-elastic anisotropy energy. The values of the saturation magnetization (MS) vary non-monotonically with increasing Sr/Zr substitution, reaching a maximum of 26.14 emu/g for the sample with x = 0.1 and a minimum of 17.64 emu/g for x = 0.0 and x = 0.2. The variation of MS in these samples results from a superposition of three factors: reduction of Fe3+ on the a-site, change in the FeT-O-FeO angle, and the magnetic core size.
Preliminary model and validation of molten carbonate fuel cell kinetics under sulphur poisoning
NASA Astrophysics Data System (ADS)
Audasso, E.; Nam, S.; Arato, E.; Bosio, B.
2017-06-01
The MCFC represents an effective technology for CO2 capture and related applications. If used for these purposes, due to the working conditions and the possible feed gases, the MCFC must cope with a number of different poisoning gases such as sulphur compounds. In the literature, various works deal with the development of kinetic models to describe MCFC performance, supporting both industrial applications and laboratory simulations; however, attempts to build a proper model able to consider the effects of poisoning compounds are scarce. The first aim of the present work is to provide a semi-empirical kinetic formulation capable of taking into account the effects that sulphur compounds (in particular SO2) have on MCFC performance. The second aim is to provide a practical example of how to effectively include poisoning effects in kinetic models used to simulate fuel cell performance. To test the reliability of the proposed approach, the obtained formulation is implemented in the kinetic core of the SIMFC (SIMulation of Fuel Cells) code, an MCFC 3D model developed by the Process Engineering Research Team (PERT) of the University of Genova. Validation is performed using data collected at the Korea Institute of Science and Technology in Seoul.
Aspects technologiques d'un alternateur synchrone entièrement supraconducteur de 18 kVA
NASA Astrophysics Data System (ADS)
Védrine, P.; Brunet, Y.; Tixador, P.; Bonnet, P.; Laumond, Y.; Sabrié, J. L.
1991-02-01
Taking advantage of the recent development of low-loss a.c. superconducting conductors, the realization of a fully superconducting generator is now possible. In collaboration with GEC-ALSTHOM, we first defined, in the CRTBT-LEG laboratory, the main characteristics of the machine and the technological problems induced by the use of superconducting wires in both the armature and the field windings. We have now constructed the first fully superconducting generator with separate cryostats for the stator and rotor windings. The low level of a.c. losses obtained in NbTi multifilamentary strands produced by GEC-ALSTHOM made it possible, from 1984 onwards, to envisage building a synchronous generator in which both windings, field and armature, would be superconducting. The work undertaken at CRTBT-LEG in collaboration with GEC-ALSTHOM aimed to define the characteristics of the machine and to identify and then solve the technological problems related to the operating conditions of these superconductors, in order to now build the first fully superconducting generator with a horizontal axis and separate stator and rotor cryostats.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, Carolyn L; Lord, Anna C. Snider
The Bryan Mound caprock was subjected to extensive sulphur mining prior to the development of the Strategic Petroleum Reserve. Undoubtedly, the mining has modified the caprock integrity. Cavern wells at Bryan Mound have been subject to a host of well integrity concerns, with many likely compromised by the cavernous caprock, the surrounding corrosive environment (H2SO4), and the associated elevated residual temperatures, all of which are a product of the mining activities. The intent of this study was to understand the sulphur mining process, how the mining has affected the stability of the caprock, and how the compromised caprock has influenced the integrity of the cavern wells. After an extensive search to collect pertinent information through state agencies, literature searches, and the Sandia SPR library, a better understanding of the caprock can be inferred from the knowledge gained. Specifically, the discovery of the original ore reserve map goes a long way towards modeling caprock stability. In addition, the gained knowledge of sulphur mining - subsidence, superheated corrosive waters, and caprock collapse - helps to better predict the post-mining effects on wellbore integrity.
Using Apex To Construct CPM-GOMS Models
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2006-01-01
A process for automatically generating computational models of human/computer interactions, as well as graphical and textual representations of the models, has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of analysis according to an underlying method known in the art as the goals, operators, methods, and selection (GOMS) method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about behaviors of skilled computer users in routine tasks, but heretofore, such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to be used to model human behavior in complex, dynamic tasks. An inherent capability of Apex for scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model with measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM). For the test, a Visual Basic mockup of an ATM was created, with a provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model turned out to approximate the measured times fairly well (see figure). While these results are promising, there is a need for further development of the process. Moreover, it will also be necessary to test other, more complex models: the actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.
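To make the interleaving idea concrete, here is a hedged sketch of the critical-path computation that underlies a CPM-GOMS-style schedule: operators are nodes with durations and precedence edges (including resource-ordering edges), and the predicted task time is the length of the longest path. The operator names, durations, and dependencies below are invented for illustration and are not Apex output.

```python
# Sketch of a CPM-style critical-path schedule over cognitive/perceptual/motor
# operators. Durations (ms) and dependency edges are illustrative assumptions.

tasks = {
    # name: (duration_ms, [predecessors])
    "perceive_prompt": (100, []),
    "decide_target":   (50,  ["perceive_prompt"]),
    "move_mouse":      (1100, ["decide_target"]),
    "perceive_button": (100, ["perceive_prompt"]),   # overlaps the mouse move
    "verify_target":   (50,  ["perceive_button"]),
    "click":           (230, ["move_mouse", "verify_target"]),
}

def schedule(tasks):
    """Return the earliest finish time of every operator (longest-path recursion)."""
    finish = {}
    def end(name):
        if name not in finish:
            dur, preds = tasks[name]
            start = max((end(p) for p in preds), default=0)
            finish[name] = start + dur
        return finish[name]
    return {n: end(n) for n in tasks}

ends = schedule(tasks)
print("predicted total time:", max(ends.values()), "ms")  # critical-path length
```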
Renormalization of quark propagators from twisted-mass lattice QCD at N_f = 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blossier, B.; Boucaud, Ph.; Pene, O.
2011-04-01
We present results concerning the nonperturbative evaluation of the renormalization constant for the quark field, Z_q, from lattice simulations with twisted-mass quarks and three values of the lattice spacing. We use the regularization-invariant momentum-subtraction (RI'-MOM) scheme. Z_q has very large lattice spacing artefacts; it is considered here as a test bed to elaborate accurate methods which will be used for other renormalization constants. We recall and develop the nonperturbative correction methods and propose tools to test the quality of the correction. These tests are also applied to the perturbative correction method. We check that the lattice-spacing artefacts indeed scale as a^2 p^2. We then study the running of Z_q with particular attention to the nonperturbative effects, presumably dominated by the dimension-two gluon condensate <A^2> in Landau gauge. We show indeed that this effect is present, and not small. We check its scaling in physical units, confirming that it is a continuum effect. It gives a ~4% contribution at 2 GeV. Different variants are used in order to test the reliability of our result and estimate the systematic uncertainties. Finally, combining all our results and using the known Wilson coefficient of <A^2>, we find g^2(mu^2)<A^2>_{mu^2} = 2.01(11)(+0.61/-0.73) GeV^2 at mu = 10 GeV, the local operator A^2 being renormalized in the MS scheme. This last result is in fair agreement within uncertainties with the value independently extracted from the strong coupling constant. We convert the nonperturbative part of Z_q from the regularization-invariant momentum-subtraction (RI'-MOM) scheme to MS. Our result for the quark field renormalization constant in the MS scheme is Z_q^{MS,pert}((2 GeV)^2, g_bare^2) = 0.750(3)(7) - 0.313(20)(g_bare^2 - 1.5) for the perturbative contribution and Z_q^{MS,nonperturbative}((2 GeV)^2, g_bare^2) = 0.781(6)(21) - 0.313(20)(g_bare^2 - 1.5) when the nonperturbative contribution is included.
L’infection bactérienne chez le patient brûlé
Le Floch, R.; Naux, E.; Arnould, J.F.
2015-01-01
Summary The death of a burn patient is most often caused by an infection, bacterial in the great majority of cases. Loss of the skin barrier, invasive devices and burn-related immunosuppression are three mechanisms contributing to the occurrence of these infections. In an inflammatory patient, the general clinical signs of infection are poorly discriminating. Given the severity of infections in these patients, their prevention is an essential part of management. Because of the pharmacokinetic particularities of burn patients, antibiotic dosing must be adapted and blood level monitoring must be systematic. At a time when resistance is becoming a concern, research into therapeutic alternatives (including virulence-factor inhibitors, antimicrobial peptides, polyphenols, immunotherapy, and others) is becoming crucial. One of the most promising therapeutic possibilities appears to be phage therapy. PMID:27252607
Comparaison des effets des irradiations γ, X et UV dans les fibres optiques
NASA Astrophysics Data System (ADS)
Girard, S.; Ouerdane, Y.; Baggio, J.; Boukenter, A.; Meunier, J.-P.; Leray, J.-L.
2005-06-01
Optical fibres offer many advantages that encourage their integration into applications that must withstand the radiation environments associated with the civil, space or military domains. However, their exposure to radiation creates point defects in the pure or doped amorphous silica making up the different parts of the optical fibre. These defects cause, in particular, a transient increase in the linear attenuation of the fibres, which is responsible for degradation or even loss of the propagated signal. In this article, we compare the effects of two types of irradiation: an X-ray pulse and a cumulative γ dose. The effects of these irradiations are then compared with those induced by ultraviolet exposure (244 nm) on the absorption properties of the fibres. We show that there are similarities between these different excitations and that it is possible, under certain conditions, to use them to assess the ability of certain optical fibres to operate in a given nuclear environment.
Interfacial reactions and wetting in Al-Mg sintered by powder metallurgy process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faisal, Heny, E-mail: faisal@physics.its.ac.id; Darminto,; Triwikantoro,
2016-04-19
This study was conducted to analyze the effect of temperature variation on the bonding interface of sintered Al-Mg composites and the effect of these variations on the sintered density and hardness. The research was carried out with Al powder and Mg powder as base materials and n-butanol as solvent. The method used in this study is powder metallurgy, with a composition of 60% volume fraction Al - 40% Mg. Al and Mg were mixed with n-butanol for 1 hour at 500 rpm. The mixture was then cold compressed in a die 1.4 cm in diameter and 2.8 cm in height, pressed at 20 MPa and held for 15 minutes. After the samples were formed into pellets, they were sintered at temperatures of 300 °C, 350 °C, 400 °C and 450 °C. Characterization was carried out using green density and sintered density testing, X-ray diffraction (XRD), scanning electron microscopy (SEM), Vickers microhardness, and compression testing. XRD data analysis was done using X'Pert HighScore Plus (HSP) to determine whether a new phase had formed. The test results show that the sintered density increases with increasing sintering temperature (shrinkage); however, at 450 °C it decreased (swelling). With increasing sintered density, the interfacial bonding becomes stronger and more compact, so the hardness also increases. From the SEM/EDX results, Mg diffuses into Al in the boundary region. At temperatures of 300 °C, 350 °C and 400 °C, the phases formed are Al, Mg and MgO, while the phases formed at 450 °C are aluminum magnesium (Al3Mg2) and aluminum magnesium zinc (AlMg2Zn).
Titanium minerals of placer deposits as a source for new materials
NASA Astrophysics Data System (ADS)
Kotova, Olga; Ponaryadov, Alexey
2015-04-01
Heavy mineral deposits are a source of the economically important element titanium, which is contained in ilmenite and leucoxene. The mineral composition of placer titanium ore and the localization pattern of ore minerals determine their processing and enrichment technologies. New data are presented on the mineralogy of titanium ores from a modern coastal-marine placer on Stradbroke Island, Eastern Australia, and from the Pizhma paleoplacer in Middle Timan, Russia, and on materials based on them. The samples were studied by the following methods: optical-mineralogical (stereomicroscope MBS-10, polarizing microscope POLAM L-311) and semiquantitative X-ray phase analysis (X-ray diffractometer X'Pert PRO MPD). In addition, microprobe (VEGA 3 TESCAN) and X-ray fluorescence analysis (XRF-1800 Shimadzu) were used. In mineralogical composition, the ores of both deposits are complex and enriched in valuable minerals. Apart from the main ore concentrates, it is possible to obtain accompanying nonmetallic products, which will increase the efficiency of deposit exploitation. Ilmenite dominates in the ore sands of Stradbroke Island, and leucoxene dominates in the ores of the Pizhma titanium deposit. Australian ilmenite and its altered varieties are mainly characterized by a very high MnO content (from 5.24 to 11.08%). The irregular distribution of iron, titanium and manganese oxides in the altered ilmenite is shown in the paper; for example, in the areas where ilmenite is replaced by pseudorutile, the concentrations of these elements vary greatly owing to the various ratios of basic components in each grain, whereas their ratios are equal in the areas of rutile evolution. Moreover, high contents of gold, diamonds and also rare earth elements (REE) and rare metals (their forms are not determined) were noted. We found native copper on the surface of minerals composing the titanium-bearing sandstones of the Pizhma placer. According to the technological features of the rocks (density and magnetic properties), the studied placers are similar. The obtained results of the physical studies, the mineral composition features, the morphostructural characteristics and the degree of alteration of the titanium minerals from the placers indicate a high potential for physical methods of processing (gravitational and magnetic separation, flotation) and the possible application of combined methods of processing. Production of pigment titanium dioxide, for further production of titanium white, paper, plastics, etc., is the usual application area of titanium concentrates. Titanium dioxide of high chemical purity is used to produce optically transparent glass, fiber optics, electronics (iPad), and piezoceramics, and in the medical and food industries. We designed photocatalysts based on leucoxene from the Pizhma placer. The results showed that photocatalysts based on rutile, synthesized from leucoxene from the Pizhma deposit, can be applied to decompose phenols in water.
A new Fe-Mn-Si alloplastic biomaterial as bone grafting material: In vivo study
NASA Astrophysics Data System (ADS)
Fântânariu, Mircea; Trincă, Lucia Carmen; Solcan, Carmen; Trofin, Alina; Strungaru, Ştefan; Şindilar, Eusebiu Viorel; Plăvan, Gabriel; Stanciu, Sergiu
2015-10-01
Designing substrates with suitable mechanical properties and targeted degradation behavior is key to the development of biomaterials for medical applications. In orthopedics, graft material may be used to fill bony defects or to promote bone formation in osseous defects created by trauma or surgical intervention. Incorporation of Si may increase the bioactivity of the implant locally, both by enhancing interactions at the graft-host interface and by having a potential endocrine-like effect on osteoblasts. A Fe-Mn-Si alloy was obtained as an alloplastic graft material for bone implants that need a long recovery period. The surface morphology of the resulting specimens was investigated using scanning electron microscopy (VegaTescan LMH II, SE detector, 30 kV), X-ray diffraction (X'Pert equipment) and energy-dispersive X-ray analysis (Bruker EDS equipment). The objective of this study was to evaluate in vivo the mechanisms of degradation and the effects of implantation on the main metabolic organs. Biochemical, histological, plain X-ray and computed tomography investigations showed good compatibility of the subcutaneous implants in the rat organism. The implantation of the Fe-Mn-Si alloy in a critical-size bone (tibial) defect rat model did not induce adverse biological reactions and provided temporary mechanical support to the affected bone area. The biodegradation products were hydroxide layers which adhered to the substrate surface. The Fe-Mn-Si alloy ensured mechanical integrity in the rat tibial defects during bone regeneration.
Le carcinome neuro-endocrine cutané primitif: à propos d'un nouveau cas et revue de la littérature
Boukind, Samira; Elatiqi, Oumkeltoum; Dlimi, Meriem; Elamrani, Driss; Benchamkha, Yassine; Ettalbi, Saloua
2015-01-01
Primary cutaneous neuroendocrine carcinoma (CNEC) is a rare and aggressive skin tumour of the elderly, favoured by sun exposure and immunosuppression. It is characterized by an aggressive course with a high rate of recurrence, regional lymph node spread and a risk of distant metastases. We report a case of this tumour in a 67-year-old patient, presenting as a haemorrhagic nodular plaque measuring 16 × 14 cm. The patient underwent wide surgical excision with coverage of the defect by a latissimus dorsi musculocutaneous flap, axillary lymph node dissection and adjuvant radiotherapy. After a follow-up of 2 years and 2 months, the patient is still alive without metastasis or recurrence. As the literature is sparse, diagnostic and therapeutic management is controversial and therefore heterogeneous. Overall the prognosis is poor, and some parameters correlated with prognosis have been identified. PMID:26185585
Mbongo, Jean Alfred; Mouanga, Alain; Miabaou, Didace Massamba; Nzelie, Aya; Iloki, Léon Hervé
2016-01-01
Any disease is an evil in itself that must be eradicated, because it often significantly impairs quality of life. Vaginal hysterectomy is indicated for patients with certain serious gynaecological conditions; it is therefore beneficial, but it can also have a harmful impact on a woman's quality of life. We therefore wanted to explore how women experienced their disease and vaginal hysterectomy (VH) before and after surgery. We carried out a prospective qualitative study, with clinical data collection over a 12-month period, involving women who had undergone vaginal hysterectomy. Those who did not agree to participate in the study, or who had no telephone contact, were not included. During the disease, the women's experience included sexual discomfort 26/40 (65%), genital bleeding 12/40 (30%) and pelvic pain 13/40 (32.5%). Postoperatively, transient dyspareunia 30/40 (75%) and headaches secondary to anaesthesia 4/40 (10%) were noted. The psychological experience before VH was dominated by fear of surgery in all patients, sleep disturbances 38/40 (95%), anxiety 30/40 (75%), and a feeling of shame related to difficulty in performing sexual intercourse because of the prolapse 26/40 (65%) and/or because of genital bleeding due to uterine fibroids 14/40 (35%). A feeling of loss of femininity was reported by 26/40 women with uterine prolapse (65%), and altered self-esteem by 26/40 (65%). These subjective assessments improved with VH, counterbalancing the loss of the reproductive organ. No information had been given by the women to their relatives or family members before surgery, reflecting their feeling of embarrassment or shame. Resolution of symptoms was observed in all cases, even though in one case (1.25%) a new complication (rectal injury) was noted. Regarding sexual activity, all couples reported satisfaction after treatment. The distressing experience of the disease and of vaginal hysterectomy beforehand is clearly improved after surgery. PMID:28292042
Stratum corneum dysfunction in dandruff
Turner, G A; Hoptroff, M; Harding, C R
2012-01-01
Summary Synopsis Dandruff is characterized by a flaky, pruritic scalp and affects up to half the world’s population post-puberty. The aetiology of dandruff is multifactorial, influenced by Malassezia, sebum production and individual susceptibility. The commensal yeast Malassezia is a strong contributory factor to dandruff formation, but the presence of Malassezia on healthy scalps indicates that Malassezia alone is not a sufficient cause. A healthy stratum corneum (SC) forms a protective barrier to prevent water loss and maintain hydration of the scalp. It also protects against external insults such as microorganisms, including Malassezia, and toxic materials. Severe or chronic barrier damage can impair proper hydration, leading to atypical epidermal proliferation, keratinocyte differentiation and SC maturation, which may underlie some dandruff symptoms. The depleted and disorganized structural lipids of the dandruff SC are consistent with the weakened barrier indicated by elevated transepidermal water loss. Further evidence of a weakened barrier in dandruff includes subclinical inflammation and higher susceptibility to topical irritants. We are proposing that disruption of the SC of the scalp may facilitate dandruff generation, in part by affecting susceptibility to metabolites from Malassezia. Treatment of dandruff with cosmetic products to directly improve SC integrity while providing effective antifungal activity may thus be beneficial. PMID:22515370
A Study of the Unstable Modes in High Mach Number Gaseous Jets and Shear Layers
NASA Astrophysics Data System (ADS)
Bassett, Gene Marcel
1993-01-01
Instabilities affecting the propagation of supersonic gaseous jets have been studied using high resolution computer simulations with the Piecewise-Parabolic Method (PPM). These results are discussed in relation to jets from galactic nuclei. These studies involve a detailed treatment of a single section of a very long jet, approximating the dynamics by using periodic boundary conditions. Shear layer simulations have explored the effects of shear layers on the growth of nonlinear instabilities. Convergence of the numerical approximations has been tested by comparing jet simulations with different grid resolutions. The effects of initial conditions and geometry on the dominant disruptive instabilities have also been explored. Simulations of shear layers with a variety of thicknesses, Mach numbers and densities perturbed by incident sound waves imply that the time for the excited kink modes to grow large in amplitude and disrupt the shear layer is tau_g = (546 +/- 24) (M/4)^{1.7} (A_pert/0.02)^{-0.4} delta/c, where M is the jet Mach number, delta is the half-width of the shear layer, and A_pert is the perturbation amplitude. For simulations of periodic jets, the initial velocity perturbations set up zig-zag shock patterns inside the jet. In each case a single zig-zag shock pattern (an odd mode) or a double zig-zag shock pattern (an even mode) grows to dominate the flow. The dominant kink instability responsible for these shock patterns moves approximately at the linear resonance velocity, v_mode = c_ext v_relative / (c_jet + c_ext). For high resolution simulations (those with 150 or more computational zones across the jet width), the even mode dominates if the even perturbation is initially higher in amplitude than the odd perturbation. For low resolution simulations, the odd mode dominates even for a stronger even-mode perturbation. In high resolution simulations the jet boundary rolls up and large amounts of external gas are entrained into the jet. In low resolution simulations this entrainment process is impeded by numerical viscosity. The three-dimensional jet simulations behave similarly to two-dimensional jet runs with the same grid resolutions.
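The fitted scaling relation is simple to evaluate directly. The snippet below computes the growth time for illustrative values of Mach number and perturbation amplitude, expressed in units of delta/c as in the text; the inputs are examples, not values from the simulations.

```python
# Evaluate the fitted growth-time scaling:
# tau_g = 546 * (M/4)**1.7 * (A_pert/0.02)**(-0.4) * delta/c
def growth_time(mach, a_pert, delta_over_c=1.0, coeff=546.0):
    """Return tau_g in units of delta/c (pass a physical delta/c if known)."""
    return coeff * (mach / 4.0) ** 1.7 * (a_pert / 0.02) ** (-0.4) * delta_over_c

# Example: Mach 4 jet with a 2% perturbation amplitude -> tau_g ~ 546 delta/c.
print(growth_time(mach=4.0, a_pert=0.02))
```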
NASA Astrophysics Data System (ADS)
Lévêque, J.; Netter, D.; Rezzoug, A.; Caron, J. P.
1995-12-01
The increase of fault current levels in electrical networks has led to renewed interest in superconducting current limiters since 1983, when an ac wire with low losses was developed. Such limiters are based on the natural transition from the superconducting state to the normal resistive state. The usual models are not sufficient to study this transition, so this paper deals with a numerical resolution of the coupled electro-thermal problem. Computed and experimental results compare favourably. Some factors which affect the transition, as well as the influence of the starting instant, are studied. The growth of short-circuit currents in electrical networks has revived interest in superconducting current limiters since the development in 1983 of superconducting wire with low losses under alternating current. The operating principle of these limiters is based on the transition of the superconducting material to the normal state. Simplified models prove insufficient for studying the transition, so a numerical resolution of the electro-thermal problem is carried out. The simulation results are compared with experimental ones, and two studies are then conducted: one on the factors influencing the propagation of the transition, the other on the type of short circuit.
Evidence for retrovirus infections in green turtles Chelonia mydas from the Hawaiian islands
Casey, R.N.; Quackenbush, S.L.; Work, Thierry M.; Balazs, G.H.; Bowser, P.R.; Casey, J.W.
1997-01-01
Apparently normal Hawaiian green turtles Chelonia mydas and those displaying fibropapillomas were analyzed for infection by retroviruses. Strikingly, all samples were positive for polymerase enhanced reverse transcriptase (PERT), with levels high enough to quantitate by the conventional reverse transcriptase (RT) assay. However, samples of skin, even from asymptomatic turtles, were RT positive, although the levels of enzyme activity in healthy turtles hatched and raised in captivity were much lower than those observed in asymptomatic free-ranging turtles. Turtles with fibropapillomas displayed a broad range of reverse transcriptase activity. Skin and eye fibropapillomas and a heart tumor were further analyzed and shown to have reverse transcriptase activity that banded in a sucrose gradient at 1.17 g ml-1. The reverse transcriptase activity purified from the heart tumor displayed a temperature optimum of 37°C and showed a preference for Mn2+ over Mg2+. Sucrose gradient fractions of this sample displaying elevated reverse transcriptase activity contained primarily retroviral-sized particles with prominent envelope spikes when negatively stained and examined by electron microscopy. Sodium dodecylsulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis of gradient-purified virions revealed a conserved profile among 4 independent tumors and showed 7 prominent proteins having molecular weights of 116, 83, 51, 43, 40, 20 and 14 kDa. The data suggest that retroviral infections are widespread in Hawaiian green turtles, and a comprehensive investigation is warranted to address the possibility that these agents cause green turtle fibropapillomatosis (GTFP).
NASA Astrophysics Data System (ADS)
Chabin, M.; Malki, M.; Husson, E.; Morell, A.
1994-07-01
The evolution of the dielectric permittivity and loss factor under an external applied electric field has been studied in PbMg{1/3}Nb{2/3}O3 ceramics between 80 and 420 K. For a threshold field of 4 kV.cm^{-1}, it is possible to induce a ferroelectric transition from the average cubic phase to a macroscopically polar phase. The poling and depoling temperatures depend on the various combinations of thermal treatments and on the applied field strength. The transition between the nanopolar state and the macropolar state is discussed.
Kent, Dorothea Stark; Remer, Thomas; Blumenthal, Caron; Hunt, Sharon; Simonds, Sharon; Egert, Sarah; Gaskin, Kevin J
2018-05-01
The 'gold standard' test for the indirect determination of pancreatic function status in infants with cystic fibrosis (CF), the 72-hour fecal fat excretion test, is likely to become obsolete in the near future. Alternative indirect pancreatic function tests with sufficient sensitivity and specificity to determine pancreatic phenotype need further evaluation in CF infants. The aim was to evaluate the clinical utility of both the noninvasive, nonradioactive 13C-mixed triglyceride (MTG) breath test and fecal elastase-1 (FE1) in comparison with the 72-hour fecal fat assessment in infants with CF. The 13C-MTG breath test and the monoclonal and polyclonal FE1 assessments in stool were compared with the 72-hour fecal fat assessment in 24 infants with CF. Oral pancreatic enzyme substitution (PERT; if already commenced) was stopped before the tests. Sensitivity rates between 82% and 100% for CF patients with pancreatic insufficiency, assessed by both the 13C-MTG breath test and the FE1 tests, proved to be high and promising. The 13C-MTG breath test (31%-38%), as well as both FE1 tests assessed by the monoclonal (46%-54%) and the polyclonal (45%) ELISA kits, however, showed unacceptably low sensitivity rates for the detection of pancreatic-sufficient CF patients in the present study. The 13C-MTG breath test with the nondispersive infrared spectroscopy (NDIRS) technique, as well as both FE1 tests, are not alternatives to the fecal fat balance test for the evaluation of pancreatic function in CF infants during the first year of life.
Status and Progress of High-efficiency Silicon Solar Cells
NASA Astrophysics Data System (ADS)
Xiao, Shaoqing; Xu, Shuyan
High-efficiency Si solar cells have attracted more and more attention from researchers, scientists and engineers in the photovoltaic (PV) industry over the past few decades. Many researchers and engineers in both academia and industry seek solutions to improve cell efficiency and reduce cost. This desire has stimulated a growing number of major research and research-infrastructure programmes, and a rapidly increasing number of publications in this field. This chapter reviews the materials, devices and physics of high-efficiency Si solar cells developed over the last 20 years. The chapter covers a fair number of topics, not only from the material viewpoint, introducing the various materials required for high-efficiency Si solar cells, such as base materials (FZ-Si, CZ-Si, MCZ-Si and multi-Si), emitter materials (diffused and deposited emitters), passivation materials (Al back surface field, high-low junction, SiO2, SiOx, SiNx, Al2O3 and a-Si:H), and other functional materials (antireflective layers, TCO and metal electrodes), but also from the device and physics point of view, elaborating on the physics, cell concepts, development and status of all kinds of high-efficiency Si solar cells, such as passivated emitter and rear contact (PERC), passivated emitter and rear locally diffused (PERL), passivated emitter and rear totally diffused (PERT), Pluto, interdigitated back-contact (IBC), emitter-wrap-through (EWT), metallization-wrap-through (MWT), heterojunction with intrinsic thin layer (HIT) and so on.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunningham, D.W.; Skinner, D.E.
1999-10-26
The objective of this Phase 1 subcontract was to establish an efficient production plating system capable of depositing thin-film CdTe and CdS on substrates up to 0.55 m2. This baseline would then be used to build on and extend deposition areas to 0.94 m2 in the next two phases. The following achievements have been demonstrated. Chemical-bath deposition of CdS and electrochemical deposition of CdTe was demonstrated on 0.55 m2 substrates; the films were characterized using optical and electrical techniques to increase the understanding of the materials and aid in loss analysis. A stand-alone, prototype CdTe reaction tank was built and commissioned, allowing the BP Solar team to perform full-scale trials as part of this subcontract. BP Solar installed two outdoor systems for reliability and performance testing. The 2-kW, ground-mounted, grid-connected system contains seventy-two 0.43-m2 Apollo® module interconnects. Two modules have been supplied to NREL for evaluation on their Performance and Energy Rating Test bed (PERT) for kWh evaluation. BP Solar further characterized the process waste stream with the aim of closing the loop on the system. Currently, various pieces of equipment are being investigated for suitability of particle and total organic compound removal.
Experiments on planetary ices at UCL
NASA Astrophysics Data System (ADS)
Grindrod, P. M.; Fortes, A. D.; Wood, I. G.; Dobson, D.; Sammonds, P. R.; Stone-Drake, L.; Vocadlo, L.
2007-08-01
Using a suite of techniques and equipment, we conduct several different types of experiments on planetary ices at UCL. Samples are prepared in the Ice Physics Laboratory, which consists of a 5-chamber complex of interconnected cold rooms, controllable from +30 to -30 deg C. Within this laboratory we have a functioning triaxial deformation cell operating at low temperature (down to -90 deg C) and high pressures (300 MPa), an Automatic Ice Fabric Analyser (AIFA) and a low-temperature microscope with CCD output. Polycrystalline samples, 40 mm diameter by 100 mm long, are compressed in the triaxial rig with a confining pressure; single-crystal specimens are compressed in a separate uniaxial creep rig which operates at zero confining pressure for surface studies. A cold stage is also available for ice microstructural studies on our new Jeol JSM-6480LV SEM, which also allows tensile, compression and/or bending tests, with load ranges from less than 2 N to 5000 N. Finally, we also use a cold stage on a new PANalytical X'Pert PRO MPD high-resolution powder diffractometer to study the structure and phase behaviour of icy materials. Recent highlights of our work include: (1) derivation of a manufacturing process for methane clathrate at low temperatures, analysed in the X-Ray Diffraction Laboratory, for future rheological experiments, (2) analysis of the growth behaviour of MS11, (3) refurbishment of, and initial calibration tests on, the triaxial deformation cell using ice Ih, and (4) creep tests on gypsum and epsomite using the single-crystal deformation cell. Further experiments will build on these preliminary results.
NASA Astrophysics Data System (ADS)
Iorio, L.
2012-07-01
We put model-independent dynamical constraints on the net electric charge Q of some astronomical and astrophysical objects by assuming that their exterior spacetimes are described by the Reissner-Nordström metric, which induces an additional potential U_RN ∝ Q^2 r^{-2}. From the current bounds Δ(dϖ/dt) on any anomalies in the secular perihelion rate dϖ/dt of Mercury and from the Earth-Mercury ranging Δρ, we have |Q_⊙| ≲ 1-0.4 × 10^{18} C. Such constraints are 60-200 times tighter than those recently inferred in the literature. For the Earth, the perigee precession of the Moon, determined with the Lunar Laser Ranging technique, and the intersatellite ranging Δρ for the GRACE mission yield |Q_⊕| ≲ 5-0.4 × 10^{14} C. The periastron rate of the double pulsar PSR J0737-3039A/B system allows us to infer |Q_NS| ≲ 5 × 10^{19} C. According to the perinigricon precession of the main-sequence S2 star in Sgr A*, the electric charge carried by the compact object hosted in the Galactic Center may be as large as |Q_•| ≲ 4 × 10^{27} C. Our results extend to other hypothetical power-law interactions inducing extra potentials U_pert = Ψ r^{-2} as well. It turns out that the terrestrial GRACE mission yields the tightest constraint on the parameter Ψ, assumed to be a universal constant, amounting to |Ψ| ≲ 5 × 10^9 m^4 s^{-2}.
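As a rough illustration of how such bounds arise, the sketch below inverts the standard first-order result that a disturbing potential U_pert = Ψ r^{-2} (per unit mass) shifts the perihelion by about −2πΨ/[GMa(1−e²)] per orbit; the residual perihelion-rate limit used as input is a hypothetical placeholder, not a value quoted above.

```python
# Hedged sketch: order-of-magnitude bound on Psi from a perihelion-rate limit,
# assuming the standard result that a disturbing potential U = Psi / r^2
# (per unit mass) shifts the perihelion by about -2*pi*Psi / (G*M*a*(1 - e^2))
# per orbit. The residual-rate bound below is a hypothetical placeholder.
import math

G     = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
AU    = 1.496e11         # m

# Mercury's orbit
a, e = 0.387 * AU, 0.206
P = 2 * math.pi * math.sqrt(a**3 / (G * M_sun))   # orbital period, s

# Hypothetical bound on an anomalous perihelion rate (rad/s)
mas_per_cy = math.radians(1 / 3.6e6) / (100 * 365.25 * 86400)
pomega_dot_max = 0.1 * mas_per_cy                 # assume |anomaly| < 0.1 mas/century

# Invert the per-orbit shift for the maximum admissible |Psi|
psi_max = pomega_dot_max * P * G * M_sun * a * (1 - e**2) / (2 * math.pi)
print(f"|Psi| <~ {psi_max:.2e} m^4 s^-2")
```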
Experimental characterization of the acoustic transmission of aeronautical structures
NASA Astrophysics Data System (ADS)
Pointel, Vincent
Passenger comfort inside aircraft during flight is an area of constant improvement. The growing proportion of composite materials in the fabrication of aeronautical structures brings new problems to solve. The low damping of these structures, the counterpart of their low weight-to-stiffness ratio, is acoustically unfavourable, which forces designers to find means of improvement. Moreover, the mechanisms of sound transmission through an aeronautical-type double-wall system are not completely understood, which is the motivation for this study. The main objective of this project is to build a database for the industrial partner of the project, Bombardier Aéronautique. Experimental data on the sound-insulation performance of complete systems representative of an aircraft fuselage are indeed very rare in the scientific literature, which is why an experimental methodology is used in this project. Two different fuselage designs are compared. The first has a stiffened metallic skin (the outer part of the fuselage), whereas the second consists of a composite sandwich panel. In both cases, a trim panel of sandwich construction is used. A glass-wool acoustic treatment is placed inside each fuselage. Vibration isolators are used to connect the two panels of the fuselage. Laboratory simulation of the turbulent boundary layer, which is the dominant excitation source during flight, is not yet possible except in a wind tunnel. Two excitation cases are therefore considered to approximate this loading: a mechanical excitation (shaker) and an acoustic one (diffuse field). Validation and analysis of the results are carried out with the NOVA and VAONE software packages used by the industrial partner of this project. A secondary objective is to validate the double-wall model implemented in NOVA. Investigation of the effect of local compression of the acoustic treatment on the transmission loss of a single wall shows that this action has no noticeable benefit. On the other hand, it appears that the stiffness of the vibration isolators is directly linked to the insulation performance of the double-wall system; the double-wall system with a composite skin seems less sensitive to this parameter. The NOVA double-wall model gives good results for the double-wall system with a metallic skin. Larger discrepancies are observed in the mid and high frequencies for the system with a composite skin. Nevertheless, the good trend of the prediction, given the complexity of the structure, is rather promising.
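As a point of reference for single-wall transmission loss, the sketch below evaluates the field-incidence mass law, TL ≈ 20 log10(m·f) − 47 dB, a common first-order benchmark before turning to full double-wall models such as those in NOVA or VAONE; the panel surface mass is an assumed value, not one of the tested fuselage skins.

```python
# Minimal sketch: field-incidence mass-law transmission loss for a single panel,
# TL ~= 20*log10(m * f) - 47 dB  (m in kg/m^2, f in Hz), a common first-order
# benchmark. The surface mass below is a hypothetical aluminium skin, not a
# value from the study.
import math

def mass_law_tl(surface_mass_kg_m2: float, freq_hz: float) -> float:
    """Approximate diffuse-field transmission loss of a limp single panel, in dB."""
    return 20 * math.log10(surface_mass_kg_m2 * freq_hz) - 47

skin = 2700 * 0.002          # 2 mm aluminium skin -> ~5.4 kg/m^2 (assumed)
for f in (250, 500, 1000, 2000, 4000):
    print(f"{f:5d} Hz : TL ~ {mass_law_tl(skin, f):5.1f} dB")
```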
NASA Astrophysics Data System (ADS)
Mejdi, Abderrazak
Aircraft fuselages are generally made of aluminium or composite, reinforced by longitudinal stiffeners (stringers) and transverse ones (frames). The stiffeners may be metallic or composite. During the different phases of flight, aircraft structures are subjected to airborne excitations (turbulent boundary layer: TBL; diffuse acoustic field: DAF) on the outer skin, whose acoustic energy is transmitted into the cabin. The engines, mounted on the structure, produce a significant structure-borne excitation. The objectives of this project are to develop and implement modelling strategies for aircraft fuselages subjected to airborne and structure-borne excitations. First, an update of existing TBL models is given in the second chapter in order to classify them better. The vibro-acoustic response properties of finite and infinite flat structures are analysed. In the third chapter, the assumptions underlying existing models of orthogonally stiffened metallic structures subjected to mechanical, DAF and TBL excitations are first re-examined. A detailed and reliable model of these structures is then developed. The model is validated numerically by means of the finite element (FEM) and boundary element (BEM) methods. Experimental validation tests are performed on aircraft panels supplied by aerospace companies. In the fourth chapter, an extension to composite structures reinforced by stiffeners that are also composite and of complex shape is established. A simple analytical model is also implemented and validated numerically. In the fifth chapter, the modelling of periodic stiffened composite structures is further refined by taking into account the coupling effects between in-plane and transverse displacements. The size effect of finite periodic structures is also taken into account. The models developed made it possible to carry out several parametric studies on the vibro-acoustic properties of aircraft structures, thereby easing the designers' task. In the framework of this thesis, one article was published in the Journal of Sound and Vibration and three others were submitted, respectively, to the Journal of the Acoustical Society of America, the International Journal of Solid Mechanics and the Journal of Sound and Vibration. Keywords: stiffened structures, composites, vibro-acoustics, transmission loss.
Effects of annealing on the structure and magnetic properties of Fe80B20 magnetostrictive fibers.
Zhu, Qianke; Zhang, Shuling; Geng, Guihong; Li, Qiushu; Zhang, Kewei; Zhang, Lin
2016-07-04
Fe80B20 amorphous alloys exhibit excellent soft magnetic properties, high abrasive resistance and outstanding corrosion resistance. In this work, Fe80B20 amorphous micro-fibers with HC of 3.33 Oe were first fabricated, and the effects of annealing temperature on the structure and magnetic properties of the fibers were investigated. In this study, Fe80B20 amorphous fibers were prepared by the single-roller melt-spinning method. The structures of as-spun and annealed fibers were investigated by X-ray diffraction (XRD) using a PANalytical X'Pert Powder diffractometer with Cu Kα radiation. The morphology of the fibers was observed by scanning electron microscopy (SEM) (Hitachi S-4800). Differential scanning calorimetry (DSC) measurements of the fibers were performed on a Mettler Toledo TGA/DSC1 device under N2 protection. A vibrating sample magnetometer (VSM, Versalab) was used to examine the magnetic properties of the fibers. The resonance behavior of the fibers was characterized by an impedance analyzer (Agilent 4294A) with a home-made copper coil. The XRD patterns show that the fibers retain an amorphous structure until the annealing temperature reaches 500°C. The DSC results show that the crystallization temperature of the fibers is 449°C. The crystallization activation energy is calculated to be 221 kJ/mol using the Kissinger formula. The SEM images show that a few dendrites appear at the fiber surface after annealing. The results indicate that the coercivities HC (//) and HC (⊥) increase slightly with increasing annealing temperature up to 400°C, and then increase dramatically with further increases in annealing temperature, owing to a significant increase in magneto-crystalline anisotropy and magneto-elastic anisotropy. The Q value first increases slightly when the annealing temperature rises from room temperature (RT) to 300°C, then decreases up to 400°C. Eventually, the value of Q increases to ~2000 at an annealing temperature of 500°C. In this study, Fe80B20 amorphous fibers with a diameter of 60 μm were prepared by the single-roller melt-spinning method and annealed at 200°C, 300°C, 400°C and 500°C, respectively. XRD results indicate that the fiber structure remains amorphous when the annealing temperature is below 400°C. The α-Fe phase and Fe3B phase appear when the annealing temperature rises to 500°C, which is above the crystallization temperature of 449°C. The crystallization activation energy is calculated to be 221 kJ/mol. The coercivity increases with increasing annealing temperature, which is attributed to the increase in total anisotropy. All the as-spun and annealed fibers exhibit good resonance behavior for magnetostrictive sensors.
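The activation energy quoted above follows from a Kissinger analysis, in which ln(β/Tp²) is regressed against 1/Tp and the slope gives −Ea/R; the sketch below illustrates that fit with hypothetical heating rates and peak temperatures, not the measured Fe80B20 data.

```python
# Hedged sketch of a Kissinger analysis: the activation energy follows from the
# slope of ln(beta / Tp^2) versus 1/Tp, where beta is the DSC heating rate and
# Tp the crystallization peak temperature. The (beta, Tp) pairs below are
# hypothetical illustrations chosen only to demonstrate the arithmetic.
import numpy as np

R = 8.314  # J mol^-1 K^-1

beta = np.array([5, 10, 20, 40]) / 60.0           # heating rates, K/s (5-40 K/min, assumed)
Tp   = np.array([706.0, 719.0, 732.5, 746.5])     # peak temperatures, K (assumed)

slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R
print(f"Kissinger activation energy ~ {Ea / 1000:.0f} kJ/mol")
```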
Villarreal, C; Rodriguez, M H; Bown, D N; Arredondo-Jiménez, J I
1995-04-01
Village-scale trials were carried out in southern Mexico to compare the efficacy of indoor-spraying of the pyrethroid insecticide lambda-cyhalothrin applied either as low-volume (LV) aqueous emulsion or as wettable-powder (WP) aqueous suspension for residual control of the principal coastal malaria vector Anopheles albimanus. Three indoor spray rounds were conducted at 3-month intervals using back-pack mist-blowers to apply lambda-cyhalothrin 12.5 mg a.i./m2 by LV, whereas the WP was applied by conventional compression sprayer at a mean rate of 26.5 mg a.i./m2. Both treatments caused mosquito mortality indoors and outdoors (collected inside house curtains) as a result of contact with treated surfaces before and after feeding, but had no significant impact on overall population density of An. albimanus resting indoors or assessed by human bait collections. Contact bioassays showed that WP and LV treatments with lambda-cyhalothrin were effective for 12-20 weeks (> 75% mortality) without causing excito-repellency. Compared to the WP treatment (8 houses/man/day), LV treatment (25 houses/man/day) was more than 3 times quicker per house, potentially saving 68% of labour costs. This is offset, however, by the much lower unit price of a compression sprayer (e.g. Hudson 'X-pert' at US$120) than a mist-blower (e.g. 'Super Jolly' at US$350), and higher running costs for LV applications. It was calculated, therefore, that LV becomes more economical than WP after 18.8 treatments/100 houses/10 men at equivalent rates of application, or after 7.6 spray rounds with half-rate LV applications.
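A rough sense of the break-even arithmetic can be sketched as follows; the equipment prices and application rates come from the abstract, but the daily labour cost is assumed and the higher LV running costs are ignored, so the result will not reproduce the quoted 18.8-round figure.

```python
# Rough break-even sketch for the LV-vs-WP comparison above. Equipment prices and
# application rates come from the abstract; the daily labour cost is a hypothetical
# placeholder, and the higher LV running costs mentioned in the abstract are
# ignored, so this is an illustration of the arithmetic only.
sprayer_cost, blower_cost = 120.0, 350.0   # US$ per unit (from the abstract)
wp_rate, lv_rate = 8, 25                   # houses/man/day (from the abstract)
houses, n_operators = 100, 10              # one spray round, abstract's accounting unit
daily_wage = 10.0                          # US$/man-day (assumed)

extra_equipment = n_operators * (blower_cost - sprayer_cost)
labour_saving_per_round = (houses / wp_rate - houses / lv_rate) * daily_wage

rounds = extra_equipment / labour_saving_per_round
print(f"LV recovers its extra equipment cost after ~{rounds:.1f} rounds of {houses} houses")
```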
Chisti, Mohammod Jobayer; Graham, Stephen M; Duke, Trevor; Ahmed, Tahmeed; Ashraf, Hasan; Faruque, Abu Syed Golam; La Vincente, Sophie; Banu, Sayera; Raqib, Rubhana; Salam, Mohammed Abdus
2014-01-01
Severe malnutrition is a risk factor for pneumonia due to a wide range of pathogens, but aetiological data are limited and the role of Mycobacterium tuberculosis is uncertain. We prospectively investigated severely malnourished young children (<5 years) with radiological pneumonia admitted over a 15-month period. Investigations included blood culture and sputa for microscopy and mycobacterial culture. The Xpert MTB/RIF assay was introduced during the study. Study children were followed for 12 weeks following their discharge from the hospital. 405 eligible children were enrolled, with a median age of 10 months. Bacterial pathogens were isolated from blood culture in 18 (4.4%) children, of which 72% were Gram negatives. Tuberculosis was confirmed microbiologically in 7% (27/396) of children that provided sputum - 10 by culture, 21 by Xpert MTB/RIF assay, and 4 by both tests. The diagnostic yield from induced sputum was 6% compared to 3.5% from gastric aspirate. Sixty (16%) additional children had tuberculosis diagnosed clinically that was not microbiologically confirmed. Most confirmed tuberculosis cases did not have a positive contact history or positive tuberculin test. The sensitivity and specificity of the Xpert MTB/RIF assay compared to culture were 67% (95% CI: 24-94) and 92% (95% CI: 87-95), respectively. The overall case-fatality rate was 17%, and half of the deaths occurred at home following discharge from the hospital. TB was common in severely malnourished Bangladeshi children with pneumonia. The Xpert MTB/RIF assay provided a higher case-detection rate than sputum microscopy and culture. The high mortality among the study children underscores the need for further research aimed at improved case detection and management for better outcomes.
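For readers who want to reproduce this kind of agreement statistic, the sketch below computes sensitivity and specificity against culture with Wilson 95% confidence intervals; the 2x2 counts are hypothetical illustrations, not the study data.

```python
# Minimal sketch of test-agreement statistics: sensitivity and specificity of a
# new assay against a reference, with Wilson 95% confidence intervals. The 2x2
# counts below are hypothetical, chosen only to illustrate the calculation.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

tp, fn = 8, 4        # assay-positive / assay-negative among reference-positives (assumed)
tn, fp = 180, 16     # among reference-negatives (assumed)

sens, spec = tp / (tp + fn), tn / (tn + fp)
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sens:.0%} (95% CI {lo:.0%}-{hi:.0%})")
lo, hi = wilson_ci(tn, tn + fp)
print(f"specificity {spec:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```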
Xue, Ling; Cohnstaedt, Lee W.; Scott, H. Morgan; Scoglio, Caterina
2013-01-01
Rift Valley fever is a vector-borne zoonotic disease which causes high morbidity and mortality in livestock. In the event Rift Valley fever virus is introduced to the United States or other non-endemic areas, understanding the potential patterns of spread and the areas at risk based on disease vectors and hosts will be vital for developing mitigation strategies. Presented here is a general network-based mathematical model of Rift Valley fever. Given a lack of empirical data on disease vector species and their vector competence, this discrete time epidemic model uses stochastic parameters following several PERT distributions to model the dynamic interactions between hosts and likely North American mosquito vectors in dispersed geographic areas. Spatial effects and climate factors are also addressed in the model. The model is applied to a large directed asymmetric network of 3,621 nodes based on actual farms to examine a hypothetical introduction to some counties of Texas, an important ranching area in the United States of America. The nodes of the networks represent livestock farms, livestock markets, and feedlots, and the links represent cattle movements and mosquito diffusion between different nodes. Cattle and mosquito (Aedes and Culex) populations are treated with different contact networks to assess virus propagation. Rift Valley fever virus spread is assessed under various initial infection conditions (infected mosquito eggs, adults or cattle). A surprising trend is that fewer initial infectious organisms result in a longer delay before a larger and more prolonged outbreak. The delay is likely caused by a lack of herd immunity while the infection expands geographically before becoming an epidemic involving many dispersed farms and animals almost simultaneously. Cattle movement between farms is a large driver of virus expansion, thus quarantines can be an efficient mitigation strategy to prevent further geographic spread. PMID:23667453
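A minimal sketch of the "PERT distribution" draws used in such models is given below: a Beta-PERT with minimum a, mode m and maximum b (shape factor λ = 4), sampled by rescaling a Beta variate; the vector-competence range shown is an assumed illustration, not a parameter from the paper.

```python
# Minimal sketch of Beta-PERT parameter sampling: given minimum a, mode m and
# maximum b (shape factor lambda = 4), draw samples by rescaling a Beta variate.
# The transmission-probability range below is a hypothetical illustration.
import numpy as np

def sample_pert(a: float, m: float, b: float, size: int, lam: float = 4.0,
                rng: np.random.Generator = np.random.default_rng(0)) -> np.ndarray:
    """Draw samples from a Beta-PERT(a, m, b) distribution."""
    alpha = 1 + lam * (m - a) / (b - a)
    beta  = 1 + lam * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

# e.g. an uncertain mosquito-to-host transmission probability (assumed range)
p = sample_pert(a=0.05, m=0.20, b=0.60, size=10_000)
print(f"sample mean {p.mean():.3f}  (PERT mean formula gives {(0.05 + 4*0.20 + 0.60)/6:.3f})")
```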
[The effect of parental attitudes on habilitation of hearing impaired children].
Ristić, Snezana; Kocić, Biljana; Milosević, Zoran
2013-04-01
Habilitation of children with hearing loss is a very complex process and requires team work. The length of the habilitation period, as well as its effects, are individual and depend on many factors. The goal of any habilitation process is to improve the quality of life of each individual to the maximal extent possible, regardless of whether a cochlear implant has been embedded or other forms of amplification applied. Long-standing practice has shown that the influence of parents and their attitudes on the habilitation process is great. The aim of this study was to examine the extent of this influence in order to educate the parents to help their children maximize their potential. The instruments used in this study were: a semi-structured interview, the Parental Attitudes Scale (PAD), and the Package Nottingham Early Estimates (NEAP). The participants in this study were parents with children aged 4-15 years. The extent of hearing loss in the children was recorded at the beginning and during the habilitation process, and all were actively involved for at least three months. For the statistical analysis of this study, descriptive and inferential statistical techniques were applied. The results of our study show significant differences in certain parental attitudes. Close cooperation of the parents and quality interactions of experts with the parents are a prerequisite for successful habilitation. The results of this research show that the process of habilitation of children with hearing and speech disorders is significantly affected by parental attitudes. Parental attitudes proved to be especially important for children with greater hearing loss. It was also noted that in our society mainly mothers are concerned with hearing-impaired children, which indicates that the educational process should be extended to both parents.
Crossan, Claire; Mourad, Nizar I; Smith, Karen; Gianello, Pierre; Scobie, Linda
2018-05-21
Subcutaneous implantation of a macroencapsulated patch containing human allogenic islets has been successfully used to alleviate type 1 diabetes mellitus (T1DM) in a human recipient without the need for immunosuppression. The use of encapsulated porcine islets to treat T1DM has also been reported. Although no evidence of pathogen transfer using this technology has been reported to date, we deemed it appropriate to determine if the encapsulation technology would prevent the release of virus, in particular, the porcine endogenous retrovirus (PERV). HEK293 (human epithelial kidney) and swine testis (ST) cells were co-cultured with macroencapsulated pig islets embedded in an alginate patch, macroencapsulated PK15 (swine kidney epithelial) cells embedded in an alginate patch and free PK15 cells. Cells and supernatant were harvested at weekly time points from the cultures for up to 60 days and screened for evidence of PERV release using qRT-PCR to detect PERV RNA and SG-PERT to detect reverse transcriptase (RT). No PERV virus, or evidence of PERV replication, was detected in the culture medium of HEK293 or pig cells cultured with encapsulated porcine islets. Increased PERV activity relative to the background was not detected in ST cells cultured with encapsulated PK15 cells. However, PERV was detected in 1 of the 3 experimental replicates of HEK293 cells cultured with encapsulated PK15 cells. Both HEK293 and ST cells cultured with free PK15 cells showed an increase in RT detection. With the exception of 1 replicate, there does not appear to be evidence of transmission of replication competent PERV from the encapsulated islet cells or the positive control PK15 cells across the alginate barrier. The detection of PERV would suggest the alginate barrier of this replicate may have become compromised, emphasizing the importance of quality control when producing encapsulated islet patches. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Pre-melting Behaviour in fcc Metals
NASA Astrophysics Data System (ADS)
Pamato, M. G.; Wood, I. G.; Dobson, D. P.; Hunt, S.; Vocadlo, L.
2016-12-01
Although the Earth's core is accepted to be made of an iron-nickel alloy with a few percent of light elements, its exact structure and composition are still unknown. Seismological and mineralogical models of the Earth's inner core do not agree, with mineralogical models derived from ab initio calculations predicting shear-wave velocities up to 30% greater than seismically observed values. Recent computer simulations revealed that such a difference may be explained by a dramatic, non-linear softening of the elastic constants of Fe prior to melting. To date, computer calculations are the only results on pre-melting of direct applicability to the Earth's core, and it is essential to systematically investigate such phenomena at inner-core pressures and temperatures. Measuring the pressure dependence of pre-melting effects at such conditions and to the required precision is, however, extremely challenging. Pre-melting effects have also been observed or suggested to occur in other materials, particularly noble metals, which exhibit large departures from linearity (modulus defects) at elevated temperatures. The aim of this study is to investigate to what extent pre-melting behaviour occurs in the physical properties of other metals at more experimentally tractable conditions. In particular, we report measurements of density and thermal expansion coefficients of both pure and alloyed gold (Au) up to their melting points. Au is an ideal test material since it crystallises in a simple monatomic face-centred cubic structure and has a relatively low melting temperature. Precise measurements of unit-cell lattice parameters were performed using a PANalytical X'Pert Pro powder diffractometer, equipped with an incident-beam monochromator (giving very high resolution diffraction patterns) and with environmental stages covering the range from 40 K to 1373 K, with a readily achievable temperature resolution of 1 K. We will discuss the circumstances under which pre-melting occurs, its mechanism(s), the effect of impurities and defects in the solid, and the consequences of pre-melting in the Earth's core.
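As an illustration of the data reduction implied here, the sketch below estimates the volumetric thermal expansion coefficient α_V(T) = (1/V) dV/dT (three times the linear coefficient for a cubic cell) from a smoothed lattice-parameter curve; the a(T) values are loosely gold-like but hypothetical, not the measured data.

```python
# Hedged sketch: volumetric thermal expansion from unit-cell data for a cubic
# metal, alpha_V(T) = 3 * (1/a) * da/dT, estimated by differentiating a quadratic
# fitted to lattice parameters a(T). The a(T) values are hypothetical,
# loosely gold-like numbers, not the measured data.
import numpy as np

T = np.array([300, 500, 700, 900, 1100, 1300], dtype=float)      # K
a = np.array([4.078, 4.089, 4.101, 4.114, 4.128, 4.144])         # Angstrom (assumed)

coeffs = np.polyfit(T, a, 2)                   # smooth a(T) with a quadratic
dadT = np.polyval(np.polyder(coeffs), T)
alpha_V = 3 * dadT / np.polyval(coeffs, T)     # cubic cell: alpha_V = 3 * alpha_linear

for Ti, al in zip(T, alpha_V):
    print(f"T = {Ti:6.0f} K   alpha_V ~ {al:.2e} 1/K")
```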
Electrotechnical prospects for superconducting applications
NASA Astrophysics Data System (ADS)
Brunet, Y.; Renard, M.
After a review of the classical limitations due to iron and copper losses, we give the superconducting properties needed to achieve significant progress, either in the size or in the efficiency of electrotechnical plants. The successive achievements in superconductors (SC) are explained in relation to the physics of conventional SC and the properties needed for technology. The problems encountered in electrical engineering, in decreasing order of interest, are: network losses and stability, energy storage and production, transformation and protection. In each case, SC solutions may be found or at least imagined. We review the limitations estimated in each case, generally by extrapolation of small-scale experiments with 4 K SC, and try to see what modifications may be obtained by the use of high-Tc SC. Special attention is paid to energy storage and electrical machinery, and the interest of completely superconducting plants is shown.
Labay, Keith A.; Wilson, Frederic H.
2004-01-01
The four parks depicted on this map make up a single World Heritage Site that covers 24.3 million acres. Together, they comprise the largest internationally protected land-based ecosystem on the planet. The United Nations Educational, Scientific and Cultural Organization (UNESCO) established the World Heritage Program in 1972 for the identification and protection of the world's irreplaceable natural and cultural resources. World Heritage Sites are important as storehouses of memory and evolution, as anchors for sustainable tourism and community, and as laboratories for the study and understanding of the earth and culture. This World Heritage Site protects the prominent mountain ranges of Kluane, Wrangell, Saint Elias, and Chugach. It includes many of the tallest peaks on the continent, the world's largest non-polar icefield, extensive glaciers, vital watersheds, and expanses of dramatic wilderness.
Thrust at N^3LL with power corrections and a precision global fit for α_s(m_Z)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbate, Riccardo; Stewart, Iain W.; Fickinger, Michael
2011-04-01
We give a factorization formula for the e+e- thrust distribution dσ/dτ with τ = 1 - T based on the soft-collinear effective theory. The result is applicable for all τ, i.e. in the peak, tail, and far-tail regions. The formula includes O(α_s^3) fixed-order QCD results, resummation of singular partonic α_s^j ln^k(τ)/τ terms with N^3LL accuracy, hadronization effects from fitting a universal nonperturbative soft function defined with field theory, bottom quark mass effects, QED corrections, and the dominant top mass dependent terms from the axial anomaly. We do not rely on Monte Carlo generators to determine nonperturbative effects since they are not compatible with higher order perturbative analyses. Instead our treatment is based on fitting nonperturbative matrix elements in field theory, which are moments Ω_i of a nonperturbative soft function. We present a global analysis of all available thrust data measured at center-of-mass energies Q = 35-207 GeV in the tail region, where a two-parameter fit to α_s(m_Z) and the first moment Ω_1 suffices. We use a short-distance scheme to define Ω_1, called the R-gap scheme, thus ensuring that the perturbative dσ/dτ does not suffer from an O(Λ_QCD) renormalon ambiguity. We find α_s(m_Z) = 0.1135 ± (0.0002)_expt ± (0.0005)_hadr ± (0.0009)_pert, with χ²/dof = 0.91, where the displayed 1-sigma errors are the total experimental error, the hadronization uncertainty, and the perturbative theory uncertainty, respectively. The hadronization uncertainty in α_s is significantly decreased compared to earlier analyses by our two-parameter fit, which determines Ω_1 = 0.323 GeV with 16% uncertainty.
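For orientation, the small sketch below combines the three quoted uncertainties on α_s(m_Z) in quadrature, a common convention when error sources are treated as independent; whether the original analysis combines them in exactly this way is not stated in the abstract.

```python
# Small sketch: combining the three quoted uncertainties on alpha_s(mZ) in
# quadrature, assuming they are independent (a common convention; not necessarily
# the exact procedure of the original analysis).
import math

alpha_s  = 0.1135
err_expt = 0.0002
err_hadr = 0.0005
err_pert = 0.0009

total = math.sqrt(err_expt**2 + err_hadr**2 + err_pert**2)
print(f"alpha_s(mZ) = {alpha_s:.4f} +/- {total:.4f} (expt, hadr, pert in quadrature)")
```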
Greenhouse as part of a life support system for a Martian crew
NASA Astrophysics Data System (ADS)
Sychev, V. N.; Levinskikh, M. A.; Grigorie, A. I.
One of the most important problems in space exploration is the biomedical support of humans in a hostile environment that cannot sustain their life and development. An integral part of biomedical support is an adequate life support system (LSS). In the foreseeable future a manned flight to Mars can become a reality. When designing an LSS for a Martian expedition, we assume that over the next 15-20 years we will be able to support the Martian crew using systems and hardware that have been in operation on the International Space Station (ISS). Their extended use on Mir and the ISS has demonstrated their high reliability and provided detailed information about their operation in space. Today it is recognized that integration of a biological subsystem (at least, a greenhouse) in an LSS will enrich the Martian spacecraft environment and mitigate potential adverse effects of a long-term exposure to a man-made (abiogenic) environment. Our estimates show that an adequate amount of wet biomass of lettuce cultures can be produced in a greenhouse with a planting area of 10 m2. This means that a greenhouse of a sufficient size can be housed in 5 standard Space Shuttle racks. A greenhouse made of modules can be installed as a single unit in one area or as several subunits in different areas of the Martian vehicle. According to our calculations, a greenhouse of this capacity can provide a 6-member crew with adequate amounts of vitamins and minerals, as well as regenerate about 5% of oxygen, 3.6% of water and over 1% of food components. Incorporation of a greenhouse will make it necessary to redesign current LSSs by changing material flows and upgrading their components. Prior to this, we have to investigate operational characteristics of greenhouses on space vehicles, design systems capable of supporting continuous and prolonged operation of greenhouses, and select plants that can provide crews with required vitamins and minerals.
NASA Astrophysics Data System (ADS)
Haguma, Didier
It is now established that climate change will have repercussions on water resources. The situation is of concern for the hydroelectric power sector, since water is the driving force for generating this form of energy. It will be important to adapt the operating rules and/or the installations of water-resource systems in order to minimize the negative impacts and/or to capitalize on the positive effects that climate change may bring. This research project focuses on developing a management method for water-resource systems that takes climate projections into account, so as to better anticipate the impacts of the evolving climate on hydroelectric production and to establish strategies for adapting to climate change. The study area is the Manicouagan River basin, located in the central part of Quebec. A new approach to water-resource optimization in the context of climate change is proposed. The approach treats the seasonality and non-stationarity of the climate explicitly in order to represent the uncertainty attached to an ensemble of climate projections. This approach makes it possible to integrate climate projections into the water-resource optimization problem for long-term management of water systems and to develop strategies for adapting these systems to climate change. The results show that the impacts of climate change on the hydrological regime of the Manicouagan River basin would be an earlier and attenuated spring flood and an increase in the annual inflow volume. Adapting the operating rules of the system would lead to an increase in hydroelectric production. Nevertheless, a loss of performance of the existing installations would be observed because of the increase in non-productive spills in the future climate. Structural adaptation strategies were analysed to increase the generating capacity and the flow capacity of certain hydroelectric plants in order to improve system performance. An economic analysis made it possible to choose the best adaptation measures and to determine the right time to implement them. The results of this research give water-system managers a tool for better anticipating the consequences of climate change on hydroelectric production, including plant efficiency, non-productive spills and the most appropriate time to modify the systems. Keywords: water-resource systems, adaptation to climate change, Manicouagan River
Analytical study of the operation of reluctance motors fed at variable frequency
NASA Astrophysics Data System (ADS)
Sargos, F. M.; Gudefin, E. J.; Zaskalicky, P.
1995-03-01
In switched reluctance motors fed by a constant voltage source (such as a battery) at high frequencies, the current becomes difficult to predict and often cannot reach a given reference value, because of the variation of the inductances with the rotor position; the "motional" e.m.f. generates commutation problems which worsen with frequency. Both optimal control and approximate design of the motor require a quick and simple calculation of currents, powers and losses; in principle, however, the non-linear electrical equation requires numerical solution, whose results cannot be extrapolated. By linearizing this equation by intervals, the method proposed here makes it possible to express analytically, in every case, the phase currents, the torque and the copper losses, provided the feeding voltage itself is constant by intervals. The model neglects saturation, but a simple adjustment of the inductance (chosen ad libitum) allows it to be taken into account. The calculation is immediate and perfectly accurate as long as the machine parameters themselves are well defined. Some results are given as examples for two usual feeding modes.
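Although the paper's contribution is a closed-form, interval-by-interval solution, a simple numerical sketch of the underlying phase equation v = Ri + d(L(θ)i)/dt, with a piecewise-linear inductance and an interval-wise constant voltage, is given below for illustration; all numerical values (resistance, inductance range, speed, supply voltage) are assumptions.

```python
# Illustrative sketch (not the authors' closed-form solution): integrating the
# phase equation  v = R*i + d(L(theta)*i)/dt  of a switched reluctance motor at
# constant speed, with a piecewise-linear inductance profile L(theta) and a
# voltage that is constant by intervals. All numerical values are assumptions.
import numpy as np

R, omega = 0.5, 2 * np.pi * 50          # ohm, electrical rad/s (assumed)
L_min, L_max = 2e-3, 10e-3              # H, unaligned / aligned inductance (assumed)

def L(theta):
    """Triangular, piecewise-linear inductance over one electrical period."""
    x = np.mod(theta, 2 * np.pi) / (2 * np.pi)
    return L_min + (L_max - L_min) * (1 - abs(2 * x - 1))

def dL(theta, h=1e-6):
    return (L(theta + h) - L(theta - h)) / (2 * h)

def v(theta):
    """+V while the phase is fed, -V during demagnetisation (assumed pattern)."""
    return 48.0 if np.mod(theta, 2 * np.pi) < np.pi else -48.0

dt, i, t, current = 1e-6, 0.0, 0.0, []
while t < 0.04:                          # two electrical periods
    th = omega * t
    di = (v(th) - R * i - i * omega * dL(th)) / L(th) * dt
    i = max(i + di, 0.0)                 # unidirectional current (diode clamp)
    current.append(i)
    t += dt
print(f"peak phase current ~ {max(current):.1f} A")
```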
NASA Astrophysics Data System (ADS)
Rahli, O.; Tadrist, L.; Miscevic, M.; Santini, R.
1995-11-01
Experimental studies have been carried out on fluid flow through porous media made up of randomly packed monodisperse fibers. The fibers, of fixed diameter, have an aspect ratio (L/d) varying between 4 and 70, giving porosities of the porous media between 0.35 and 0.90. The relationships between friction losses and superficial velocity have been systematically determined for each porous medium. A detailed analysis is carried out for low fluid velocities. The influence of flow direction on pressure drop is studied along two perpendicular directions: it is found that fibrous media behave globally in an isotropic manner. The permeability and the Kozeny-Carman parameter k_k are deduced from the experimental results. The permeability increases exponentially with porosity. The Kozeny-Carman parameter k_k is a decreasing function of the porosity ɛ(L/d) and tends asymptotically to a value close to that deduced from a modified Ergun relation. The important decrease observed for small aspect ratios is most likely an effect of the cut end faces of the fibers; this effect becomes negligible for larger aspect ratios. The results in terms of permeability and of the Kozeny-Carman parameter k_k are systematically compared to those deduced from various theoretical models. Generally, these models consider cylinders arranged in simple arrays, with the flow either parallel or perpendicular to the cylinder axes. The variation laws of the parameter k_k deduced from the different models present important discrepancies with our experimental results. The theoretical models, established for regular arrays of fibers, do not correctly describe the behavior of randomly packed fibers.
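A minimal sketch of the corresponding data reduction is shown below: Darcy's law fitted to the low-velocity pressure-gradient data gives the permeability K, which is then converted to a Kozeny-Carman parameter via k_k = ε³d²/[16K(1−ε)²] for cylindrical fibers of specific surface 4/d; all numerical values are hypothetical illustrations, not the measured ones.

```python
# Hedged sketch of the data reduction: fit Darcy's law to low-velocity pressure
# drop data to get the permeability K, then convert to a Kozeny-Carman parameter
# using k_k = eps^3 * d^2 / (16 * K * (1 - eps)^2), which assumes cylindrical
# fibers of diameter d (specific surface 4/d). All numbers are hypothetical.
import numpy as np

mu  = 1.8e-5                 # Pa.s, air viscosity
eps = 0.70                   # porosity (assumed)
d   = 200e-6                 # fiber diameter, m (assumed)

u     = np.array([0.02, 0.05, 0.10, 0.15, 0.20])      # superficial velocity, m/s (assumed)
dp_dl = np.array([190.0, 470.0, 950.0, 1430.0, 1900.0])  # pressure gradient, Pa/m (assumed)

# Darcy regime: dp/dL = (mu / K) * u  -> zero-intercept least-squares slope = mu/K
slope = np.sum(u * dp_dl) / np.sum(u * u)
K = mu / slope
k_k = eps**3 * d**2 / (16 * K * (1 - eps)**2)
print(f"K ~ {K:.2e} m^2,  Kozeny-Carman k_k ~ {k_k:.1f}")
```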
Towards the development of rapid screening techniques for shale gas core properties
NASA Astrophysics Data System (ADS)
Cave, Mark R.; Vane, Christopher; Kemp, Simon; Harrington, Jon; Cuss, Robert
2013-04-01
Shale gas has been produced for many years in the U.S.A. and forms around 8% of their total natural gas production. Recent testing for gas on the Fylde Coast in Lancashire, UK, suggests there are potentially large reserves which could be exploited. The increasing significance of shale gas has led to the need for deeper understanding of shale behaviour. Many factors govern whether a particular shale will become a shale gas resource, including: i) organic matter abundance, type and thermal maturity; ii) porosity-permeability relationships and pore size distribution; iii) brittleness and its relationship to mineralogy and rock fabric. Measurements of these properties require sophisticated and time-consuming laboratory techniques (Josh et al., 2012), whereas rapid screening techniques could provide timely results which could improve the efficiency and cost effectiveness of exploration. In this study, portable techniques that provide rapid on-site measurements (X-ray fluorescence (XRF) and infrared (IR) spectroscopy) have been calibrated against standard laboratory techniques (Rock-Eval 6 analyser, Vinci Technologies) to predict properties of potential shale gas material from core from the Bowland Shale at Roosecote, south Cumbria. Powder whole-rock XRD analysis was carried out using a PANalytical X'Pert Pro series diffractometer equipped with a cobalt-target tube and X'Celerator detector, operated at 45 kV and 40 mA. Preliminary work showed that, amongst various mineralogical and organic matter properties of the core, regression models could be used so that the total organic carbon content could be predicted from the IR spectra with a 95th-percentile confidence prediction error of 0.6% organic carbon, the free hydrocarbons with a 95th-percentile confidence prediction error of 0.6 mg HC/g rock, the bound hydrocarbons with a 95th-percentile confidence prediction error of 2.4 mg HC/g rock, mica content with a 95th-percentile confidence prediction error of 14%, and quartz content with a 95th-percentile confidence prediction error of 14%. Reference: Josh, M., Esteban, L., Delle Piane, C., Sarout, J., Dewhurst, D.N., Clennell, M.B., 2012. Journal of Petroleum Science and Engineering, 88-89, 107-124.
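A hedged sketch of the kind of spectra-to-property calibration described above is given below, using partial least squares regression and a cross-validated 95th-percentile absolute error; the spectra are synthetic stand-ins, not the Bowland Shale measurements, and any preprocessing used in the actual study is not reproduced.

```python
# Hedged sketch of a spectra-to-property calibration: partial least squares
# regression mapping IR spectra to total organic carbon, with a cross-validated
# 95th-percentile absolute prediction error. The spectra are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 60, 300

toc = rng.uniform(0.5, 6.0, n_samples)          # % organic carbon (synthetic)
basis = rng.normal(size=n_wavenumbers)          # one spectral "band" tied to TOC
spectra = np.outer(toc, basis) + rng.normal(scale=0.5, size=(n_samples, n_wavenumbers))

model = PLSRegression(n_components=5)
pred = cross_val_predict(model, spectra, toc, cv=10).ravel()
err95 = np.percentile(np.abs(pred - toc), 95)
print(f"95th-percentile absolute prediction error ~ {err95:.2f} % TOC")
```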
N'goran, Yves N'da Kouakou; Traore, Fatou; Tano, Micesse; Kramoh, Kouadio Euloge; Kakou, Jean-Baptiste Anzouan; Konin, Christophe; Kakou, Maurice Guikahue
2015-01-01
Introduction: The objective of our study was to describe the sociodemographic characteristics and cardiovascular risk factors (CVRF) of patients admitted for stroke (cerebrovascular accident, CVA) to a department other than neurology. Methods: Retrospective cross-sectional study over a 2-year period (Jan. 2010 to Dec. 2011), carried out in the emergency department of the Abidjan Heart Institute. Results: The study included 176 adults with a mean age of 60 years and a female predominance. The major risk factors found were arterial hypertension in 86.4% of cases, diabetes in 11.4% and smoking in 2.2%. The reasons for consultation were loss of consciousness in 36.4% of cases, hemiplegia in 31.8%, headache in 17.4%, vertigo in 10.9% and palpitations in 2.2%. Mean systolic blood pressure was 174 mmHg, mean diastolic blood pressure 105 mmHg and mean pulse pressure 70 mmHg. Strokes were associated with atrial fibrillation in 11.4% of cases. Ischemic strokes accounted for 84.1%. The course in the emergency department was marked by death in 17% (30) of cases. Conclusion: Stroke is a major public health problem. Despite its female predominance, it affected 44% of men in our study, and it is known that in Africa social activity rests on men. It remains a serious condition because of its high case-fatality rate. PMID:26327997
Akpoto, Yao Messanvi; Abalo, Anani; Gnandi-pio, Faré; Sonhaye, Lantam; Tchaou, Mazamaesso; Sama, Hamza Doles; Assenouwe, Sarakawabalo; Lamboni, Damessane; Amavi, Kossigan Adodossi; Adam, Saliou; Kpelao, Essossinam; Tengue, Kodjo; Songne-Gnamkoulamba, Badjona
2015-01-01
The aim of our study was to determine the frequency of limb fractures related to the performance of military duties within the Defence and Security Forces in an African setting, in order to highlight the impact of the various circumstances of occurrence. We undertook a descriptive retrospective study covering the period from 1 January 2004 to 31 December 2013. It concerned members of the defence and security forces treated for limb fractures during that period. Seven hundred and four (704) cases of limb fractures were recorded. The mean age of the patients was 30.57 years, with extremes of 19 and 55 years. Male predominance was clear (95.71%). The Army (51.05%) and the National Gendarmerie (38.86%) were the most represented. Enlisted ranks formed the majority (43.08%), followed by non-commissioned officers (32.59%). The annual frequency of limb fractures related to the military profession was 63 cases. Leg fractures were the most frequent lesions (32.96%). Military training courses and internships were the most common circumstances of occurrence (42.60%), followed by road traffic accidents (39.43%). The loss of duty days related to these injuries was estimated at 14,009 days per year. Leg fractures rank first among limb fractures related to the practice of the military profession. Military training courses and road traffic accidents are the two main circumstances of occurrence. PMID:27081434
Pedraza-Sánchez, Sigifredo; Lezana-Fernández, Jose Luis; Gonzalez, Yolanda; Martínez-Robles, Luis; Ventura-Ayala, María Laura; Sadowinski-Pine, Stanislaw; Nava-Frías, Margarita; Moreno-Espinosa, Sarbelio; Casanova, Jean-Laurent; Puel, Anne; Boisson-Dupuis, Stephanie; Torres, Martha
2017-01-01
In humans, recessive loss-of-function mutations in STAT1 are associated with mycobacterial and viral infections, whereas gain-of-function (GOF) mutations in STAT1 are associated with a type of primary immunodeficiency related mainly, but not exclusively, to chronic mucocutaneous candidiasis (CMC). We studied and established a molecular diagnosis in a pediatric patient with mycobacterial infections, associated with CMC. The patient, daughter of a non-consanguineous mestizo Mexican family, had axillary adenitis secondary to BCG vaccination and was cured with resection of the abscess at 1-year old. At the age of 4 years, she had a supraclavicular abscess with acid-fast-staining bacilli identified in the soft tissue and bone, with clinical signs of disseminated infection and a positive Gene-X-pert test, which responded to anti-mycobacterial drugs. Laboratory tests of the IL-12/interferon gamma (IFN-γ) circuit showed a higher production of IL-12p70 in the whole blood from the patient compared to healthy controls, when stimulated with BCG and BCG + IFN-γ. The whole blood of the patient produced 35% less IFN-γ compared to controls assessed by ELISA and flow cytometry, but IL-17 producing T cells from patient were almost absent in PBMC stimulated with PMA plus ionomycin. Signal transduction and activator of transcription 1 (STAT1) was hyperphosphorylated at tyrosine 701 in response to IFN-γ and -α, as demonstrated by flow cytometry and Western blotting in fresh blood mononuclear cells and in Epstein-Barr virus lymphoblastoid cell lines (EBV-LCLs); phosphorylation of STAT1 in EBV-LCLs from the patient was resistant to inhibition by staurosporine but sensitive to ruxolitinib, a Jak phosphorylation inhibitor. Genomic DNA sequencing showed a de novo mutation in STAT1 in cells from the patient, absent in her parents and brother; a known T385M missense mutation in the DNA-binding domain of the transcription factor was identified, and it is a GOF mutation. Therefore, GOF mutations in STAT1 can induce susceptibility not only to fungal but also to mycobacterial infections by mechanisms to be determined. PMID:29270166
Garbe, David S.; Fang, Yanshan; Zheng, Xiangzhong; Sowcik, Mallory; Anjum, Rana; Gygi, Steven P.; Sehgal, Amita
2013-01-01
Circadian rhythms in Drosophila rely on cyclic regulation of the period (per) and timeless (tim) clock genes. The molecular cycle requires rhythmic phosphorylation of PER and TIM proteins, which is mediated by several kinases and phosphatases such as Protein Phosphatase-2A (PP2A) and Protein Phosphatase-1 (PP1). Here, we used mass spectrometry to identify 35 “phospho-occupied” serine/threonine residues within PER, 24 of which are specifically regulated by PP1/PP2A. We found that cell culture assays were not good predictors of protein function in flies and so we generated per transgenes carrying phosphorylation site mutations and tested for rescue of the per01 arrhythmic phenotype. Surprisingly, most transgenes restore wild type rhythms despite carrying mutations in several phosphorylation sites. One particular transgene, in which T610 and S613 are mutated to alanine, restores daily rhythmicity, but dramatically lengthens the period to ∼30 hrs. Interestingly, the single S613A mutation extends the period by 2–3 hours, while the single T610A mutation has a minimal effect, suggesting these phospho-residues cooperate to control period length. Conservation of S613 from flies to humans suggests that it possesses a critical clock function, and mutational analysis of residues surrounding T610/S613 implicates the entire region in determining circadian period. Biochemical and immunohistochemical data indicate defects in overall phosphorylation and altered timely degradation of PER carrying the double or single S613A mutation(s). The PER-T610A/S613A mutant also alters CLK phosphorylation and CLK-mediated output. Lastly, we show that a mutation at a previously identified site, S596, is largely epistatic to S613A, suggesting that S613 negatively regulates phosphorylation at S596. Together these data establish functional significance for a new domain of PER, demonstrate that cooperativity between phosphorylation sites maintains PER function, and support a model in which specific phosphorylated regions regulate others to control circadian period. PMID:24086144
NASA Astrophysics Data System (ADS)
Floquet, Jimmy
In aluminum electrolysis cells, the highly corrosive reaction medium attacks the cell walls, which shortens their service life and increases production costs. The ledge, which forms under the effect of the heat losses that maintain the thermal balance of the cell, acts as the cell's natural protection, and its thickness must be controlled to maximize this effect. Should the ledge resorb unintentionally, the resulting damage can amount to several hundred thousand dollars per cell. The objective is therefore to develop an ultrasonic measurement of ledge thickness, since it would be non-intrusive and non-destructive. The expected accuracy is on the order of one centimetre for thickness measurements involving two materials and ranging from 5 to 20 cm. This accuracy is the key factor that would allow operators to control the ledge thickness effectively (maximizing wall protection while maximizing the energy efficiency of the process) by adding a heat flux. However, the effectiveness of an ultrasonic measurement in this hostile environment remains to be demonstrated. Preliminary work led to the selection of a contact ultrasonic transducer capable of withstanding the measurement conditions (high temperatures, uncharacterized materials, etc.). Various cold measurements (processed by time-frequency analysis) made it possible to evaluate the wave propagation velocity in the graphite cell material and in cryolite, demonstrating that the relevant ledge-thickness information can ultimately be extracted. Building on this characterization of the acoustic response of the materials, the next phase of the work is carried out on a reduced-scale model of the cell. The experimental setup, a furnace operating at 1050 °C and instrumented with numerous thermal sensors, will allow the intrusive LVDT measurement to be compared with that of the transducer under conditions close to the industrial measurement. Keywords: ultrasound, NDT, high temperature, aluminum, electrolysis cell.
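As a rough illustration of the pulse-echo principle behind such an ultrasonic thickness measurement (a sketch only; the velocities, echo times and two-layer geometry are assumptions, not values from the thesis):

# Illustrative pulse-echo thickness estimate for a two-layer wall (carbon block + ledge);
# velocities and echo times are assumed values, not measured ones.
def layer_thickness(delta_t_s, velocity_m_per_s):
    # The pulse crosses the layer twice (out and back), hence the factor 1/2.
    return 0.5 * velocity_m_per_s * delta_t_s

v_carbon = 2400.0   # m/s in the graphite/carbon lining (assumed)
v_ledge = 4000.0    # m/s in the frozen cryolite ledge (assumed)
t_interface = 2 * 0.10 / v_carbon            # simulated echo from the carbon/ledge interface
t_back = t_interface + 2 * 0.08 / v_ledge    # simulated echo from the ledge/bath interface

carbon_cm = 100 * layer_thickness(t_interface, v_carbon)
ledge_cm = 100 * layer_thickness(t_back - t_interface, v_ledge)
print(f"carbon {carbon_cm:.1f} cm, ledge {ledge_cm:.1f} cm")   # -> 10.0 cm and 8.0 cm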
A structured approach in family practice for patients with memory problems
Lee, Linda; Weston, W. Wayne; Heckman, George; Gagnon, Micheline; Lee, F. Joseph; Sloka, Scott
2013-01-01
Abstract. Objective: To present family physicians with a structured approach to patients who present with memory problems. Sources of information: This approach is based on an accredited memory clinic training program developed by the Centre for Family Medicine Memory Clinic in partnership with the Ontario College of Family Physicians. Main message: Using a structured clinical reasoning approach can help physicians reach an accurate diagnosis in patients presenting with memory problems. Delirium, depression and reversible causes must be ruled out, followed by differentiation among normal cognitive aging, mild cognitive impairment and dementia. Obtaining a collateral history and an accurate functional assessment is essential. The common forms of dementia can be differentiated clinically by the sequence in which symptoms appear and by how the cognitive deficits evolve over time. Typically, early signs of Alzheimer dementia involve impairment of episodic memory, whereas dementia of predominantly vascular origin can present with early loss of executive and visuospatial function as well as particular clinical features. Conclusion: A clinical reasoning approach can help physicians make early and accurate diagnoses that can guide appropriate management and improve care for patients with memory problems.
A Theory of L^1-Dissipative Solvers for Scalar Conservation Laws with Discontinuous Flux
NASA Astrophysics Data System (ADS)
Andreianov, Boris; Karlsen, Kenneth Hvistendahl; Risebro, Nils Henrik
2011-07-01
We propose a general framework for the study of L^1-contractive semigroups of solutions to conservation laws with discontinuous flux, u_t + f(x,u)_x = 0, with f(x,u) = f^l(u) for x < 0 and f(x,u) = f^r(u) for x > 0 (CL), where the fluxes f^l, f^r are mainly assumed to be continuous. Developing the ideas of a number of preceding works (Baiti and Jenssen in J Differ Equ 140(1):161-185, 1997; Towers in SIAM J Numer Anal 38(2):681-698, 2000; Towers in SIAM J Numer Anal 39(4):1197-1218, 2001; Towers et al. in Skr K Nor Vidensk Selsk 3:1-49, 2003; Adimurthi et al. in J Math Kyoto Univ 43(1):27-70, 2003; Adimurthi et al. in J Hyperbolic Differ Equ 2(4):783-837, 2005; Audusse and Perthame in Proc Roy Soc Edinburgh A 135(2):253-265, 2005; Garavello et al. in Netw Heterog Media 2:159-179, 2007; Bürger et al. in SIAM J Numer Anal 47:1684-1712, 2009), we claim that the whole admissibility issue is reduced to the selection of a family of "elementary solutions", which are piecewise constant weak solutions of the form c(x) = c^l for x < 0 and c(x) = c^r for x > 0. We refer to such a family as a "germ". It is well known that (CL) admits many different L^1-contractive semigroups, some of which reflect different physical applications. We revisit a number of the existing admissibility (or entropy) conditions and identify the germs that underlie these conditions. We devote specific attention to the "vanishing viscosity" germ, which is a way of expressing the "Γ-condition" of Diehl (J Hyperbolic Differ Equ 6(1):127-159, 2009). For any given germ, we formulate "germ-based" admissibility conditions in the form of a trace condition on the flux discontinuity line {x = 0} [in the spirit of Vol'pert (Math USSR Sbornik 2(2):225-267, 1967)] and in the form of a family of global entropy inequalities [following Kruzhkov (Math USSR Sbornik 10(2):217-243, 1970) and Carrillo (Arch Ration Mech Anal 147(4):269-361, 1999)]. We characterize those germs that lead to the L^1-contraction property for the associated admissible solutions. Our approach offers a streamlined and unifying perspective on many of the known entropy conditions, making it possible to recover earlier uniqueness results under weaker conditions than before, and to provide new results for other less studied problems. Several strategies for proving the existence of admissible solutions are discussed, and existence results are given for fluxes satisfying some additional conditions. These are based on convergence results either for the vanishing viscosity method (with standard viscosity or with specific viscosities "adapted" to the choice of a germ), or for specific germ-adapted finite volume schemes.
Elidrissi, Mohammed; Hammou, Nassereddine; Shimi, Mohammed; Elibrahimi, Abdelhalim; Elmrini, Abdelmajid
2013-01-01
Nonunions of the distal femur are relatively rare owing to the quality of the vascular supply in this region, and managing such a complication raises a number of difficulties. Surgical treatment relies on several conservative techniques; prosthetic treatment can prove useful when bone loss is substantial in elderly patients. The aim of this work is to discuss the value of the knee megaprosthesis in the treatment of nonunion of the distal femur, through the study of one patient's case and a review of the literature. The patient was a 62-year-old woman with a nonunion of the distal left femur. Clinically, she presented with left knee pain and major functional impairment; the preoperative IKS score was 60. She underwent prosthetic replacement with a knee megaprosthesis. Postoperatively, knee flexion reached 90° and the IKS score was 130. Through this case and the review of the literature, we believe that the use of a knee megaprosthesis is an effective and durable solution for the treatment of nonunions of the distal femur, particularly in elderly patients. This technique meets the requirements of such a failure of bone healing: relieving pain and ensuring satisfactory mobility that meets the needs of the patient's daily life, thereby improving quality of life. PMID:24396555
Hybrid superconducting a.c. current limiter extrapolation 63 kV-1 250 A
NASA Astrophysics Data System (ADS)
Tixador, P.; Levêque, J.; Brunet, Y.; Pham, V. D.
1994-04-01
Following the development of a.c. superconducting wires, superconducting a.c. current limiters have emerged. These limiters limit fault currents almost instantaneously, without fault detection or an external tripping order, and can be extrapolated to high voltages. They are based on the natural transition from the superconducting state to the normal resistive state when the critical current of a superconducting coil is exceeded, which limits the current or triggers the limitation. Our limiter device consists essentially of two copper windings coupled through a saturable magnetic circuit and a non-inductively wound superconducting coil carrying a reduced current compared with the line current. This design allows a simple superconducting cable and reduced cryogenic losses, but the dielectric stresses during faults are high. A small model (150 V/50 A) has experimentally validated this design. An industrial-scale current limiter (63 kV/1250 A) is designed, and the results are compared with resistive and DASC-type superconducting current limiters.
Predictive factors of antituberculosis treatment failure in Guinea Conakry
Nimagan, Souleymane; Bopaka, Regis Gothard; Diallo, Mamadou Mouctar; Diallo, Boubacar Djelo; Diallo, Mamadou Bailo; Sow, Oumou Younoussa
2015-01-01
Tuberculosis is a genuine public health problem. It is a curable disease, and cure depends on good therapeutic management. Treatment failure nevertheless occurs, hence the interest of our study of the factors that predict such failures. Over one year, out of 1300 cases of tuberculosis of all forms, 700 cases of smear-positive pulmonary tuberculosis were recorded, of which 100 were transferred. The 15-25 year age group was the most affected, with a sex ratio of 2 in favour of men, and 41.66% of our patients were manual workers, followed by traders at 20.83%. The majority of our patients (99.5%) came from Conakry. Of 600 patients followed, new cases represented 83.33%, and treatment failure accounted for 12 cases, i.e. 2%. Treatment interruption was the main factor in failure. Multiple factors influenced patients' adherence to treatment: factors related to the organization of the health system, such as stock-outs of antituberculosis drugs, insufficient health education, the constraints of treatment supervision, and insufficient involvement of, and sale of drugs by, health personnel; and factors related to the patients themselves, such as fear of job loss and financial constraints. Strengthening the organization of the health system and therapeutic education could reduce the rate of antituberculosis treatment failure. Improving the quality of care of patients in treatment failure should include systematic sputum culture with drug-susceptibility testing. PMID:26889327
The Norwegian Healthier Goats programme--a financial cost-benefit analysis.
Nagel-Alne, G Elise; Asheim, Leif J; Hardaker, J Brian; Sølverød, Liv; Lindheim, Dag; Valle, Paul S
2014-05-01
The aim of this study was to evaluate the profitability to dairy goat farmers of participating in the Healthier Goats disease control and eradication programme (HG), which was initiated in 2001 and is still running. HG includes the control and eradication of caprine arthritis encephalitis (CAE), caseous lymphadenitis (CLA) and paratuberculosis (Johne's disease) in Norwegian goat herds. The profitability of participation was estimated in a financial cost-benefit analysis (CBA) using partial budgeting to quantify the economic consequences of infectious disease control through HG versus taking no action. Historical data were collected from 24 enrolled dairy goat herds and 21 herds not enrolled in HG, and supplemented with information from a questionnaire distributed to the same farmers. Expert opinions were collected to arrive at the best possible estimates. For some input parameters there was uncertainty due to imperfect knowledge, so these parameters were modelled as PERT probability distributions and a stochastic simulation model was built. The CBA model was used to generate distributions of net present value (NPV) of farmers' net cash flows for choosing to enroll versus not enrolling. This was done for three selected milk quota levels of 30000L, 50000L and 70000L, both before and after the introduction of a reduced milk price for the non-enrolled. The NPVs were calculated over time horizons of 5, 10 and 20 years using an inflation-adjusted discount rate of 2.8% per annum. The results show that participation in HG on average was profitable over a time horizon of 10 years or longer for quota levels of 50000L and 70000L, although not without risk of having a negative NPV. If farmers had to pay all the costs themselves, participation in HG would have been profitable only for a time horizon beyond 20 years. In 2012, a reduced milk price was introduced for farmers not enrolled in HG, changing the decision criteria for farmers and thus the CBA. When the analysis was altered to account for these changes, the expected NPV was positive over five years for the 50000L quota, indicating an increased profitability of enrolling in HG. The sensitivity analysis showed that particular attention should be paid to work load and investment costs when planning for disease control programmes in the future. Copyright © 2014 Elsevier B.V. All rights reserved.
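To make the PERT-plus-NPV machinery concrete, the following sketch draws an annual net benefit from a Beta-PERT three-point estimate and discounts it at the 2.8% rate quoted above; the cash-flow figures and the pert_sample helper are invented for illustration and are not the study's model:

import random

def pert_sample(minimum, mode, maximum, lamb=4.0):
    # Beta-PERT: a Beta distribution rescaled to [minimum, maximum] whose mean is
    # (minimum + lamb*mode + maximum) / (lamb + 2); lamb = 4 is the classic choice.
    alpha = 1.0 + lamb * (mode - minimum) / (maximum - minimum)
    beta = 1.0 + lamb * (maximum - mode) / (maximum - minimum)
    return minimum + random.betavariate(alpha, beta) * (maximum - minimum)

def npv(cash_flows, rate):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

random.seed(1)
rate, horizon = 0.028, 10          # discount rate and one of the time horizons from the study
npvs = []
for _ in range(10000):
    annual_benefit = pert_sample(-5000.0, 12000.0, 30000.0)   # hypothetical yearly net benefit
    npvs.append(npv([annual_benefit] * horizon, rate))
npvs.sort()
print("median NPV:", round(npvs[len(npvs) // 2]))
print("P(NPV < 0):", sum(v < 0 for v in npvs) / len(npvs))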
Legal compensation for bodily injury in post-traumatic anterior pituitary insufficiency
Mahjoub, Mohamed; Jedidi, Maher; Mezgar, Zied; Masmoudi, Tasnim; Zhioua, Mongi; Euch, Koussay El; Njah, Mansour
2017-01-01
L'insuffisance antéhypophysaire post-traumatique (IAHPT) est une pathologie exceptionnelle mais de réalité certaine résultant des lésions ischémiques lors des traumatismes crâniens (TC) sévères. L'objectif est de préciser à partir d'une étude de cas les critères d'imputabilité de l'IAHPT suite au (TC) ainsi que les spécificités relatifs à sa réparation juridique. C'est une étude médico-légale d'un cas d'IAHPT, diagnostiqué et suivi au service d'endocrinologie et de médecine légale du CHU de Sousse (Tunisie). Il s'agit d'une femme âgée de 45 ans, sans antécédents pathologiques (6 gestes, 4 parités et 2 avortements) ayant un cycle menstruel régulier, sans notion d'accouchement hémorragique, qui a été victime d'un accident de la voie publique (piétonne, heurtée puis renversée par une voiture) occasionnant un TC avec point d'impact occipital sans perte de connaissance initiale; ayant présenté trois ans après l'accident, une hypothyroïdie. L'exploration hormonale rapporte l'atteinte de tous les autres axes. L'exploration neuroradiologique retrouve une intégrité de l'hypophyse et de la tige. Le diagnostic définitif est l'IAHPT. L'expertise médicale (faite 4 ans après l'accident) a conclue à l'imputabilité de l'IAHPT à l'accident. Le taux d'incapacité partielle permanente IIP en droit commun a été évalué à 25%. L'IAHPT est un diagnostic d'élimination. L'évaluation du dommage corporel doit tenir compte des symptômes résiduels, contraintes thérapeutiques et répercussions sur l'activité quotidienne et professionnelle. L'évolution sous hormonothérapie de substitution est souvent favorable, cependant, elle peut être émaillée de complications, d'où l'obligation d'établir des réserves préservant ainsi le droit du patient à une nouvelle révision.
Use of Biomass Ash as a stabilization agent for expansive marly soils (SE Spain)
NASA Astrophysics Data System (ADS)
Ureña, C.; Azañón, J. M.; Caro, J. M.; Irigaray, C.; Corpas, F.; Ramirez, A.; Rivas, F.; Salazar, L. M.; Mochón, I.
2012-04-01
In recent years, several biomass power plants have been installed in Southeastern Spain to reuse olive oil industry residues. This form of energy production aims to reduce the high costs associated with fossil fuels without entering into direct competition with traditional food crops. Waste management in these biomass energy plants is still an issue, since non-flammable materials remain after incineration in the form of ashes. Southeastern Spain also has a great amount of clayey and marly soils whose volume is very sensitive to changes in climate conditions, making them unsuitable for civil engineering. We propose the use of biomass ash (both fly ash and bottom ash) as a stabilization agent for expansive soils in order to improve the efficiency of construction processes by using locally available materials. In this work, biomass ashes from a biomass power plant in Southeastern Spain were used to stabilize 6 samples of local marly soil. Those 6 samples of expansive soil were mixed with different dosages of biomass ash (2%, 4% and 7%) to create 18 specimens of treated soil, which were submitted to Proctor, Atterberg limits, pH and free swell index tests, following the Spanish UNE standards by AENOR. X-ray diffraction (XRD) tests by the powder method were also carried out, using a Philips X'Pert-MPD diffractometer. The results obtained for the original untreated marly soil were: PI = 34.6; free swell = 12.5; pH = 8. By adding biomass ash, the plasticity index (PI) became slightly lower, although not low enough to obtain a non-plastic soil (PI under 25). However, there were dramatic decreases in the free swell index (FSI) after the stabilization treatment: FSI < 8.18 (2% biomass); FSI < 6.15 (4% biomass); FSI < 4.18 (7% biomass). These results suggest that the treated soil is far less susceptible than the original soil to moisture changes. The pH of the mixes after adding biomass ash rose from 8 to 11±1, leading to an alkaline environment which, as the reviewed literature points out, helps the development of pozzolanic reactions and the stabilization process. Finally, XRD tests indicated a sharp decrease in the intensity of the smectite reflection peak, suggesting a reduction in the amount of this expansive mineral in treated soils. This positive and durable effect may be related to cation exchange from Na+ to smaller cations or even the formation of mixed-layered clay minerals. Further research must be conducted to determine the pozzolanic properties of biomass ash (i.e., its suitability for concrete composites), the optimum dosages, etc., and to better understand the mineralogical changes occurring within the crystalline structure. Nevertheless, these first results allow us to infer that biomass ash from power plants has a high capacity to enhance the mechanical properties of expansive soils. Given the widespread use of biomass in industry today, the secondary use of biomass ash might improve the sustainability and efficiency of the biomass generation, incineration and waste management process.
2013-01-01
Background: Peripheral neuroblastic tumors (pNTs), including neuroblastoma (NB), ganglioneuroblastoma (GNB) and ganglioneuroma (GN), are extremely heterogeneous pediatric tumors responsible for 15% of childhood cancer deaths. The aim of the study was to evaluate the expression of the CD44s ('s': standard form) cell adhesion molecule by comparison with other specific prognostic markers. Methods: An immunohistochemical profile of 32 formalin-fixed, paraffin-embedded pNT tissues, diagnosed between January 2007 and December 2010, was carried out. Results: Our results demonstrated an association of CD44s-negative pNT cells with lack of differentiation and tumour progression, and a significant association between absence of CD44s expression and metastasis in human pNTs. We also found that expression of CD44s defines subgroups of patients without MYCN amplification, as evidenced by its association with low INSS stages, absence of metastasis and favorable Shimada histology. Discussion: These findings support the thesis of a role for the CD44s glycoprotein in the invasive growth potential of neoplastic cells and suggest that its expression could be taken into consideration in therapeutic approaches targeting metastases. Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1034403150888863 PMID:23445749
Fluid mediated transformation of aragonitic cuttlebone to calcite
NASA Astrophysics Data System (ADS)
Perdikouri, C.; Kasioptas, A.; Putnis, A.
2009-04-01
The aragonite to calcite transition has been studied extensively over the years because of its wide spectrum of applications and its significant geochemical interest. While studies of kinetics (e.g. Topor et al., 1981), thermodynamics (e.g. Wolf et al., 1996) and the behavior of ions such as Sr and Mg (e.g. Yoshioka et al., 1986) have been made, there are still unanswered questions regarding this reaction, especially in cases where the effects of fluid composition are considered. It is well known that when heated in air, aragonite transforms to calcite by a solid-state reaction. The aragonitic cuttlebone of Sepia officinalis used for our experiments undergoes a phase transition at ~370-390 °C, measured by in situ heating experiments in a Philips X'Pert X-ray powder diffractometer equipped with an HTK 1200 high-temperature oven; successive X-ray scans were taken at isothermal temperatures at 20 °C intervals. A similar temperature range was found by Vongsavat et al. (2006), who studied this transition in Acropora corals. It is possible, however, to promote this transition at considerably lower temperatures by means of a fluid-mediated reaction, where the replacement takes place by a dissolution-precipitation mechanism (Putnis & Putnis, 2007). We have successfully carried out hydrothermal experiments where cuttlebone has been converted to calcite at 200 °C. Using the PhreeqC program we calculated the required composition of a solution that would be undersaturated with respect to aragonite and saturated with respect to calcite, leading to dissolution of the aragonite and consequent precipitation of the new calcite phase, similar to the experiments described in an earlier study (Perdikouri et al., 2008). This reaction is not pseudomorphic and results in the destruction of the morphology, presumably due to the molar volume increase; a total transformation of the cuttlebone produced a fine calcite powder. The cuttlebone exhibits a unique microstructure, made up of interconnected chambers. The aragonite grown during biomineralization of the cuttlebone is interlaced with a β-chitin organic phase that provides the framework for the observed morphology. Experiments carried out under the same constant conditions but for different periods of time have revealed the evolution of the transformation to calcite. At shorter reaction times the product was made up of calcite powder and well-preserved aragonite septa, as confirmed by powder X-ray diffraction; in other words, the vertical pillars appear to react at faster rates than the horizontal septa. It has been reported by Florek et al. (2008) that the septa contain higher quantities of β-chitin. The aim of this study is the investigation of these observations and the determination of the effect of the organic component on the kinetics of the aragonite to calcite transformation.
References: Florek M., Fornal E., Gómez-Romero P., Zieba E., Paszkowicz W., Lekki J., Nowak J., Kuczumow A., Materials Science and Engineering C, in press (2008); Perdikouri C., Kasioptas A., Putnis C.V., Putnis A., Mineralogical Magazine 72, 111-114 (2008); Putnis A., Putnis C.V., Journal of Solid State Chemistry 180, 1783-1786 (2007); Topor N.D., Tolokonnikova L.I., Kadenatsi B.M., Journal of Thermal Analysis 20, 169-174 (1981); Vongsavat V., Winotai P., Meejoo S., Nuclear Instruments and Methods in Physics Research B 243, 167-173 (2006); Wolf G., Lerchner J., Schmidt H., Gamsjäger H., Königsberger E., Schmidt P., Journal of Thermal Analysis 46, 353-359 (1996); Yoshioka S., Ohde S., Kitano Y., Kanamori N., Marine Chemistry 18, 35-48 (1986).
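The saturation reasoning attributed to the PhreeqC calculation above can be illustrated with a back-of-the-envelope saturation-index computation; the ion activities are hypothetical and the solubility constants are approximate literature values, so this is a sketch rather than the actual PhreeqC input:

import math

# SI = log10(IAP / Ksp): SI < 0 means the solution is undersaturated (the phase dissolves),
# SI ~ 0 means equilibrium. The log K values are approximate 25 degC figures of the kind
# tabulated in the phreeqc.dat database; the activities below are invented.
LOG_K_CALCITE = -8.48
LOG_K_ARAGONITE = -8.34

def saturation_index(activity_ca, activity_co3, log_k):
    iap = activity_ca * activity_co3          # ion activity product {Ca2+}{CO3 2-}
    return math.log10(iap) - log_k

a_ca, a_co3 = 3.3e-5, 1.0e-4
print("SI calcite  :", round(saturation_index(a_ca, a_co3, LOG_K_CALCITE), 2))    # ~ 0.0
print("SI aragonite:", round(saturation_index(a_ca, a_co3, LOG_K_ARAGONITE), 2))  # ~ -0.14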
McConnel, Craig S; McNeil, Ashleigh A; Hadrich, Joleen C; Lombard, Jason E; Garry, Franklyn B; Heller, Jane
2017-08-01
Over the past 175 years, data related to human disease and death have progressed to a summary measure of population health, the Disability-Adjusted Life Year (DALY). As dairies have intensified there has been no equivalent measure of the impact of disease on the productive life and well-being of animals. The development of a disease-adjusted metric requires a consistent set of disability weights that reflect the relative severity of important diseases. The objective of this study was to use an international survey of dairy authorities to derive disability weights for primary disease categories recorded on dairies. National and international dairy health and management authorities were contacted through professional organizations, dairy industry publications and conferences, and industry contacts. Estimates of minimum, most likely, and maximum disability weights were derived for 12 common dairy cow diseases. Survey participants were asked to estimate the impact of each disease on overall health and milk production. Diseases were classified from 1 (minimal adverse effects) to 10 (death). The data was modelled using BetaPERT distributions to demonstrate the variation in these dynamic disease processes, and to identify the most likely aggregated disability weights for each disease classification. A single disability weight was assigned to each disease using the average of the combined medians for the minimum, most likely, and maximum severity scores. A total of 96 respondents provided estimates of disability weights. The final disability weight values resulted in the following order from least to most severe: retained placenta, diarrhea, ketosis, metritis, mastitis, milk fever, lame (hoof only), calving trauma, left displaced abomasum, pneumonia, musculoskeletal injury (leg, hip, back), and right displaced abomasum. The peaks of the probability density functions indicated that for certain disease states such as retained placenta there was a relatively narrow range of expected impact whereas other diseases elicited a wider breadth of impact. This was particularly apparent with respect to calving trauma, lameness and musculoskeletal injury, all of which could be redefined using gradients of severity or accounting for sequelae. These disability weight distributions serve as an initial step in the development of the disease-adjusted lactation (DALact) metric. They will be used to assess the time lost due to dynamic phases of dairy cow diseases and injuries. Prioritizing health interventions based on time expands the discussion of animal health to view profits and losses in light of the quality and length of life. Copyright © 2017 Elsevier B.V. All rights reserved.
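A minimal sketch of how a Beta-PERT three-point estimate and the averaging described above could be distilled into a single disability weight; the severity scores below are invented, not the survey medians:

def pert_mean(minimum, mode, maximum, lamb=4.0):
    # Mean of the Beta-PERT distribution fitted to a three-point (min / most likely / max) estimate.
    return (minimum + lamb * mode + maximum) / (lamb + 2.0)

# Hypothetical median severity scores on the 1 (minimal) to 10 (death) scale of the survey.
severity = {"retained placenta": (1.0, 2.0, 4.0),
            "right displaced abomasum": (4.0, 7.0, 9.0)}

for disease, (low, likely, high) in severity.items():
    simple_avg = (low + likely + high) / 3.0       # the averaging of medians described above
    print(f"{disease}: average {simple_avg:.2f}, PERT mean {pert_mean(low, likely, high):.2f}, "
          f"~{simple_avg / 10:.2f} on a 0-1 scale")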
NASA Astrophysics Data System (ADS)
Homier, Ram
In the current environmental context, photovoltaics benefits from increased research effort in the field of renewable energy. Concentrated photovoltaics is attractive for reducing the cost of electricity produced by direct conversion of light into electricity; the principle is to concentrate a large amount of light onto small areas of high-efficiency multi-junction solar cells. When fabricating a solar cell, it is essential to include a method for reducing light reflection at the surface of the device. The design of an antireflection coating (ARC) for multi-junction solar cells is challenging because of the broad absorption band and the need to equalize the current produced by each subcell. Silicon nitride deposited by PECVD under standard conditions is widely used in the silicon solar cell industry; however, this dielectric absorbs in the short-wavelength range. We propose the use of silicon nitride deposited by low-frequency PECVD (LFSiN), optimized for a high refractive index and low optical absorption, as the ARC for III-V/Ge triple-junction solar cells. This material can also serve as a passivation/encapsulation layer. Simulations show that the SiO2/LFSiN double-layer ARC can be very effective in reducing reflection losses in the wavelength range of the limiting subcell, both for triple-junction cells limited by the top subcell and for those limited by the middle subcell. We also show that the performance of the structure is robust against fluctuations in the PECVD layer parameters (thicknesses, refractive index). Keywords: concentrated photovoltaics (CPV), multi-junction solar cells (MJSC), antireflection coating (ARC), III-V semiconductor passivation, silicon nitride (SixNy), PECVD.
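As a hedged illustration of why a SiO2/high-index-nitride double layer can work as an ARC, the sketch below evaluates the standard quarter-wave double-layer reflectance at a single design wavelength; all indices, the wavelength and the substrate index are assumptions, and a real design would be optimized over the full spectrum of the limiting subcell:

def quarter_wave_thickness(wavelength_nm, n):
    return wavelength_nm / (4.0 * n)

def double_layer_reflectance(n0, n1, n2, ns):
    # Normal-incidence reflectance of two quarter-wave layers at the design wavelength:
    # n0 ambient, n1 outer layer (SiO2), n2 inner layer (LFSiN), ns substrate.
    y = (n1 / n2) ** 2 * ns           # equivalent admittance presented to the ambient
    return ((n0 - y) / (n0 + y)) ** 2

n0, n1, n2, ns = 1.0, 1.45, 2.2, 3.4   # air / SiO2 / nitride / semiconductor (assumed values)
wl = 600.0                              # assumed design wavelength in nm
print("d(SiO2)  ~", round(quarter_wave_thickness(wl, n1), 1), "nm")
print("d(LFSiN) ~", round(quarter_wave_thickness(wl, n2), 1), "nm")
print("R(design) ~", round(double_layer_reflectance(n0, n1, n2, ns), 3))
# Zero reflectance would need n2/n1 = sqrt(ns/n0), i.e. n2 ~ 1.45 * sqrt(3.4) ~ 2.67 here.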
Control of Giardia infections with ronidazole and intensive hygiene management in a dog kennel.
Fiechter, Ruth; Deplazes, Peter; Schnyder, Manuela
2012-06-08
Infections with the intestinal protozoan parasite Giardia in dogs and cats are common. Clinical signs vary from asymptomatic to small bowel diarrhea and associated discomfort. The control of infections in dogs is frequently a frustrating issue for animal owners and veterinarians. Drugs with antiprotozoal activity such as fenbendazole and metronidazole are recommended, however, they do not show 100% efficacy and superinfections occur regularly. Ronidazole is currently the drug of choice for the treatment of Tritrichomonas foetus in cats and there is now limited information available about its efficacy against Giardia spp. In the kennel investigated, dogs regularly showed loose feces and the presence of Giardia (assemblage C, renamed as G. canis) cysts. An elimination strategy of this parasite involving strict hygiene management and disinfection of the enclosures with 4-chlorine-M-cresol, oral treatment with ronidazole (30-50mg/kg BW bid for 7 days) and two shampooings (containing chlorhexidine) at the beginning and the end of the treatments was implemented for a group of 6 dogs. As a control another group of 7 dogs was transferred to the disinfected enclosures and shampooed, but left untreated. Dog feces were tested for the presence of Giardia cysts (SAF concentration technique) or Giardia antigen with a commercial ELISA (NOVITEC(®)) and a quick immunochromatography-based test (SensPERT(®)) before and between 5 and 40 days after the last treatment. All ronidazole-treated dogs were negative for Giardia cysts and antigen up to 26 days after the last treatment, while between 1 and 5 of the control animals tested positive in each of the test series. At this point, also dogs of the control group were again moved into clean enclosures, shampooed twice and treated with ronidazole. Five, 12 and 19 days after the last treatment, the dogs in the control group tested negative for Giardia cysts and antigen. However, all animals had again positive results at later time points in at least one of the three applied diagnostic techniques within 33-61 days after treatment. Furthermore, all dogs had episodes of diarrhea (for 1-4 days) within 14-31 days after treatment and unformed feces during the whole experiment. The positive effect of ronidazole against Giardia infections in dogs could be confirmed in this study. In particular, the combination of ronidazole treatment combined with the disinfection of the environment and shampooing of the dogs was highly effective in reducing Giardia cyst excretion and may therefore constitute an alternative control strategy for canine giardiosis. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Grosdidier, Yves
2000-12-01
The spectra of Population I Wolf-Rayet (WR) stars show broad emission lines produced by hot, rapidly expanding stellar winds (terminal velocities of the order of 1000 km/s). The standard model of WR stars qualitatively reproduces the general profile and intensity of the observed lines, but intensive medium-resolution spectroscopy of these stars reveals stochastic variations within the lines (moving, accelerating sub-peaks on time scales of roughly 10-100 min). These variations are not understood within the standard model and suggest an intrinsic fragmentation of the winds. This doctoral thesis presents a study of the variability of the emission lines of Population II WR stars; the impact of a fragmented WR wind on the circumstellar medium is also examined: 1) from intensive spectroscopic monitoring of the CIII λ5696 and CIV λ5801/12 lines, we quantitatively analyze (via Temporal Variance Spectra) the winds of 5 central stars of Galactic planetary nebulae (PN) exhibiting the WR phenomenon; 2) we study the impact of wind fragmentation from two Population I WR stars on the circumstellar medium via i) IR imaging (NICMOS2/HST) of WR 137, and ii) H-alpha imaging (WFPC2/HST) and H-alpha Fabry-Perot interferometry (SIS-CFHT) of the nebula M 1-67 (central star: WR 124). The main results are as follows. Pop. II WR winds: (1) we demonstrate the intrinsic spectroscopic variability of the winds from the PN nuclei HD 826 ([WC 8]), BD +30 3639 ([WC 9]) and LSS 3169 ([WC 9]), observed during 22, 15 and 1 nights respectively, and report indications of variability for the [WC 9] nuclei HD 167362 and He 2-142; the variability of HD 826 and BD +30 3639 sometimes appears more sustained ("bursts" that persist over several nights); (2) the kinematics of the sub-peaks of BD +30 3639 suggest a transient anisotropy of the distribution of fragments in the wind; (3) the WR phenomenon appears to be purely atmospheric: the sub-peak kinematics, the amplitudes and characteristic time scales of the variations, and the observed accelerations are similar for the two populations. For HD 826, however, a maximum acceleration of about 70 m/s² is detected, significantly larger than the values reported for other Pop. I and II WR stars (about 15 m/s²); the small radius of HD 826 is the likely cause; (4) as for Pop. I WR stars, large parameters (β ≥ 3-10) are required to fit the observed accelerations with a beta-type velocity law, and the beta law systematically underestimates the velocity gradients within the CIII λ5696 line-formation region; (5) since Pop. II WR winds are fragmented, estimating present-day mass-loss rates with methods that assume homogeneous atmospheres leads to an overestimation of i) the mass-loss rates themselves and ii) the initial masses of the stars before they enter the WR phase. Impact of the winds: (1) at periastron, dust is detected in the environment of the WC+OB binary WR 137; dust formation is either facilitated or triggered by the collision of the two hot winds, and the key role of wind fragmentation (providing additional localized compression of the plasma) is suggested; (2) the nebula M 1-67 shows a non-negligible interaction with the interstellar medium (ISM), a bow shock; its density and velocity fields are strongly disturbed, and these disturbances are related both to the history of the winds from WR 124 during its evolution and to the interaction with the ISM. The structure functions of the density and velocity fields of M 1-67 reveal no evidence of turbulence within the nebula; (3) 2D hydrodynamic simulations performed with the ZEUS-3D code show that a dense fragment formed near the hydrostatic stellar core probably cannot reach nebular distances without the additional effects of radiative shielding and confinement.
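For orientation, the beta-type velocity law referred to above can be written v(r) = v_inf (1 - R*/r)^beta; the sketch below shows how the implied acceleration v dv/dr shrinks as beta grows, using an assumed stellar radius and purely illustrative numbers (not fitted values from the thesis):

def wind_speed(x, v_inf, beta):
    # Beta law with x = r / R*, speeds in km/s.
    return v_inf * (1.0 - 1.0 / x) ** beta

def acceleration(x, v_inf, beta, rstar_km):
    # a = v dv/dr, converted from km/s^2 to m/s^2.
    dvdr = v_inf * beta * (1.0 - 1.0 / x) ** (beta - 1.0) / (rstar_km * x ** 2)
    return wind_speed(x, v_inf, beta) * dvdr * 1.0e3

v_inf = 1000.0     # km/s, the order of magnitude quoted above
rstar = 7.0e5      # km, roughly a solar radius (assumed; a compact [WC] core would be smaller)
for beta in (1.0, 3.0, 10.0):
    print(f"beta = {beta:>4}: a(2 R*) ~ {acceleration(2.0, v_inf, beta, rstar):8.2f} m/s^2")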
Spurio, Maria Grazia
2016-09-01
For a long time, terms like "mind" and "emotion" were rarely taken into account, or even mentioned, in medical texts. The latest scientific research, including the studies of Candace Pert, has instead emphasized that the entire body thinks, because every single cell hears and feels emotions. Pert's research identified endorphins and a vast number of neuropeptides that work as an "information network" interconnecting the entire body: these "psychic" molecules travel and communicate information in a circular, recursive body-mind mechanism. This amounts to a kind of body-mind functional identity that differs from person to person, because each person is a unique universe, and the body is the place where mind and body meet in a unique and unrepeatable alchemy. If it is true that the secret of a person's potential for development and transformation is hidden in the body, it is also true that this secret is unique for each of us. Strategic therapy then becomes 'tailor-made', and knowledge of the bodily component is essential for unlocking behaviour patterns and planning new ones, in order to improve relationships, the quality of life and the sense of well-being. People are not simple containers that merely record external stimuli; on the contrary, they evaluate and weigh what happens around them. Depending on the meaning attributed to each stimulus, a stress response of different magnitude and duration is activated, which may be functional or dysfunctional. Many recent studies indeed report a significant correlation between the coping strategy chosen and the onset of disease. According to the theory of the 'psychogenic tumour', for example, anyone can potentially develop cancer, but only those who lack the psychological strength to resist the disease become ill. Whatever theoretical framework and conclusion are adopted, there is a growing consensus that body and mind are parts of a single, complex functional identity. As a consequence of this new approach to well-being and health, researchers suggest that a satisfactory physical and mental balance can be achieved through an approach that cuts across disciplines (psychotherapy, surgery, nutrition, aesthetic medicine and medicine in general). According to this bio-psycho-social approach, each person should be approached in his or her entirety, bodily and psychological; each individual should be 'hosted' in a sort of 'body and mind' zone in which the entire body is able to think.
Nutrition in children with neurological impairment
2009-01-01
Malnutrition, whether undernutrition or overnutrition, is common in children with neurological impairment. Energy requirements are difficult to define in this heterogeneous population, and information on what constitutes normal growth in these children is lacking. Non-nutritional factors can influence growth, but nutritional factors, such as insufficient caloric intake, excessive nutrient losses and abnormal energy metabolism, also contribute to the growth failure of these children. Malnutrition is associated with significant morbidity, whereas nutritional rehabilitation improves overall health. Nutritional support must be an integral part of the management of children with neurological impairment and should aim to improve not only nutritional status but also the quality of life of patients and their families. When considering a nutritional intervention, oromotor dysfunction, gastro-oesophageal reflux and pulmonary aspiration must be taken into account, and a multidisciplinary team should be involved. Children vulnerable to nutritional problems should be identified early, and their nutritional status should be assessed at least once a year, and more often in infants and young children or in children at risk of malnutrition. Oral intake should be optimized if it is safe, but enteral feeding should be initiated in children with oromotor dysfunction causing significant aspiration or in those unable to maintain adequate nutritional status through oral intake. Nasogastric tube feeding should be reserved for short-term interventions; if prolonged nutritional intervention is required, gastrostomy should be considered. Antireflux measures should be reserved for children with significant gastro-oesophageal reflux. The patient's response to the nutritional intervention must be monitored closely to avoid excessive weight gain after enteral feeding is started, and pediatric formulas should be preferred in order to avoid micronutrient deficiencies. PMID:20592968
Van de Vijver, Els; Desager, Kristine; Mulberg, Andrew E; Staelens, Sofie; Verkade, Henkjan J; Bodewes, Frank A J A; Malfroot, Anne; Hauser, Bruno; Sinaasappel, Maarten; Van Biervliet, Stefanie; Behm, Martin; Pelckmans, Paul; Callens, Dirk; Veereman-Wauters, Gigi
2011-07-01
Pancreatic enzyme replacement therapy (PERT) improves nutritional status and growth in patients with cystic fibrosis (CF) with pancreatic insufficiency (PI). The current recommendation for infants and young children, who are not able to swallow the whole capsule, is to open the capsule and mix the beads in a spoon with some applesauce; however, the efficacy and safety data of this approach are currently lacking. The aim of this study was to assess the efficacy, palatability (ease of swallowing), and safety of 4 dose levels of pancrelipase microtablets (Pancrease MT) in infants and young children with CF-related PI. This study was a phase II randomized, investigator-blinded, parallel-group pilot study in DNA-proven infants with CF and PI. The study design included a run-in period (days 1-5) and an experimental period (days 6-11). Pancrelipase microtablets (2-mm, enteric coated) were provided orally. Sixteen subjects, 6 to 30 months of age, were provided 500 U lipase/kg/meal for 5 days (baseline period). Subsequently, subjects were randomly assigned to 1 of 4 treatment groups (each n = 4), receiving 500, 1000, 1500, or 2000 U (Ph. EUR) of lipase/kg/meal, respectively, for 5 days (experimental period). The primary endpoint was medication efficacy assessed by the 72-hour fecal fat excretion, expressed as coefficient of fecal fat absorption (CFA), and 13C mixed triglyceride breath test. Secondary endpoints were safety and palatability. Overall compliance, defined as used study medication, was 89% to 99% for the entire study. None of the 4 dose regimens significantly influenced the CFA, relative to the baseline period (median range 83%-93%). During the run-in period the median cumulative % 13C was 11 (range -8 to 59). After randomization the median cumulative % 13C was 18 (range 14-23) in the 500-U, 14 (range -1 to 17) in the 1000-U, 10 (range 10-27) in the 1500-U, and 3 (range 1-49) in the 2000-U groups. Palatability was scored fair to good by the parents in each of the treatment groups. Gastrointestinal symptoms were reported in some patients, including common adverse events reported in clinical trials involving pancreatic enzyme therapy. No serious or other adverse events were reported. Treatment with Pancrease MT at a dosage of 500 U lipase/kg/meal resulted in a CFA of approximately 89% in pediatric subjects ages 6 to 30 months with PI resulting from CF. Pancrease MT doses were well tolerated and mean palatability was scored as fair to good. Present results do not indicate that a dosage higher than 500 U (Ph. EUR) lipase/kg/meal increases the coefficient of fat absorption in a cohort of infants 6 to 30 months of age.
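A small sketch of the coefficient of fat absorption (CFA) used as the primary endpoint above; the 72-hour intake and fecal fat figures are invented for illustration:

def cfa(fat_intake_g, fecal_fat_g):
    # Coefficient of fat absorption over a 72-hour balance period, in percent.
    return 100.0 * (fat_intake_g - fecal_fat_g) / fat_intake_g

intake_72h = 120.0   # g of dietary fat over 72 h (hypothetical)
fecal_72h = 13.0     # g of fat recovered in stool over the same period (hypothetical)
print(f"CFA = {cfa(intake_72h, fecal_72h):.1f} %")   # ~89 %, the level reported at 500 U lipase/kg/meal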
Intrinsic mechanical properties and strengthening methods in inorganic crystalline materials
NASA Astrophysics Data System (ADS)
Mecking, H.; Hartig, Ch.; Seeger, J.
1991-06-01
The paper deals with strength and fracture in metals, ceramics and intermetallic compounds. The emphasis is on the interrelation between microstructure and macroscopic behavior and on how concepts for alloy design mirror this interrelationship. The three materials classes are distinguished by the physical nature of the atomic bonding forces. In metals, metallic bonding predominates, which gives high ductility but poor strength; accordingly, materials development concentrates on producing microstructures that optimize the yield strength without unacceptable loss of ductility. In ceramics, covalent bonding prevails, which results in high hardness and high elastic stiffness but at the same time extreme brittleness; in contrast to the metals case, materials development aims at a kind of pseudo-ductility in order to raise the fracture toughness to sufficiently high levels. In intermetallic phases the atomic bonds are a mixture of metallic and covalent bonding, and depending on the alloying system the balance between the two contributions may be quite different. Accordingly, the properties of intermetallics lie in the range between metals and ceramics, and a variety of microstructural measures can shift them either towards metallic or towards ceramic behavior. General rules for alloy design are not available; rather, every system demands very specific experience, since properties depend to a considerable extent on intrinsic properties of lattice defects such as dislocations, antiphase boundaries, stacking faults and grain boundaries.
NASA Astrophysics Data System (ADS)
Brahmi, Noura; Dhieb, Mohsen; Chedly Rabia, Mohamed
2018-05-01
Marine submersion is one of the main threats to the coastal wetlands of the Cap Bon peninsula, and this threat is expected to intensify with global warming, which will raise sea level and probably strengthen the intensity of storms and tropical cyclones by 2100. The objective was therefore to assess the vulnerability of the lagoon to submersion. With this in mind, after identifying what is at stake for the lagoon under sea-level rise, we produced predictive maps of submersion risk by means of a cartographic application based on a digital terrain model, and analyzed the potential impacts of this phenomenon. Mapping is thus set to become central to studying the impact of marine submersion risk on coastal lagoons and to managing these areas in the coming decades. Mapping of the marine-submersion hazard showed that the episodic opening of breaches during storms could lead to submersion of the entire lagoon area and would have a major morphogenic impact. Indeed, accelerated sea-level rise combined with stronger storms causes the break-up of the coastal barrier that separates the lagoon from the sea. Moreover, sediment losses from the beach will increase, since during a storm a large part of the material moved by the waves into the ponds, by overwash or through a breach, cannot subsequently be recovered. All of this accelerates the retreat or disappearance of a beach that is already heavily eroded. The impacts of this submersion could be significant in the absence of preventive measures, with profound repercussions on natural and environmental systems and on the quality of life of the local population.
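A minimal sketch of the kind of DEM-based ("bathtub") submersion mapping described above; the tiny elevation grid and the sea-level scenarios are purely illustrative, and a real analysis would use the full digital terrain model and check hydraulic connectivity:

import numpy as np

dem = np.array([[0.2, 0.4, 1.1, 2.0],
                [0.1, 0.3, 0.9, 1.8],
                [0.0, 0.2, 0.6, 1.5]])   # cell elevations in metres above present mean sea level

for rise in (0.5, 1.0):                   # illustrative sea-level / surge scenarios in metres
    flooded = dem <= rise                 # a real analysis would also check connectivity to the sea
    print(f"+{rise:.1f} m: {int(flooded.sum())} of {dem.size} cells flooded "
          f"({100 * flooded.mean():.0f} %)")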
NASA Astrophysics Data System (ADS)
Coulibaly, Issa
As the main source of drinking water for the municipality of Edmundston, the Iroquois/Blanchette watershed is of vital importance to the municipality, hence the constant efforts made to preserve its water quality. Several studies have been carried out there; the most recent identified pollution threats of various origins, including those associated with climate change (e.g. Maaref 2012). Given the climate change impacts projected for New Brunswick, the Iroquois/Blanchette watershed could be strongly affected, in several ways. Several impact scenarios are conceivable, notably risks of flooding, erosion and pollution through increased precipitation and runoff. In view of all these potential threats, the objective of this study is to assess the potential impacts of climate change on erosion and pollution risks at the scale of the Iroquois/Blanchette watershed. To this end, the Canadian version of the Revised Universal Soil Loss Equation, RUSLE-CAN, and the hydrological model SWAT (Soil and Water Assessment Tool) were used to model erosion and pollution risks in the study area. The data used for this work come from diverse and varied sources (remote sensing, soil, topographic, meteorological, etc.). The simulations were carried out in two distinct stages, first under current conditions, with 2013 chosen as the reference year, and then for 2025 and 2050. The results show an upward trend in sediment production in the coming years: relative to 2013, the maximum annual production increases by 8.34% in 2025 and 8.08% in 2050 under our most optimistic scenario, and by 29.99% in 2025 and 29.72% in 2050 under the most pessimistic scenario. As for pollution, the concentrations obtained (sediment, nitrate and phosphorus) evolve with climate change. The maximum sediment concentration decreases in 2025 and 2050 relative to 2013, from 11.20 mg/L in 2013 to 9.03 mg/L in 2025 and 6.25 mg/L in 2050. The maximum nitrate concentration is also expected to decrease over the years, more markedly in 2025, from 4.12 mg/L in 2013 to 1.85 mg/L in 2025 and 2.90 mg/L in 2050. The phosphorus concentration, by contrast, increases in the coming years relative to 2013, from 0.056 mg/L in 2013 to 0.234 mg/L in 2025 and 0.144 mg/L in 2050.
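For context, RUSLE-CAN belongs to the (R)USLE family of factor models, A = R · K · LS · C · P; the sketch below shows how a change in rainfall erosivity propagates to soil loss, with all factor values and scenario multipliers invented for illustration (they are not the study's results):

def soil_loss(R, K, LS, C, P):
    # A [t/ha/yr]: R rainfall erosivity, K soil erodibility, LS slope length/steepness,
    # C cover-management factor, P support-practice factor.
    return R * K * LS * C * P

K, LS, C, P = 0.03, 1.2, 0.25, 1.0        # assumed, held fixed for this illustration
for label, r_factor in (("baseline", 1000.0), ("+8 % erosivity", 1080.0), ("+30 % erosivity", 1300.0)):
    print(f"{label}: A = {soil_loss(r_factor, K, LS, C, P):.1f} t/ha/yr")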
Support for EU fundraising in the field of Environment & Energy - BayFOR
NASA Astrophysics Data System (ADS)
Ammerl, Thomas; Baumann, Cornelia; Reiter, Andrea; Blume, Andreas; Just, Jana; Franke, Jan
2013-04-01
The Bavarian Research Alliance (BayFOR, http://www.bayfor.org) is a private company for the support of Bavaria (Free State in the South East of Germany) as a centre for science and innovation within the European Research Area. It was set up on the initiative of the Bavarian universities to strengthen their networking at regional, national and international level while helping them to prepare to meet the requirements for European research funding. The focus is directed at the current EU Framework Programme (FP7) and the forthcoming Framework Programme for Research and Innovation "Horizon 2020", but also comprises the wide range of European programmes (e.g. FP7, LIFE+, Interreg, COST, EUREKA, ERA-Nets, IEE (CIP), LLP, Calls for tender). BayFOR is also a partner institution in the Bavarian "Haus der Forschung" (www.hausderforschung.bayern.de/en). BayFORs overall aim is to strengthen and permanently anchor the science and innovation location of Bavaria in the European Research Area through: a) Initiation of national and in particular European innovation and science partnerships from academia and business b) Improvement of innovation potential of Bavarian universities and SME c) Support in acquisition, management and dissemination of results of European and international projects in the field of research and technological development The service portfolio of the EU Funding Advisory Service reaches from the first project idea to project implementation. The minimum condition for BayFOR support is at least one partner from Bavaria (Germany) must be part of the applying consortium: a) Recommendation of funding programmes/instruments (incl. integration of relevant EU policies & directives) b) Partner search c) Project development and proposal elaboration (Online platform, Creation of consortium, Attendance at meetings, Preparation of documents, Proposal structure elaboration, Provision of templates, Editorial support: Gantt, PERT, Impact, EU added value) d) Support in the Contract negotiations with the European Commission e) Project implementation (Project management, dissemination, Science-Policy-Interface) BayFOR staff has profound R&D background, as well as knowledge and experience in disseminating project outcomes, in particular with regard to the adaptation of results to the needs of relevant target groups like science, industry/SMEs, policy makers and the public. Furthermore, BayFOR can draw on distinct experience in the management of European research projects (e.g. CLIMB, Largecells, AlpBC, GeoMol, WE-EEN, WINALP). As a partner in the network for SMEs "Enterprise Europe Network" (EEN), BayFOR offers advice and support on topics such as funding, research programs, public procurement, market penetration and the promotion of innovation at European level. Beyond, BayFOR will make use of its regional networks to promote uptake and exploitation of project results. BayFOR is also commissioned by Bavaria's State Ministry of Science, Research and the Arts to look after the Bavarian University Funding Programme for the Initiation of International Projects (BayIntAn). Our efforts are aimed at initiating or strengthening transnational collaborative research involving Bavarian universities and universities of applied sciences.
Étude expérimentale de cristaux photoniques bi-dimensionnels
NASA Astrophysics Data System (ADS)
Labilloy, D.
Experimental study of two-dimensional photonic crystals Photonic bandgap materials (PBGs), the so-called photonic crystals, are structures with a periodic dielectric constant. For strong enough index contrast, it was theoretically predicted that they should prevent light propagation in all directions, because they create spectral regions with zero-density of states. We study the optical properties of two-dimensional photonic crystals etched through waveguiding semiconductor heterostructures. Photoluminescence of quantum wells or quantum dots embedded in the waveguide are used as internal probe source. This technique allows a full characterization of these objects, giving access to quantitative values of the transmission, reflection and diffraction coefficients. Weak transmissions correspond to high reflection or diffraction values, which indicates that light remains guided upon interaction with the crystals, confirming their high potential for integrated optics. These reflectors are next used as cavity mirrors. One-dimensional cavities demonstrate a high finesse through transmission measurements, confirming the low amount of out-of-plane losses. Small volume three-dimensional cavities (sim5 μm^3) are also probed, using the photoluminescence of the emitters placed inside the cavity. Narrow peaks in the photoluminescence spectrum prove the strong confinement and allow to envision applications for spontaneous emission control. Les matériaux à bande interdite de photons (BIPs) ou cristaux photoniques, sont des structures, généralement artificielles, dont l'indice diélectrique varie périodiquement. Lorsque le contraste d'indice est fort, on prédit théoriquement qu'elles doivent empêcher la propagation de la lumière dans toutes les directions en créant des plages spectrales (les bandes interdites) à densité d'état de photons nulle. Nous avons étudié le comportement optique de cristaux photoniques bidimensionnels gravés dans des hétérostructures semiconductrices guidantes. L'originalité consiste à utiliser la photoluminescence de boîtes ou puits quantiques comme source lumineuse interne. Cette technique a permis d'effectuer une caractérisation complète de ces objets en mesurant quantitativement les coefficients de transmission et de réflexion ainsi que les propriétés de diffraction. Aux zones de faible transmission correspondent de forts coefficients de réflexion ou de diffraction, ce qui indique que l'onde reste guidée lors de l'interaction avec les cristaux et confirme leur fort potentiel pour l'optique intégrée. Nous avons utilisé ces réflecteurs pour réaliser des cavités, d'abord unidimensionnelles, qui montrent une bonne finesse en transmission, confirmant que les pertes hors du plan du guide sont faibles. Nous avons ensuite étudié des cavités tridimensionnelles de faible volume (sim 5 μm^3), sondées cette fois-ci à l'aide d'émetteurs internes à la cavité. L'apparition de pics étroits montre que l'effet de confinement est important et laisse présager de réelles potentialités de modification de l'émission spontanée.
Elamrani, Driss; Droussi, Hatim; Boukind, Samira; Elatiqi, Keltoum; Dlimi, Meriem; Benchamkha, Yassine; Ettalbi, Saloua
2014-01-01
Dermatofibrosarcoma (DFS) is a fibrous tumour of the skin with slow growth, a very high risk of local recurrence, but low metastatic potential. From a retrospective study covering a 5-year period (December 2008 to December 2013), we analysed the epidemiological and clinical characteristics, the time to diagnosis, the type of treatment and the outcome of 32 patients with histologically proven Darier-Ferrand tumours. Among the 32 patients, 10 initially presented to the department with a tumour recurrence. A slight male predominance was noted. DFS preferentially affects young adults. The mean diagnostic delay was 4 years. The trunk was the preferential location (60%), followed by the proximal extremities (30%). All 32 patients were treated by surgical excision with a 5 cm lateral margin, removing a healthy anatomical barrier in depth. Coverage of the tissue defect was performed after histopathological confirmation that the excision was oncologically complete, and used various means ranging from skin grafts to free musculocutaneous flaps. The outcome was marked by tumour recurrence in 8 patients (3 cases among the tumours seen initially and 5 cases among the tumours seen in recurrence), and the results were judged satisfactory in aesthetic and functional terms. Darier-Ferrand DFS is a tumour whose prognosis and risk of progression are mainly related to the time to diagnosis and the quality of the first excision. Late diagnosis makes excision and reconstructive surgery difficult. The chances of cure after well-conducted primary surgery are significantly higher than those of salvage surgery. Improving the prognosis requires early, codified multidisciplinary management, hence the importance of raising awareness and informing general practitioners so that these patients are diagnosed early and correctly referred to specialised centres. PMID:25821539
Грузєва, Тетяна С; Пельо, Ігор М; Сміянов, Владислав А; Галієнко, Людмила І
In modern conditions of social development, the issues of reorganizing public health services and their staffing become very important. This is due to the significant spread of numerous challenges and threats to the health of the population and the leading role of the public health service in preventing many diseases, reducing their negative impact and promoting the health of the population. One of the operational functions of public health is providing the public health service with professional personnel, sufficient in number and of good quality. Its realization should include a thorough understanding and evaluation of the need for public health experts according to the national context, the formulation of requirements for their knowledge, practical skills and professional competences, and the support of educational training programmes and their implementation in the higher education system. The aim of this work is to justify the approaches to forming educational programmes for training specialists in the public health sphere, taking into account contemporary needs, international experience and WHO recommendations. The research was founded on the analysis of the integral indicators of the health of the population of Ukraine, the existing problems in the field of public health, the study of educational programmes for training public health specialists at leading world and European universities, and domestic and international experience on the investigated problem. Biblio-semantic and medical-statistical methods were used. The information base included statistical data from the "HFA" database for 2000-2014, the Center for Health Statistics of the MOH of Ukraine for 2000-2015, electronic resources of universities, and strategic and policy documents of the WHO and the WHO Regional Office for Europe. Results: for Ukraine, as for other countries, it is extremely important to provide the public health service with a sufficient number of specialists of adequate quality. The need to create such a service and staff it arises from low health indicators, significant levels of morbidity and mortality due to noncommunicable and infectious diseases, and insufficient implementation of preventive principles in health care. In the ranking of countries of the WHO European Region, Ukraine occupies first place in terms of AIDS and tuberculosis. Standardized mortality rates from all diseases in Ukraine are twice as high as in EU countries, 2.5 times higher in the working-age population, 2.8 times higher for infectious diseases, and 3.5 times higher for diseases of the circulatory system. An adequate response to modern challenges and threats to population health is the study and development of a public health service. The draft of its Concept was created by an international interdisciplinary group of experts. Providing the public health service with human resources requires the development and implementation of training programmes for public health specialists. The analysis of curricula for training specialists at universities in Europe and the world helped to identify the institutional features of training and the duration and content of training programmes. As a rule, bachelor's programmes include 180-240 credits and continue for 6-8 semesters. Master's programmes, built on the undergraduate programmes, include from 90 to 120 credits and last for 3-6 semesters. Professional training is completed by performing the master's thesis. Postgraduate study lasts 3-4 years and includes training and scientific research, after which the research work is awarded the degree of Doctor of Philosophy.
The content of the curricula is considerably variable, but provides for the mandatory study of biostatistics, epidemiology, environmental health, health policy and management; social and psychological sciences, social determinants and inequities in health, interagency teamwork, medical technology, the basic operational functions of public health, concepts of mental health, health promotion, management in public health, and the conduct of research. In conclusion, the need for the development of a public health service follows from the state of health of the population of Ukraine, the existing challenges and threats, the strategic directions of development of the national health system and international obligations. Staffing the public health service requires the training of a new generation of professionals, which makes the formation of modern curricula and programmes a priority. The experience of training public health professionals in more than 30 universities in Europe and the world, as well as the requirements of the European programme of core competencies for public health professionals, form the foundation for national training programmes and plans adapted to the national context.
Runaway reactions, their courses and the methods to establish safe process conditions
NASA Astrophysics Data System (ADS)
Gustin, J. L.
1991-08-01
Much of the literature on runaway reactions deals with the consequences such as mechanical damage toxic and flammable release. The DIERS literature provides effective methods for vent sizing where experimental information is requested. Thermal stability measurements provide information on the onset temperature and kinetic data for chemical reactions. There is less information on the way the runaway reactions occur whereas the runaway reactions may have different causes. The purpose of this paper is to describe the various process deviations which can cause a runaway reaction to occur and to discuss the experimental information necessary for risk assessment, the choice of a safe process and the mitigation of the consequences of the runaway reaction. Each possible hazardous process deviation is illustrated by examples from the process industry and/or relevant experimental information obtained from laboratory experiments. The typical hazardous situations to be considered are the following: 1) The homogeneous thermal runaway due to too high a temperature. 2) The homogeneous runaway reaction by unintended introduction of additional reactants or catalyst. 3) The heterogeneous runaway reaction due to too high a local temperature. 4) The heterogeneous runaway reaction caused by slow heat conduction to the outside. 5) The runaway reaction caused by excess residence time at the process temperature (autocatalytic reactions). 6) The runaway reaction caused by reactant accumulation. The controling reactant feed rate is higher than the consumption rate perhaps because the temperature is too low, or the catalyst is absent. 7) The runaway reaction due to the pressurization of the enclosure by gaseous oxidizing intermediates (typical of nitric oxidations). 8) The runaway reaction due to phase separation of unstable species (liquids, solids) by loss of mixing or on cooling. 9) The runaway reaction on mixing of fast reacting chemicals in separate phases. 10)The runaway reaction due to fire or external heating. Considering the various runaway situations, the effectiveness of the following approaches is discussed: - Theoretical and experimental information required for hazard assessment. - Choice of adequate process conditions. - Choice of adequate methods for process control. - Experimental information required for vent sizing. La plus grande partie de la littérature sur les emballements thermiques traite des conséquences de l'accident telles que les effets mécaniques, les émissions toxiques et inflammables. Les travaux publiés par le DIERS fournissent des méthodes permettant le dimensionnement d'évents, nécessitant des déterminations expérimentales. Il y a moins d'information sur la manière dont les emballements thermiques peuvent survenir alors que ceux-ci peuvent avoir différentes causes. Le propos de cet article est de décrire les différentes dérives de procédé qui peuvent entraîner un emballement thermique et de déterminer l'information expérimentale nécessaire pour l'analyse des risques du procédé, le choix de conditions opératoires sûres et la réduction des conséquences de l'emballement thermique. Chaque dérive de procédé dangereuse, est illustrée par des exemples connus dans l'industrie chimique et par des données expérimentales obtenues dans des essais de laboratoire. 
Les conditions de procédé dangereuses prises en compte sont les suivantes: 1)L'emballement thermique homogène dû à une température excessive; 2) L'emballement thermique homogène par introduction d'un catalyseur ou d'un réactif contrôlant; 3) L'emballement thermique hétérogène dû à une température locale excessive; 4)L'emballement thermique hétérogène dû à une faible conduction thermique vers l'extérieur; 5) L'emballement thermique dû à un temps de séjour excessif à la température du procédé (Réactions autocatalytiques); 6) L'emballement thermique par accumulation de réactifs. La vitesse d'introduction d'un réactif contrôlant est supérieure à la vitesse de consommation de ce réactif, parce que la température est trop basse ou le catalyseur absent; 7) L'emballement thermique dû à la pressurisation d'une enceinte par des intermédiaires gazeux oxydants (situation caractéristique des oxydations nitriques), 8)L'emballement thermique dû à la séparation de phases contenant des espèces instables (liquides, solides) par perte de l'agitation ou par refroidissement; 9) L'emballement thermique par mélange de produits incompatibles, se trouvant précédemment dans des phases séparées; 10) L'emballement thermique dû à un chauffage externe ou à un feu. Considérant les différentes situations conduisant à un emballement thermique, l'intérêt de l'approche systématique suivante est examiné: - Information théorique et expérimentale nécessaire pour déterminer les risques du procédé. - Choix de conditions opératoires adéquates. - Choix de méthodes convenables pour le contrôle du procédé. - Information expérimentale nécessaire pour le calcul d'évent.
NASA Astrophysics Data System (ADS)
Fatih, Khalid
Water electrolysis remains the only industrial technology for generating very pure hydrogen and oxygen without releasing CO2 into the atmosphere, which makes it very attractive compared with the combustion of fossil fuels, currently the source of serious environmental problems. With the aim of improving the efficiency of this process, we have developed new, inexpensive anode materials based on the mixed oxide CuyCo3-yO4, which exhibit fast kinetics for the oxygen evolution reaction (OER). This reaction is of particular interest because the relatively high activation overpotential at the anode is the main source of efficiency loss in the process. A systematic study was carried out on the substitution of Cu by Li (0 to 40%) in order to elucidate the electrocatalytic properties of the LixCuy-xCo3-yO4 oxides. These oxides, prepared as powders by thermal decomposition of the precursor nitrates between 300 and 500°C, showed (XRD and FTIR) a non-stoichiometric inverse spinel structure with a decrease in the unit-cell volume. The BET specific surface area is about 6 m2 g-1. The point of zero charge, obtained by acid-base titration, indicated a weakening of the M-OH bond strength with increasing Li content in the oxide. XPS analyses, performed on oxide films prepared by reactive spray deposition onto a smooth nickel substrate, reveal a surface enrichment in Cu from 30% Li onwards, and the presence of the surface cations Co2+, Co3+, Cu+, Cu2+ and Cu3+. The concentration of the latter shows a maximum at 10 and 20% Li. Following the substitution of Cu by Li, charge compensation would be ensured mainly by the formation of Cu3+ species for oxides containing up to 20% Li, and by the formation of Co3+ species at higher substitution levels. SEM micrographs show a hemispherical morphology of the oxide particles, distributed uniformly over the nickel substrate, with a tendency to agglomerate as the Li content increases. AFM micrographs reveal a micro-porosity of these oxide particles. Thin films deposited on a glass substrate showed a very interesting conductivity of the order of 10^3 Ω-1 cm-1 for LixCuy-xCo3-yO4, compared with 2.8 Ω-1 cm-1 for Co3O4. Impedance spectroscopy made it possible to distinguish the processes at the internal surface of the oxide films from those at the electrode/electrolyte interface. The Mott-Schottky plots (1/C2 as a function of potential) show that the oxide films behave as heavily doped (degenerate) p-type semiconductors. The flat-band potential and the majority-carrier density of the LixCuy-xCo3-yO4 oxides are of the order of 0.48 to 0.52 V and 10^19 to 10^20 cm-3 respectively. For Co3O4 these parameters are 0.47 V and ≈10^18 cm-3. (Abstract shortened by UMI.)
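The Mott-Schottky analysis mentioned above relies on a standard relation that the abstract does not spell out; it is reproduced here only as background. For a p-type semiconductor/electrolyte junction, with C the space-charge capacitance per unit area,

1/C^2 = \frac{2}{e \varepsilon \varepsilon_0 N_A} \left( V_{fb} - V - \frac{k_B T}{e} \right)

so the majority-carrier (acceptor) density N_A follows from the slope of the 1/C^2 versus V plot and the flat-band potential V_fb from its intercept; here e is the elementary charge, \varepsilon the relative permittivity of the oxide and \varepsilon_0 the vacuum permittivity. The negative slope of such plots is what identifies the films as p-type.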
A psychoanalytic study of Alexander the Great.
Thomas, K R
1995-12-01
The purpose of this paper was to demonstrate how Freudian concepts such as the Oedipus complex, castration anxiety, fear of loss of love, the psychosexual stages of development, and the tripartite structure of personality can be used to understand the life and achievements of Alexander the Great. To accomplish this purpose, specific incidents, myths, and relationships in Alexander's life were analyzed from a Freudian psychoanalytic perspective. Green (1991), in his recent biography of Alexander, has questioned the merit of using Freudian concepts to understand Alexander's character. In fact, he stated specifically: If he (Alexander) had any kind of Oedipus complex it came in a poor second to the burning dynastic ambition which Olympias so sedulously fostered in him; those who insist on his psychological motivation would do better to take Adler as their mentor than Freud (p.56). Later, in the concluding section of his book, Green (1991, pp. 486-487) discounted Freudian interpretations of Alexander's distaste for sex, the rumors of his homosexual liaisons, his partiality for middle-aged or elderly ladies, and the systematic domination of his early years by Olympias as little more than the projected fears and desires of the interpreters. And again, an Adlerian power-complex paradigm was suggested as the preferable theoretical framework to use. Green's argument was based primarily on an exchange, reported originally by Plutarch, which took place between Alexander and Philip prior to Alexander's tutorship with Aristotle. Purportedly, Philip enjoined his son to study hard and pay close attention to all Aristotle said "so that you may not do a great many things of the sort that I am sorry I have done." At this point, Alexander "somewhat pertly" took Philip to task "because he was having children by other women besides his wife." Philip's reply was: "Well then, if you have many competitors for the kingdom, prove yourself honorable and good, so that you may obtain the kingdom not because of me, but because of yourself." Green interpreted this exchange as confirming that Alexander was more interested in his succession to the throne (power) than in any sexual relationships Philip might be having with any women other than Olympias. That is, Alexander's concern in this exchange was not about Philip's marital infidelity per se, but rather about the prospect of potential competitors (other children) for the throne. Significantly, by emphasizing the manifest content of the exchange, Green ignored a myriad of other possible fears and wishes on Alexander's part, including the fear of castration, the wish to have sex (like his father) with Olympias and other women, the wish to challenge his father's authority and superiority, the fear of loss of love, and the wish (given Philip's homosexual exploits with other boys) to have sex with Philip. Moreover, one could easily explain what Green has described as "the burning dynastic ambition which Olympias so sedulously fostered in him" (p.56), and Alexander's so called "power-complex" in terms which are perfectly consistent with drive/structure theory (e.g., see Freud, 1900/1953a and Freud, 1914/1957, respectively). In other words, Green's arguments against the possibility of a Freudian solution to the puzzle of Alexander's character are less than compelling. 
By contrast, as demonstrated in this paper, a plethora of historical data exist to suggest that much of Alexander's personality structure and behavior can be explained by his unresolved Oedipus complex, the ambition and self-confidence instilled in him by Olympias, the anal-sadistic and narcissistic organization of his character, his unconscious wish to please his mother, and his being lapped (from birth) in the myth of the hero. Although it is risky, at best, to attempt to analyze an individual without the benefit of clinical data, and even more risky to base such an analysis on fragmentary and often contradictory data assimilated long
NASA Astrophysics Data System (ADS)
Lavalley, Claudia
2000-06-01
The phenomenon of mass loss plays an essential role from the earliest stages of star formation and appears to be intimately linked to the accretion of matter onto the star, probably through magnetic fields that convert the accreted kinetic energy into ejection power. Classical T Tauri stars, a few million years old and showing low extinction, offer an excellent setting for studying the inner regions of stellar winds. In this work, I present the first studies of the morphology of the jets associated with the stars DG Tau, CW Tau and RW Aur at an angular resolution of 0.1'', and of the two-dimensional kinematics of the [O I] λ6300 Å, [N II] λ6583 Å and [S II] λλ6716,6731 Å line emission in the DG Tau jet. These data were obtained with two completely new observing techniques that became available between 1994 and 1998 at the CFH telescope and are ideally suited to this problem: narrow-band imaging behind adaptive optics (PUEO), which provides data at very high angular resolution (~0.1''), and integral-field spectro-imaging (TIGRE/OASIS), which gives access to two-dimensional spatial and spectral information at high angular resolution (here ~0.5''-0.75'') and medium spectral resolution (100-170 km/s). The three jets studied, resolved for the first time from 55 AU from the star, show a similar width (30-35 AU) out to 100 AU and a morphology dominated by emission knots. The jets of the low infrared-excess stars CW Tau and RW Aur are very similar to the two other jets from weakly embedded sources observed so far at the same spatial scale. The DG Tau jet, more disturbed than the other two and originating from a source that still has a substantial envelope, is also very similar to the only other jet associated with a still-embedded source resolved at these distances from the star. This provides clues about the evolution of the interaction of jets with the circumstellar environment. The morphology and kinematics of the DG Tau jet strongly suggest variability in the ejection velocity, which could also explain some of the knots in the other two jets. The compatibility of one of the observed knots with the bow shocks expected in such a situation was clearly demonstrated. Line ratios at different distances along a jet (DG Tau) and over several velocity intervals were obtained here for the first time. Inversion routines assuming ionization equilibrium for oxygen and nitrogen and treating the hydrogen ionization fraction as a free parameter made it possible to estimate the variations of the excitation conditions (Te, xe and ne) all along the jet. A detailed comparison of the observed line ratios with the predictions of different excitation models, using discriminating ratio-ratio diagrams identified here for the first time, strongly favours the presence of shocks with velocities of 50-100 km/s beyond 0.2'' from the star.
Marine hydrogeology: recent accomplishments and future opportunities
NASA Astrophysics Data System (ADS)
Fisher, A. T.
2005-03-01
Marine hydrogeology is a broad-ranging scientific discipline involving the exploration of fluid-rock interactions below the seafloor. Studies have been conducted at seafloor spreading centers, mid-plate locations, and in plate- and continental-margin environments. Although many seafloor locations are remote, there are aspects of marine systems that make them uniquely suited for hydrologic analysis. Newly developed tools and techniques, and the establishment of several multidisciplinary programs for oceanographic exploration, have helped to push marine hydrogeology forward over the last several decades. Most marine hydrogeologic work has focused on measurement or estimation of hydrogeologic properties within the shallow subsurface, but additional work has emphasized measurements of local and global fluxes, fluid source and sink terms, and quantitative links between hydrogeologic, chemical, tectonic, biological, and geophysical processes. In addition to summarizing selected results from a small number of case studies, this paper includes a description of several new experiments and programs that will provide outstanding opportunities to address fundamental hydrogeologic questions within the seafloor during the next 20-30 years. L'hydrogéologie marine est une large discipline scientifique impliquant l' exploration des interactions entre les fluides et les roches sous les fonds marins. Des études ont été menées dans les différents environnements sous-marins (zone abyssale, plaque océanique, marges continentales). Bien que de nombreux fonds marins soient connus, il existe des aspects des systèmes marins qui les rendent inadaptés à l'analyse hydrologique. De nouveaux outils et techniques, et la mise en oeuvre de nombreux programmes multidisciplinaires d'exploration océanographique, ont aidé à pousser en avant l'hydrogéologie marine ces dix dernières années. La plus part des études hydrogéologiques se sont concentrées jusqu'à présent sur la mesure ou l'estimation des propriétés à la sub-surface des fonds marins, et des travaux complémentaires ont mis en valeur les mesures de flux, local ou global, de termes « sources » et « pertes », et des liens quantitatifs entre l'hydrogéologie, la chimie, la tectonique, la biologie, et les processus géophysiques. Cet article vise à résumer des résultats sélectionnés parmi un petit nombre d'études, et à décrire plusieurs nouvelles expériences et programmes, qui sont autant d'opportunités pour répondre aux questions fondamentales relatives aux fonds marins, posées ces dernières 20-30 années. La hidrogeología marina es una disciplina científica de amplios alcances que involucra la exploración de interacciones fluido-roca por debajo del fondo del mar. Se han llevado a cabo estudios en centros de expansión del fondo del mar, lugares en medio de una placa, y en ambientes de placa y margen continental. Aunque muchos sitios en el fondo del mar son remotos, existen aspectos de estos sistemas marinos que los hacen particularmente adaptables para análisis hidrológico. Nuevas técnicas y herramientas desarrolladas, y el establecimiento de varios programas multidisciplinarios para exploración oceanográfica, han ayudado a impulsar la hidrogeología marina hacia delante durante las ultimas décadas. 
La mayor parte del trabajo hidrogeológico marino se ha enfocado en la medición o estimación de propiedades hidrogeológicas dentro del subsuelo superficial, pero trabajo adicionalha enfatizado mediciones de flujos globales y locales, términos de fuente y sumidero de fluidos, y vínculos cuantitativos entre procesos hidrogeológicos, químicos, tectónicos, biológicos y geofísicos. Además de resumir resultados seleccionados de un número pequeño de estudios de caso, este artículo incluye una descripción de varios programas y experimentos nuevos que aportarán oportunidades excepcionales para dirigir preguntas hidrogeológicas fundamentales dentro del fondo oceánico durante los siguientes 20-30 años.
Office of the Chief Financial Officer Annual Report 2007
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez, Jeffrey
2007-12-18
2007 was a year of progress and challenges for the Office of the Chief Financial Officer (OCFO). I believe that with the addition of a new Controller, the OCFO senior management team is stronger than ever. With the new Controller on board, the senior management team spent two intensive days updating our strategic plan for the next five years ending in 2012, while making sure that we continue to execute on our existing strategic initiatives. In 2007 the Budget Office, teaming with Human Resources, worked diligently with our colleagues on campus to reengineer the Multi-Location Appointment (MLA) process, making it easier for our Principal Investigators (PIs) to work simultaneously between the Laboratory and UC campuses. The hiring of a point-of-contact in Human Resources to administer the program will also make the process flow smoother. In order to increase our financial flexibility, the OCFO worked with the Department of Energy (DOE) to win approval to reduce the burden rates on research and development (R&D) subcontracts and Intra-University Transfers (IUT). The Budget Office also performed a 'return on investment' (ROI) analysis to secure UCRP funding for a much needed vocational rehabilitation counselor. This new counselor now works with employees who are on medical leave to ensure that they can return to work in a more timely fashion, or if not able to return, usher them through the various options available to them. Under the direction of the new Controller, PriceWaterhouse Coopers (PWC) performed their annual audit of the Laboratory's financial data and reported positive results. In partnership with the Financial Policy and Training Office, the Controller's Office also helped to launch self-assessments of some of our financial processes, including timekeeping and resource adjustments. These self assessments were conducted to promote efficiencies and mitigate risk. In some cases they provided assurance that our practices are sound, and in others highlighted opportunities to improve. A third, and most important assessment on funds control was also conducted that proved very useful in making sure that our financial processes are sound and of the highest ethical standards. In June of 2007 the Procurement Department was awarded the DOE's FY2006 Secretarial Small Business Award for the advancement of small business contracts at Lawrence Berkeley National Laboratory (LBNL). The award was presented in Washington, D.C. Procurement also distinguished itself by passing the triennial Procurement Evaluation and Re-engineering Team (PERT) Review of its systems and processes. We continue to reduce costs through the Supply Chain Initiative, saving the Laboratory approximately $6M to date, and have placed over 11,000 orders with over seven vendors using the eBuy system. Our wall-to-wall inventory, which was completed in March of 2007, reported a result of 99+% for item count and 99.51% by value. This was a remarkable achievement that required the hard work of every Division and the Property Department working together. Training continues to be a major initiative for the OCFO and in 2007 we rolled out financial training programs specifically tailored to meet the needs of the scientific divisions. FY2008 presents several opportunities to enhance and improve our service to the scientific community. With the awarding of the HELIOS and JBEI programs, we will be developing new financial paradigms to provide senior management flexibility in decision making.
Last year we heard the Laboratory community loud and clear when they expressed their frustration with our current travel system. As we head into the new fiscal year, a cross-functional travel team has identified a new model for how we provide travel services. We will be implementing the Oracle PeopleSoft Travel Reimbursement system by July of 2008. The new system will be more user-friendly and provide better information to the divisions and travel operations. We will also continue to review the travel disbursements operation for further improvement. Also in FY2008, several key information systems implementation projects are under way which will strengthen the Laboratory's financial and business processes. These include Supply Chain Management, and the Budget and Planning System. Future planned systems development includes an electronic sponsored research administration system. Continuing to improve the procurement process at the Laboratory is another major priority for the OCFO. To that end, we will be working to re-engineer the 'procure-to-pay' process. The goal will be to correct process flow to maximize efficiency and effectiveness, while implementing sound business practices and incorporating strong internal controls. Along the same lines, we will also be working with the divisions to implement the Property Management Improvement Program that was identified in FY2007.
NASA Astrophysics Data System (ADS)
Dim, J. R.; Sakura, Y.; Fukami, H.; Miyakoshi, A.
2002-03-01
In porous sediments of the Ishikari Lowland, there is a gradual increase in the background geothermal gradient from the Ishikari River (3-4 °C 100 m-1) to the southwest highland area (10 °C 100 m-1). However, the geothermal gradient at shallow depths differs in detail from the background distribution. In spite of convective heat-flow loss generally associated with groundwater flow, heat flow remains high (100 mW m-2) in the recharge area in the southwestern part of the Ishikari basin, which is part of an active geothermal field. In the northeastern part of the lowland, heat flow locally reaches 140 mW m-2, probably due to upward water flow from the deep geothermal field. Between the two areas the heat flow is much lower. To examine the role of hydraulic flow in the distortion of the isotherms in this area, thermal gradient vs. temperature analyses were made, and they helped to define the major components of the groundwater-flow system of the region. Two-dimensional simulation modeling aided in understanding not only the cause of horizontal heat-flow variations in this field but also the contrast between thermal properties of shallow and deep groundwater reservoirs. Résumé. Dans les sédiments poreux des basses terres d'Ishikari, on observe une augmentation graduelle du gradient géothermal général depuis la rivière Ishikari (3-4 °C 100 m-1) vers la zone élevée située au sud-ouest (10 °C 100 m-1). Toutefois, le gradient géothermal aux faibles profondeurs diffère dans le détail de la distribution générale. Malgré la perte de flux de chaleur par convection, généralement associée aux écoulements souterrains, le flux de chaleur reste élevé (100 mW m-2) dans la zone de recharge de la partie sud-ouest du bassin de l'Ishikari, qui appartient à un champ géothermal actif. Dans la partie nord-est des basses terres, le flux de chaleur atteint localement 140 mW m-2, probablement à cause d'un écoulement souterrain ascendant depuis le champ géothermal profond. Entre les deux zones, le flux de chaleur est beaucoup plus faible. Afin de déterminer le rôle du flux d'eau souterraine dans la distorsion des isothermes dans cette zone, des analyses du gradient thermal en fonction de la température ont été réalisées elles ont permis de définir les composantes majeures du système aquifère régional. Une modélisation deux-dimensionnelle pour la simulation a ensuite contribué à la compréhension non seulement de la cause des variations horizontales du flux de chaleur dans cette région, mais également du contraste entre les propriétés des réservoirs superficiel et profond. Resumen. En los sedimentos porosos de las tierras bajas de Ishikari, hay un incremento gradual en el gradiente geotérmico desde el río Ishikari (3-4 °C 100 m-1) hacia la zona elevada del sudoeste (10 °C 100 m-1). Sin embargo, el gradiente geotérmico a profundidades someras difiere de la distribución de fondo. A pesar de las pérdidas por el flujo convectivo de calor asociadas generalmente al flujo de aguas subterráneas, el flujo de calor permanece elevado (100 mW m-2) en el área de recarga, hacia el sudoeste de la cuenca del Ishikari, la cual pertenece a un campo geotérmico activo. Al nordeste de las tierras bajas, el flujo de calor alcanza 140 mW m-2, probablemente por el flujo ascendente de agua procedente del campo geotérmico profundo. Entre ambas áreas, el flujo de calor es mucho menor. 
Para examinar el papel del flujo hidráulico en la distorsión de las isotermas de esta región, se ha comparado el gradiente térmico con la temperatura, con lo cual se ha podido definir los componentes mayoritarios del sistema de flujo de las aguas subterráneas. El uso de modelos bidimensionales ha servido para comprender no sólo del origen de las variaciones horizontales del flujo de calor en este campo, sino también el contraste entre las propiedades térmicas de los reservorios someros y profundos de aguas subterráneas.
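As background on why thermal-gradient-versus-temperature analyses are diagnostic of groundwater flow (the study itself relies on a two-dimensional simulation, so the relation below is only the classical one-dimensional limit), steady vertical heat conduction and advection obey

\kappa \frac{d^2 T}{dz^2} - \rho_w c_w q \frac{dT}{dz} = 0

whose solution over a depth interval of length L (Bredehoeft and Papadopulos, 1965) is

\frac{T(z) - T_0}{T_L - T_0} = \frac{\exp(\beta z / L) - 1}{\exp(\beta) - 1}, \qquad \beta = \frac{\rho_w c_w q L}{\kappa}

where q is the vertical Darcy flux, \kappa the thermal conductivity and \rho_w c_w the volumetric heat capacity of water. Downward flow (recharge) makes the profile concave and depresses the shallow thermal gradient, while upward flow steepens it near the surface.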
NASA Astrophysics Data System (ADS)
Lebeuf, Martin
Aluminium production is an important industry in Quebec. The properties of this metal destine it for many present and future uses within a sustainable modern economy. However, the Hall-Héroult process is very energy-intensive, and progress therefore remains necessary to reduce its financial and environmental costs. Among the possible improvements to the electrolysis cell is the contact between the cathode and the collector bar, which must offer low resistance to the passage of electric current. During cell operation, this contact tends to degrade, generating significant energy losses. The causes of this degradation, which may stem from chemical, thermal, mechanical and/or electrical phenomena, remain poorly understood. The goal of this project was therefore to study the chemical phenomena occurring at the block-bar contact of the Hall-Héroult electrolysis cell. First, a crucial aspect to consider is the penetration of the electrolytic bath into the cathode, because bath compounds eventually reach the collector bar and can react there. To this end, an innovative method was developed to study cathodes and bath penetration into them using X-ray microtomography. This rapid and effective method proved very useful in the project and has significant potential for future studies of cathodes and the phenomena occurring within them. Next, a small-scale rectangular electrolysis cell was developed. Several phenomena observed in industry on post-operation cell autopsies and reported in the literature were successfully reproduced with this experimental cell. Tests without electrolysis, targeting the effect of the electrolytic bath on steel, were also designed and completed in order to separate the influence of the various parameters involved. Analysis of the results of all these tests revealed various phenomena at the block-bar contact, including the systematic presence of NaF and, above all, of β-Al2O3. Besides the unavoidable carburization of the collector bar, the formation of an Fe-Al layer was also observed, favoured by rapid penetration of the electrolytic bath into the cathode and by an acidic bath composition at the surface of the bar. This layer also contained β-Al2O3 crystals that may impair its electrical conductivity. At bath ratios between 2.5 and 4.9, a thin layer containing the elements Al and N can form at the surface of the bar. For a very basic bath (> 6.0), a Na2O layer was observed instead. Under electrolysis conditions but without rapid bath penetration into the cathode, Na was able to penetrate directly into the collector bar, preferentially with the carbon. In addition, corrosion as well as iron and iron oxide layers can form on the bar and potentially degrade the quality of the electrical contact. As follow-up work, resistivity measurements and the analysis of industrial samples would make it possible to assess the impact of these phenomena on contact quality. Keywords: electrolysis, aluminium, Hall-Héroult, bar-cathode interface, electrolytic bath.
NASA Astrophysics Data System (ADS)
Vesper, Dorothy J.; White, William B.
Continuous records of discharge, specific conductance, and temperature were collected through a series of storm pulses on two limestone springs at Fort Campbell, western Kentucky/Tennessee, USA. Water samples, collected at short time intervals across the same storm pulses, were analyzed for calcium, magnesium, bicarbonate, total organic carbon, and pH. Chemographs of calcium, calcite saturation index, and carbon dioxide partial pressure were superimposed on the storm hydrographs. Calcium concentration and specific conductance track together and dip to a minimum either coincident with the peak of the hydrograph or lag slightly behind it. The CO2 pressure continues to rise on the recession limb of the hydrograph and, as a result, the saturation index decreases on the recession limb of the hydrograph. These results are interpreted as being due to dispersed infiltration through CO2-rich soils lagging the arrival of quickflow from sinkhole recharge in the transport of storm flow to the springs. Karst spring hydrographs reflect not only the changing mix of base flow and storm flow but also a shift in source of recharge water over the course of the storm. L'enregistrement en continu du débit, de la conductivité et de la température de l'eau a été réalisé au cours d'une série de crues à deux sources émergeant de calcaires, à Fort Campbell (Kentucky occidental, Tennessee, États-Unis). Des échantillons d'eau, prélevés à de courts pas de temps lors de ces crues, ont été analysés pour le calcium, le magnésium, les bicarbonates, le carbone organique total et le pH. Les chimiogrammes de calcium, d'indice de saturation de la calcite et de la pression partielle en CO2 ont été superposés aux hydrogrammes de crue. La concentration en calcium et la conductivité de l'eau se suivent bien et passent par un minimum correspondant au pic de l'hydrogramme ou légèrement retardé. La pression partielle en CO2 continue de croître au cours de la récession de l'hydrogramme de même que l'indice de saturation de la calcite décroît. Ces résultats sont interprétés comme étant dus à l'infiltration dispersée au travers de sols riches en CO2, décalée par rapport à l'arrivée de l'écoulement rapide provenant de la recharge, à partir d'une perte, de l'écoulement de crue vers les sources. Les hydrogrammes de sources karstiques ne reflètent pas seulement le mélange variable de l'écoulement de base et de l'écoulement de crue, mais également un changement d'origine de l'eau de la recharge au cours de l'épisode de crue. Se ha registrado en continuo la descarga, conductancia específica y temperatura de una serie de episodios de tormenta en dos manantiales en calizas ubicados en Fort Campbell, en el oeste de Kentucky/Tennessee (Estados Unidos de América). Se ha analizado muestras de agua recogidas en breves intervalos de tiempo durante los episodios de tormenta, determinando el calcio, magnesio, bicarbonato, carbono orgánico total y pH. Se ha superpuesto quimiogramas de calcio, índice de saturación en calcita y presión parcial de dióxido de carbono en los hidrogramas de las tormentas. La concentración de calcio y la conductancia específica se comportan de forma similar y presentan un mínimo que coincide también con un pico del hidrograma o que se retrasa ligeramente con respecto a él. La presión de dióxido de carbono sigue aumentando en la rama de recesión del hidrograma y, como consecuencia, disminuye el índice de saturación de la rama de recesión del hidrograma. 
Se interpreta que estos resultados son debidos a la infiltración dispersa a través de suelos enriquecidos en dióxido de carbono que retrasan el flujo rápido desde la recarga en los sumideros hasta su afloramiento en los manantiales. Los hidrogramas en manantiales kársticos reflejan no sólo la mezcla cambiante del flujo de base y el de tormenta, sino también el cambio en el origen del agua de recarga durante el curso de la tormenta.
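The saturation-index and CO2 chemographs described above can be reproduced, in simplified form, from the measured calcium, bicarbonate and pH alone. The Python sketch below ignores activity corrections, ionic strength and temperature dependence (a full treatment would use a speciation code such as PHREEQC; the abstract does not state what the authors actually used); the equilibrium constants are rounded 25 °C literature values and the sample values are hypothetical.

import math

# Equilibrium constants at 25 degrees C (rounded literature values);
# activity corrections and temperature dependence are ignored in this sketch.
K1  = 10**-6.35    # H2CO3* = H+ + HCO3-
K2  = 10**-10.33   # HCO3-  = H+ + CO3--
KSP = 10**-8.48    # CaCO3 (calcite) solubility product
KH  = 10**-1.47    # Henry's law constant for CO2, mol L-1 atm-1

def carbonate_indices(ca_mg_l, hco3_mg_l, pH):
    """Return (SI_calcite, log10 pCO2) from Ca, HCO3 and pH; molar
    concentrations are treated as activities (a simplification)."""
    h    = 10**-pH
    ca   = ca_mg_l / 40.08 / 1000.0      # mg/L -> mol/L
    hco3 = hco3_mg_l / 61.02 / 1000.0
    co3  = K2 * hco3 / h
    si_calcite = math.log10(ca * co3 / KSP)
    h2co3 = h * hco3 / K1                # dissolved CO2 (H2CO3*)
    log_pco2 = math.log10(h2co3 / KH)
    return si_calcite, log_pco2

# Hypothetical storm-flow sample
print(carbonate_indices(ca_mg_l=45.0, hco3_mg_l=140.0, pH=7.2))

With these toy values the water is slightly undersaturated with respect to calcite while its equilibrium CO2 pressure is well above atmospheric, the same qualitative pattern the abstract attributes to CO2-rich soil water arriving on the recession limb.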
NASA Astrophysics Data System (ADS)
Jalludin, Mohamed; Razack, Moumtaz
The Republic of Djibouti (23,000 km2 500,000 inhabitants), located within the Horn of Africa, undergoes an arid climate with an average annual rainfall less than 150 mm. Water resources are provided up to 98% by groundwater. Two types of aquifers are encountered: volcanic and sedimentary aquifers. This paper focuses on the assessment of their hydraulic properties, which is necessary for future tasks regarding the management of these aquifers. To this end, a data base consisting of all available pumping test data obtained since the 1960s was compiled. Pumping tests have been interpreted to determine transmissivity. Solely for volcanic aquifers, transmissivity also has been estimated through an empirical relationship using specific capacity corrected for turbulent well losses. The transmissivity of each type of aquifer can span up to four orders of magnitude, pointing out their strong heterogeneity. For the various volcanic rocks, the younger the rock, the higher the transmissivity. The transmissivity of volcanic rocks has therefore decreased in the course of geological time. At present, a much better understanding of the hydraulic properties of these complex aquifers has been obtained, which should enable optimal management of their groundwater resources through the use of numerical modeling. La République de Djibouti (23,000 km2 500,000 habitants), située dans la Corne de l'Afrique, subit un climat aride avec une pluviométrie moyenne annuelle inférieure à 150 mm. Les ressources en eau sont fournies à plus de 98% par les eaux souterraines contenues dans des aquifères sédimentaires ou volcaniques. Cet article a pour objectif l'évaluation des propriétés hydrauliques de ces aquifères, étape indispensable pour entreprendre par la suite des études en vue de la gestion de ces aquifères. Une base rassemblant les données d'essais par pompage disponibles depuis les années Soixante a d'abord été établie. Les essais par pompage ont été interprétés pour déduire la transmissivité. Concernant les aquifères volcaniques, la transmissivité a également été estimée à l'aide d'une relation empirique reliant la transmissivité et le débit spécifique corrigé des pertes de charge anormales. La transmissivité pour chaque aquifère couvre jusqu'à quatre ordres de grandeur, montrant la forte hétérogénéité de ces milieux. Pour les roches volcaniques, on observe que la transmissivité est d'autant meilleure que la roche est plus jeune. Leur transmissivité a ainsi diminué durant les temps géologiques. La compréhension des propriétés hydrauliques de ces aquifères complexes est à présent bien meilleure, ce qui permet d'envisager une gestion optimale de leurs ressources à l'aide de modèles numériques. La República de Yibuti (23.000 km2, 500.000 habitantes) está situada en el Cuerno de África, donde se han formado diversas unidades volcánicas-basaltos y riolitas-y rocas sedimentarias desde la expansión de los continentes acaecida al inicio de la deriva continental (hace 30 millones de años). La precipitación media anual es inferior a 150 mm. Las rocas volcánicas y sedimentarias, con dimensión inferior a 2.000 km2, constituyen acuíferos locales. Los basaltos estratificados forman un acuífero regional que se extiende a más de 9.000 km2. La multitud de datos existente ha sido almacenada desde los años 1960 en una base de datos. La transmisividad de estos acuíferos ha sido determinada mediante datos de ensayos de bombeo y una relación empírica que usa la capacidad específica corregida con las pérdidas turbulentas en el pozo. 
La transmisividad de estos acuíferos se comporta como una variable lognormal, hecho importante para los trabajos previstos de modelación con métodos estadísticos. La transmisividad de cada acuífero puede variar hasta en cuatro órdenes de magnitud, manifestando su gran heterogeneidad. Para los materiales volcánicos, la transmisividad es mayor cuanto más joven es la roca. La permeabilidad de las rocas volcánicas ha evolucionado por tanto con el tiempo geológico. Actualmente, se posee un mayor conocimiento sobre las propiedades hidráulicas de estos acuíferos complejos, de manera que se puede hacer una gestión óptima de sus recursos hídricos subterráneos con la utilización de modelos numéricos.
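The abstract describes transmissivity estimated from specific capacity corrected for turbulent well losses, but gives neither the correction nor the regression coefficients. The Python sketch below shows only the generic form of such a workflow; the Jacob-type well-loss term C_well*Q**2, the coefficients a and b of the log-log relation and the numerical values are placeholders, not values from the paper.

def corrected_specific_capacity(Q, s_obs, C_well):
    """Specific capacity Q/s after removing the turbulent (non-linear) well
    loss C_well * Q**2 from the observed drawdown (Jacob-type correction)."""
    return Q / (s_obs - C_well * Q**2)

def transmissivity_from_sc(sc, a, b):
    """Empirical power-law relation T = a * (Q/s)**b, i.e. a straight line in
    log-log space; a and b are site-specific and must be calibrated on wells
    where both a pumping-test T and a specific capacity are available."""
    return a * sc**b

# Hypothetical example: Q in m3/day, drawdown in m, C_well in day2/m5;
# a and b are placeholder coefficients, not those fitted in the study.
Q, s_obs, C_well = 500.0, 12.0, 1.0e-5
sc = corrected_specific_capacity(Q, s_obs, C_well)
print(sc, transmissivity_from_sc(sc, a=1.2, b=0.9))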
Good clinical outcomes from a 7-year holistic programme of fistula repair in Guinea
Delamou, Alexandre; Diallo, Moustapha; Beavogui, Abdoul Habib; Delvaux, Thérèse; Millimono, Sita; Kourouma, Mamady; Beattie, Karen; Barone, Mark; Barry, Thierno Hamidou; Khogali, Mohamed; Edginton, Mary; Hinderaker, Sven Gudmund; Ruminjo, Joseph; Zhang, Wei-Hong; De Brouwere, Vincent
2015-01-01
Objectives Female genital fistula remains a public health concern in developing countries. From January 2007 to September 2013, the Fistula Care project, managed by EngenderHealth in partnership with the Ministry of Health and supported by USAID, integrated fistula repair services in the maternity wards of general hospitals in Guinea. The objective of this article was to present and discuss the clinical outcomes of 7 years of work involving 2116 women repaired in three hospitals across the country. Methods This was a retrospective cohort study using data abstracted from medical records for fistula repairs conducted from 2007 to 2013. The study data were reviewed during the period April to August 2014. Results The majority of the 2116 women who underwent surgical repair had vesicovaginal fistula (n = 2045, 97%) and 3% had rectovaginal fistula or a combination of both. Overall 1748 (83%) had a closed fistula and were continent of urine immediately after surgery. At discharge, 1795 women (85%) had a closed fistula and 1680 (79%) were dry, meaning they no longer leaked urine and/or faeces. One hundred and fifteen (5%) remained with residual incontinence despite fistula closure. Follow-up at 3 months was completed by 1663 (79%) women of whom 1405 (84.5%) had their fistula closed and 80% were continent. Twenty-one per cent were lost to follow-up. Conclusion Routine programmatic repair for obstetric fistula in low resources settings can yield good outcomes. However, more efforts are needed to address loss to follow-up, sustain the results and prevent the occurrence and/or recurrence of fistula. Objectifs La fistule génitale féminine reste un problème de santé publique dans les pays en développement. De janvier 2007 à septembre 2013, le projet Fistula Care, géré par Engender Health en partenariat avec le Ministère de la Santé et soutenu par l’USAID, a intégré les services de réparation de fistules dans les maternités des hôpitaux généraux en Guinée. L'objectif de cet article est de présenter et de discuter les résultats cliniques de sept années de travail impliquant 2116 femmes traitées dans trois hôpitaux à travers le pays. Méthodes Il s'agit d'une étude de cohorte rétrospective utilisant des données extraites des dossiers médicaux de réparations de fistules menées de 2007 à 2013. Les données de l’étude ont été analysées au cours de la période allant d'avril à août 2014. Résultats La majorité des 2116 femmes qui ont subi une réparation chirurgicale avaient une fistule vésico vaginale (n = 2 045, 97%) et 3% avaient une fistule recto vaginale ou une combinaison des deux. Au total, 1748 (83%) femmes ont eu leur fistule refermée et sont devenues continentes d'urine immédiatement après la chirurgie. À la sortie, 1795 femmes (85%) avaient une fistule fermée et 1680 (79%) étaient sèches, c'est à dire qu'elles n'avaient plus de fuite d'urine et/ou de matières fécales. 115 (5%) femmes avaient toujours une incontinence résiduelle malgré la fermeture de la fistule. Le suivi à trois mois a été complété par 1663 (79%) femmes dont 1405 (84,5%) ont eu leur fistule fermée et 80% étaient continentes. 21% ont été perdues au suivi. Conclusion La réparation programmatique de routine de la fistule obstétricale dans les régions à faibles ressources peut donner de bons résultats. Toutefois, davantage d'efforts sont nécessaires pour remédier à la perte au suivi, maintenir les résultats et prévenir l'apparition et/ou la réapparition de fistules. 
Objetivos La fístula genital femenina continúa siendo una preocupación de salud pública en países en vías de desarrollo. Entre Enero 2007 y Septiembre 2013, el proyecto Fistula Care, manejado por EngenderHealth junto con el Ministerio de Salud de Guinea, y financiado por USAID, integró los servicios de reparación de fistula en las maternidades de hospitales generales en Guinea. El objetivo de este artículo es presentar y discutir los resultados clínicos de 7 años de trabajo con 2116 mujeres intervenidas en tres hospitales del país. Métodos Estudio retrospectivo de cohortes utilizando datos tomados de historias clínicas de reparaciones de fístula realizadas entre el 2007 y el 2013. Los datos del estudio se revisaron durante el periodo entre Abril y Agosto 2014. Resultados La mayoría de las 2116 mujeres que se sometieron a la reparación quirúrgica tenían una fistula vesico-vaginal (n = 2045, 97%) y 3% tenían una fístula recto-vaginal o una combinación de ambas. En general, 1748 (83%) tenían la fístula cerrada y eran continentes inmediatamente después de la cirugía. En el momento del alta, 1795 mujeres (85%) tenían la fistula cerrada y 1680 (79%) estaban secas, es decir que ya no perdían orina y/o heces. 115 (5%) continuaron teniendo incontinencia residual a pesar de que la fistula estaba cerrada. El seguimiento a los tres meses se completó para 1663 (79%) mujeres, de las cuales 1405 (84.5%) tenían la fistula cerrada y 80% eran continentes. Un 21% fueron perdidas durante el seguimiento. Conclusión La reparación rutinaria programada de la fístula obstétrica en lugares con pocos recursos puede dar buenos resultados. Sin embargo, se requieren más esfuerzos para resolver la pérdida durante el seguimiento, mantener los resultados y prevenir la aparición y/o reaparición de la fístula. PMID:25706671
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2009-01-01
This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…
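As a rough illustration of what presmoothing does in equipercentile equating (a generic sketch, not the cubic B-spline presmoothing or direct presmoothing algorithms evaluated in the article; the smoothing parameter and toy distributions are invented): smooth each form's raw score frequencies, convert the smoothed distributions to percentile ranks, and map each form-X score to the form-Y score with the same rank.

import numpy as np
from scipy.interpolate import UnivariateSpline

def presmooth(freqs, s=2.0):
    """Spline presmoothing of a raw score frequency distribution; the
    smoothing parameter s is illustrative only."""
    x = np.arange(len(freqs), dtype=float)
    sm = np.clip(UnivariateSpline(x, freqs, k=3, s=s)(x), 1e-6, None)
    return sm / sm.sum()

def percentile_ranks(p):
    """Percentile ranks at integer scores (mid-point convention)."""
    cum = np.cumsum(p)
    return 100.0 * (cum - 0.5 * p)

def equipercentile(freq_x, freq_y):
    """Map each form-X score to the form-Y score with the same percentile rank."""
    pr_x = percentile_ranks(presmooth(freq_x))
    pr_y = percentile_ranks(presmooth(freq_y))
    return np.interp(pr_x, pr_y, np.arange(len(freq_y), dtype=float))

# Toy example with hypothetical 11-point score scales
fx = np.array([1, 3, 6, 10, 14, 16, 14, 10, 6, 3, 1], dtype=float)
fy = np.array([2, 4, 8, 12, 15, 15, 12, 8, 5, 3, 1], dtype=float)
print(equipercentile(fx, fy))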
Comparison of DNA extraction methods for meat analysis.
Yalçınkaya, Burhanettin; Yumbul, Eylem; Mozioğlu, Erkan; Akgoz, Muslum
2017-04-15
Preventing adulteration of meat and meat products with less desirable or objectionable meat species is important not only for economic, religious and health reasons but also for fair trade practices; therefore, several methods for the identification of meat and meat products have been developed. In the present study, ten different DNA extraction methods, including the Tris-EDTA method, a modified cetyltrimethylammonium bromide (CTAB) method, the alkaline method, the urea method, the salt method, the guanidinium isothiocyanate (GuSCN) method, the Wizard method, the Qiagen method, the Zymogen method and the Genespin method, were examined to determine their relative effectiveness for extracting DNA from meat samples. The results show that the salt method is easy to perform, inexpensive and environmentally friendly. Additionally, it has the highest yield among all the isolation methods tested. We suggest this method as an alternative for DNA isolation from meat and meat products. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Yanran; Chen, Duo; Zhang, Jiwei; Chen, Ning; Li, Xiaoqi; Gong, Xiaojing
2017-09-01
GIS (gas insulated switchgear) is an important piece of equipment in the power system. Partial discharge plays an important role in assessing the insulation performance of GIS. The UHF method and the ultrasonic method are frequently used in partial discharge (PD) detection for GIS, so it is necessary to investigate both methods for partial discharge in GIS. However, very few studies have been conducted on a method combining these two techniques. From the viewpoint of safety, a new PD detection method for GIS based on the UHF method and the ultrasonic method is proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of fault localization. This paper presents a study aimed at clarifying the effectiveness of this new combined UHF-ultrasonic method. Partial discharge tests were performed in a laboratory-simulated environment. The obtained results show the anti-interference capability of signal detection and the accuracy of fault localization for the new combined method.
The multigrid preconditioned conjugate gradient method
NASA Technical Reports Server (NTRS)
Tatebe, Osamu
1993-01-01
A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner for the PCG method, is proposed. The multigrid method has inherent high parallelism and improves the convergence of long-wavelength components, which is important in iterative methods. By using this method as a preconditioner for the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, a necessary condition that the multigrid method must satisfy in order to meet the requirements of a PCG preconditioner is considered. Next, numerical experiments show the behavior of the MGCG method and that the MGCG method is superior to both the ICCG method and the multigrid method in terms of fast convergence and high parallelism. This fast convergence is understood in terms of the eigenvalue analysis of the preconditioned matrix. From this observation of the multigrid preconditioner, it is realized that the MGCG method converges in very few iterations and that the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
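To make the idea concrete, the sketch below applies a conjugate gradient solver preconditioned by one simple two-grid cycle (weighted-Jacobi smoothing plus a coarse-grid correction) to a 1D Poisson matrix. It is an illustrative stand-in for the full multigrid V-cycle preconditioner described above, not the MGCG implementation from the report; the dense matrices, problem size, smoothing parameters and grid-transfer operators are assumptions chosen for brevity.

```python
import numpy as np

def poisson_1d(n):
    """Tridiagonal 1D Poisson matrix (Dirichlet boundaries)."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

def two_grid_preconditioner(A, r, nu=2, omega=2.0 / 3.0):
    """One two-grid cycle (weighted-Jacobi smoothing + coarse correction)
    applied to A z = r; a stand-in for a full multigrid V-cycle."""
    n = len(r)
    D = np.diag(A)
    z = np.zeros(n)
    for _ in range(nu):                              # pre-smoothing
        z += omega * (r - A @ z) / D
    nc = (n - 1) // 2                                # coarse grid (n assumed odd)
    P = np.zeros((n, nc))                            # linear-interpolation prolongation
    for j in range(nc):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    R = 0.5 * P.T                                    # restriction
    Ac = R @ A @ P                                   # Galerkin coarse operator
    z += P @ np.linalg.solve(Ac, R @ (r - A @ z))    # coarse-grid correction
    for _ in range(nu):                              # post-smoothing
        z += omega * (r - A @ z) / D
    return z

def mgcg(A, b, tol=1e-10, maxit=100):
    """Conjugate gradient with the two-grid cycle as preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = two_grid_preconditioner(A, r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = two_grid_preconditioner(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 63                                   # odd, so the coarse grid is well defined
A = poisson_1d(n)
b = np.ones(n)
x, iters = mgcg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```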
Energy minimization in medical image analysis: Methodologies and applications.
Zhao, Feng; Xie, Xianghua
2016-02-01
Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous method and discrete method. The former includes Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview on those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
Li, Xuelin; Tang, Jinfa; Meng, Fei; Li, Chunxiao; Xie, Yanming
2011-10-01
To study the adverse reactions of Danhong injection with four methods (the central monitoring method, chart review method, literature study method and spontaneous reporting method), to compare the differences between them, and to explore an appropriate method for carrying out post-marketing safety evaluation of traditional Chinese medicine injections. Adverse reaction questionnaires were drawn up for the central monitoring, chart review and literature study methods, and information on adverse reactions was collected over a defined period. Danhong injection adverse reaction information from the Henan Province spontaneous reporting system was collected with the spontaneous reporting method. The data were then summarized and analyzed descriptively. With the central monitoring, chart review, literature study and spontaneous reporting methods, the rates of adverse events were 0.993%, 0.336%, 0.515% and 0.067%, respectively. Cyanosis, arrhythmia, hypotension, sweating, erythema, hemorrhagic dermatitis, rash, irritability, bleeding gums, toothache, tinnitus, asthma, elevated aminotransferases, constipation and pain were newly discovered adverse reactions. The central monitoring method is the appropriate method for carrying out post-marketing safety evaluation of traditional Chinese medicine injections, as it can objectively reflect real-world clinical usage.
Ensemble Methods for MiRNA Target Prediction from Expression Data
Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong
2015-01-01
Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448
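One simple way an ensemble can integrate individual predictors is rank averaging (a Borda-style combination): each method ranks the candidate targets, and the ensemble orders the targets by mean rank. The sketch below illustrates that idea only; the method names, score values and targets are hypothetical placeholders, and the paper's ensemble may combine methods differently.

```python
import numpy as np

# Hypothetical per-method scores for candidate mRNA targets of one miRNA
# (higher score = stronger predicted interaction); illustrative values only.
scores = {
    "pearson": np.array([0.82, 0.10, 0.55, 0.31, 0.77]),
    "ida":     np.array([0.40, 0.05, 0.60, 0.20, 0.90]),
    "lasso":   np.array([0.70, 0.00, 0.30, 0.10, 0.65]),
}
targets = ["mRNA_A", "mRNA_B", "mRNA_C", "mRNA_D", "mRNA_E"]

def borda_ensemble(score_dict):
    """Average the per-method ranks (1 = best) to form an ensemble ranking."""
    rank_sum = np.zeros(len(next(iter(score_dict.values()))))
    for s in score_dict.values():
        order = np.argsort(-s)                 # rank 1 for the highest score
        ranks = np.empty_like(order)
        ranks[order] = np.arange(1, len(s) + 1)
        rank_sum += ranks
    return rank_sum / len(score_dict)

avg_rank = borda_ensemble(scores)
for t, r in sorted(zip(targets, avg_rank), key=lambda p: p[1]):
    print(f"{t}: mean rank {r:.2f}")
```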
46 CFR 160.077-5 - Incorporation by reference.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...
46 CFR 160.077-5 - Incorporation by reference.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...
46 CFR 160.077-5 - Incorporation by reference.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...
46 CFR 160.077-5 - Incorporation by reference.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...
Methods for analysis of cracks in three-dimensional solids
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Various analytical and numerical methods used to evaluate the stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed. Classical exact solutions and many of the approximate methods used in 3-D analyses of cracks are reviewed. The exact solutions for embedded elliptic cracks in infinite solids are discussed. The approximate methods reviewed are the finite element methods, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element method, stress difference method, discretization-error method, alternating method, finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used method. The BIE method only needs modeling of the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.
Sharma, Sangita; Neog, Madhurjya; Prajapati, Vipul; Patel, Hiren; Dabhi, Dipti
2010-01-01
Five simple, sensitive, accurate and rapid visible spectrophotometric methods (A, B, C, D and E) have been developed for estimating Amisulpride in pharmaceutical preparations. These are based on the diazotization of Amisulpride with sodium nitrite and hydrochloric acid, followed by coupling with N-(1-naphthyl)ethylenediamine dihydrochloride (Method A), diphenylamine (Method B), beta-naphthol in an alkaline medium (Method C), resorcinol in an alkaline medium (Method D) and chromotropic acid in an alkaline medium (Method E) to form a colored chromogen. The absorption maxima, lambda(max), are at 523 nm for Method A, 382 and 490 nm for Method B, 527 nm for Method C, 521 nm for Method D and 486 nm for Method E. Beer's law was obeyed in the concentration range of 2.5-12.5 microg mL(-1) in Method A, 5-25 and 10-50 microg mL(-1) in Method B, 4-20 microg mL(-1) in Method C, 2.5-12.5 microg mL(-1) in Method D and 5-15 microg mL(-1) in Method E. The results obtained for the proposed methods are in good agreement with labeled amounts, when marketed pharmaceutical preparations were analyzed.
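As an illustration of how such spectrophotometric determinations are quantified, the short sketch below fits a Beer's law calibration line for a single method and back-calculates an unknown concentration. The absorbance values are hypothetical; only the wavelength and linear range are taken from the abstract.

```python
import numpy as np

# Illustrative calibration data for Method A (hypothetical absorbances):
# measured at 523 nm over the stated linear range of 2.5-12.5 ug/mL.
conc = np.array([2.5, 5.0, 7.5, 10.0, 12.5])       # micrograms per mL
absorbance = np.array([0.12, 0.24, 0.37, 0.49, 0.61])

# Beer's law: A = (epsilon * b) * c + intercept -> fit a straight line
slope, intercept = np.polyfit(conc, absorbance, 1)

# Predict the concentration of an unknown sample from its absorbance
a_unknown = 0.30
c_unknown = (a_unknown - intercept) / slope
print(f"slope={slope:.4f}, intercept={intercept:.4f}, unknown ~ {c_unknown:.2f} ug/mL")
```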
Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.
Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping
2017-06-27
Implicit shape-based reconstruction methods in fluorescence molecular tomography (FMT) are capable of achieving higher image clarity than image-based reconstruction methods. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the use of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and the reconstruction can then be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on the β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure the safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
Method Engineering: A Service-Oriented Approach
NASA Astrophysics Data System (ADS)
Cauvet, Corine
In the past, a large variety of methods have been published ranging from very generic frameworks to methods for specific information systems. Method Engineering has emerged as a research discipline for designing, constructing and adapting methods for Information Systems development. Several approaches have been proposed as paradigms in method engineering. The meta modeling approach provides means for building methods by instantiation, the component-based approach aims at supporting the development of methods by using modularization constructs such as method fragments, method chunks and method components. This chapter presents an approach (SO2M) for method engineering based on the service paradigm. We consider services as autonomous computational entities that are self-describing, self-configuring and self-adapting. They can be described, published, discovered and dynamically composed for processing a consumer's demand (a developer's requirement). The method service concept is proposed to capture a development process fragment for achieving a goal. Goal orientation in service specification and the principle of service dynamic composition support method construction and method adaptation to different development contexts.
Ramadan, Nesrin K; El-Ragehy, Nariman A; Ragab, Mona T; El-Zeany, Badr A
2015-02-25
Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method ((1)DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method ((3)D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ramadan, Nesrin K.; El-Ragehy, Nariman A.; Ragab, Mona T.; El-Zeany, Badr A.
2015-02-01
Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method (1DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method (3D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision.
NASA Astrophysics Data System (ADS)
Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia
2015-02-01
Simple, accurate and selective methods have been developed and validated for the simultaneous determination of a ternary mixture of Chlorpheniramine maleate (CPM), Pseudoephedrine HCl (PSE) and Ibuprofen (IBF) in tablet dosage form. Four univariate methods manipulating ratio spectra were applied: method A is the double divisor-ratio difference spectrophotometric method (DD-RD); method B is the double divisor-derivative ratio spectrophotometric method; method C is the derivative ratio spectrum-zero crossing method (DRZC); and method D is mean centering of ratio spectra (MCR). Two multivariate methods were also developed and validated: methods E and F are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to a commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with the official methods, where no significant difference was observed regarding accuracy and precision.
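For the multivariate methods mentioned above, a minimal sketch of PLS calibration on synthetic overlapping spectra is shown below, using scikit-learn's PLSRegression. The Gaussian "component spectra", concentrations and noise level are invented for illustration and do not reproduce the chemometric models of the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Three overlapping Gaussian bands stand in for the three components;
# mixture spectra are linear combinations plus noise (all values hypothetical).
wavelengths = np.linspace(220, 320, 101)
def band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
pure = np.vstack([band(245, 12), band(258, 15), band(270, 10)])

C_train = rng.uniform(0.1, 1.0, size=(30, 3))     # training concentrations
X_train = C_train @ pure + 0.005 * rng.standard_normal((30, len(wavelengths)))

pls = PLSRegression(n_components=3)
pls.fit(X_train, C_train)

C_test = rng.uniform(0.1, 1.0, size=(5, 3))
X_test = C_test @ pure + 0.005 * rng.standard_normal((5, len(wavelengths)))
print(np.round(pls.predict(X_test), 3))           # predicted concentrations
print(np.round(C_test, 3))                        # true concentrations
```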
Methods for elimination of dampness in building walls
NASA Astrophysics Data System (ADS)
Campian, Cristina; Pop, Maria
2016-06-01
Eliminating dampness in building walls is a very sensitive and costly problem. Many methods are used, such as chemical methods, electro-osmotic methods and physical methods. The RECON method is a representative and sustainable method in Romania. Italy uses the most radical of all the methods: the technology consists of cutting the brick walls, inserting a special plastic sheet and injecting a pre-mixed anti-shrinkage mortar.
A comparison of several methods of solving nonlinear regression groundwater flow problems
Cooley, Richard L.
1985-01-01
Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
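The flavor of such a comparison can be reproduced on a toy nonlinear-regression problem with off-the-shelf optimizers: a Levenberg-Marquardt least-squares solver (close in spirit to the Marquardt method), a quasi-Newton (BFGS) minimizer, and a nonlinear conjugate gradient minimizer. The exponential-decay model and data below are assumptions for illustration only and are unrelated to the groundwater flow test problems.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Toy nonlinear-regression problem: fit y = p0 * exp(-p1 * t) + p2 to noisy data.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 40)
true = np.array([2.0, 0.6, 0.5])
y = true[0] * np.exp(-true[1] * t) + true[2] + 0.02 * rng.standard_normal(t.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * t) + p[2] - y

def sse(p):
    r = residuals(p)
    return 0.5 * r @ r

x0 = np.array([1.0, 1.0, 0.0])

lm = least_squares(residuals, x0, method="lm")    # Levenberg-Marquardt
bfgs = minimize(sse, x0, method="BFGS")           # quasi-Newton
cg = minimize(sse, x0, method="CG")               # nonlinear conjugate gradient

for name, p in [("LM", lm.x), ("BFGS", bfgs.x), ("CG", cg.x)]:
    print(name, np.round(p, 3))
```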
Hybrid DFP-CG method for solving unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa
2017-09-01
The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as an approximation of the Hessian for this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
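For reference, a sketch of the classical DFP inverse-Hessian update inside a simple backtracking line-search loop is given below; the hybrid DFP-CG search direction of the paper is not reproduced, and the test function, starting point and parameters are illustrative assumptions.

```python
import numpy as np

def dfp(f, grad, x0, iters=100, tol=1e-8):
    """Plain DFP quasi-Newton with a backtracking (Armijo) line search.
    (A sketch of the classical building block only.)"""
    x = x0.astype(float)
    H = np.eye(len(x))                     # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                         # search direction
        alpha, c = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= 0.5
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if s @ y > 1e-12:                  # DFP update of the inverse Hessian
            H = H + np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
        x, g = x_new, g_new
    return x

# Example: the Rosenbrock test function
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
x_star = dfp(f, grad, np.array([-1.2, 1.0]), iters=500)
print(np.round(x_star, 4), np.linalg.norm(grad(x_star)))
```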
Generalization of the Engineering Method to the UNIVERSAL METHOD.
ERIC Educational Resources Information Center
Koen, Billy Vaughn
1987-01-01
Proposes that there is a universal method for all realms of knowledge. Reviews Descartes's definition of the universal method, the engineering definition, and the philosophical basis for the universal method. Contends that the engineering method best represents the universal method. (ML)
Colloidal Electrolytes and the Critical Micelle Concentration
ERIC Educational Resources Information Center
Knowlton, L. G.
1970-01-01
Describes methods for determining the Critical Micelle Concentration of Colloidal Electrolytes; methods described are: (1) methods based on Colligative Properties, (2) methods based on the Electrical Conductivity of Colloidal Electrolytic Solutions, (3) Dye Method, (4) Dye Solubilization Method, and (5) Surface Tension Method. (BR)
Huang, Jianhua
2012-07-01
There are three methods for calculating thermal insulation of clothing measured with a thermal manikin, i.e. the global method, the serial method, and the parallel method. Under the condition of homogeneous clothing insulation, these three methods yield the same insulation values. If the local heat flux is uniform over the manikin body, the global and serial methods provide the same insulation value. In most cases, the serial method gives a higher insulation value than the global method. There is a possibility that the insulation value from the serial method is lower than the value from the global method. The serial method always gives higher insulation value than the parallel method. The insulation value from the parallel method is higher or lower than the value from the global method, depending on the relationship between the heat loss distribution and the surface temperatures. Under the circumstance of uniform surface temperature distribution over the manikin body, the global and parallel methods give the same insulation value. If the constant surface temperature mode is used in the manikin test, the parallel method can be used to calculate the thermal insulation of clothing. If the constant heat flux mode is used in the manikin test, the serial method can be used to calculate the thermal insulation of clothing. The global method should be used for calculating thermal insulation of clothing for all manikin control modes, especially for thermal comfort regulation mode. The global method should be chosen by clothing manufacturers for labelling their products. The serial and parallel methods provide more information with respect to the different parts of clothing.
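A small sketch of the three calculations is shown below, using the commonly cited formulas (area-weighted mean surface temperature over area-weighted heat loss for the global method, area-weighted local resistances for the serial method, and area-weighted local conductances for the parallel method), which are consistent with the relationships stated above. The segment areas, temperatures and heat fluxes are hypothetical values, not manikin data from the paper.

```python
import numpy as np

# Hypothetical per-segment manikin data
area = np.array([0.15, 0.20, 0.25, 0.30])   # segment surface areas (m^2)
t_s  = np.array([33.5, 34.0, 33.0, 34.5])   # local surface temperatures (deg C)
q    = np.array([40.0, 35.0, 50.0, 30.0])   # local heat loss (W/m^2)
t_a  = 20.0                                 # ambient air temperature (deg C)

f = area / area.sum()                       # area fractions

# Global: area-weighted mean temperature over area-weighted heat loss
i_global = (np.sum(f * t_s) - t_a) / np.sum(f * q)
# Serial: area-weighted sum of local resistances
i_serial = np.sum(f * (t_s - t_a) / q)
# Parallel: area-weighted sum of local conductances, then invert
i_parallel = 1.0 / np.sum(f * q / (t_s - t_a))

print(f"global={i_global:.4f}  serial={i_serial:.4f}  parallel={i_parallel:.4f}  (m2.K/W)")
```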
Comparison of five methods for the estimation of methane production from vented in vitro systems.
Alvarez Hess, P S; Eckard, R J; Jacobs, J L; Hannah, M C; Moate, P J
2018-05-23
There are several methods for estimating methane production (MP) from feedstuffs in vented in vitro systems. One method (A; "gold standard") measures methane proportions in the incubation bottle's head space (HS) and in the vented gas collected in gas bags. Four other methods (B, C, D and E) measure methane proportion in a single gas sample from HS. Method B assumes the same methane proportion in the vented gas as in HS, method C assumes a constant methane to carbon dioxide ratio, method D has been developed based on empirical data and method E assumes constant individual venting volumes. This study aimed to compare the MP predictions from these methods to those of the gold standard method under different incubation scenarios, to validate these methods based on their concordance with the gold standard method. Methods C, D and E had greater concordance (0.85, 0.88 and 0.81), lower root mean square error (RMSE) (0.80, 0.72 and 0.85) and lower mean bias (0.20, 0.35, -0.35) with the gold standard than did method B (concordance 0.67, RMSE 1.49 and mean bias 1.26). Methods D and E were simpler to perform than method C, and method D was slightly more accurate than method E. Based on precision, accuracy and simplicity of implementation, it is recommended that, when method A cannot be used, methods D and E are preferred to estimate MP from vented in vitro systems. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling
2017-11-01
Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can be greatly beneficial for improving the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used in the identification of blood species with spectroscopic methods, and the Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, the PLSDA method and the LSSVM method were used for human blood discrimination. Compared with the results of the PLSDA method, the LSSVM method enhanced the performance of the identification models. The overall results showed that the LSSVM method is more feasible for identifying human and animal blood species, and sufficiently demonstrated that LSSVM is a reliable and robust method for human blood identification that can be more effective and accurate.
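A minimal sketch of the Kennard-Stone selection step mentioned above is given below: starting from the two most distant samples, it repeatedly adds the sample farthest from the already selected set. The random "spectra" are placeholders, and the Euclidean distance metric and calibration-set size are assumptions.

```python
import numpy as np

def kennard_stone(X, n_select):
    """Kennard-Stone sample selection: start from the two most distant samples,
    then repeatedly add the sample whose minimum distance to the selected set
    is largest (a common way to build a representative calibration set)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    selected = list(np.unravel_index(np.argmax(d), d.shape))    # two farthest samples
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_select:
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(min_d))]
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

# Illustrative use on random "spectra" (rows = samples, columns = variables)
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
calibration_idx = kennard_stone(X, 8)
print(calibration_idx)
```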
A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.
Liu, Chun-Han; Liu, Lian
2017-05-08
BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We proposed a combined method by merging existing methods. First, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expression method, and the pathway network approach), and differential pathways were evaluated by setting weight thresholds. Subsequently, we combined all pathways with a rank-based algorithm and called the method the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS Pathways obtained from different methods were also different. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solved the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method that combines four existing methods based on a rank product algorithm, and identified 13 significant differential pathways with it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.
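As an illustration of a rank-product-style combination, the sketch below aggregates hypothetical pathway rankings from four methods into a single geometric-mean rank. The pathway names and ranks are invented and do not reproduce the study's data or results.

```python
import numpy as np

# Hypothetical example: each method ranks the same candidate pathways
# (1 = most significant). The combined score is the rank product across methods.
pathways = ["cell cycle", "immune system", "metabolism", "apoptosis", "axon guidance"]
ranks = np.array([
    [1, 3, 2, 5, 4],   # method 1 (e.g. DAVID-style enrichment)
    [2, 1, 3, 4, 5],   # method 2
    [1, 2, 4, 3, 5],   # method 3
    [3, 1, 2, 5, 4],   # method 4
], dtype=float)

rank_product = ranks.prod(axis=0) ** (1.0 / ranks.shape[0])  # geometric mean of ranks
order = np.argsort(rank_product)
for i in order:
    print(f"{pathways[i]:15s} rank product = {rank_product[i]:.2f}")
```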
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
NASA Astrophysics Data System (ADS)
Li, Yanran; Chen, Duo; Li, Li; Zhang, Jiwei; Li, Guang; Liu, Hongxia
2017-11-01
GIS (gas insulated switchgear) is an important piece of equipment in the power system. Partial discharge plays an important role in assessing the insulation performance of GIS. The UHF method and the ultrasonic method are frequently used in partial discharge (PD) detection for GIS. However, few studies have been conducted comparing these two methods. From the viewpoint of safety, it is necessary to investigate the UHF method and the ultrasonic method for partial discharge in GIS. This paper presents a study aimed at clarifying the effectiveness of the UHF method and the ultrasonic method for partial discharge caused by free metal particles in GIS. Partial discharge tests were performed in a laboratory-simulated environment. The obtained results show the anti-interference capability of signal detection and the accuracy of fault localization for the UHF method and the ultrasonic method. A new method based on the UHF method and the ultrasonic method for PD detection in GIS is proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of localization.
Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S
2008-05-09
Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (in situ or one-step method, saponification method, classic method and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, showing higher variation than the former methods. The combination of extraction and methylation steps had great recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However the classic method would be the method of choice for the determination of the different lipid classes.
26 CFR 1.381(c)(5)-1 - Inventories.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...
26 CFR 1.381(c)(5)-1 - Inventories.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...
46 CFR 160.076-11 - Incorporation by reference.
Code of Federal Regulations, 2011 CFR
2011-10-01
... following methods: (1) Method 5100, Strength and Elongation, Breaking of Woven Cloth; Grab Method, 160.076-25; (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method, 160.076-25; (3) Method 5134, Strength of Cloth, Tearing; Tongue Method, 160.076-25. Underwriters Laboratories (UL) Underwriters...
Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study
Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M
2017-01-01
Background The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). Conclusions In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. PMID:28249833
Interior-Point Methods for Linear Programming: A Review
ERIC Educational Resources Information Center
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
The Relation of Finite Element and Finite Difference Methods
NASA Technical Reports Server (NTRS)
Vinokur, M.
1976-01-01
Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of independent variable, while finite element methods emphasize the discretization of dependent variable (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
[Baseflow separation methods in hydrological process research: a review].
Xu, Lei-Lei; Liu, Jing-Lin; Jin, Chang-Jie; Wang, An-Zhi; Guan, De-Xin; Wu, Jia-Bing; Yuan, Feng-Hui
2011-11-01
Baseflow separation research is regarded as one of the most important and difficult issues in hydrology and ecohydrology, but it lacks unified standards in its concepts and methods. This paper introduced the theories of baseflow separation based on the definitions of baseflow components, and analyzed the development of different baseflow separation methods. Among the methods developed, the graphical separation method is simple and applicable but arbitrary, the balance method accords with hydrological mechanisms but is difficult to apply, whereas the time-series separation method and the isotopic method can overcome the subjective and arbitrary defects of graphical separation and thus obtain the baseflow process quickly and efficiently. In recent years, hydrological modeling, digital filtering, and isotopic methods have been the main methods used for baseflow separation.
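As a concrete example of the digital filtering approach mentioned above, the sketch below applies a single forward pass of the Lyne-Hollick recursive filter to a short synthetic hydrograph. The filter parameter, the single-pass simplification and the streamflow values are assumptions for illustration; in practice the filter is usually run in several forward and backward passes.

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick recursive digital filter,
    a widely used automated baseflow-separation method."""
    quick = np.zeros_like(q, dtype=float)
    for i in range(1, len(q)):
        quick[i] = alpha * quick[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        quick[i] = min(max(quick[i], 0.0), q[i])   # keep quickflow within [0, Q]
    return q - quick                               # baseflow = total flow - quickflow

# Illustrative daily streamflow series (m^3/s) with a storm peak
q = np.array([5, 5, 6, 20, 45, 30, 18, 12, 9, 7, 6, 6], dtype=float)
print(np.round(lyne_hollick_baseflow(q), 2))
```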
Semi top-down method combined with earth-bank, an effective method for basement construction.
NASA Astrophysics Data System (ADS)
Tuan, B. Q.; Tam, Ng M.
2018-04-01
Choosing an appropriate method of deep excavation plays a decisive role not only in the technical success but also in the economics of a construction project. At present, two key methods are mainly used: the "Bottom-up" and "Top-down" construction methods. This paper presents another construction method, the "Semi top-down method combined with earth-bank", which takes the advantages and limits the weaknesses of the above methods. The Bottom-up method is improved by using an earth-bank to stabilize the retaining walls instead of bracing steel struts. The Top-down method is improved by using the open-cut method for half of the earthwork quantities.
Klous, Miriam; Klous, Sander
2010-07-01
The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies is presented, interconnected in spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.
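A compact sketch of the single-segment least-squares building block referenced above (estimating a rotation and translation from marker coordinates, here via the SVD-based construction) is given below. It is not the chain-method itself, and the marker set, noise level and test transform are invented for illustration.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t mapping marker set P onto Q
    (both n x 3), solved with the SVD-based (Kabsch-type) construction."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                     # cross-dispersion matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Illustrative check: rotate/translate noisy markers and recover the transform
rng = np.random.default_rng(0)
P = rng.standard_normal((6, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.3]) + 0.01 * rng.standard_normal((6, 3))
R_est, t_est = rigid_fit(P, Q)
print(np.round(R_est, 3))
```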
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham
2015-02-01
This work presents the application of different spectrophotometric techniques based on two wavelengths for the determination of severely overlapped spectral components in a binary mixture without prior separation. Four novel spectrophotometric methods were developed namely: induced dual wavelength method (IDW), dual wavelength resolution technique (DWRT), advanced amplitude modulation method (AAM) and induced amplitude modulation method (IAM). The results of the novel methods were compared to that of three well-established methods which were: dual wavelength method (DW), Vierordt's method (VD) and bivariate method (BV). The developed methods were applied for the analysis of the binary mixture of hydrocortisone acetate (HCA) and fusidic acid (FSA) formulated as topical cream accompanied by the determination of methyl paraben and propyl paraben present as preservatives. The specificity of the novel methods was investigated by analyzing laboratory prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones where no significant difference was observed. No difference was observed between the obtained results when compared to the reported HPLC method, which proved that the developed methods could be alternative to HPLC techniques in quality control laboratories.
2014-01-01
In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the search methods for the critical slip surface is the Genetic Algorithm (GA), while the method used to calculate the slope safety factor is Fellenius' method of slices. However, GA needs to be validated with more numerical tests, and Fellenius' method of slices is only an approximate method, much like the finite element method. This paper proposes a new way to determine the minimum slope safety factor: the safety factor is computed from an analytical solution, and the critical slip surface is searched for with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' method of slices. The Genetic-Traversal Random Method uses random picking to implement mutation. A computer program that searches automatically was developed for the Genetic-Traversal Random Method. After comparison with other methods, such as the SLOPE/W software, the results indicate that the Genetic-Traversal Random Search Method can give a very low safety factor, about half of that given by the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
Feldsine, Philip T; Leung, Stephanie C; Lienau, Andrew H; Mui, Linda A; Townsend, David E
2003-01-01
The relative efficacy of the SimPlate Total Plate Count-Color Indicator (TPC-CI) method (SimPlate 35 degrees C) was compared with the AOAC Official Method 966.23 (AOAC 35 degrees C) for enumeration of total aerobic microorganisms in foods. The SimPlate TPC-CI method, incubated at 30 degrees C (SimPlate 30 degrees C), was also compared with the International Organization for Standardization (ISO) 4833 method (ISO 30 degrees C). Six food types were analyzed: ground black pepper, flour, nut meats, frozen hamburger patties, frozen fruits, and fresh vegetables. All foods tested were naturally contaminated. Nineteen laboratories throughout North America and Europe participated in the study. Three method comparisons were conducted. In general, there was <0.3 mean log count difference in recovery among the SimPlate methods and their corresponding reference methods. Mean log counts between the 2 reference methods were also very similar. Repeatability (Sr) and reproducibility (SR) standard deviations were similar among the 3 method comparisons. The SimPlate method (35 degrees C) and the AOAC method were comparable for enumerating total aerobic microorganisms in foods. Similarly, the SimPlate method (30 degrees C) was comparable to the ISO method when samples were prepared and incubated according to the ISO method.
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
The 3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the difference between them. The main objective was to analyze the computational time required by both methods for different grid sizes and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take nearly too much processing time to converge. Yet, the PJ method reduces the computational time to some extent for large grid sizes.
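A minimal sketch of the two sequential iterations on a small 3D grid is shown below. It uses a seven-point finite-difference stencil with zero boundary potential and a uniform source term, which are assumptions chosen for brevity rather than the prototype micropump geometry, and it does not include the parallel Jacobi implementation.

```python
import numpy as np

def solve_poisson_3d(n=8, rho=1.0, tol=1e-6, max_iter=20000, method="jacobi"):
    """Seven-point finite-difference iteration for Laplace(phi) = -rho on a unit
    cube with zero-potential boundaries; a sketch of the SJ and SGS schemes."""
    h = 1.0 / (n + 1)
    phi = np.zeros((n + 2, n + 2, n + 2))
    for it in range(max_iter):
        old = phi.copy()
        src = old if method == "jacobi" else phi   # Gauss-Seidel updates in place
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                for k in range(1, n + 1):
                    phi[i, j, k] = (src[i-1, j, k] + src[i+1, j, k] +
                                    src[i, j-1, k] + src[i, j+1, k] +
                                    src[i, j, k-1] + src[i, j, k+1] +
                                    h * h * rho) / 6.0
        if np.max(np.abs(phi - old)) < tol:
            return phi, it + 1
    return phi, max_iter

for m in ("jacobi", "gauss-seidel"):
    _, iters = solve_poisson_3d(method=m)
    print(m, iters)   # Gauss-Seidel typically needs fewer sweeps than Jacobi
```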
Sun, Shi-Hua; Jia, Cun-Xian
2014-01-01
Background This study aims to describe the specific characteristics of completed suicides by violent methods and non-violent methods in rural Chinese population, and to explore the related factors for corresponding methods. Methods Data of this study came from investigation of 199 completed suicide cases and their paired controls of rural areas in three different counties in Shandong, China, by interviewing one informant of each subject using the method of Psychological Autopsy (PA). Results There were 78 (39.2%) suicides with violent methods and 121 (60.8%) suicides with non-violent methods. Ingesting pesticides, as a non-violent method, appeared to be the most common suicide method (103, 51.8%). Hanging (73 cases, 36.7%) and drowning (5 cases, 2.5%) were the only violent methods observed. Storage of pesticides at home and higher suicide intent score were significantly associated with choice of violent methods while committing suicide. Risk factors related to suicide death included negative life events and hopelessness. Conclusions Suicide with violent methods has different factors from suicide with non-violent methods. Suicide methods should be considered in suicide prevention and intervention strategies. PMID:25111835
A review of propeller noise prediction methodology: 1919-1994
NASA Technical Reports Server (NTRS)
Metzger, F. Bruce
1995-01-01
This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available which vary markedly in complexity. Deficiencies in accuracy of methods in many cases may be related, not to the methods themselves, but the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy to use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods to the data base; and (7) make the methods widely available and provide training in their use.
A different approach to estimate nonlinear regression model using numerical methods
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper is concerned with computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis, for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely the Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
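As an illustration of the Gauss-Newton iteration discussed above, the sketch below solves a small nonlinear least-squares fit by repeatedly solving the normal equations J^T J step = J^T r. The saturating-exponential model and synthetic data are assumptions for illustration, not the estimation problems treated in the paper.

```python
import numpy as np

def gauss_newton(residual, jacobian, beta0, iters=20, tol=1e-10):
    """Textbook Gauss-Newton iteration for nonlinear least squares:
    beta <- beta - (J^T J)^{-1} J^T r, solved as a linear system each step."""
    beta = beta0.astype(float)
    for _ in range(iters):
        r = residual(beta)
        J = jacobian(beta)
        step = np.linalg.solve(J.T @ J, J.T @ r)
        beta = beta - step
        if np.linalg.norm(step) < tol:
            break
    return beta

# Illustrative model y = b0 * (1 - exp(-b1 * x)) fitted to synthetic data
rng = np.random.default_rng(2)
x = np.linspace(0.1, 5, 25)
b_true = np.array([3.0, 0.8])
y = b_true[0] * (1 - np.exp(-b_true[1] * x)) + 0.01 * rng.standard_normal(x.size)

residual = lambda b: b[0] * (1 - np.exp(-b[1] * x)) - y
jacobian = lambda b: np.column_stack([1 - np.exp(-b[1] * x),
                                      b[0] * x * np.exp(-b[1] * x)])
print(np.round(gauss_newton(residual, jacobian, np.array([1.0, 1.0])), 3))
```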
Sorting protein decoys by machine-learning-to-rank
Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen
2016-01-01
Much progress has been made in protein structure prediction during the last few decades. As the predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on the learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. The five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms other methods whose outputs are taken as features of the proposed method, and the quasi single-model method can further enhance the performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Salissou, Yacoubou; Panneton, Raymond
2010-11-01
Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single-frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method that is not well known or widely discussed in the literature, despite its great potential, is also discussed. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more compliant with the ISO 10534-2 standard. Glass wool, melamine foam and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges, the alternative method yields results that are comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. However, in the low frequency range, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.
Methods for environmental change; an exploratory study.
Kok, Gerjo; Gottlieb, Nell H; Panne, Robert; Smerecnik, Chris
2012-11-28
While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change; especially on how these are composed of methods for individual change ('Bundling') and how, within one environmental level, organizations, methods differ when directed at the management ('At') or applied by the management ('From'). The first part of this online survey dealt with examining the 'bundling' of individual level methods into methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed 'at' an organization (for instance, by a health promoter) versus 'from' within an organization itself. All of the 20 respondents are experts in the field of health promotion. Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches when targeting a level versus being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled, and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method.
A comparison theorem for the SOR iterative method
NASA Astrophysics Data System (ADS)
Sun, Li-Ying
2005-09-01
In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than that of the SOR method and the Gauss-Seidel method if the relaxation parameter ω ∈ (0, 1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are improved.
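For readers who want to experiment with the comparison, a minimal SOR sketch is given below (ω = 1 recovers Gauss-Seidel). The small diagonally dominant test system and the choice ω = 1.1 are illustrative assumptions; the IMGS modification analyzed in the paper is not implemented here.

```python
import numpy as np

def sor(A, b, omega, x0=None, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (A must have a nonzero diagonal)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Small diagonally dominant test system (illustrative only)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(sor(A, b, omega=1.1))   # omega = 1.0 gives the Gauss-Seidel iteration
```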
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches provide a large design space with few variables. Parametric methods that are commonly used nowadays are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares them to assess their abilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the Free-form deformation method. Methods extended from two-dimensional parametric methods have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, the actual design process requires a flexible choice among them to suit the subsequent optimization procedure.
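As a small illustration of one of the two-dimensional approaches named above, the sketch below evaluates a Class/Shape function Transformation (CST) surface built from a Bernstein-polynomial shape function. The class exponents (0.5, 1.0) correspond to the usual round-nose, sharp-trailing-edge choice; the shape coefficients are invented for illustration and do not describe any particular airfoil.

```python
import numpy as np
from math import comb

def cst_surface(x, coeffs, n1=0.5, n2=1.0, dz_te=0.0):
    """Class/Shape function Transformation: y(x) = C(x) * S(x) + x * dz_te."""
    x = np.asarray(x, dtype=float)
    n = len(coeffs) - 1
    class_fn = x**n1 * (1.0 - x)**n2                       # round nose, sharp trailing edge
    shape_fn = sum(a * comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, a in enumerate(coeffs))          # Bernstein-polynomial basis
    return class_fn * shape_fn + x * dz_te

x = np.linspace(0.0, 1.0, 101)
upper = cst_surface(x, coeffs=[0.17, 0.16, 0.15, 0.14])    # illustrative coefficients
lower = -cst_surface(x, coeffs=[0.14, 0.12, 0.10, 0.08])
print(float(upper.max()), float(lower.min()))
```

Each Bernstein coefficient acts as a design variable, which is why a handful of values already spans a useful airfoil design space.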
Wan, Xiaomin; Peng, Liubao; Li, Yuanjian
2015-01-01
Background In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. Methods A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. Results All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. Conclusions The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method. PMID:25803659
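A minimal sketch of the idea the four methods share, recovering a parametric survival model from published curve data and integrating it to obtain mean survival, is shown below: a Weibull model is fitted by least squares to a handful of digitized survival-curve points. The points are invented for illustration, and none of the four published methods (least squares, graphical, Hoyle and Henley, Guyot et al.) is reproduced in detail.

```python
import numpy as np
from math import gamma

# Digitized (time, survival probability) points -- invented for illustration
t = np.array([6.0, 12.0, 18.0, 24.0, 36.0, 48.0])
s = np.array([0.82, 0.65, 0.52, 0.41, 0.27, 0.18])

# Weibull survival S(t) = exp(-(t/lam)**k) becomes linear after a log(-log) transform:
# log(-log S) = k*log(t) - k*log(lam), so fit a straight line by least squares.
X = np.column_stack([np.log(t), np.ones_like(t)])
y = np.log(-np.log(s))
(k, c), *_ = np.linalg.lstsq(X, y, rcond=None)
lam = np.exp(-c / k)

mean_survival = lam * gamma(1.0 + 1.0 / k)   # mean of a Weibull distribution
print(f"shape k = {k:.2f}, scale = {lam:.1f}, mean survival = {mean_survival:.1f}")
```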
NASA Astrophysics Data System (ADS)
Jaishree, J.; Haworth, D. C.
2012-06-01
Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.
AN EULERIAN-LAGRANGIAN LOCALIZED ADJOINT METHOD FOR THE ADVECTION-DIFFUSION EQUATION
Many numerical methods use characteristic analysis to accommodate the advective component of transport. Such characteristic methods include Eulerian-Lagrangian methods (ELM), modified method of characteristics (MMOC), and operator splitting methods. A generalization of characteri...
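A minimal sketch of the characteristic-tracking idea behind such methods is shown below for one-dimensional advection-diffusion: each grid node is traced back along its characteristic and the field is interpolated there (semi-Lagrangian advection, in the spirit of MMOC), after which diffusion is applied to the advected field. The periodic grid, linear interpolation and explicit diffusion step are illustrative simplifications and do not reproduce the ELLAM formulation named in the title.

```python
import numpy as np

def mmoc_step(c, u, D, dx, dt):
    """One characteristic-tracking step for c_t + u c_x = D c_xx on a periodic grid.

    Advection: trace each node back to x - u*dt and interpolate linearly
    (semi-Lagrangian, as in MMOC).  Diffusion: explicit finite difference.
    """
    n = c.size
    x = np.arange(n) * dx
    x_foot = (x - u * dt) % (n * dx)            # foot of the characteristic
    idx = (x_foot // dx).astype(int)
    w = x_foot / dx - idx
    c_adv = (1.0 - w) * c[idx] + w * c[(idx + 1) % n]
    lap = (np.roll(c_adv, -1) - 2.0 * c_adv + np.roll(c_adv, 1)) / dx**2
    return c_adv + dt * D * lap

# Advect and diffuse a Gaussian pulse (illustrative parameters)
n, dx, dt = 200, 0.01, 0.005
x = np.arange(n) * dx
c = np.exp(-((x - 0.5) / 0.05) ** 2)
for _ in range(100):
    c = mmoc_step(c, u=1.0, D=1e-4, dx=dx, dt=dt)
print(float(c.max()), float(x[c.argmax()]))     # pulse has moved to about x = 1.0
```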
Capital investment analysis: three methods.
Gapenski, L C
1993-08-01
Three cash flow/discount rate methods can be used when conducting capital budgeting financial analyses: the net operating cash flow method, the net cash flow to investors method, and the net cash flow to equity holders method. The three methods differ in how the financing mix and the benefits of debt financing are incorporated. This article explains the three methods, demonstrates that they are essentially equivalent, and recommends which method to use under specific circumstances.
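A minimal sketch of the first of these approaches, discounting net operating cash flows at the weighted-average cost of capital (WACC), is shown below. The rates, weights and project cash flows are invented for illustration, and the article's demonstration that the three methods are essentially equivalent is not reproduced here.

```python
# Net operating cash flow approach: discount after-tax operating cash flows at the WACC.
# All figures below are invented for illustration.

def wacc(cost_equity, cost_debt, tax_rate, equity_weight):
    """Weighted-average cost of capital with the tax benefit of debt in the rate."""
    debt_weight = 1.0 - equity_weight
    return equity_weight * cost_equity + debt_weight * cost_debt * (1.0 - tax_rate)

def npv(rate, cash_flows):
    """cash_flows[0] is the time-0 outlay (negative); the rest are yearly flows."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

rate = wacc(cost_equity=0.14, cost_debt=0.07, tax_rate=0.30, equity_weight=0.6)
project = [-500_000, 120_000, 140_000, 160_000, 160_000, 150_000]
print(f"WACC = {rate:.3f}, NPV = {npv(rate, project):,.0f}")
```

The other two approaches change which cash flows are discounted (to all investors or to equity holders only) and, correspondingly, which discount rate is used.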
Effective description of a 3D object for photon transportation in Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Suganuma, R.; Ogawa, K.
2000-06-01
Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum-size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that of conventional object description methods, a voxel-based (VB) method and an octree method, in simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method became about 50% of that with the VB method and about 70% of that with the octree method for a high resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with that of the VB and octree methods.
Region of influence regression for estimating the 50-year flood at ungaged sites
Tasker, Gary D.; Hodge, S.A.; Barks, C.S.
1996-01-01
Five methods of developing regional regression models to estimate flood characteristics at ungaged sites in Arkansas are examined. The methods differ in the manner in which the State is divided into subregions. Each successive method (A to E) is computationally more complex than the previous method. Method A makes no subdivision. Methods B and C define two and four geographic subregions, respectively. Method D uses cluster/discriminant analysis to define subregions on the basis of similarities in watershed characteristics. Method E, the new region of influence method, defines a unique subregion for each ungaged site. Split-sample results indicate that, in terms of root-mean-square error, method E (38 percent error) is best. Methods C and D (42 and 41 percent error) were in a virtual tie for second, and methods B (44 percent error) and A (49 percent error) were fourth and fifth best.
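The region of influence idea (method E) can be sketched as follows: for each ungaged site, select the gaged sites nearest in standardized watershed-characteristic space and fit a log-log regression to that subset only. The synthetic sites, the ordinary-least-squares fit and the fixed region size below are illustrative simplifications of the published procedure.

```python
import numpy as np

def region_of_influence_estimate(x_ungaged, X_gaged, q50_gaged, n_influence=20):
    """Estimate the 50-year flood at an ungaged site (sketch of method E).

    Picks the n_influence gaged sites closest to the ungaged site in
    standardized watershed-characteristic space and fits a log-log regression
    to just those sites.  Weighting and GLS refinements used in practice are
    omitted; all data below are synthetic.
    """
    mu, sd = X_gaged.mean(axis=0), X_gaged.std(axis=0)
    d = np.linalg.norm((X_gaged - x_ungaged) / sd, axis=1)   # distance in standardized space
    sel = np.argsort(d)[:n_influence]
    A = np.column_stack([np.ones(n_influence), np.log(X_gaged[sel])])
    coef, *_ = np.linalg.lstsq(A, np.log(q50_gaged[sel]), rcond=None)
    return float(np.exp(np.concatenate(([1.0], np.log(x_ungaged))) @ coef))

# Synthetic gaged sites: drainage area (mi^2) and main-channel slope (ft/mi)
rng = np.random.default_rng(1)
area = rng.uniform(5, 500, 80)
slope = rng.uniform(2, 40, 80)
q50 = 40 * area**0.75 * slope**0.2 * rng.lognormal(0.0, 0.15, 80)

X = np.column_stack([area, slope])
print(region_of_influence_estimate(np.array([120.0, 12.0]), X, q50))
```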
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) it avoids the double-loop iteration algorithm, which generally has large computational complexity, and (2) it accounts for the local concentration of nonlinear deformation observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
Designing Class Methods from Dataflow Diagrams
NASA Astrophysics Data System (ADS)
Shoval, Peretz; Kabeli-Shani, Judith
A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of method design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions, but using a simple linear test function, were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. The two methods were tested on various patch test problems, and both passed the patch tests successfully. The methods were then applied to various beam vibration problems and to problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive, as it is simple, accurate, and robust.
Leapfrog variants of iterative methods for linear algebra equations
NASA Technical Reports Server (NTRS)
Saylor, Paul E.
1988-01-01
Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
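The even-iterate idea can be made concrete for stationary Richardson iteration, where two consecutive steps with a fixed parameter collapse algebraically into one update. The fixed parameter and the small test system below are illustrative assumptions; the Chebyshev parameters and the grand-leap construction discussed in the report are not reproduced.

```python
import numpy as np

def richardson(A, b, alpha, x0, n_steps):
    """Plain Richardson iteration: x <- x + alpha * (b - A x)."""
    x = x0.copy()
    for _ in range(n_steps):
        x = x + alpha * (b - A @ x)
    return x

def richardson_leapfrog(A, b, alpha, x0, n_double_steps):
    """Leapfrog variant: only even-numbered iterates are formed.

    With r = b - A x, two consecutive Richardson steps with the same alpha
    collapse to x <- x + 2*alpha*r - alpha**2 * (A r).
    """
    x = x0.copy()
    for _ in range(n_double_steps):
        r = b - A @ x
        x = x + 2.0 * alpha * r - alpha**2 * (A @ r)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
alpha = 2.0 / 7.0        # 2 / (lambda_max + lambda_min); for a 2x2 matrix that is 2 / trace(A)
x0 = np.zeros(2)
print(richardson(A, b, alpha, x0, 20))
print(richardson_leapfrog(A, b, alpha, x0, 10))   # same iterate, half the loop count
```

With a fixed parameter the leapfrog form reproduces every second iterate exactly; the report's interest lies in doing the analogous collapse when the parameters vary from step to step.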
Development of a Coordinate Transformation method for direct georeferencing in map projection frames
NASA Astrophysics Data System (ADS)
Zhao, Haitao; Zhang, Bing; Wu, Changshan; Zuo, Zhengli; Chen, Zhengchao
2013-03-01
This paper develops a novel Coordinate Transformation method (CT-method), with which the orientation angles (roll, pitch, heading) of the local tangent frame of the GPS/INS system are transformed into those (omega, phi, kappa) of the map projection frame for direct georeferencing (DG). Specifically, the orientation angles in the map projection frame are derived from a sequence of coordinate transformations. The effectiveness of the orientation angle transformation was verified by comparison with DG results obtained from conventional methods (the Legat method and the POSPac method) using empirical data. Moreover, the CT-method was also validated with simulated data. One advantage of the proposed method is that the orientation angles can be acquired simultaneously while calculating the position elements of the exterior orientation (EO) parameters and the auxiliary point coordinates by coordinate transformation. The three methods were demonstrated and compared using empirical data. Empirical results show that the CT-method is as sound and effective as the Legat method. Compared with the POSPac method, the CT-method is more suitable for calculating EO parameters for DG in map projection frames. The DG accuracy of the CT-method and the Legat method is at the same level. DG results of all three methods have systematic errors in height due to inconsistent length projection distortion in the vertical and horizontal components, and these errors can be significantly reduced using the EO height correction technique in Legat's approach. Similar to the results obtained with empirical data, the effectiveness of the CT-method was also proved with simulated data. POSPac method: the method is presented in the Applanix POSPac software technical note (Hutton and Savina, 1997) and is implemented in the POSEO module of the POSPac software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M.; Ma, L.Q.
1998-11-01
It is critical to compare existing sample digestion methods for evaluating soil contamination and remediation. USEPA Methods 3050, 3051, 3051a, and 3052 were used to digest standard reference materials and representative Florida surface soils. Fifteen trace metals (Ag, As, Ba, Be, Cd, Cr, Cu, Hg, Mn, Mo, Ni, Pb, Sb, Se, and Zn) and six macro elements (Al, Ca, Fe, K, Mg, and P) were analyzed. Precise analysis was achieved for all elements except Cd, Mo, Se, and Sb in NIST SRMs 2704 and 2709 by USEPA Methods 3050 and 3051, and for all elements except As, Mo, Sb, and Se in NIST SRM 2711 by USEPA Method 3052. No significant differences were observed for the three NIST SRMs between the microwave-assisted USEPA Methods 3051 and 3051a and the conventional USEPA Method 3050, except for Hg, Sb, and Se. USEPA Method 3051a provided comparable values for NIST SRMs certified using USEPA Method 3050. However, based on method correlation coefficients and elemental recoveries in 40 Florida surface soils, USEPA Method 3051a was an overall better alternative to Method 3050 than was Method 3051. Among the four digestion methods, the microwave-assisted USEPA Method 3052 achieved satisfactory recoveries for all elements except As and Mg using NIST SRM 2711. This total-total digestion method provided greater recoveries for 12 elements (Ag, Be, Cr, Fe, K, Mn, Mo, Ni, Pb, Sb, Se, and Zn), but lower recoveries for Mg, in Florida soils than did the total-recoverable digestion methods.
[Comparative analysis between diatom nitric acid digestion method and plankton 16S rDNA PCR method].
Han, Jun-ge; Wang, Cheng-bao; Li, Xing-biao; Fan, Yan-yan; Feng, Xiang-ping
2013-10-01
To compare and explore the application value of the diatom nitric acid digestion method and the plankton 16S rDNA PCR method for drowning identification. Forty drowning cases from 2010 to 2011 were collected from the Department of Forensic Medicine of Wenzhou Medical University. Samples including lung, kidney, liver and field water from each case were tested with the diatom nitric acid digestion method and the plankton 16S rDNA PCR method, respectively. The diatom nitric acid digestion method and the plankton 16S rDNA PCR method required 20 g and 2 g of each organ, and 15 mL and 1.5 mL of field water, respectively. The inspection time and detection rate were compared between the two methods. The diatom nitric acid digestion method mainly detected two groups of diatoms, Centricae and Pennatae, while the plankton 16S rDNA PCR method amplified a 162 bp band. The average inspection time per case for the diatom nitric acid digestion method was (95.30 +/- 2.78) min, less than the (325.33 +/- 14.18) min of the plankton 16S rDNA PCR method (P < 0.05). The detection rates of the two methods for field water and lung were both 100%. For liver and kidney, the detection rates of the plankton 16S rDNA PCR method were both 80%, higher than the 40% and 30% of the diatom nitric acid digestion method (P < 0.05), respectively. The laboratory testing method needs to be selected appropriately according to the specific circumstances in the forensic appraisal of drowning. Compared with the diatom nitric acid digestion method, the plankton 16S rDNA PCR method has practical value, with such advantages as a smaller sample quantity, more information and high specificity.
Reliable clarity automatic-evaluation method for optical remote sensing images
NASA Astrophysics Data System (ADS)
Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen
2015-10-01
Image clarity, which reflects the degree of sharpness at the edges of objects in an image, is an important quality evaluation index for optical remote sensing images. Researchers have done a great deal of work on the estimation of image clarity. At present, common clarity-estimation methods for digital images mainly include frequency-domain function methods, statistical parametric methods, gradient function methods and edge acutance methods. The frequency-domain function method is an accurate clarity measure, but its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by the complexity of the image. The edge acutance method is an effective approach for clarity estimation, but it requires the edges to be picked out manually. Because of these limits in accuracy, consistency or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method, based on the principle of the edge acutance algorithm, is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically search for object edges in images. Moreover, the calculation algorithm for edge sharpness has been improved. The new method has been tested with several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity evaluation method for optical remote sensing images.
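A minimal sketch of the gradient-based ingredient shared by these approaches is shown below: a Tenengrad-style score, the mean squared Sobel gradient magnitude, computed for a sharp and a smeared synthetic edge. The synthetic images, the plain NumPy convolution and the absence of automatic edge selection are illustrative simplifications of the method described above.

```python
import numpy as np

def sobel_sharpness(img):
    """Tenengrad-style clarity score: mean squared Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):            # small explicit valid-region convolution, no SciPy needed
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return float(np.mean(gx**2 + gy**2))

# Synthetic test: a sharp step edge versus the same edge smeared over 8 pixels
img = np.zeros((64, 64)); img[:, 32:] = 1.0
soft = np.zeros((64, 64)); soft[:, 36:] = 1.0
soft[:, 28:36] = np.linspace(0.0, 1.0, 8)
print(sobel_sharpness(img), sobel_sharpness(soft))   # the sharp image scores higher
```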
26 CFR 1.412(c)(1)-2 - Shortfall method.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 5 2013-04-01 2013-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...
26 CFR 1.412(c)(1)-2 - Shortfall method.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 5 2012-04-01 2011-04-01 true Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...
26 CFR 1.412(c)(1)-2 - Shortfall method.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 5 2014-04-01 2014-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...
26 CFR 1.412(c)(1)-2 - Shortfall method.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 5 2011-04-01 2011-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
The Dramatic Methods of Hans van Dam.
ERIC Educational Resources Information Center
van de Water, Manon
1994-01-01
Interprets for the American reader the untranslated dramatic methods of Hans van Dam, a leading drama theorist in the Netherlands. Discusses the functions of drama as a method, closed dramatic methods, open dramatic methods, and applying van Dam's methods. (SR)
Methods for environmental change; an exploratory study
2012-01-01
Background While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change; especially about how these are composed of methods for individual change (‘Bundling’) and how within one environmental level, organizations, methods differ when directed at the management (‘At’) or applied by the management (‘From’). Methods The first part of this online survey dealt with examining the ‘bundling’ of individual level methods to methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed ‘at’ an organization (for instance, by a health promoter) versus ‘from’ within an organization itself. All of the 20 respondents are experts in the field of health promotion. Results Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. Conclusions Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method. PMID:23190712
Implementation of an improved adaptive-implicit method in a thermal compositional simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, T.B.
1988-11-01
A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, had an improvement of up to 28% of the CPU time over the fully implicit method.
Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P
2015-09-01
Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.
Bond additivity corrections for quantum chemistry methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. F. Melius; M. D. Allendorf
1999-04-01
In the 1980's, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between the DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.
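The bond-wise additive part of such corrections can be illustrated with a toy calculation in which each bond type contributes a fixed correction to a raw calculated heat of formation. The correction values, the molecule and the raw value below are invented for illustration and are not the published BAC-MP4, BAC-G2 or BAC-Hybrid parameters.

```python
# Toy bond-additivity correction: invented numbers, not the published BAC parameters.
bond_correction_kcal = {        # per-bond corrections, kcal/mol (illustrative)
    ("C", "C"): -1.2,
    ("C", "H"): -0.5,
    ("C", "O"): -2.0,
    ("H", "O"): -1.0,
}

def bac_heat_of_formation(raw_hf_kcal, bonds):
    """Apply bond-wise additive corrections to a raw calculated heat of formation."""
    total_correction = sum(bond_correction_kcal[tuple(sorted(b))] for b in bonds)
    return raw_hf_kcal + total_correction

# Ethanol (C2H5OH): 5 C-H, 1 C-C, 1 C-O, 1 O-H bonds; the raw value is illustrative
ethanol_bonds = [("C", "H")] * 5 + [("C", "C"), ("C", "O"), ("O", "H")]
print(bac_heat_of_formation(raw_hf_kcal=-51.0, bonds=ethanol_bonds))
```

The published schemes add atomic and molecular terms on top of the bond-wise sum; this sketch shows only the additive bookkeeping.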
Comparison of different methods to quantify fat classes in bakery products.
Shin, Jae-Min; Hwang, Young-Ok; Tu, Ock-Ju; Jo, Han-Bin; Kim, Jung-Hun; Chae, Young-Zoo; Rhu, Kyung-Hun; Park, Seung-Kook
2013-01-15
The definition of fat differs in different countries; thus whether fat is listed on food labels depends on the country. Some countries list crude fat content in the 'Fat' section on the food label, whereas other countries list total fat. In this study, three methods were used for determining fat classes and content in bakery products: the Folch method, the automated Soxhlet method, and the AOAC 996.06 method. The results using these methods were compared. Fat (crude) extracted by the Folch and Soxhlet methods was gravimetrically determined and assessed by fat class using capillary gas chromatography (GC). In most samples, fat (total) content determined by the AOAC 996.06 method was lower than the fat (crude) content determined by the Folch or automated Soxhlet methods. Furthermore, monounsaturated fat or saturated fat content determined by the AOAC 996.06 method was lowest. Almost no difference was observed between fat (crude) content determined by the Folch method and that determined by the automated Soxhlet method for nearly all samples. In three samples (wheat biscuits, butter cookies-1, and chocolate chip cookies), monounsaturated fat, saturated fat, and trans fat content obtained by the automated Soxhlet method was higher than that obtained by the Folch method. The polyunsaturated fat content obtained by the automated Soxhlet method was not higher than that obtained by the Folch method in any sample. Copyright © 2012 Elsevier Ltd. All rights reserved.
A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Luo; Yidong Xia; Robert Nourgaliev
2011-05-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods used in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.
2014-01-01
Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
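The traditional mono-exponential back-extrapolation step that the study improves upon can be sketched as follows: fit a straight line to the logarithm of the early post-injection concentrations, extrapolate to the injection time, and divide the dose by the extrapolated concentration. The sample times, concentrations and dose below are invented for illustration; the physiologically based alternative proposed in the study is not reproduced here.

```python
import numpy as np

def plasma_volume_backextrapolation(t_min, conc_mg_per_l, dose_mg):
    """Traditional mono-exponential back-extrapolation estimate of plasma volume (litres)."""
    slope, intercept = np.polyfit(t_min, np.log(conc_mg_per_l), 1)
    c0 = np.exp(intercept)                 # extrapolated concentration at t = 0
    return dose_mg / c0

# Invented indocyanine green samples taken 2-5 minutes after a 25 mg injection
t = np.array([2.0, 3.0, 4.0, 5.0])        # minutes post-injection
c = np.array([7.4, 6.6, 5.9, 5.3])        # mg/L
print(f"estimated plasma volume = {plasma_volume_backextrapolation(t, c, 25.0):.2f} L")
```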
NASA Astrophysics Data System (ADS)
Kot, V. A.
2017-11-01
The modern state of approximate integral methods used in applications where the processes of heat conduction and heat and mass transfer are of prime importance is considered. Integral methods have found wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal determination of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol'nik methods, the method of integral relations, the Goodman integral method of heat balance, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of the weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than the accuracy of numerical solutions.
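The flavor of these integral methods can be shown with the classical heat-balance integral (Goodman) solution for a semi-infinite solid whose surface temperature is suddenly raised: assuming a quadratic profile over a penetration depth δ(t), the heat-balance integral gives δ = sqrt(12 α t), and the resulting surface heat flux differs from the exact erfc solution by roughly 2%. The numerical values below are only a check of this textbook case, not material from the review.

```python
from math import erfc, sqrt, pi

# Heat-balance integral (quadratic profile) vs exact solution for a semi-infinite
# solid with a suddenly applied surface temperature rise Ts (textbook check).
alpha = 1.0e-6          # thermal diffusivity, m^2/s
Ts = 100.0              # surface temperature rise, K
t = 60.0                # time, s
x = 0.002               # depth, m

delta = sqrt(12.0 * alpha * t)                          # HBIM penetration depth
T_hbim = Ts * (1.0 - x / delta) ** 2 if x < delta else 0.0
T_exact = Ts * erfc(x / (2.0 * sqrt(alpha * t)))

# Surface heat flux: HBIM gives k*2*Ts/delta, the exact solution k*Ts/sqrt(pi*alpha*t)
flux_ratio = (2.0 / delta) / (1.0 / sqrt(pi * alpha * t))   # = sqrt(pi/3), about 1.02
print(f"T_hbim = {T_hbim:.1f} K, T_exact = {T_exact:.1f} K, flux ratio = {flux_ratio:.3f}")
```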
Method for producing smooth inner surfaces
Cooper, Charles A.
2016-05-17
The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm² scan area. The invention also provides a method for preparing superconducting cavities, the method comprising causing polishing media bound to a carrier to tumble within the cavities. The invention also provides a method for preparing superconducting cavities, the method comprising causing polishing media in a slurry to tumble within the cavities.
A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods
Tan, Hanqing; Fujita, Hiroshi
2013-01-01
This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final boundary of the object, have a problem of leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to solve the problem that the level set method is sensitive to the initial contour location, and a modified distance regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcomings of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods by achieving higher accuracy and producing fewer false segmentations in pancreas extraction. PMID:24066016
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
Ross, John; Keesbury, Jill; Hardee, Karen
2015-01-01
The method mix of contraceptive use is severely unbalanced in many countries, with over half of all use provided by just 1 or 2 methods. That tends to limit the range of user options and constrains the total prevalence of use, leading to unplanned pregnancies and births or abortions. Previous analyses of method mix distortions focused on countries where a single method accounted for more than half of all use (the 50% rule). We introduce a new measure that uses the average deviation (AD) of method shares around their own mean and apply it in a secondary analysis of method mix data for 8 contraceptive methods from 666 national surveys in 123 countries. A high AD value indicates a skewed method mix while a low AD value indicates a more uniform pattern across methods; the values can range from 0 to 21.9. Most AD values ranged from 6 to 19, with an interquartile range of 8.6 to 12.2. Using the AD measure, we identified 15 countries where the method mix has evolved from a distorted one to a better balanced one, with AD values declining, on average, by 35% over time. Countries show disparate paths in method gains and losses toward a balanced mix, but 4 patterns are suggested: (1) rise of one method partially offset by changes in other methods, (2) replacement of traditional with modern methods, (3) continued but declining domination by a single method, and (4) declines in dominant methods with increases in other methods toward a balanced mix. Regions differ markedly in their method mix profiles and preferences, raising the question of whether programmatic resources are best devoted to better provision of the well-accepted methods or to deploying neglected or new ones, or to a combination of both approaches. PMID:25745119
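The AD measure itself is simple arithmetic: the mean absolute deviation of the eight method shares (in percentage points) around their own mean, which reaches its maximum of about 21.9 when a single method accounts for all use. The shares below are invented for illustration and do not describe any particular country.

```python
def method_mix_average_deviation(shares_percent):
    """Average absolute deviation of method shares around their mean (8 methods)."""
    mean_share = sum(shares_percent) / len(shares_percent)
    return sum(abs(s - mean_share) for s in shares_percent) / len(shares_percent)

# Illustrative shares (% of total use) across 8 methods
skewed   = [70, 8, 6, 5, 4, 3, 2, 2]          # one method dominates
balanced = [18, 16, 14, 13, 12, 11, 9, 7]     # fairly even spread
print(method_mix_average_deviation(skewed), method_mix_average_deviation(balanced))
print(method_mix_average_deviation([100, 0, 0, 0, 0, 0, 0, 0]))   # ~21.9, the maximum
```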
Wan, Xiaomin; Peng, Liubao; Li, Yuanjian
2015-01-01
In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method.
Achieving cost-neutrality with long-acting reversible contraceptive methods.
Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna
2015-01-01
This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it aimed to also quantify minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking into consideration discontinuation. A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20-29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were annual average cost per method and minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. The two least expensive methods were copper IUD ($304 per women, per year) and LNG-IUS 20 mcg/24 h ($308). Cost of SARC methods ranged between $432 (injection) and $730 (patch), per women, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. Copyright © 2014 Elsevier Inc. All rights reserved.
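Stripped of discontinuation and failure costs, the cost-neutrality question reduces to amortization arithmetic: compare an up-front LARC cost with a recurring annual SARC cost and find the first year in which the LARC is cheaper. The sketch below uses round illustrative figures, not the study's model inputs.

```python
def breakeven_year(larc_upfront_cost, sarc_annual_cost, horizon_years=5):
    """First year (1-based) in which cumulative LARC cost drops below cumulative SARC cost.

    Simplified: one up-front LARC cost vs. a recurring annual SARC cost;
    discontinuation, failure (unintended pregnancy) and insertion or removal
    costs are deliberately left out of this sketch.
    """
    for year in range(1, horizon_years + 1):
        if larc_upfront_cost < sarc_annual_cost * year:
            return year
    return None

# Round illustrative figures (USD), not the study's model inputs
print(breakeven_year(larc_upfront_cost=1000, sarc_annual_cost=500))   # -> 3
print(breakeven_year(larc_upfront_cost=1000, sarc_annual_cost=600))   # -> 2
```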
Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong
2014-10-01
In order to detect the oil yield of oil shale in situ, based on portable near-infrared spectroscopy analytical technology, modeling and analysis methods for in-situ detection were studied with 66 rock core samples from the No. 2 well of the Fuyu oil shale base in Jilin. Spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired with the developed portable spectrometer. With 4 different modeling-data optimization methods: principal component-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD; 2 modeling methods: partial least squares (PLS) and back-propagation artificial neural network (BPANN); and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the modeling-data optimization method and the modeling method all affect the analysis precision of the model. Results show that, whether or not an optimization method is used, reflectance or K-M function is the proper spectrum format of the modeling database for the two modeling methods. Using the two different modeling methods and the four different data optimization methods, the model precisions of the same modeling database are different. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of a database using the K-M function spectrum data format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of a database using any of the 3 spectrum data formats. Apart from the case using reflectance spectra and the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision, with a correlation coefficient (Rp) of 0.92 and a standard error of prediction (SEP) of 0.69%.
Hammack, Thomas S; Valentin-Bon, Iris E; Jacobson, Andrew P; Andrews, Wallace H
2004-05-01
Soak and rinse methods were compared for the recovery of Salmonella from whole cantaloupes. Cantaloupes were surface inoculated with Salmonella cell suspensions and stored for 4 days at 2 to 6 degrees C. Cantaloupes were placed in sterile plastic bags with a nonselective preenrichment broth at a 1:1.5 cantaloupe weight-to-broth volume ratio. The cantaloupe broths were shaken for 5 min at 100 rpm after which 25-ml aliquots (rinse) were removed from the bags. The 25-ml rinses were preenriched in 225-ml portions of the same uninoculated broth type at 35 degrees C for 24 h (rinse method). The remaining cantaloupe broths were incubated at 35 degrees C for 24 h (soak method). The preenrichment broths used were buffered peptone water (BPW), modified BPW, lactose (LAC) broth, and Universal Preenrichment (UP) broth. The Bacteriological Analytical Manual Salmonella culture method was compared with the following rapid methods: the TECRA Unique Salmonella method, the VIDAS ICS/SLM method, and the VIDAS SLM method. The soak method detected significantly more Salmonella-positive cantaloupes (P < 0.05) than did the rinse method: 367 Salmonella-positive cantaloupes of 540 test cantaloupes by the soak method and 24 Salmonella-positive cantaloupes of 540 test cantaloupes by the rinse method. Overall, BPW, LAC, and UP broths were equivalent for the recovery of Salmonella from cantaloupes. Both the VIDAS ICS/SLM and TECRA Unique Salmonella methods detected significantly fewer Salmonella-positive cantaloupes than did the culture method: the VIDAS ICS/SLM method detected 23 of 50 Salmonella-positive cantaloupes (60 tested) and the TECRA Unique Salmonella method detected 16 of 29 Salmonella-positive cantaloupes (60 tested). The VIDAS SLM and culture methods were equivalent: both methods detected 37 of 37 Salmonella-positive cantaloupes (60 tested).
Temperature Profiles of Different Cooling Methods in Porcine Pancreas Procurement
Weegman, Brad P.; Suszynski, Thomas M.; Scott, William E.; Ferrer, Joana; Avgoustiniatos, Efstathios S.; Anazawa, Takayuki; O’Brien, Timothy D.; Rizzari, Michael D.; Karatzas, Theodore; Jie, Tun; Sutherland, David ER.; Hering, Bernhard J.; Papas, Klearchos K.
2014-01-01
Background Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. Methods This study examines the effect of 4 different cooling methods on core porcine pancreas temperature (n=24) and histopathology (n=16). All methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all 3 cooling methods. Results Surface cooling alone (Method A) gradually decreased core pancreas temperature to < 10 °C after 30 minutes. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature 15–20 °C within the first 2 minutes of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between methods (p=0.36). Histological scores differed between the cooling methods (p=0.02) and were worst with Method A. There were differences in histological scores between Methods A and C (p=0.02) and Methods A and D (p=0.02), but not between Methods C and D (p=0.95), which may highlight the importance of early cooling using an intraductal infusion. Conclusions In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata. Additional cooling with an intravascular flush and intraductal infusion results in improved core porcine pancreas temperature profiles during procurement and histopathology scores. These data may also have implications for human pancreas procurement since use of an intraductal infusion is not common practice. PMID:25040217
Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu
2017-01-01
In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility. PMID:28187177
Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon
2017-01-01
In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.
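A minimal sketch of the ROC-based comparison of the two scoring methods described above, assuming scikit-learn. The labeling indices and recurrence labels are simulated rather than the study's 493 cases; the hot-spot values are generated so that the H/A ratio falls in the reported 1.0-2.6 range.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 493
recurrence = rng.binomial(1, 0.15, n)                              # hypothetical outcome labels
ki67_avg = np.clip(rng.normal(20, 10, n) + 8 * recurrence, 0.1, None)  # average-method LI (%)
ki67_hot = ki67_avg * rng.uniform(1.0, 2.6, n)                     # hot-spot LI, H/A ratio in [1.0, 2.6]

print("AUC, average method:", round(roc_auc_score(recurrence, ki67_avg), 3))
print("AUC, hot spot method:", round(roc_auc_score(recurrence, ki67_hot), 3))
```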
Estimating Tree Height-Diameter Models with the Bayesian Method
Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands for the predicted values in comparison to the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors or the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands for the predicted values in comparison to the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors or the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2.
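A minimal sketch of Bayesian estimation for one Weibull-type height-diameter curve using a random-walk Metropolis sampler with flat (uninformative) priors. The tree data below are simulated, not the study's data1/data2, and the model form H = 1.3 + a(1 - e^(-bD))^c is one common candidate, not necessarily the exact one used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical tree data: diameters D (cm) and heights H (m).
D = rng.uniform(5, 40, 80)
H = 1.3 + 25 * (1 - np.exp(-0.07 * D)) ** 1.1 + rng.normal(0, 1.0, D.size)

def log_post(theta):
    a, b, c, log_sigma = theta
    if a <= 0 or b <= 0 or c <= 0:
        return -np.inf
    sigma = np.exp(log_sigma)
    mu = 1.3 + a * (1 - np.exp(-b * D)) ** c          # Weibull-type height-diameter curve
    # Gaussian likelihood with flat priors on a, b, c and log sigma.
    return float(np.sum(-0.5 * ((H - mu) / sigma) ** 2 - np.log(sigma)))

theta = np.array([20.0, 0.05, 1.0, 0.0])
lp = log_post(theta)
samples = []
for i in range(20000):                                # random-walk Metropolis
    prop = theta + rng.normal(0, [0.5, 0.002, 0.02, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5000:                                     # discard burn-in
        samples.append(theta.copy())

print("posterior means (a, b, c, log sigma):", np.array(samples).mean(axis=0))
```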
Wong, M S; Cheng, J C Y; Lo, K H
2005-04-01
The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, with twenty subjects for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and the immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. An initial control of the apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle and apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide similar results in the initial stage of treatment as compared with the manual method.
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
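A minimal sketch of the percentile bootstrap confidence interval mentioned above, applied to the mean of a hypothetical skewed sample; the sample and the number of resamples are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=50)      # hypothetical skewed sample

# Resampling with replacement: 10,000 bootstrap replicates of the sample mean.
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(10000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])      # percentile bootstrap 95% CI
print(f"mean = {data.mean():.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```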
Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study.
Christensen, Tina; Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M
2017-03-01
The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. ©Tina Christensen, Anders H Riis, Elizabeth E Hatch, Lauren A Wise, Marie G Nielsen, Kenneth J Rothman, Henrik Toft Sørensen, Ellen M Mikkelsen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.03.2017.
Ouyang, Ying; Mansell, Robert S; Nkedi-Kizza, Peter
2004-01-01
A high performance liquid chromatography (HPLC) method with UV detection was developed to analyze paraquat (1,1'-dimethyl-4,4'-dipyridinium dichloride) herbicide content in soil solution samples. The analytical method was compared with the liquid scintillation counting (LSC) method using 14C-paraquat. Agreement obtained between the two methods was reasonable. However, the detection limit for paraquat analysis was 0.5 mg L(-1) by the HPLC method and 0.05 mg L(-1) by the LSC method. The LSC method was, therefore, 10 times more precise than the HPLC method for solution concentrations less than 1 mg L(-1). In spite of the high detection limit, the UV (nonradioactive) HPLC method provides an inexpensive and environmentally safe means for determining paraquat concentration in soil solution compared with the 14C-LSC method.
Hybrid finite element and Brownian dynamics method for diffusion-controlled reactions.
Bauler, Patricia; Huber, Gary A; McCammon, J Andrew
2012-04-28
Diffusion is often the rate determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. This paper proposes a new hybrid diffusion method that couples the strengths of each of these two methods. The method is derived for a general multidimensional system, and is presented using a basic test case for 1D linear and radially symmetric diffusion systems.
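A minimal sketch of the stochastic side of such a hybrid: a plain Brownian dynamics (Euler-Maruyama) simulation of free 1D diffusion, checked against the analytic mean-squared displacement 2Dt. The diffusion coefficient, time step and particle count are hypothetical, and the coupling to a finite element domain is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps, n_particles = 1.0, 1e-3, 1000, 5000   # hypothetical parameters
x = np.zeros(n_particles)                             # 1D free diffusion starting at the origin

for _ in range(n_steps):
    x += np.sqrt(2.0 * D * dt) * rng.normal(size=n_particles)   # Brownian dynamics step

t = n_steps * dt
print("simulated MSD:", (x ** 2).mean(), " analytic 2*D*t:", 2.0 * D * t)
```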
Noh, Jaesung; Lee, Kun Mo
2003-05-01
A relative significance factor (f(i)) of an impact category is the external weight of the impact category. The objective of this study is to propose a systematic and easy-to-use method for the determination of f(i). Multiattribute decision-making (MADM) methods including the analytical hierarchy process (AHP), the rank-order centroid method, and the fuzzy method were evaluated for this purpose. The results and practical aspects of using the three methods are compared. Each method shows the same trend, with minor differences in the value of f(i). Thus, all three methods can be applied to the determination of f(i). The rank order centroid method reduces the number of pairwise comparisons by placing the alternatives in order, although it has inherent weakness over the fuzzy method in expressing the degree of vagueness associated with assigning weights to criteria and alternatives. The rank order centroid method is considered a practical method for the determination of f(i) because it is easier and simpler to use compared to the AHP and the fuzzy method.
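The rank-order centroid weights mentioned above have a simple closed form, w_i = (1/n) * sum_{k=i}^{n} 1/k for the category ranked i out of n. A minimal sketch:

```python
def rank_order_centroid(n):
    """Rank-order centroid weights for n impact categories ranked 1 (most important) to n."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

# For four impact categories the weights sum to 1:
print(rank_order_centroid(4))   # approximately [0.521, 0.271, 0.146, 0.063]
```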
Zenita, O.; Basavaiah, K.
2011-01-01
Two titrimetric and two spectrophotometric methods are described for the assay of famotidine (FMT) in tablets using N-bromosuccinimide (NBS). The first titrimetric method is direct, in which FMT is titrated directly with NBS in HCl medium using methyl orange as indicator (method A). The remaining three methods are indirect, in which the unreacted NBS is determined after the complete reaction between FMT and NBS by iodometric back titration (method B) or by reacting with a fixed amount of either indigo carmine (method C) or neutral red (method D). Methods A and B are applicable over the ranges of 2–9 mg and 1–7 mg, respectively. In the spectrophotometric methods, Beer's law is obeyed over the concentration ranges of 0.75–6.0 μg mL−1 (method C) and 0.3–3.0 μg mL−1 (method D). The applicability of the developed methods was demonstrated by the determination of FMT in pure drug as well as in tablets. PMID:21760785
Twostep-by-twostep PIRK-type PC methods with continuous output formulas
NASA Astrophysics Data System (ADS)
Cong, Nguyen Huu; Xuan, Le Ngoc
2008-11-01
This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at (n+2)th step. In this case, the integration processes can be proceeded twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give us a faster integration process. Fixed stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient when compared with the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and sequential explicit RK codes DOPRI5 and DOP853 available from the literature.
Pérez de Isla, Leopoldo; Casanova, Carlos; Almería, Carlos; Rodrigo, José Luis; Cordeiro, Pedro; Mataix, Luis; Aubele, Ada Lia; Lang, Roberto; Zamorano, José Luis
2007-12-01
Several studies have shown a wide variability among different methods to determine the valve area in patients with rheumatic mitral stenosis. Our aim was to evaluate if 3D-echo planimetry is more accurate than the Gorlin method to measure the valve area. Twenty-six patients with mitral stenosis underwent 2D and 3D-echo echocardiographic examinations and catheterization. Valve area was estimated by different methods. A median value of the mitral valve area, obtained from the measurements of three classical non-invasive methods (2D planimetry, pressure half-time and PISA method), was used as the reference method and it was compared with 3D-echo planimetry and Gorlin's method. Our results showed that the accuracy of 3D-echo planimetry is superior to the accuracy of the Gorlin method for the assessment of mitral valve area. We should keep in mind the fact that 3D-echo planimetry may be a better reference method than the Gorlin method to assess the severity of rheumatic mitral stenosis.
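For orientation only, a minimal sketch of the textbook form of the Gorlin formula for mitral valve area (flow per second of diastole divided by 37.7 times the square root of the mean gradient, where 37.7 = 44.3 × 0.85 is the empirical mitral constant). The hemodynamic values below are hypothetical, not patient data from the study, and this is not presented as the exact computation used by the authors.

```python
import math

def gorlin_mitral_area(cardiac_output_ml_min, dfp_s_per_beat, heart_rate, mean_gradient_mmhg):
    """Textbook Gorlin formula for mitral valve area (cm^2)."""
    diastolic_flow = cardiac_output_ml_min / (dfp_s_per_beat * heart_rate)   # mL per second of diastole
    return diastolic_flow / (37.7 * math.sqrt(mean_gradient_mmhg))

# Hypothetical hemodynamics: CO 4500 mL/min, diastolic filling period 0.33 s/beat, HR 80, gradient 12 mmHg.
print(round(gorlin_mitral_area(4500, 0.33, 80, 12), 2), "cm^2")
```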
Küme, Tuncay; Sağlam, Barıs; Ergon, Cem; Sisman, Ali Rıza
2018-01-01
The aim of this study is to evaluate and compare the analytical performance characteristics of two creatinine methods based on the Jaffe and enzymatic principles. The two original creatinine methods, Jaffe and enzymatic, were evaluated on an Architect c16000 automated analyzer in terms of limit of detection (LOD), limit of quantitation (LOQ), linearity, intra-assay and inter-assay precision, and comparability in serum and urine samples. The method comparison and bias estimation using patient samples according to the CLSI guideline were performed on 230 serum and 141 urine samples analyzed on the same auto-analyzer. The LODs were determined as 0.1 mg/dL for both serum methods and as 0.25 and 0.07 mg/dL for the Jaffe and enzymatic urine methods, respectively. The LOQs were similar, at 0.05 mg/dL, for both serum methods, and the enzymatic urine method had a lower LOQ than the Jaffe urine method, at 0.5 and 2 mg/dL, respectively. Both methods were linear up to 65 mg/dL for serum and 260 mg/dL for urine. The intra-assay and inter-assay precision data were within desirable levels for both methods. High correlations were found between the two methods in serum and urine (r=.9994 and r=.9998, respectively). On the other hand, the Jaffe method gave higher creatinine results than the enzymatic method, especially at low concentrations, in both serum and urine. Both the Jaffe and enzymatic methods were found to meet the analytical performance requirements in routine use. However, the enzymatic method was found to have better performance at low creatinine levels. © 2017 Wiley Periodicals, Inc.
Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M
2012-07-01
Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries with it considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. The aim was to compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis. Two hundred nonduplicate blood cultures from cases of sepsis were analyzed using two blood culture methods concurrently for recovery of bacteria from patients diagnosed clinically with sepsis - the conventional blood culture method using trypticase soy broth and the lysis centrifugation method using saponin by centrifuging at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci. The lysis centrifugation method was comparable with the former method with respect to Gram-negative bacilli. The sensitivity of the lysis centrifugation method in comparison to the conventional blood culture method was 49.75% in this study, specificity was 98.21% and diagnostic accuracy was 89.5%. In almost every instance, the time required for detection of the growth was earlier by the lysis centrifugation method, which was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was highly significant (P value 0.000) as compared to time to growth by the conventional blood culture method. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.
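A minimal sketch of how sensitivity, specificity and diagnostic accuracy are derived from a 2×2 table of an index method against a reference method. The counts below are hypothetical, chosen only to be of the same order as the figures reported above, not the study's actual tabulation.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity and overall accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: index method vs. reference blood culture in 200 specimens.
sens, spec, acc = diagnostic_stats(tp=17, fp=3, fn=18, tn=162)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")
```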
Amin, Alaa S.; Kassem, Mohammed A.
2012-01-01
Aim and Background: Three simple, accurate and sensitive spectrophotometric methods for the determination of finasteride in pure, dosage and biological forms, and in the presence of its oxidative degradates were developed. Materials and Methods: These methods are indirect, involve the addition of excess oxidant potassium permanganate for method A; cerric sulfate [Ce(SO4)2] for methods B; and N-bromosuccinimide (NBS) for method C of known concentration in acid medium to finasteride, and the determination of the unreacted oxidant by measurement of the decrease in absorbance of methylene blue for method A, chromotrope 2R for method B, and amaranth for method C at a suitable maximum wavelength, λmax: 663, 528, and 520 nm, for the three methods, respectively. The reaction conditions for each method were optimized. Results: Regression analysis of the Beer plots showed good correlation in the concentration ranges of 0.12–3.84 μg mL–1 for method A, and 0.12–3.28 μg mL–1 for method B and 0.14 – 3.56 μg mL–1 for method C. The apparent molar absorptivity, Sandell sensitivity, detection and quantification limits were evaluated. The stoichiometric ratio between the finasteride and the oxidant was estimated. The validity of the proposed methods was tested by analyzing dosage forms and biological samples containing finasteride with relative standard deviation ≤ 0.95. Conclusion: The proposed methods could successfully determine the studied drug with varying excess of its oxidative degradation products, with recovery between 99.0 and 101.4, 99.2 and 101.6, and 99.6 and 101.0% for methods A, B, and C, respectively. PMID:23781478
John Butcher and hybrid methods
NASA Astrophysics Data System (ADS)
Mehdiyeva, Galina; Imanova, Mehriban; Ibrahimov, Vagif
2017-07-01
As is known, there are mainly two classes of numerical methods for solving ODEs, commonly called one-step and multistep methods. Each of these classes has certain advantages and disadvantages. It is natural to construct, at the junction of the two, a method that combines their better properties. In the middle of the XX century, Butcher and Gear constructed such methods at the junction of the Runge-Kutta and Adams methods, which are called hybrid methods. Here we consider the construction of certain generalizations of hybrid methods with high order of accuracy and explore their application to solving ordinary differential, Volterra integral and integro-differential equations. Some specific hybrid methods with degree p ≤ 10 are also constructed.
Critical study of higher order numerical methods for solving the boundary-layer equations
NASA Technical Reports Server (NTRS)
Wornom, S. F.
1978-01-01
A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.
[Isolation and identification methods of enterobacteria group and its technological advancement].
Furuta, Itaru
2007-08-01
In the last half-century, isolation and identification methods for enterobacteria groups have been markedly improved by technological advancement. Clinical microbiology tests have changed over time from tube methods to commercial identification kits and automated identification. Tube methods are the original method for the identification of enterobacteria groups, that is, an essential basis for recognizing bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as the utilization of carbohydrates, indole, methyl red, and citrate and urease tests. Commercial identification kits and automated instruments based on computer analysis are also discussed as current methods; these methods provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing methods using PCR analysis and immunochemical methods using monoclonal antibodies, can be further developed.
Comparison of three commercially available fit-test methods.
Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J
2002-01-01
American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.
A Tale of Two Methods: Chart and Interview Methods for Identifying Delirium
Saczynski, Jane S.; Kosar, Cyrus M.; Xu, Guoquan; Puelle, Margaret R.; Schmitt, Eva; Jones, Richard N.; Marcantonio, Edward R.; Wong, Bonnie; Isaza, Ilean; Inouye, Sharon K.
2014-01-01
Background Interview and chart-based methods for identifying delirium have been validated. However, relative strengths and limitations of each method have not been described, nor has a combined approach (using both interviews and chart) been systematically examined. Objectives To compare chart and interview-based methods for identification of delirium. Design, Setting and Participants Participants were 300 patients aged 70+ undergoing major elective surgery (the majority orthopedic surgery) interviewed daily during hospitalization for delirium using the Confusion Assessment Method (CAM; interview-based method) and whose medical charts were reviewed for delirium using a validated chart-review method (chart-based method). We examined the rate of agreement between the two methods and the patient characteristics of those identified using each approach. Predictive validity for clinical outcomes (length of stay, postoperative complications, discharge disposition) was compared. In the absence of a gold standard, predictive value could not be calculated. Results The cumulative incidence of delirium was 23% (n=68) by the interview-based method, 12% (n=35) by the chart-based method and 27% (n=82) by the combined approach. Overall agreement was 80%; kappa was 0.30. The methods differed in detection of psychomotor features and time of onset. The chart-based method missed delirium in CAM-identified patients lacking features of psychomotor agitation or inappropriate behavior. The CAM-based method missed chart-identified cases occurring during the night shift. The combined method had high predictive validity for all clinical outcomes. Conclusions Interview and chart-based methods have specific strengths for identification of delirium. A combined approach captures the largest number and the broadest range of delirium cases. PMID:24512042
Inventory Management for Irregular Shipment of Goods in Distribution Centre
NASA Astrophysics Data System (ADS)
Takeda, Hitoshi; Kitaoka, Masatoshi; Usuki, Jun
2016-01-01
The shipping amount of commodity goods (foods, confectionery, dairy products, cosmetics and pharmaceutical products) changes irregularly at distribution centers dealing with general consumer goods. Because the shipment times and amounts are irregular, demand forecasting becomes very difficult, and so does inventory control. Conventional inventory control methods cannot be applied to the shipment of such commodities. This paper proposes a method of inventory control based on the cumulative flow curve, in which the order quantity is decided from the curve. Three forecasting approaches are proposed (see the sketch after this abstract): 1) a power method, 2) a polynomial method, and 3) a revised Holt's linear method, a kind of exponential smoothing that forecasts data with trends. The paper compares the economics of the conventional method, which is managed by experienced staff, with the three newly proposed methods, and the effectiveness of the proposed method is verified through numerical calculations.
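A minimal sketch of the Holt's linear (trend-corrected exponential smoothing) forecast mentioned as method 3; the smoothing constants and the shipment series are hypothetical placeholders, and the revisions specific to the paper are not reproduced.

```python
def holt_linear(y, alpha=0.3, beta=0.1, horizon=4):
    """Holt's linear-trend exponential smoothing; returns `horizon` step-ahead forecasts."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)      # smooth the level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smooth the trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Hypothetical cumulative shipment amounts (irregular, with an upward trend):
shipments = [12, 12, 19, 25, 25, 31, 40, 41, 55, 60]
print(holt_linear(shipments))
```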
Computational Methods in Drug Discovery
Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens
2014-01-01
Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand data bases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from literature. PMID:24381236
[Primary culture of human normal epithelial cells].
Tang, Yu; Xu, Wenji; Guo, Wanbei; Xie, Ming; Fang, Huilong; Chen, Chen; Zhou, Jun
2017-11-28
The traditional primary culture methods for normal human epithelial cells have the disadvantages of low activity of the cultured cells, a low cultivation rate and complicated operation. To solve these problems, researchers have carried out many studies on the culture process of normal primary human epithelial cells. In this paper, we mainly introduce some methods used in the separation and purification of normal human epithelial cells, such as the tissue separation method, enzyme digestion separation method, mechanical brushing method, red blood cell lysis method, and Percoll layered medium density gradient separation method. We also review some methods used in culture and subculture, including the serum-free medium combined with low mass fraction serum culture method, the mouse tail collagen coating method, and the glass culture bottle combined with plastic culture dish culture method. The biological characteristics of normal human epithelial cells and the methods of immunocytochemical staining and trypan blue exclusion are described. Moreover, the factors affecting primary culture, such as aseptic operation, the conditions of the extracellular environment during culture, the number of differential adhesion steps, and the selection and dosage of additives, are summarized.
A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization
Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao
2016-01-01
The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a high real-time ferromagnetic target localization method. Only one measurement point is required in the STAR method and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by the asphericity errors and the inaccurate value of position leads to larger errors in the estimation of magnetic moment. To improve the localization accuracy, a modified STAR method is proposed. In the proposed method, the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate which meets the requirement of high real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than the traditional STAR method. PMID:27999322
Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve
1987-01-01
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
Robust numerical solution of the reservoir routing equation
NASA Astrophysics Data System (ADS)
Fiorentini, Marcello; Orlandini, Stefano
2013-09-01
The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions occurring, especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and it does not solve the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that water level remains inside the domains of the storage function and the outflow rating curve. The incorporation of a simple backstepping procedure implementing this control into the method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
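A minimal sketch of method (2) with the simple backstepping safeguard described above: a classical fourth-order Runge-Kutta step for the reservoir routing equation dS/dt = I(t) - Q(S), with the step halved whenever the state leaves its admissible domain. The inflow hydrograph, outflow rating and initial storage are hypothetical placeholders, not data from the study.

```python
import numpy as np

def inflow(t):                       # hypothetical triangular inflow hydrograph (m^3/s)
    return np.interp(t, [0.0, 6 * 3600.0, 24 * 3600.0], [0.0, 100.0, 0.0])

def outflow(S):                      # hypothetical monotonic outflow rating (m^3/s), S in m^3
    return 0.0005 * max(S, 0.0) ** 0.9

def rk4_step(t, S, h):               # classical RK4 step for dS/dt = I(t) - Q(S)
    f = lambda t, S: inflow(t) - outflow(S)
    k1 = f(t, S)
    k2 = f(t + h / 2, S + h * k1 / 2)
    k3 = f(t + h / 2, S + h * k2 / 2)
    k4 = f(t + h, S + h * k3)
    return S + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

t, S, dt, T = 0.0, 1.0e5, 720.0, 48 * 3600.0
while t < T:
    h, S_new = dt, rk4_step(t, S, dt)
    while S_new < 0.0:               # backstepping: halve the step if storage leaves its domain
        h /= 2
        S_new = rk4_step(t, S, h)
    t, S = t + h, S_new

print(f"storage after {T/3600:.0f} h: {S:.0f} m^3, outflow: {outflow(S):.1f} m^3/s")
```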
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A
2012-03-15
To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
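A minimal sketch of fitting an L1-penalized (LASSO-type) logistic NTCP model and scoring it by cross-validation, assuming scikit-learn. The predictors and xerostomia outcomes are simulated, the penalty strength C is arbitrary, and the study's repeated cross-validation scheme is reduced here to a single 5-fold pass.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                      # hypothetical dose-volume/clinical predictors
logit = 0.9 * X[:, 0] + 0.6 * X[:, 1] - 0.4         # only two truly informative predictors
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # simulated complication outcomes

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean().round(3))
print("nonzero coefficients:", int((np.abs(lasso.fit(X, y).coef_) > 1e-8).sum()))
```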
NASA Astrophysics Data System (ADS)
Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.
2014-10-01
In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category see [18] - [40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.
O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon
2007-01-01
Background Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study – often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Methods Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. Results 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods – particularly surveys and individual interviews – but used methods in a wide range of roles. Conclusion Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations. PMID:17570838
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor-Pashow, K.; Fondeur, F.; White, T.
Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising, and was selected for further development as the primary method. ¹H NMR also showed promising results from the screening experiments, and this method was selected for further development as the secondary method. Other methods, including ³⁶Cl radiocounting and ion chromatography, also showed promise; however, due to the similarity to the primary method (titration) and the inability to differentiate between TiDG and TOA (tri-n-octylamine) in the blended solvent, ¹H NMR was selected over these methods. Analysis of radioactive samples obtained from real waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. ¹H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify the method using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for both methods.
Issa, M M; Nejem, R M; El-Abadla, N S; Al-Kholy, M; Saleh, Akila A
2008-01-01
A novel atomic absorption spectrometric method and two highly sensitive spectrophotometric methods were developed for the determination of paracetamol. These techniques are based on the oxidation of paracetamol by iron (III) (method I) and oxidation of p-aminophenol after the hydrolysis of paracetamol (method II). Iron (II) then reacts with potassium ferricyanide to form Prussian blue color with a maximum absorbance at 700 nm. The atomic absorption method was accomplished by extracting the excess iron (III) in method II and aspirating the aqueous layer into an air-acetylene flame to measure the absorbance of iron (II) at 302.1 nm. The reactions have been spectrometrically evaluated to attain optimum experimental conditions. Linear responses were exhibited over the ranges 1.0-10, 0.2-2.0 and 0.1-1.0 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. High sensitivity is recorded for the proposed methods I and II and the atomic absorption spectrometric method, with values of 0.05, 0.022 and 0.012 μg/ml, respectively. The limits of quantitation of paracetamol by method II and the atomic absorption spectrometric method were 0.20 and 0.10 μg/ml. Method II and the atomic absorption spectrometric method were applied to demonstrate a pharmacokinetic study by means of salivary samples in normal volunteers who received 1.0 g paracetamol. Intra- and inter-day precision did not exceed 6.9%.
Issa, M. M.; Nejem, R. M.; El-Abadla, N. S.; Al-Kholy, M.; Saleh, Akila. A.
2008-01-01
A novel atomic absorption spectrometric method and two highly sensitive spectrophotometric methods were developed for the determination of paracetamol. These techniques are based on the oxidation of paracetamol by iron (III) (method I) and oxidation of p-aminophenol after the hydrolysis of paracetamol (method II). Iron (II) then reacts with potassium ferricyanide to form Prussian blue color with a maximum absorbance at 700 nm. The atomic absorption method was accomplished by extracting the excess iron (III) in method II and aspirating the aqueous layer into an air-acetylene flame to measure the absorbance of iron (II) at 302.1 nm. The reactions have been spectrometrically evaluated to attain optimum experimental conditions. Linear responses were exhibited over the ranges 1.0-10, 0.2-2.0 and 0.1-1.0 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. High sensitivity is recorded for the proposed methods I and II and the atomic absorption spectrometric method, with values of 0.05, 0.022 and 0.012 μg/ml, respectively. The limits of quantitation of paracetamol by method II and the atomic absorption spectrometric method were 0.20 and 0.10 μg/ml. Method II and the atomic absorption spectrometric method were applied to demonstrate a pharmacokinetic study by means of salivary samples in normal volunteers who received 1.0 g paracetamol. Intra- and inter-day precision did not exceed 6.9%. PMID:20046743
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.
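For orientation, a minimal sketch of estimating an optimum-value sensitivity by central differencing of re-solved problems. This is the brute-force baseline that such methods aim to improve upon, not the RQP-based estimator of the abstract, and the parametric problem below is hypothetical.

```python
from scipy.optimize import minimize

def optimum_value(p):
    """Solve a small hypothetical design problem for a given parameter p and return f*."""
    f = lambda x: (x[0] - p) ** 2 + (x[1] - 2.0 * p) ** 2 + 0.5 * x[0] * x[1]
    return minimize(f, x0=[0.0, 0.0]).fun

p0, h = 1.0, 1e-4
# Central-difference estimate of d f*/d p; each evaluation requires a full re-optimization.
sensitivity = (optimum_value(p0 + h) - optimum_value(p0 - h)) / (2.0 * h)
print("d f*/d p ~", round(sensitivity, 4))
```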
Rowlands, J A; Hunter, D M; Araj, N
1991-01-01
A new digital image readout method for electrostatic charge images on photoconductive plates is described. The method can be used to read out images on selenium plates similar to those used in xeromammography. The readout method, called the air-gap photoinduced discharge method (PID), discharges the latent image pixel by pixel and measures the charge. The PID readout method, like electrometer methods, is linear. However, the PID method permits much better resolution than scanning electrometers while maintaining quantum limited performance at high radiation exposure levels. Thus the air-gap PID method appears to be uniquely superior for high-resolution digital imaging tasks such as mammography.
Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher
2012-01-01
Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
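A minimal sketch of a partitioning-type detector for a single mean shift in a session-by-session alliance series; the scan statistic and the ratings below are illustrative only, not data from the research program.

```python
import numpy as np

def single_change_point(x):
    """Index k that best splits x into two constant-mean segments (least-squares partitioning)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    best_k, best_stat = None, -np.inf
    for k in range(1, n):
        left, right = x[:k], x[k:]
        stat = k * (n - k) / n * (left.mean() - right.mean()) ** 2   # between-segment separation
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k

# Hypothetical session-by-session alliance ratings with a rupture after session 8:
alliance = [5.1, 5.0, 5.3, 5.2, 4.9, 5.1, 5.2, 5.0, 3.6, 3.8, 3.5, 3.9, 3.7]
print("estimated change point after session", single_change_point(alliance))
```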
A comparative study of interface reconstruction methods for multi-material ALE simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kucharik, Milan; Garimalla, Rao; Schofield, Samuel
2009-01-01
In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells and the Moment-of-Fluid method (MOF). We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two while the solutions with VOF using the wrong material order are considerably worse.
Digital photography and transparency-based methods for measuring wound surface area.
Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh
2013-04-01
To compare and determine a credible method of measurement of wound surface area by linear, transparency, and photographic methods for monitoring the progress of wound healing accurately and ascertaining whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by these three methods (linear, transparency, and photographic methods) simultaneously on alternate days. The linear method is statistically significantly different from the transparency and photographic methods (P value <0.05), but there is no significant difference between the transparency and photographic methods (P value >0.05). The photographic and transparency methods provided measurements of wound surface area with equivalent results, and there was no statistically significant difference between these two methods.
Anatomically-Aided PET Reconstruction Using the Kernel Method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-01-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
[An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].
Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang
2014-07-01
Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. The present paper proposes a method for automatic peak detection in LIBS spectra, aimed at enhancing the ability to resolve overlapping peaks and improving adaptivity. We introduced the ridge peak detection method based on the continuous wavelet transform to LIBS, discussed the choice of the mother wavelet, and optimized the scale factor and the shift factor. The method also improves ridge peak detection with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in distinguishing overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
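SciPy ships a generic ridge-based detector of this kind (it does not include the ridge-correction step described above); a minimal sketch on a synthetic spectrum with two overlapping lines:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic LIBS-like spectrum: two overlapping lines plus a broad background and noise.
x = np.linspace(0, 60, 1200)
spectrum = (np.exp(-((x - 24.0) / 0.4) ** 2) + 0.7 * np.exp(-((x - 25.2) / 0.4) ** 2)
            + 0.2 * np.exp(-((x - 30.0) / 15.0) ** 2)
            + 0.02 * np.random.default_rng(0).standard_normal(x.size))

# Ridge-based peak search over a range of wavelet scales (widths are in samples).
peak_idx = find_peaks_cwt(spectrum, widths=np.arange(3, 30))
print("detected peak positions:", np.round(x[peak_idx], 2))
```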
A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis
Kang, Mengjun
2015-01-01
A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
The theory, methodology and application of Method R for the estimation of (co)variance components were reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used for larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
NASA Technical Reports Server (NTRS)
Wood, C. A.
1974-01-01
For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
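The core of the G.C.D. idea is that gcd(p, p') collects the repeated factors, so dividing it out leaves a square-free polynomial on which standard iterations behave well. A small sketch using SymPy for the polynomial algebra (the repeated-G.C.D. refinement described in the report is not reproduced):

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')
# A polynomial with multiple zeros: (x - 1)^3 (x + 2)^2.
p = sp.expand((x - 1) ** 3 * (x + 2) ** 2)

# G.C.D. idea: gcd(p, p') collects the repeated factors, so p / gcd(p, p') is square-free.
g = sp.gcd(p, sp.diff(p, x))
square_free = sp.quo(p, g, x)

# Any standard root finder is now well behaved on the square-free part
# (here: numpy's companion-matrix solver instead of Newton/Muller iterations).
coeffs = [float(c) for c in sp.Poly(square_free, x).all_coeffs()]
print("zeros of the square-free part:", np.roots(coeffs))
```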
Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M
2006-04-01
To use BioBall cultures as a precise reference standard to evaluate methods for enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated including membrane filtration, standard plate count (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method. Other methods either yielded a low recovery or showed significantly high variability between replicates. The BioBall is a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.
Wilsonian methods of concept analysis: a critique.
Hupcey, J E; Morse, J M; Lenz, E R; Tasón, M C
1996-01-01
Wilsonian methods of concept analysis--that is, the method proposed by Wilson and Wilson-derived methods in nursing (as described by Walker and Avant; Chinn and Kramer [Jacobs]; Schwartz-Barcott and Kim; and Rodgers)--are discussed and compared in this article. The evolution and modifications of Wilson's method in nursing are described and research that has used these methods, assessed. The transformation of Wilson's method is traced as each author has adopted his techniques and attempted to modify the method to correct for limitations. We suggest that these adaptations and modifications ultimately erode Wilson's method. Further, the Wilson-derived methods have been overly simplified and used by nurse researchers in a prescriptive manner, and the results often do not serve the purpose of expanding nursing knowledge. We conclude that, considering the significance of concept development for the nursing profession, the development of new methods and a means for evaluating conceptual inquiry must be given priority.
NASA Astrophysics Data System (ADS)
Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen
2013-08-01
We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of redshifted 21 cm signal from the epoch of reionization. This method works based on the assumption that the foreground spectra are smooth in frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, thus their characteristic scales are significantly different. We can distinguish them in the wavelet coefficient space easily and perform the foreground subtraction. Compared with the traditional spectral fitting based method, our method is more tolerant to complex foregrounds. Furthermore, we also find that when the instrument has uncorrected response error, our method can also work significantly better than the spectral fitting based method. Our method can obtain similar results with the Wp smoothing method, which is also a non-parametric method, but our method consumes much less computing time.
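The paper uses a continuous wavelet transform; reconstructing from a CWT is involved, so the sketch below uses a discrete wavelet transform as a simplified analogue of the same idea: keep only the large-scale (smooth) part as the foreground estimate and subtract it. The band, power-law index and noise level are illustrative.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
freq = np.linspace(100.0, 200.0, 512)               # MHz, illustrative band
foreground = 1e4 * (freq / 150.0) ** -2.6            # smooth power-law foreground
signal = 5e-3 * rng.standard_normal(freq.size)       # small-scale, saw-tooth-like signal
observed = foreground + signal

# Discrete-wavelet analogue of the idea: keep only the coarse (smooth) approximation
# as the foreground estimate, and subtract it from the observed spectrum.
coeffs = pywt.wavedec(observed, 'db8', level=5)
smooth_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
foreground_est = pywt.waverec(smooth_only, 'db8')[:freq.size]

residual = observed - foreground_est
print("rms of recovered small-scale component:", residual.std())
```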
Study report on a double isotope method of calcium absorption
NASA Technical Reports Server (NTRS)
1978-01-01
Some of the pros and cons of three methods to study gastrointestinal calcium absorption are briefly discussed. The methods are: (1) a balance study; (2) a single isotope method; and (3) a double isotope method. A procedure for the double isotope method is also included.
2012-01-01
Background: A single-step blending approach allows genomic prediction using information of genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population.
Methods: The data consisted of de-regressed proofs (DRP) for 5 214 genotyped and 9 374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference of scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect.
Results: Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods.
Conclusions: The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
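Two ingredients of the adjusted single-step blending can be sketched directly: rescaling G so that its average diagonal and off-diagonal elements match the pedigree relationships of the genotyped animals, and mixing G with A22 using a relative weight w. Whether this particular rescaling matches the adjustment used in the study is an assumption; the matrices below are illustrative.

```python
import numpy as np

def adjust_G_to_A(G, A22):
    """Rescale G as a + b*G so its mean diagonal and mean off-diagonal match A22."""
    n = G.shape[0]
    off = ~np.eye(n, dtype=bool)
    b = (A22[off].mean() - A22.diagonal().mean()) / (G[off].mean() - G.diagonal().mean())
    a = A22.diagonal().mean() - b * G.diagonal().mean()
    return a + b * G

def blended_relationship(G, A22, w=0.20):
    """Combined matrix for genotyped animals: weighted mix of genomic and pedigree parts."""
    return (1.0 - w) * G + w * A22

# Illustrative 3x3 genomic (G) and pedigree (A22) relationship matrices.
G = np.array([[0.95, 0.10, 0.02], [0.10, 1.05, 0.15], [0.02, 0.15, 0.98]])
A22 = np.array([[1.00, 0.125, 0.0], [0.125, 1.00, 0.25], [0.0, 0.25, 1.00]])
Gw = blended_relationship(adjust_G_to_A(G, A22), A22, w=0.20)
print(np.round(Gw, 3))
```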
Hua, Yang; Kaplan, Shannon; Reshatoff, Michael; Hu, Ernie; Zukowski, Alexis; Schweis, Franz; Gin, Cristal; Maroni, Brett; Becker, Michael; Wisniewski, Michele
2012-01-01
The Roka Listeria Detection Assay was compared to the reference culture methods for nine select foods and three select surfaces. The Roka method used Half-Fraser Broth for enrichment at 35 +/- 2 degrees C for 24-28 h. Comparison of Roka's method to reference methods requires an unpaired approach. Each method had a total of 545 samples inoculated with a Listeria strain. Each food and surface was inoculated with a different strain of Listeria at two different levels per method. For the dairy products (Brie cheese, whole milk, and ice cream), our method was compared to AOAC Official Method(SM) 993.12. For the ready-to-eat meats (deli chicken, cured ham, chicken salad, and hot dogs) and environmental surfaces (sealed concrete, stainless steel, and plastic), these samples were compared to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG) method MLG 8.07. Cold-smoked salmon and romaine lettuce were compared to the U.S. Food and Drug Administration/Bacteriological Analytical Manual, Chapter 10 (FDA/BAM) method. Roka's method had 358 positives out of 545 total inoculated samples compared to 332 positive for the reference methods. Overall the probability of detection analysis of the results showed better or equivalent performance compared to the reference methods.
NASA Astrophysics Data System (ADS)
Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan
2015-10-01
The propagation simulation method and the choice of mesh grid are both very important for obtaining correct results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results, so an adaptive mesh choosing method based on wave characteristics is proposed together with the introduced propagation method. With it we can calculate appropriate mesh grids on the target board to obtain satisfying results, and for a complex initial wave field or propagation through inhomogeneous media, we can also calculate and set the mesh grid rationally according to the above method. Finally, comparison with theoretical results shows that the simulation results of the proposed method coincide with theory. Comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel number conditions; that is, it can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity. So it can provide better support for wave propagation applications such as atmospheric optics, laser propagation and so on.
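The alterable-mesh variant is not reproducible from the abstract alone, but the baseline it builds on, the standard angular spectrum propagator, is a short FFT exercise. A sketch with illustrative sampling parameters:

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a field u0 (N x N grid, pixel pitch dx) over a distance z."""
    n = u0.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz_sq = k ** 2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * z) * (kz_sq > 0)      # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Illustrative example: propagate a circular aperture illuminated by a plane wave.
n, dx, wavelength = 256, 10e-6, 633e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (X ** 2 + Y ** 2 < (0.4e-3) ** 2).astype(complex)
u_z = angular_spectrum_propagate(aperture, wavelength, dx, z=0.05)
print("on-axis intensity at z = 5 cm:", abs(u_z[n // 2, n // 2]) ** 2)
```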
Hanks, Andrew S; Wansink, Brian; Just, David R
2014-03-01
Measuring food waste is essential to determine the impact of school interventions on what children eat. There are multiple methods used for measuring food waste, yet it is unclear which method is most appropriate in large-scale interventions with restricted resources. This study examines which of three visual tray waste measurement methods is most reliable, accurate, and cost-effective compared with the gold standard of individually weighing leftovers. School cafeteria researchers used the following three visual methods to capture tray waste in addition to actual food waste weights for 197 lunch trays: the quarter-waste method, the half-waste method, and the photograph method. Inter-rater and inter-method reliability were highest for on-site visual methods (0.90 for the quarter-waste method and 0.83 for the half-waste method) and lowest for the photograph method (0.48). This low reliability is partially due to the inability of photographs to determine whether packaged items (such as milk or yogurt) are empty or full. In sum, the quarter-waste method was the most appropriate for calculating accurate amounts of tray waste, and the photograph method might be appropriate if researchers only wish to detect significant differences in waste or consumption of selected, unpackaged food. Copyright © 2014 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Karamon, Jacek; Ziomko, Irena; Cencek, Tomasz; Sroka, Jacek
2008-10-01
A modification of the flotation method for the examination of diarrhoeic piglet faeces for the detection of Isospora suis oocysts was developed. The method is based on removing the fat fraction from the faecal sample by centrifugation with a 25% Percoll solution. The investigations were carried out in comparison with the McMaster method. Of five variants of the Percoll flotation method, the best results were obtained when 2 ml of flotation liquid per 1 g of faeces were used. The limit of detection of the Percoll flotation method was 160 oocysts per 1 g, which was better than that of the McMaster method. The efficacy of the modified method was confirmed by the results obtained in the examination of I. suis-infected piglets. Across all faecal samples, the Percoll flotation method yielded twice as many positive samples as the routine method. Oocysts were first detected by the Percoll flotation method on day 4 post-invasion, i.e. one day earlier than with the McMaster method. During the experiment (except for 3 days), the extensity of I. suis invasion in the litter examined by the Percoll flotation method was higher than that determined with the McMaster method. The results show that the modified flotation method with the use of Percoll could be applied in the diagnostics of suckling piglet isosporosis.
Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S
2015-12-01
Hookworm infection accounts for around 700 million infections worldwide, especially in developing nations, due in part to increased use of wastewater for crop production. The effective recovery of hookworm ova from wastewater matrices is difficult due to their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods from municipal wastewater, and (ii) two concentration methods from sludge samples. Ancylostoma caninum ova were used as a surrogate for human hookworm (Ancylostoma duodenale and Necator americanus). Known concentrations of A. caninum ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples using centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D) methods. For sludge samples, flotation (Method E) and direct DNA extraction (Method F) methods were used. Among the four methods tested, the filtration method (Method C) consistently recovered higher concentrations of A. caninum ova from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rates ranging from 0.2 to 40% for treated and raw wastewater samples. The recovery rates for sludge samples were poor (0.02-4.7%), although Method F (direct DNA extraction) provided a 1-2 orders of magnitude higher recovery rate than Method E (flotation). Based on our results it can be concluded that the recovery rates of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable. Therefore, the choice of concentration method is vital for the sensitive detection of hookworm ova in wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
Achieving cost-neutrality with long-acting reversible contraceptive methods
Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna
2014-01-01
Objectives: This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration.
Study design: A three-state economic model was developed to estimate the relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20–29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were the annual average cost per method and the minimum duration of LARC method usage needed to achieve cost-savings compared to SARC methods.
Results: The two least expensive methods were the copper IUD ($304 per woman, per year) and LNG-IUS 20 mcg/24 h ($308). The cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage.
Conclusions: This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use.
Implications: Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. PMID:25282161
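The break-even logic can be sketched with a much simpler cumulative-cost comparison than the three-state model used in the study (no discontinuation states here, and all cost and failure-rate inputs below are invented for illustration, not the study's figures):

```python
import numpy as np

def cumulative_cost(upfront, annual_ongoing, failure_cost, annual_failure_rate, years):
    """Cumulative cost per woman: acquisition/insertion, ongoing costs and unintended pregnancies."""
    yearly = annual_ongoing + failure_cost * annual_failure_rate
    return upfront + yearly * np.arange(1, years + 1)

years = 5
# Purely illustrative inputs: LARC has a high upfront cost, low ongoing cost and a low
# failure rate; SARC is the reverse.
larc = cumulative_cost(upfront=900.0, annual_ongoing=20.0, failure_cost=2000.0,
                       annual_failure_rate=0.005, years=years)
sarc = cumulative_cost(upfront=0.0, annual_ongoing=500.0, failure_cost=2000.0,
                       annual_failure_rate=0.09, years=years)
break_even = int(np.argmax(larc <= sarc)) + 1 if np.any(larc <= sarc) else None
print("LARC becomes cost-saving in year:", break_even)
```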
Crawford, Charles G.; Martin, Jeffrey D.
2017-07-21
In October 2012, the U.S. Geological Survey (USGS) began measuring the concentration of the pesticide fipronil and three of its degradates (desulfinylfipronil, fipronil sulfide, and fipronil sulfone) by a new laboratory method using direct aqueous-injection liquid chromatography tandem mass spectrometry (DAI LC–MS/MS). This method replaced the previous method—in use since 2002—that used gas chromatography/mass spectrometry (GC/MS). The performance of the two methods is not comparable for fipronil and the three degradates. Concentrations of these four chemical compounds determined by the DAI LC–MS/MS method are substantially lower than the GC/MS method. A method was developed to correct for the difference in concentrations obtained by the two laboratory methods based on a methods comparison field study done in 2012. Environmental and field matrix spike samples to be analyzed by both methods from 48 stream sites from across the United States were sampled approximately three times each for this study. These data were used to develop a relation between the two laboratory methods for each compound using regression analysis. The relations were used to calibrate data obtained by the older method to the new method in order to remove any biases attributable to differences in the methods. The coefficients of the equations obtained from the regressions were used to calibrate over 16,600 observations of fipronil, as well as the three degradates determined by the GC/MS method retrieved from the USGS National Water Information System. The calibrated values were then compared to over 7,800 observations of fipronil and to the three degradates determined by the DAI LC–MS/MS method also retrieved from the National Water Information System. The original and calibrated values from the GC/MS method, along with measures of uncertainty in the calibrated values and the original values from the DAI LC–MS/MS method, are provided in an accompanying data release.
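The calibration step is an ordinary regression of one method's concentrations on the other's, followed by applying the fitted relation to the historical results. A sketch with invented paired concentrations (the USGS analysis may use a different functional form, e.g., fitting on a log scale):

```python
import numpy as np
from scipy.stats import linregress

# Illustrative paired concentrations (ng/L) from a field comparison study:
# the same samples analyzed by the old GC/MS method and the new DAI LC-MS/MS method.
gcms = np.array([12.0, 25.0, 40.0, 55.0, 80.0, 110.0])
lcms = np.array([7.5, 16.0, 27.0, 36.0, 54.0, 75.0])

fit = linregress(gcms, lcms)                 # relation between the two laboratory methods
historical_gcms = np.array([20.0, 60.0, 95.0])
calibrated = fit.intercept + fit.slope * historical_gcms   # put old results on the new scale
print("slope, intercept, r^2:", round(fit.slope, 3), round(fit.intercept, 3), round(fit.rvalue ** 2, 3))
print("calibrated concentrations:", np.round(calibrated, 1))
```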
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-14
... Office 37 CFR Part 42 Transitional Program for Covered Business Method Patents--Definitions of Covered Business Method Patent and Technological Invention; Final Rule. Federal Register / Vol. 77, No. 157... Business Method Patents--Definitions of Covered Business Method Patent and Technological Invention AGENCY...
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...
A Review of Methods for Missing Data.
ERIC Educational Resources Information Center
Pigott, Therese D.
2001-01-01
Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…
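As a concrete instance of the model-based approach mentioned in the review, the sketch below uses scikit-learn's chained-equations imputer; a full multiple-imputation analysis would repeat this with several random seeds and pool the resulting estimates. The data are simulated.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0, 0, 0], [[1, .6, .3], [.6, 1, .5], [.3, .5, 1]], size=200)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.15] = np.nan      # 15% of values missing at random

# Model-based handling: one completed data set per call; different random_state values
# give a simple multiple-imputation scheme.
imputed = IterativeImputer(random_state=0).fit_transform(X_missing)
print("column means after imputation:", np.round(imputed.mean(axis=0), 3))
```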
ERIC Educational Resources Information Center
Kitis, Emine; Türkel, Ali
2017-01-01
The aim of this study is to find out Turkish pre-service teachers' views on effectiveness of cluster method as a writing teaching method. The Cluster Method can be defined as a connotative creative writing method. The way the method works is that the person who brainstorms on connotations of a word or a concept in abscence of any kind of…
Assay of fluoxetine hydrochloride by titrimetric and HPLC methods.
Bueno, F; Bergold, A M; Fröehlich, P E
2000-01-01
Two alternative methods were proposed to assay Fluoxetine Hydrochloride: a titrimetric method and another by HPLC using as mobile phase water pH 3.5: acetonitrile (65:35). These methods were applied to the determination of Fluoxetine as such or in formulations (capsules). The titrimetric method is an alternative for pharmacies and small industries. Both methods showed accuracy and precision and are an alternative to the official methods.
1970-01-01
design and experimentation. I. The Shock-Tube Method. Smiley [546] introduced the use of shock waves...one of the greatest disadvantages of this technique. Both the unique adaptability of the shock-tube method for high-temperature measurement of...Line-Source Flow Method, H. The Hot-Wire Thermal Diffusion Column Method, I. The Shock-Tube Method, J. The Arc Method, K. The Ultrasonic Method.
NASA Technical Reports Server (NTRS)
Banyukevich, A.; Ziolkovski, K.
1975-01-01
A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
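The Nordsieck, Bulirsch-Stoer and Taylor-Steffensen methods discussed in the report are not available in SciPy, but the selection criterion, minimizing computer time at a given accuracy, can be illustrated by timing standard solvers on a simple Cauchy problem:

```python
import time
import numpy as np
from scipy.integrate import solve_ivp

# Cauchy problem: y' = -2ty, y(0) = 1, with exact solution exp(-t^2).
f = lambda t, y: -2.0 * t * y
exact = lambda t: np.exp(-t ** 2)

for method in ("RK45", "DOP853", "LSODA"):     # different single- and multi-step solvers
    t0 = time.perf_counter()
    sol = solve_ivp(f, (0.0, 4.0), [1.0], method=method, rtol=1e-9, atol=1e-12)
    err = abs(sol.y[0, -1] - exact(4.0))
    print(f"{method:7s} steps={sol.t.size:4d} error={err:.2e} time={time.perf_counter() - t0:.4f}s")
```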
Comparison of measurement methods for capacitive tactile sensors and their implementation
NASA Astrophysics Data System (ADS)
Tarapata, Grzegorz; Sienkiewicz, Rafał
2015-09-01
This paper presents a review of the ideas and implementations of measurement methods used for capacitance measurement in tactile sensors. The paper describes the technical method, the charge amplification method, and the generation method as well as the integration method. Three selected methods were implemented in a dedicated measurement system and used for capacitance measurements of tactile sensors fabricated in-house. The tactile sensors tested in this work were fully fabricated with inkjet printing technology. The test results are presented and summarised. The charge amplification (CDC) method was selected as the best method for measurement of the tactile sensors.
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1980-01-01
New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
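A minimal sketch of the spatial half of such a scheme: a Fourier pseudospectral derivative for the periodic advection equation, advanced here with a classical Runge-Kutta step (the unconditionally stable integrators of the paper are not reproduced):

```python
import numpy as np

# Fourier (pseudo)spectral method in space + classical Runge-Kutta in time
# for the periodic advection equation u_t + c u_x = 0.
n, c, dt, steps = 128, 1.0, 1e-3, 2000
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi * 1j   # spectral d/dx operator

def rhs(u):
    return -c * np.real(np.fft.ifft(k * np.fft.fft(u)))

u = np.exp(np.sin(x))                                      # smooth periodic initial data
for _ in range(steps):
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

exact = np.exp(np.sin(x - c * dt * steps))
print("max error at t =", dt * steps, ":", np.abs(u - exact).max())
```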
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2005-01-01
Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results compared with the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
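Of the surrogates compared, the RBF one is easy to sketch with SciPy; below it is contrasted with a global quadratic least-squares fit for derivative generation on an illustrative two-variable response (Kriging and MLS are not shown):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Illustrative response: f(x1, x2) = sin(x1) + x2^2, sampled at scattered points.
rng = np.random.default_rng(4)
pts = rng.uniform(-2, 2, size=(60, 2))
vals = np.sin(pts[:, 0]) + pts[:, 1] ** 2

rbf = RBFInterpolator(pts, vals, kernel='thin_plate_spline')     # RBF response surface
gls = np.linalg.lstsq(np.c_[np.ones(60), pts, pts ** 2], vals, rcond=None)[0]  # global quadratic fit

# Derivative generation at a test point: central difference on the RBF surrogate,
# analytic derivative of the quadratic GLS fit.
x0, h = np.array([0.5, 0.5]), 1e-3
d_rbf = (rbf([[x0[0] + h, x0[1]]]) - rbf([[x0[0] - h, x0[1]]])) / (2 * h)
d_gls = gls[1] + 2 * gls[3] * x0[0]
print("df/dx1 exact, RBF, quadratic GLS:", np.cos(0.5), float(d_rbf[0]), d_gls)
```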
NASA Astrophysics Data System (ADS)
Magdy, Nancy; Ayad, Miriam F.
2015-02-01
Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed dose combination products without prior separation. The first method depends on a new chemometrics-assisted ratio spectra derivative method using moving window polynomial least square fitting method (Savitzky-Golay filters). The second method is based on a simple modification for the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.
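The ratio spectra derivative step can be sketched directly: divide the mixture spectrum by the spectrum of one component, then apply a Savitzky-Golay (moving-window polynomial) first derivative so that the divisor's constant contribution vanishes. The bands and concentrations below are invented, not the Simvastatin/Ezetimibe data:

```python
import numpy as np
from scipy.signal import savgol_filter

# Illustrative two-component mixture: absorbance bands of drug A and drug B (arbitrary units).
wl = np.linspace(200, 320, 600)
band = lambda center, width: np.exp(-((wl - center) / width) ** 2)
spec_A, spec_B = band(238, 12), band(265, 15)
mixture = 0.8 * spec_A + 1.5 * spec_B

# Ratio spectra derivative: divide by the divisor spectrum where it is non-negligible,
# then take a Savitzky-Golay first derivative.
mask = spec_B > 0.05
ratio = mixture[mask] / spec_B[mask]
ratio_deriv = savgol_filter(ratio, window_length=21, polyorder=3, deriv=1)

# The contribution of B becomes a constant in the ratio, so it vanishes in the derivative
# and the derivative amplitude tracks the concentration of A only.
print("max |derivative of ratio spectrum|:", round(float(np.abs(ratio_deriv).max()), 4))
```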
Application of LC/MS/MS Techniques to Development of US ...
This presentation will describe the U.S. EPA’s drinking water and ambient water method development program in relation to the process employed and the typical challenges encountered in developing standardized LC/MS/MS methods for chemicals of emerging concern. The EPA’s Drinking Water Contaminant Candidate List and Unregulated Contaminant Monitoring Regulations, which are the driving forces behind drinking water method development, will be introduced. Three drinking water LC/MS/MS methods (Methods 537, 544 and a new method for nonylphenol) and two ambient water LC/MS/MS methods for cyanotoxins will be described that highlight some of the challenges encountered during development of these methods. This presentation will provide the audience with basic understanding of EPA's drinking water method development program and an introduction to two new ambient water EPA methods.
Helmersson-Karlqvist, Johanna; Flodin, Mats; Havelka, Aleksandra Mandic; Xu, Xiao Yan; Larsson, Anders
2016-09-01
Serum/plasma albumin is an important and widely used laboratory marker and it is important that we measure albumin correctly without bias. We had indications that the immunoturbidimetric method on Cobas c 501 and the bromocresol purple (BCP) method on Architect 16000 differed, so we decided to study these methods more closely. A total of 1,951 patient requests with albumin measured with both the Architect BCP and Cobas immunoturbidimetric methods were extracted from the laboratory system. A comparison with fresh plasma samples was also performed that included immunoturbidimetric and BCP methods on Cobas c 501 and analysis of the international protein calibrator ERM-DA470k/IFCC. The median difference between the Abbott BCP and Roche immunoturbidimetric methods was 3.3 g/l and the Roche method overestimated ERM-DA470k/IFCC by 2.2 g/l. The Roche immunoturbidimetric method gave higher values than the Roche BCP method: y = 1.111x - 0.739, R² = 0.971. The Roche immunoturbidimetric albumin method gives clearly higher values than the Abbott and Roche BCP methods when analyzing fresh patient samples. The differences between the two methods were similar at normal and low albumin levels. © 2016 Wiley Periodicals, Inc.
Manual tracing versus smartphone application (app) tracing: a comparative study.
Sayar, Gülşilay; Kilinc, Delal Dara
2017-11-01
This study aimed to compare the results of conventional manual cephalometric tracing with those acquired with smartphone application cephalometric tracing. The cephalometric radiographs of 55 patients (25 females and 30 males) were traced via the manual and app methods and were subsequently examined with Steiner's analysis. Five skeletal measurements, five dental measurements and two soft tissue measurements were managed based on 21 landmarks. The durations of the performances of the two methods were also compared. SNA (Sella, Nasion, A point angle) and SNB (Sella, Nasion, B point angle) values for the manual method were statistically lower (p < .001) than those for the app method. The ANB value for the manual method was statistically lower than that of app method. L1-NB (°) and upper lip protrusion values for the manual method were statistically higher than those for the app method. Go-GN/SN, U1-NA (°) and U1-NA (mm) values for manual method were statistically lower than those for the app method. No differences between the two methods were found in the L1-NB (mm), occlusal plane to SN, interincisal angle or lower lip protrusion values. Although statistically significant differences were found between the two methods, the cephalometric tracing proceeded faster with the app method than with the manual method.
Contraceptive Method Choice Among Young Adults: Influence of Individual and Relationship Factors.
Harvey, S Marie; Oakley, Lisa P; Washburn, Isaac; Agnew, Christopher R
2018-01-26
Because decisions related to contraceptive behavior are often made by young adults in the context of specific relationships, the relational context likely influences use of contraceptives. Data presented here are from in-person structured interviews with 536 Black, Hispanic, and White young adults from East Los Angeles, California. We collected partner-specific relational and contraceptive data on all sexual partnerships for each individual, on four occasions, over one year. Using three-level multinomial logistic regression models, we examined individual and relationship factors predictive of contraceptive use. Results indicated that both individual and relationship factors predicted contraceptive use, but factors varied by method. Participants reporting greater perceived partner exclusivity and relationship commitment were more likely to use hormonal/long-acting methods only or a less effective method/no method versus condoms only. Those with greater participation in sexual decision making were more likely to use any method over a less effective method/no method and were more likely to use condoms only or dual methods versus a hormonal/long-acting method only. In addition, for women only, those who reported greater relationship commitment were more likely to use hormonal/long-acting methods or a less effective method/no method versus a dual method. In summary, interactive relationship qualities and dynamics (commitment and sexual decision making) significantly predicted contraceptive use.
Nishiyama, Yayoi; Abe, Michiko; Ikeda, Reiko; Uno, Jun; Oguri, Toyoko; Shibuya, Kazutoshi; Maesaki, Shigefumi; Mohri, Shinobu; Yamada, Tsuyoshi; Ishibashi, Hiroko; Hasumi, Yayoi; Abe, Shigeru
2010-01-01
The Japanese Society for Medical Mycology (JSMM) method used for testing the antifungal susceptibility of yeast currently sets the MIC end point for azole antifungal agents at IC(80). It was recently shown, however, that there is an inconsistency in MIC values between the JSMM method and the CLSI M27-A2 (CLSI) method, in which the end point is read as IC(50). To resolve this discrepancy and reassess the JSMM method, the MICs of three azoles (fluconazole, itraconazole and voriconazole) were compared for 5 strains of each of the following Candida species: C. albicans, C. glabrata, C. tropicalis, C. parapsilosis and C. krusei, for a total of 25 comparisons, using the JSMM method, a modified JSMM method, and the CLSI method. The results showed that when the MIC end point criterion of the JSMM method was changed from IC(80) to IC(50) (the modified JSMM method), the MIC values were consistent and compatible with the CLSI method. Finally, it should be emphasized that the JSMM method, using a spectrophotometer for MIC measurement, was superior in both stability and reproducibility compared to the CLSI method, in which growth is assessed by visual observation.
Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints
NASA Technical Reports Server (NTRS)
Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale
1997-01-01
The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice
2017-02-24
Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free-online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time-periods (1990-2000, 2001-2008), using Chi Square tests. Kappa statistics were performed to assess agreement of positional accuracy of both methods A and B with the reference method, overall, by time-periods and by urban/rural status of addresses. Respectively 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%) with methods A and B. In the reference layer, geocoding accuracy was higher in urban areas compared to rural areas (74.4% vs. 10.5% addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for method A and method B respectively. Positional agreement of method A and method B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61 for methods A and B, respectively. Our study demonstrates the feasibility of geocoding residential addresses in epidemiological studies not initially recorded for environmental exposure assessment, for both recent addresses and residence locations more than 20 years ago. Accuracy of the two automatic geocoding methods was comparable. The in-house method (B) allowed a better control of the geocoding process and was less time consuming.
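The two quantities reported, positional error in metres and categorical agreement (kappa), are straightforward to compute once paired coordinates and accuracy levels are available. A sketch with invented coordinates and categories (haversine distance plus scikit-learn's kappa):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Illustrative coordinates: automatic geocoder vs manually relocated reference.
auto = np.array([[45.7640, 4.8357], [45.7660, 4.8300], [45.1885, 5.7245]])
ref = np.array([[45.7641, 4.8359], [45.7655, 4.8315], [45.1885, 5.7245]])
errors = haversine_m(auto[:, 0], auto[:, 1], ref[:, 0], ref[:, 1])
print("positional errors (m):", np.round(errors, 1))

# Agreement of positional-accuracy categories (e.g. exact address / street / municipality).
auto_level = ["address", "street", "address"]
ref_level = ["address", "address", "address"]
print("kappa:", round(cohen_kappa_score(auto_level, ref_level), 2))
```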
Comparison of reproducibility of natural head position using two methods.
Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik
2012-01-01
Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility and variation of natural head position obtained by two methods, i.e. the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months.
Inclusion criteria:
• Subjects randomly selected, aged between 18 and 26 years.
Exclusion criteria:
• History of orthodontic treatment
• Any history of respiratory tract problems or chronic mouth breathing
• Any congenital deformity
• History of traumatically induced deformity
• History of myofascial pain syndrome
• Any previous history of head and neck surgery.
The results showed that both methods for obtaining natural head position were comparable, without any significant difference, but reproducibility was greater with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and variance was smaller with the fluid level device method, as shown by precision and Pearson correlation. In conclusion, the fluid level device method was more reproducible and showed less variance than the mirror method for obtaining natural head position.
Visschers, Naomi C A; Hulzebos, Erik H; van Brussel, Marco; Takken, Tim
2015-11-01
The ventilatory anaerobic threshold (VAT) is an important method to assess the aerobic fitness in patients with cardiopulmonary disease. Several methods exist to determine the VAT; however, there is no consensus which of these methods is the most accurate. To compare four different non-invasive methods for the determination of the VAT via respiratory gas exchange analysis during a cardiopulmonary exercise test (CPET). A secondary objective is to determine the interobserver reliability of the VAT. CPET data of 30 children diagnosed with either cystic fibrosis (CF; N = 15) or with a surgically corrected dextro-transposition of the great arteries (asoTGA; N = 15) were included. No significant differences were found between conditions or among testers. The RER = 1 method differed the most compared to the other methods, showing significant higher results in all six variables. The PET-O2 method differed significantly on five of six and four of six exercise variables with the V-slope method and the VentEq method, respectively. The V-slope and the VentEq method differed significantly on one of six exercise variables. Ten of thirteen ICCs that were >0.80 had a 95% CI > 0.70. The RER = 1 method and the V-slope method had the highest number of significant ICCs and 95% CIs. The V-slope method, the ventilatory equivalent method and the PET-O2 method are comparable and reliable methods to determine the VAT during CPET in children with CF or asoTGA. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Gao, Huilin; Dong, Lihu; Li, Fengri; Zhang, Lianjun
2015-01-01
A total of 89 trees of Korean pine (Pinus koraiensis) were destructively sampled from the plantations in Heilongjiang Province, P.R. China. The sample trees were measured and calculated for the biomass and carbon stocks of tree components (i.e., stem, branch, foliage and root). Both compatible biomass and carbon stock models were developed with the total biomass and total carbon stocks as the constraints, respectively. Four methods were used to evaluate the carbon stocks of tree components. The first method predicted carbon stocks directly by the compatible carbon stocks models (Method 1). The other three methods indirectly predicted the carbon stocks in two steps: (1) estimating the biomass by the compatible biomass models, and (2) multiplying the estimated biomass by three different carbon conversion factors (i.e., carbon conversion factor 0.5 (Method 2), average carbon concentration of the sample trees (Method 3), and average carbon concentration of each tree component (Method 4)). The prediction errors of estimating the carbon stocks were compared and tested for the differences between the four methods. The results showed that the compatible biomass and carbon models with tree diameter (D) as the sole independent variable performed well so that Method 1 was the best method for predicting the carbon stocks of tree components and total. There were significant differences among the four methods for the carbon stock of stem. Method 2 produced the largest error, especially for stem and total. Methods 3 and Method 4 were slightly worse than Method 1, but the differences were not statistically significant. In practice, the indirect method using the mean carbon concentration of individual trees was sufficient to obtain accurate carbon stocks estimation if carbon stocks models are not available. PMID:26659257
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
The aim was to develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method thus overcomes the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
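The combination described, damped LSQR (Tikhonov regularization via Lanczos bidiagonalization) with a simplex search over the regularization parameter, can be sketched as follows. The Jacobian, the data and the hold-out selection criterion are illustrative stand-ins for the actual DOT objective:

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 80))              # stand-in for the DOT Jacobian
x_true = np.zeros(80); x_true[20:30] = 1.0
b = A @ x_true + 0.05 * rng.standard_normal(200)

def objective(log_lambda):
    """Criterion minimized over the regularization parameter (here: residual on held-out rows)."""
    lam = 10.0 ** log_lambda[0]
    x = lsqr(A[:150], b[:150], damp=lam)[0]     # damped LSQR via Lanczos bidiagonalization
    return np.linalg.norm(A[150:] @ x - b[150:])

# Simplex (Nelder-Mead) search over log10(lambda), as an automated parameter choice.
res = minimize(objective, x0=[0.0], method="Nelder-Mead")
print("selected regularization parameter:", 10.0 ** res.x[0])
```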
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus the deviation of the estimated θ̂_s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.
Qualitative versus quantitative methods in psychiatric research.
Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S
2012-01-01
Qualitative studies are gaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of occurring events using quantitative methods; however, qualitative methods provide a broader understanding and a more thorough reasoning behind the event. Hence, they are considered to be of special importance in psychiatry. Besides hypothesis generation in the earlier phases of research, qualitative methods can be employed in questionnaire design, establishment of diagnostic criteria, feasibility studies, as well as studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own view, they need to be statistically confirmed using quantitative methods. The tendency to combine qualitative and quantitative methods as complementary approaches has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.
ERIC Educational Resources Information Center
Vir, Dharm
1971-01-01
A survey of teaching methods for farm guidance workers in India, outlining some approaches developed by and used in other nations. Discusses mass educational methods, group educational methods, and the local leadership method. (JB)
Bishop, Felicity L
2015-02-01
To outline some of the challenges of mixed methods research and illustrate how they can be addressed in health psychology research. This study critically reflects on the author's previously published mixed methods research and discusses the philosophical and technical challenges of mixed methods, grounding the discussion in a brief review of the methodological literature. Mixed methods research is characterized as having philosophical and technical challenges; the former can be addressed by drawing on pragmatism, the latter by considering formal mixed methods research designs proposed in a number of design typologies. There are important differences among the design typologies, which provide diverse examples of designs that health psychologists can adapt for their own mixed methods research. There are also similarities; in particular, many typologies explicitly orient to the technical challenges of deciding on the respective timing of qualitative and quantitative methods and the relative emphasis placed on each method. Characteristics, strengths, and limitations of different sequential and concurrent designs are identified by reviewing five mixed methods projects, each conducted for a different purpose. Adapting formal mixed methods designs can help health psychologists address the technical challenges of mixed methods research and identify the approach that best fits the research questions and purpose. This does not obfuscate the need to address the philosophical challenges of mixing qualitative and quantitative methods. Statement of contribution: What is already known on this subject? Mixed methods research poses philosophical and technical challenges. Pragmatism is a popular approach to the philosophical challenges, while diverse typologies of mixed methods designs can help address the technical challenges. Examples of mixed methods research can be hard to locate when component studies from mixed methods projects are published separately. What does this study add? Critical reflections on the author's previously published mixed methods research illustrate how a range of different mixed methods designs can be adapted and applied to address health psychology research questions. The philosophical and technical challenges of mixed methods research should be considered together and in relation to the broader purpose of the research. © 2014 The British Psychological Society.
O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon
2007-06-14
Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study--often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods--particularly surveys and individual interviews--but used methods in a wide range of roles. Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations.
New hybrid conjugate gradient methods with the generalized Wolfe line search.
Xu, Xiao; Kong, Fan-Yu
2016-01-01
The conjugate gradient method is an efficient technique for solving unconstrained optimization problems. In this paper, we form a linear combination, with parameter β_k, of the DY method and the HS method, and put forward a hybrid DY-HS method. We also propose a hybrid of FR and PRP by the same means. Additionally, to implement the two hybrid methods, we generalize the Wolfe line search to compute the step size α_k of each. With the new Wolfe line search, the two hybrid methods have the descent property, and their global convergence can also be proved.
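As an illustration of the construction described above, the following sketch (not the authors' exact algorithm) combines the DY and HS update parameters with a convex weight theta and uses SciPy's strong-Wolfe line search as a stand-in for the paper's generalized Wolfe line search; the weighting form, the value of theta, and the restart safeguards are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import line_search

def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-6, max_iter=500):
    """Hybrid DY/HS conjugate gradient: beta_k = (1-theta)*beta_DY + theta*beta_HS.
    A strong-Wolfe line search stands in for the generalized Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, *_ = line_search(f, grad, x, d, gfk=g)
        if alpha is None:            # line search failed; restart with a small steepest-descent step
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y
        if abs(denom) < 1e-12:       # guard against division by zero
            beta = 0.0
        else:
            beta_dy = (g_new @ g_new) / denom
            beta_hs = (g_new @ y) / denom
            beta = (1.0 - theta) * beta_dy + theta * beta_hs
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(hybrid_cg(f, grad, [-1.2, 1.0]))
```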
Research on the calibration methods of the luminance parameter of radiation luminance meters
NASA Astrophysics Data System (ADS)
Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei
2017-10-01
This paper introduces the standard diffuse-reflection white plate method and the integrating sphere standard luminance source method for calibrating the luminance parameter. The paper compares the calibration results of these two methods through principle analysis and experimental verification. After the same radiation luminance meter is calibrated with both methods, the data obtained verify that the test results of the two methods are both reliable. The results show that the displayed value obtained with the standard white plate method has smaller errors and better reproducibility. However, the standard luminance source method is more convenient and suitable for on-site calibration. Moreover, the standard luminance source method has a wider range and can test the linear performance of the instruments.
Køppe, Simo; Dammeyer, Jesper
2014-09-01
The evolution of developmental psychology has been characterized by the use of different quantitative and qualitative methods and procedures. But how does the use of methods and procedures change over time? This study explores the change and development of statistical methods used in articles published in Child Development from 1930 to 2010. The methods used in every article in the first issue of every volume were categorized into four categories. Until 1980, relatively simple statistical methods were used. During the last 30 years, there has been an explosive increase in the use of more advanced statistical methods, and articles using no statistical methods or only simple ones have all but disappeared.
Social network extraction based on Web: 1. Related superficial methods
NASA Astrophysics Data System (ADS)
Khairuddin Matyuso Nasution, Mahyuddin
2018-01-01
Often the nature of something affects the methods used to resolve issues related to it. The same holds for methods of extracting social networks from the Web, which involve differently structured data types. This paper reviews several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and related superficial methods. In terms of complexity, we derive inequalities between the methods and their computations. In this case, we find that different results from the same tools order the approaches from more complex to simpler: extraction of a social network involving co-occurrence is more complex than extraction using occurrences alone.
Meinertz, J.R.; Stehly, G.R.; Gingerich, W.H.; Greseth, Shari L.
2001-01-01
Chloramine-T is an effective drug for controlling fish mortality caused by bacterial gill disease. As part of the data required for approval of chloramine-T use in aquaculture, depletion of the chloramine-T marker residue (para-toluenesulfonamide; p-TSA) from edible fillet tissue of fish must be characterized. Declaration of p-TSA as the marker residue for chloramine-T in rainbow trout was based on total residue depletion studies using a method that used time consuming and cumbersome techniques. A simple and robust method recently developed is being proposed as a determinative method for p-TSA in fish fillet tissue. The proposed determinative method was evaluated by comparing accuracy and precision data with U.S. Food and Drug Administration criteria and by bridging the method to the former method for chloramine-T residues. The method accuracy and precision fulfilled the criteria for determinative methods; accuracy was 92.6, 93.4, and 94.6% with samples fortified at 0.5X, 1X, and 2X the expected 1000 ng/g tolerance limit for p-TSA, respectively. Method precision with tissue containing incurred p-TSA at a nominal concentration of 1000 ng/g ranged from 0.80 to 8.4%. The proposed determinative method was successfully bridged with the former method. The concentrations of p-TSA developed with the proposed method were not statistically different at p < 0.05 from p-TSA concentrations developed with the former method.
Standard setting: comparison of two methods.
George, Sanju; Haque, M Sayeed; Oyebode, Femi
2006-09-14
The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
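The two cut-score rules compared in the study can be expressed in a few lines. The sketch below uses hypothetical scores and Angoff ratings purely for illustration; the function names are introduced here and are not from the paper.

```python
import numpy as np

def norm_reference_cut(scores):
    """Norm-reference standard: cut score = mean minus one standard deviation of raw scores."""
    scores = np.asarray(scores, dtype=float)
    return scores.mean() - scores.std(ddof=1)

def angoff_cut(ratings):
    """Modified Angoff: ratings[r][i] is rater r's estimated probability that a borderline
    candidate answers item i correctly; the cut is the mean over raters of per-rater sums."""
    ratings = np.asarray(ratings, dtype=float)
    return ratings.sum(axis=1).mean()

# Hypothetical illustration: 78 candidates, a 50-item MCQ, 8 Angoff raters
rng = np.random.default_rng(0)
scores = rng.normal(32, 5, size=78)
ratings = rng.uniform(0.3, 0.8, size=(8, 50))
cut_nr, cut_ang = norm_reference_cut(scores), angoff_cut(ratings)
print(f"norm-reference cut {cut_nr:.1f}, Angoff cut {cut_ang:.1f}")
print("pass rate under norm-reference cut:", float(np.mean(scores >= cut_nr)))
```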
Women's Contraceptive Preference-Use Mismatch
He, Katherine; Dalton, Vanessa K.; Zochowski, Melissa K.
2017-01-01
Abstract Background: Family planning research has not adequately addressed women's preferences for different contraceptive methods and whether women's contraceptive experiences match their preferences. Methods: Data were drawn from the Women's Healthcare Experiences and Preferences Study, an Internet survey of 1,078 women aged 18–55 randomly sampled from a national probability panel. Survey items assessed women's preferences for contraceptive methods, match between methods preferred and used, and perceived reasons for mismatch. We estimated predictors of contraceptive preference with multinomial logistic regression models. Results: Among women at risk for pregnancy who responded with their preferred method (n = 363), hormonal methods (non-LARC [long-acting reversible contraception]) were the most preferred method (34%), followed by no method (23%) and LARC (18%). Sociodemographic differences in contraception method preferences were noted (p-values <0.05), generally with minority, married, and older women having higher rates of preferring less effective methods, compared to their counterparts. Thirty-six percent of women reported preference-use mismatch, with the majority preferring more effective methods than those they were using. Rates of match between preferred and usual methods were highest for LARC (76%), hormonal (non-LARC) (65%), and no method (65%). The most common reasons for mismatch were cost/insurance (41%), lack of perceived/actual need (34%), and method-specific preference concerns (19%). Conclusion: While preference for effective contraception was common among this sample of women, we found substantial mismatch between preferred and usual methods, notably among women of lower socioeconomic status and women using less effective methods. Findings may have implications for patient-centered contraceptive interventions. PMID:27710196
Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene
2013-03-25
This paper describes a comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections. The sensitivity, specificity, and some other parameters of the following adaptive threshold segmentation methods are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method, and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized from counterpart experimentally captured images. The paper demonstrates the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. A careful analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution, and (3) the 'brown component' extracted from RGB allows pairs of method and image type to be selected for which the method is most efficient under various criteria, e.g. accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the results of the White, Bernsen, and Sauvola methods are better than those of the remaining methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942, and 0.9944, respectively, when treated totally. However, the best results are achieved for the monochromatic image whose intensity shows the brown colour map constructed by the colour deconvolution algorithm. The specificity of the Bernsen and White methods is 1, with sensitivities of 0.74 for White and 0.91 for Bernsen, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017.
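Two of the adaptive threshold methods compared in the study, Niblack and Sauvola, are available in scikit-image, and a brown (DAB) map can be obtained by colour deconvolution with rgb2hed. The following sketch uses a stock immunohistochemistry image rather than the follicular lymphoma slides and is only meant to illustrate the local-threshold idea; the window size and k values are arbitrary choices.

```python
import numpy as np
from skimage import data, color
from skimage.filters import threshold_niblack, threshold_sauvola

ihc = data.immunohistochemistry()          # stock IHC image as a stand-in
dab = color.rgb2hed(ihc)[..., 2]           # DAB ("brown") channel via colour deconvolution

t_nib = threshold_niblack(dab, window_size=25, k=0.2)
t_sau = threshold_sauvola(dab, window_size=25, k=0.2)

# Strongly stained (brown) pixels lie above the local threshold in the DAB channel.
mask_nib = dab > t_nib
mask_sau = dab > t_sau
print("Niblack foreground fraction:", float(mask_nib.mean()))
print("Sauvola foreground fraction:", float(mask_sau.mean()))
```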
Numerical Grid Generation and Potential Airfoil Analysis and Design
1988-01-01
Gauss-Seidel, SOR and ADI iterative methods. JACOBI METHOD: In the Jacobi method each new value of a function is computed entirely from old values... preceding iteration and adding the inhomogeneous (boundary condition) term. GAUSS-SEIDEL METHOD: When we compute I in a Jacobi method, we have already... Gauss-Seidel method. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. SUCCESSIVE OVER-RELAXATION (SOR)...
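For reference, a minimal sketch of the Jacobi and SOR iterations described in the excerpt (Gauss-Seidel being SOR with omega = 1), applied to a small diagonally dominant system; it is not taken from the report itself.

```python
import numpy as np

def jacobi(A, b, x0, iters=100):
    """Jacobi iteration: each new value is computed entirely from the previous iterate."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = x0.copy()
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def sor(A, b, x0, omega=1.0, iters=100):
    """Gauss-Seidel (omega=1) / SOR: new values are used as soon as they are computed."""
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# Diagonally dominant test system (a sufficient condition for Gauss-Seidel convergence)
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([2., 4., 10.])
x0 = np.zeros(3)
print(jacobi(A, b, x0), sor(A, b, x0, omega=1.0), sor(A, b, x0, omega=1.2))
```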
Evaluation of intrinsic respiratory signal determination methods for 4D CBCT adapted for mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Rachael; Pan, Tinsu, E-mail: tpan@mdanderson.org; Rubinstein, Ashley
Purpose: 4D CT imaging in mice is important in a variety of areas including studies of lung function and tumor motion. A necessary step in 4D imaging is obtaining a respiratory signal, which can be done through an external system or intrinsically through the projection images. A number of methods have been developed that can successfully determine the respiratory signal from cone-beam projection images of humans; however, only a few have been utilized in a preclinical setting and most of these rely on step-and-shoot style imaging. The purpose of this work is to assess and make adaptations of several successful methods developed for humans for an image-guided preclinical radiation therapy system. Methods: Respiratory signals were determined from the projection images of free-breathing mice scanned on the X-RAD system using four methods: the so-called Amsterdam shroud method, a method based on the phase of the Fourier transform, a pixel intensity method, and a center of mass method. The Amsterdam shroud method was modified so the sharp inspiration peaks associated with anesthetized mouse breathing could be detected. Respiratory signals were used to sort projections into phase bins and 4D images were reconstructed. Error and standard deviation in the assignment of phase bins for the four methods compared to a manual method considered to be ground truth were calculated for a range of region of interest (ROI) sizes. Qualitative comparisons were additionally made between the 4D images obtained using each of the methods and the manual method. Results: 4D images were successfully created for all mice with each of the respiratory signal extraction methods. Only minimal qualitative differences were noted between each of the methods and the manual method. The average error (and standard deviation) in phase bin assignment was 0.24 ± 0.08 (0.49 ± 0.11) phase bins for the Fourier transform method, 0.09 ± 0.03 (0.31 ± 0.08) phase bins for the modified Amsterdam shroud method, 0.09 ± 0.02 (0.33 ± 0.07) phase bins for the intensity method, and 0.37 ± 0.10 (0.57 ± 0.08) phase bins for the center of mass method. Little dependence on ROI size was noted for the modified Amsterdam shroud and intensity methods while the Fourier transform and center of mass methods showed a noticeable dependence on the ROI size. Conclusions: The modified Amsterdam shroud, Fourier transform, and intensity respiratory signal methods are sufficiently accurate to be used for 4D imaging on the X-RAD system and show improvement over the existing center of mass method. The intensity and modified Amsterdam shroud methods are recommended due to their high accuracy and low dependence on ROI size.
26 CFR 1.167(b)-2 - Declining balance method.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Declining balance method. 1.167(b)-2 Section 1... Declining balance method. (a) Application of method. Under the declining balance method a uniform rate is.... While salvage is not taken into account in determining the annual allowances under this method, in no...
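As a numerical illustration of the rule quoted above, the sketch below applies a uniform rate to the unrecovered basis each year and stops depreciating at salvage value; the 200% (double) declining-balance rate and the dollar figures are assumptions for the example, not part of the regulation.

```python
def declining_balance(basis, salvage, life_years, rate_multiplier=2.0):
    """Declining balance depreciation: a uniform rate applied each year to the remaining
    (unrecovered) basis. Salvage is not used to compute the annual allowance, but the
    asset is never depreciated below salvage value."""
    rate = rate_multiplier / life_years      # e.g. 2/life gives the double declining balance rate
    allowances, remaining = [], float(basis)
    for _ in range(life_years):
        allowance = remaining * rate
        allowance = min(allowance, max(remaining - salvage, 0.0))
        allowances.append(round(allowance, 2))
        remaining -= allowance
    return allowances

print(declining_balance(10_000, 1_000, 5))   # hypothetical 5-year asset
```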
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...
40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method
Code of Federal Regulations, 2010 CFR
2010-07-01
... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of Leather Trades...
40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method
Code of Federal Regulations, 2012 CFR
2012-07-01
... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED..., App. A Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-12
... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...
40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method
Code of Federal Regulations, 2011 CFR
2011-07-01
... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of Leather Trades...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-16
...: EPA Method Development Update on Drinking Water Testing Methods for Contaminant Candidate List... Division will describe methods currently in development for many CCL contaminants, with an expectation that several of these methods will support future cycles of the Unregulated Contaminant Monitoring Rule (UCMR...
ERIC Educational Resources Information Center
Penhoat, Loick; Sakow, Kostia
1978-01-01
A description of the development and implementation of a method introduced in the Sudan that attempts to relate to Sudanese culture and to motivate students. The relationship between language teaching methods and the total educational system is discussed. (AMH)
NASA Astrophysics Data System (ADS)
Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.
2013-10-01
In this work we derive symplectic EF/TF RKN methods from symplectic EF/TF PRK methods. EF/TF symplectic RKN methods are also constructed directly from classical symplectic RKN methods. Several numerical examples are given in order to decide which implementation is the most favourable.
Standard methods for chemical analysis of steel, cast iron, open-hearth iron, and wrought iron
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1973-01-01
Methods are described for determining manganese, phosphorus, sulfur, selenium, copper, nickel, chromium, vanadium, tungsten, titanium, lead, boron, molybdenum (alpha-benzoin oxime method), zirconium (cupferron-phosphate method), niobium and tantalum (hydrolysis with perchloric and sulfurous acids; gravimetric, titrimetric, and photometric methods), and beryllium (oxide method). (DHM)
Detection of coupling delay: A problem not yet solved
NASA Astrophysics Data System (ADS)
Coufal, David; Jakubík, Jozef; Jajcay, Nikola; Hlinka, Jaroslav; Krakovská, Anna; Paluš, Milan
2017-08-01
Nonparametric detection of coupling delay in unidirectionally and bidirectionally coupled nonlinear dynamical systems is examined. Both continuous and discrete-time systems are considered. Two methods of detection are assessed: the method based on conditional mutual information (the CMI method, also known as the transfer entropy method) and the method of convergent cross mapping (the CCM method). Computer simulations show that neither method is generally reliable in the detection of coupling delays. For continuous-time chaotic systems, the CMI method appears to be more sensitive and applicable in a broader range of coupling parameters than the CCM method. In the case of tested discrete-time dynamical systems, the CCM method has been found to be more sensitive, while the CMI method required much stronger coupling strength in order to bring correct results. However, when studied systems contain a strong oscillatory component in their dynamics, results of both methods become ambiguous. The presented study suggests that results of the tested algorithms should be interpreted with utmost care and that the nonparametric detection of coupling delay, in general, is a problem not yet solved.
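A toy version of delay detection by lagged dependence is sketched below: it scans the mutual information I(x_t; y_{t+tau}) over candidate lags and reports the maximizing lag. This omits the conditioning on the past of the driven variable that the CMI/transfer-entropy approach uses, so it is a simplification for illustration only; the coupled AR(1) pair is synthetic.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Plug-in (histogram) estimate of mutual information in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def coupling_delay(x, y, max_lag=30):
    """Return the lag tau maximizing the lagged dependence I(x_t; y_{t+tau})."""
    lags = range(1, max_lag + 1)
    mis = [mutual_info(x[:-tau], y[tau:]) for tau in lags]
    return lags[int(np.argmax(mis))], mis

# Toy unidirectionally coupled pair: y is driven by x delayed by 12 samples
rng = np.random.default_rng(1)
n = 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
y = np.zeros(n)
y[12:] = 0.9 * x[:-12] + 0.5 * rng.standard_normal(n - 12)
print("detected delay:", coupling_delay(x, y)[0])
```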
An historical survey of computational methods in optimal control.
NASA Technical Reports Server (NTRS)
Polak, E.
1973-01-01
Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
NASA Astrophysics Data System (ADS)
Park, E.; Jeong, J.; Choi, J.; Han, W. S.; Yun, S. T.
2016-12-01
Three modified outlier identification methods that take advantage of the ensemble regression method are proposed: the three-sigma rule (3s), the interquartile range (IQR), and the median absolute deviation (MAD). For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well for identifying outliers at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method has a limitation in that it falsely identifies excessive outliers, which may be addressed by joint application with the other methods (i.e., the 3s rule and MAD methods). The proposed methods can also be applied as a potential tool for future anomaly detection by model training based on currently available data.
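The three unmodified rules underlying the proposed methods are easy to state in code; the sketch below applies them to raw values rather than to ensemble-regression residuals, so it illustrates the rules themselves, not the modified methods of the paper. The thresholds (3 sigma, 1.5 IQR, robust z above 3.5) are conventional choices.

```python
import numpy as np

def outliers_3s(x):
    """Three-sigma rule: flag points more than 3 standard deviations from the mean."""
    x = np.asarray(x, float)
    return np.abs(x - x.mean()) > 3 * x.std(ddof=1)

def outliers_iqr(x, k=1.5):
    """IQR rule: flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    x = np.asarray(x, float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def outliers_mad(x, k=3.5):
    """MAD rule: flag points whose robust z-score 0.6745*|x - median|/MAD exceeds k."""
    x = np.asarray(x, float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return 0.6745 * np.abs(x - med) / mad > k

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(50, 5, 95), rng.normal(120, 5, 5)])  # ~5% outliers
for rule in (outliers_3s, outliers_iqr, outliers_mad):
    print(rule.__name__, int(rule(data).sum()), "points flagged")
```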
Overview of paint removal methods
NASA Astrophysics Data System (ADS)
Foster, Terry
1995-04-01
With the introduction of strict environmental regulations governing the use and disposal of methylene chloride and phenols, major components of chemical paint strippers, there have been many new environmentally safe and effective methods of paint removal developed. The new methods developed for removing coatings from aircraft and aircraft components include: mechanical methods using abrasive media such as plastic, wheat starch, walnut shells, ice and dry ice, environmentally safe chemical strippers and paint softeners, and optical methods such as lasers and flash lamps. Each method has its advantages and disadvantages, and some have unique applications. For example, mechanical and abrasive methods can damage sensitive surfaces such as composite materials and strict control of blast parameters and conditions are required. Optical methods can be slow, leaving paint residues, and chemical methods may not remove all of the coating or require special coating formulations to be effective. As an introduction to environmentally safe and effective methods of paint removal, this paper is an overview of the various methods available. The purpose of this overview is to introduce the various paint removal methods available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
More, J. J.; Sorensen, D. C.
1982-02-01
Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the more recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
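A compact sketch of the line search approach discussed above: a damped Newton step with Armijo backtracking and a steepest-descent fallback when the Newton direction is unusable. The Rosenbrock test problem and the safeguards are illustrative choices, not taken from the paper.

```python
import numpy as np

def newton_line_search(f, grad, hess, x0, tol=1e-10, max_iter=50):
    """Damped Newton: solve H p = -g for the Newton step, then backtrack (Armijo) along p."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        try:
            p = np.linalg.solve(hess(x), -g)
            if g @ p >= 0:               # not a descent direction; fall back
                p = -g
        except np.linalg.LinAlgError:
            p = -g
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):   # Armijo backtracking
            t *= 0.5
        x = x + t * p
    return x

# Rosenbrock test problem
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2), 200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]], [-400*x[0], 200.0]])
print(newton_line_search(f, grad, hess, [-1.2, 1.0]))
```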
[Comparison of two nucleic acid extraction methods for norovirus in oysters].
Yuan, Qiao; Li, Hui; Deng, Xiaoling; Mo, Yanling; Fang, Ling; Ke, Changwen
2013-04-01
To explore a convenient and effective method for norovirus nucleic acid extraction from oysters suitable for long-term viral surveillance. Two methods, namely method A (glycine washing and polyethylene glycol precipitation of the virus followed by silica gel centrifugal column) and method B (protease K digestion followed by application of paramagnetic silicon) were compared for their performance in norovirus nucleic acid extraction from oysters. Real-time RT-PCR was used to detect norovirus in naturally infected oysters and in oysters with induced infection. The two methods yielded comparable positive detection rates for the samples, but the recovery rate of the virus was higher with method B than with method A. Method B is a more convenient and rapid method for norovirus nucleic acid extraction from oysters and suitable for long-term surveillance of norovirus.
NASA Technical Reports Server (NTRS)
Atluri, Satya N.; Shen, Shengping
2002-01-01
In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A Moving Least Squares (MLS) interpolation is selected to approximate the trial functions in this paper. Five boundary integral solution methods are introduced: the direct solution method; the displacement boundary-value problem; the traction boundary-value problem; the mixed boundary-value problem; and the boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.
NASA Astrophysics Data System (ADS)
Parand, K.; Nikarya, M.
2017-11-01
In this paper a novel method will be introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian free Newton-generalized minimum residual (JFNGMRes) method with adaptive preconditioner. In this work a nonlinear PDE has been converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions without any linearization, discretization or getting the help of any other methods. Finally, by using JFNGMRes, the solution of the nonlinear algebraic system is achieved. To illustrate the reliability and efficiency of the proposed method, we solve some examples of the famous Fisher equation. We compare our results with other methods.
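SciPy ships a Jacobian-free Newton-Krylov solver (newton_krylov, with GMRES-type inner iterations), which can stand in for the JFNGMRes idea; the sketch below uses it to take one backward-Euler step of Fisher's equation discretized by finite differences rather than by Bessel collocation, so it illustrates the nonlinear-solver step only, not the paper's spectral method.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Fisher equation u_t = u_xx + u(1 - u) on [0, 1], one backward-Euler step in time.
n, dt = 200, 0.05
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u_old = np.exp(-100 * (x - 0.3) ** 2)            # initial hump

def residual(u):
    """Nonlinear residual of one implicit time step with Dirichlet u = 0 at both ends."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    res = (u - u_old) / dt - lap - u * (1 - u)
    res[0], res[-1] = u[0], u[-1]                # enforce boundary conditions
    return res

u_new = newton_krylov(residual, u_old, method="lgmres", f_tol=1e-10)
print("max of solution after one step:", float(u_new.max()))
```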
Mending the Gap, An Effort to Aid the Transfer of Formal Methods Technology
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly
2009-01-01
Formal methods can be applied to many of the development and verification activities required for civil avionics software. RTCA/DO-178B, Software Considerations in Airborne Systems and Equipment Certification, gives a brief description of using formal methods as an alternate method of compliance with the objectives of that standard. Despite this, the avionics industry at large has been hesitant to adopt formal methods, and few developers have actually used formal methods for certification credit. Why is this so, given the volume of evidence of the benefits of formal methods? This presentation will explore some of the challenges to using formal methods in a certification context and describe the effort by the Formal Methods Subgroup of RTCA SC-205/EUROCAE WG-71 to develop guidance to make the use of formal methods a recognized approach.
Methods for the calculation of axial wave numbers in lined ducts with mean flow
NASA Technical Reports Server (NTRS)
Eversman, W.
1981-01-01
A survey is made of the methods available for the calculation of axial wave numbers in lined ducts. Rectangular and circular ducts with both uniform and non-uniform flow are considered as are ducts with peripherally varying liners. A historical perspective is provided by a discussion of the classical methods for computing attenuation when no mean flow is present. When flow is present these techniques become either impractical or impossible. A number of direct eigenvalue determination schemes which have been used when flow is present are discussed. Methods described are extensions of the classical no-flow technique, perturbation methods based on the no-flow technique, direct integration methods for solution of the eigenvalue equation, an integration-iteration method based on the governing differential equation for acoustic transmission, Galerkin methods, finite difference methods, and finite element methods.
Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.
Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing
2017-12-14
Positive semi-definiteness is a critical property in kernel methods for the Support Vector Machine (SVM), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions used in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix on indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, a suggested optimal λ for the projection method can be found by unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in pursuit of a precise optimal λ determination method within an unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships, and compared optimal λ determination with the Logdet divergence and the perturbed von-Neumann divergence, aiming to find a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ given by the Logdet divergence demonstrates near-optimal performance, and the perturbed von-Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way forward in kernel SVMs for varied objectives.
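The spectrum modifications that the projection method generalizes (clipping/denoising and flipping of negative eigenvalues) can be written in a few lines; the sketch below shows those baseline corrections on a hypothetical indefinite similarity matrix and is not the paper's λ-parametrized projection.

```python
import numpy as np

def make_psd(K, mode="clip"):
    """Spectrum modifications turning an indefinite similarity matrix into a PSD kernel.
    'clip' (denoising) zeroes negative eigenvalues; 'flip' takes their absolute value."""
    K = 0.5 * (K + K.T)                      # symmetrize first
    w, V = np.linalg.eigh(K)
    if mode == "clip":
        w = np.maximum(w, 0.0)
    elif mode == "flip":
        w = np.abs(w)
    return (V * w) @ V.T                     # V diag(w) V^T

# Hypothetical indefinite similarity matrix
rng = np.random.default_rng(0)
S = rng.standard_normal((6, 6))
S = 0.5 * (S + S.T)
K = make_psd(S, "clip")
print("min eigenvalue before/after:",
      float(np.linalg.eigvalsh(S).min()), float(np.linalg.eigvalsh(K).min()))
```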
Two-dimensional phase unwrapping using robust derivative estimation and adaptive integration.
Strand, Jarle; Taxt, Torfinn
2002-01-01
The adaptive integration (ADI) method for two-dimensional (2-D) phase unwrapping is presented. The method uses an algorithm for noise robust estimation of partial derivatives, followed by a noise robust adaptive integration process. The ADI method can easily unwrap phase images with moderate noise levels, and the resulting images are congruent modulo 2pi with the observed, wrapped, input images. In a quantitative evaluation, both the ADI and the BLS methods (Strand et al.) were better than the least-squares methods of Ghiglia and Romero (GR), and of Marroquin and Rivera (MRM). In a qualitative evaluation, the ADI, the BLS, and a conjugate gradient version of the MRM method (MRMCG), were all compared using a synthetic image with shear, using 115 magnetic resonance images, and using 22 fiber-optic interferometry images. For the synthetic image and the interferometry images, the ADI method gave consistently visually better results than the other methods. For the MR images, the MRMCG method was best, and the ADI method second best. The ADI method was less sensitive to the mask definition and the block size than the BLS method, and successfully unwrapped images with shears that were not marked in the masks. The computational requirements of the ADI method for images of nonrectangular objects were comparable to only two iterations of many least-squares-based methods (e.g., GR). We believe the ADI method provides a powerful addition to the ensemble of tools available for 2-D phase unwrapping.
Williams, C.J.; Heglund, P.J.
2009-01-01
Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
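A minimal sketch of the distance construction described above: fit a logistic habitat model per species, compute a generalized Mahalanobis distance between coefficient vectors (here using the sum of the two estimates' covariance matrices, which is an assumption of this sketch), and cluster the resulting distance matrix. The data are simulated and the helper names are introduced here.

```python
import numpy as np
import statsmodels.api as sm
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def fit_species(y, X):
    """Logistic habitat-association model for one species: coefficients and their covariance."""
    res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    return np.asarray(res.params), np.asarray(res.cov_params())

def mahalanobis_matrix(fits):
    """Generalized Mahalanobis distance between coefficient vectors."""
    n = len(fits)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            diff = fits[i][0] - fits[j][0]
            S = fits[i][1] + fits[j][1]          # pooled estimate covariance (assumption)
            D[i, j] = D[j, i] = np.sqrt(diff @ np.linalg.solve(S, diff))
    return D

# Hypothetical data: 5 species, 300 sites, 3 habitat covariates
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 3))
fits = []
for _ in range(5):
    beta = rng.normal(0, 1, 4)
    p = 1 / (1 + np.exp(-(sm.add_constant(X) @ beta)))
    fits.append(fit_species(rng.binomial(1, p), X))

D = mahalanobis_matrix(fits)
groups = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print("candidate species groups:", groups)
```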
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and up-dating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.
Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina
2012-01-01
Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future.
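For readers unfamiliar with it, the Bland-Altman analysis that dominates the reviewed studies reduces to the bias and 95% limits of agreement of the paired differences; the sketch below computes these for hypothetical paired readings from a new instrument and a gold standard.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman analysis: bias (mean difference) and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired readings
rng = np.random.default_rng(0)
gold = rng.normal(120, 15, 100)
new = gold + rng.normal(2, 5, 100)            # new device reads about 2 units high
bias, (lo, hi) = bland_altman(new, gold)
print(f"bias {bias:.2f}, 95% limits of agreement [{lo:.2f}, {hi:.2f}]")
```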
Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David
2015-01-01
New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.
Huang, Y F; Chang, Z; Bai, J; Zhu, M; Zhang, M X; Wang, M; Zhang, G; Li, X Y; Tong, Y G; Wang, J L; Lu, X X
2017-08-08
Objective: To establish and evaluate the feasibility of a pretreatment method for matrix-assisted laser desorption ionization-time of flight mass spectrometry identification of filamentous fungi developed by the laboratory. Methods: Three hundred and eighty strains of filamentous fungi from January 2014 to December 2016 were recovered and cultured on Sabouraud dextrose agar (SDA) plates at 28 ℃ to the mature state. Meanwhile, the fungi were cultured in liquid Sabouraud medium with a vertical rotation method recommended by Bruker and a horizontal vibration method developed by the laboratory until an adequate amount of colonies was observed. For the strains cultured with the three methods, protein was extracted with a modified magnetic bead-based extraction method for mass spectrum identification. Results: For the 380 fungal strains, it took 3-10 d to culture with the SDA culture method, and the ratios of identification at the species and genus levels were 47% and 81%, respectively; it took 5-7 d to culture with the vertical rotation method, and the ratios of identification at the species and genus levels were 76% and 94%, respectively; it took 1-2 d to culture with the horizontal vibration method, and the ratios of identification at the species and genus levels were 96% and 99%, respectively. For the comparison between the horizontal vibration method and the SDA culture method, the difference was statistically significant (χ(2)=39.026, P <0.01); for the comparison between the horizontal vibration method and the vertical rotation method recommended by Bruker, the difference was statistically significant (χ(2)=11.310, P <0.01). Conclusion: The horizontal vibration method and the modified magnetic bead-based extraction method developed by the laboratory are superior to the method recommended by Bruker and the SDA culture method in terms of identification capacity for filamentous fungi, and can be applied in the clinic.
Development of a practical costing method for hospitals.
Cao, Pengyu; Toyabe, Shin-Ichi; Akazawa, Kouhei
2006-03-01
To realize effective cost control, a practical and accurate cost accounting system is indispensable in hospitals. Among traditional cost accounting systems, volume-based costing (VBC) is the most popular cost accounting method. In this method, the indirect costs are allocated to each cost object (services or units of a hospital) using a single indicator named a cost driver (e.g., labor hours, revenues or the number of patients). However, this method often produces rough and inaccurate results. The activity-based costing (ABC) method introduced in the mid 1990s can provide more accurate results. With the ABC method, all events or transactions that cause costs are recognized as "activities", and a specific cost driver is prepared for each activity. Finally, the costs of activities are allocated to cost objects by the corresponding cost driver. However, it is much more complex and costly than other traditional cost accounting methods because the data collection for cost drivers is not always easy. In this study, we developed a simplified ABC (S-ABC) costing method to reduce the workload of ABC costing by reducing the number of cost drivers used in the ABC method. Using the S-ABC method, we estimated the cost of laboratory tests, and similarly accurate results were obtained as with the ABC method (the largest difference was 2.64%). Simultaneously, this new method reduces the seven cost drivers used in the ABC method to four. Moreover, we performed an evaluation using other sample data from the physiological laboratory department to certify the effectiveness of this new method. In conclusion, the S-ABC method provides two advantages in comparison to the VBC and ABC methods: (1) it can obtain accurate results, and (2) it is simpler to perform. Once we reduce the number of cost drivers by applying the proposed S-ABC method to the data for the ABC method, we can easily perform the cost accounting using fewer cost drivers after the second round of costing.
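The ABC allocation step itself is simple once cost pools and driver usage are known; the sketch below allocates each activity's pool in proportion to driver consumption. The activities, drivers, and figures are hypothetical, and the S-ABC variant would simply use fewer activity/driver pairs.

```python
from collections import defaultdict

def abc_costing(cost_pools, driver_usage):
    """cost_pools[activity]: total cost of that activity's pool.
    driver_usage[activity][cost_object]: driver units the object consumes.
    Returns the total cost allocated to each cost object."""
    allocated = defaultdict(float)
    for activity, pool in cost_pools.items():
        usage = driver_usage[activity]
        rate = pool / sum(usage.values())          # cost per driver unit
        for obj, units in usage.items():
            allocated[obj] += rate * units
    return dict(allocated)

# Hypothetical laboratory example: two activities, two tests
cost_pools = {"specimen handling": 60_000, "instrument time": 140_000}
driver_usage = {
    "specimen handling": {"test A": 3_000, "test B": 1_000},   # driver: number of specimens
    "instrument time":   {"test A": 400,   "test B": 600},     # driver: machine hours
}
print(abc_costing(cost_pools, driver_usage))
```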
NASA Astrophysics Data System (ADS)
Abdel-Halim, Lamia M.; Abd-El Rahman, Mohamed K.; Ramadan, Nesrin K.; EL Sanabary, Hoda F. A.; Salem, Maissa Y.
2016-04-01
A comparative study was developed between two classical spectrophotometric methods (dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (ratio difference method and first derivative of ratio spectra method) for simultaneous determination of Antazoline hydrochloride (AN) and Tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation and in the presence of benzalkonium chloride as a preservative without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug in a way so that the difference in absorbance at those two wavelengths is zero for the other drug. While Vierordt's method, is based upon measuring the absorbance and the absorptivity values of the two drugs at their λmax (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution in the corresponding Vierordt's equation. Recent methods manipulating ratio spectra depend on either measuring the difference in amplitudes of ratio spectra between 255.5 and 269.5 nm for AN and 220.0 and 273.0 nm for TZ in case of ratio difference method or computing first derivative of the ratio spectra for each drug then measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ in case of first derivative of ratio spectrophotometry. The specificity of the developed methods was investigated by analyzing different laboratory prepared mixtures of the two drugs. All methods were applied successfully for the determination of the selected drugs in their combined dosage form proving that the classical spectrophotometric methods can still be used successfully in analysis of binary mixture using minimal data manipulation rather than recent methods which require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability are found to be within the acceptable limits. Statistical studies showed that the methods can be competitively applied in quality control laboratories.
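Vierordt's method amounts to solving two simultaneous linear equations in the two concentrations; the sketch below shows that computation with hypothetical absorptivity values (the wavelengths 248.0 and 219.0 nm are those cited in the abstract, but the numerical absorptivities and concentrations are not).

```python
import numpy as np

def vierordt(absorbances, absorptivities):
    """Solve A = E @ C for the two concentrations, where E[i, j] is the absorptivity of
    drug j at wavelength i and A holds the mixture's absorbances at those wavelengths."""
    return np.linalg.solve(np.asarray(absorptivities, float),
                           np.asarray(absorbances, float))

# Hypothetical absorptivities for AN and TZ at 248.0 and 219.0 nm
E = [[55.0, 8.0],     # 248 nm: dominated by AN
     [12.0, 70.0]]    # 219 nm: dominated by TZ
true_c = np.array([0.010, 0.015])            # assumed concentrations, g/L
A = np.asarray(E) @ true_c                    # simulated mixture absorbances
print("recovered concentrations:", vierordt(A, E))
```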
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches represented by three analytical methods in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. The ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and the DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with the ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method, which shows promising results in metabolomics analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results due to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput within a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, while only 40 μL of organic solvent is used for one sample analysis compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. Results show the applicability of all tested methods for the lipidomic analysis of biological samples depending on the analysis requirements. Copyright © 2017 Elsevier B.V. All rights reserved.
Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio
2007-12-01
Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for the suspicious accuracy in lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N=120, 45+/-15.3 years (mean+/-SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 (P=0.004) significantly correlated with the participant's BP, supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as being the most appropriate.
Zou, X H; Zhu, Y P; Ren, G Q; Li, G C; Zhang, J; Zou, L J; Feng, Z B; Li, B H
2017-02-20
Objective: To evaluate the significance of bacteria detection with filter paper method on diagnosis of diabetic foot wound infection. Methods: Eighteen patients with diabetic foot ulcer conforming to the study criteria were hospitalized in Liyuan Hospital Affiliated to Tongji Medical College of Huazhong University of Science and Technology from July 2014 to July 2015. Diabetic foot ulcer wounds were classified according to the University of Texas diabetic foot classification (hereinafter referred to as Texas grade) system, and general condition of patients with wounds in different Texas grade was compared. Exudate and tissue of wounds were obtained, and filter paper method and biopsy method were adopted to detect the bacteria of wounds of patients respectively. Filter paper method was regarded as the evaluation method, and biopsy method was regarded as the control method. The relevance, difference, and consistency of the detection results of two methods were tested. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of filter paper method in bacteria detection were calculated. Receiver operating characteristic (ROC) curve was drawn based on the specificity and sensitivity of filter paper method in bacteria detection of 18 patients to predict the detection effect of the method. Data were processed with one-way analysis of variance and Fisher's exact test. In patients tested positive for bacteria by biopsy method, the correlation between bacteria number detected by biopsy method and that by filter paper method was analyzed with Pearson correlation analysis. Results: (1) There were no statistically significant differences among patients with wounds in Texas grade 1, 2, and 3 in age, duration of diabetes, duration of wound, wound area, ankle brachial index, glycosylated hemoglobin, fasting blood sugar, blood platelet count, erythrocyte sedimentation rate, C-reactive protein, aspartate aminotransferase, serum creatinine, and urea nitrogen (with F values from 0.029 to 2.916, P values above 0.05), while there were statistically significant differences among patients with wounds in Texas grade 1, 2, and 3 in white blood cell count and alanine aminotransferase (with F values 4.688 and 6.833 respectively, P <0.05 or P <0.01). (2) According to the results of biopsy method, 6 patients were tested negative for bacteria, and 12 patients were tested positive for bacteria, among which 10 patients were with bacterial number above 1×10(5)/g, and 2 patients with bacterial number below 1×10(5)/g. According to the results of filter paper method, 8 patients were tested negative for bacteria, and 10 patients were tested positive for bacteria, among which 7 patients were with bacterial number above 1×10(5)/g, and 3 patients with bacterial number below 1×10(5)/g. There were 7 patients tested positive for bacteria both by biopsy method and filter paper method, 8 patients tested negative for bacteria both by biopsy method and filter paper method, and 3 patients tested positive for bacteria by biopsy method but negative by filter paper method. Patients tested negative for bacteria by biopsy method did not tested positive for bacteria by filter paper method. There was directional association between the detection results of two methods ( P =0.004), i. e. if result of biopsy method was positive, result of filter paper method could also be positive. There was no obvious difference in the detection results of two methods ( P =0.250). 
The consistency between the detection results of the two methods was moderate (Kappa=0.68, P=0.002). (3) The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were 70%, 100%, 1.00, 0.73, and 83.3%, respectively. The total area under the ROC curve for bacteria detection by the filter paper method in the 18 patients was 0.919 (95% confidence interval 0-1.000, P=0.030). (4) Thirteen strains of bacteria were detected by the biopsy method: 5 strains of Acinetobacter baumannii, 5 of Staphylococcus aureus, 1 of Pseudomonas aeruginosa, 1 of Streptococcus bovis, and 1 of Enterococcus avium. Eleven strains of bacteria were detected by the filter paper method: 5 strains of Acinetobacter baumannii, 3 of Staphylococcus aureus, 1 of Pseudomonas aeruginosa, 1 of Streptococcus bovis, and 1 of Enterococcus avium. Except for Staphylococcus aureus, the sensitivity and specificity of the filter paper method in detecting the other 4 bacteria were all 100%. The consistency between the filter paper method and the biopsy method in detecting Acinetobacter baumannii was good (Kappa=1.00, P<0.01), while that in detecting Staphylococcus aureus was moderate (Kappa=0.68, P<0.05). (5) There was no significant correlation between the bacterial counts of wounds detected by the filter paper method and those detected by the biopsy method overall (r=0.257, P=0.419). There was a significant correlation between the bacterial counts detected by the two methods in wounds of Texas grades 1 and 2 (r=0.999, P=0.001 for both). There was no significant correlation between the bacterial counts detected by the two methods in wounds of Texas grade 3 (r=-0.053, P=0.947). Conclusions: The detection results of the filter paper method accord with those of the biopsy method in determining bacterial infection, and the method is of value in the diagnosis of local infection of diabetic foot wounds.
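The diagnostic indices quoted above follow directly from a 2×2 contingency table consistent with the reported counts (7 true positives, 3 false negatives, 8 true negatives, 0 false positives, with the biopsy result as the reference). A minimal Python sketch, using only those counts, reproduces the published values:

```python
# Counts consistent with the abstract; the biopsy method is the reference standard.
tp, fn = 7, 3   # biopsy-positive wounds detected / missed by the filter paper method
tn, fp = 8, 0   # biopsy-negative wounds correctly / incorrectly called positive

sensitivity = tp / (tp + fn)                 # 0.70  -> 70%
specificity = tn / (tn + fp)                 # 1.00  -> 100%
ppv = tp / (tp + fp)                         # 1.00
npv = tn / (tn + fn)                         # 0.727 -> ~0.73
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 0.833 -> 83.3%

print(sensitivity, specificity, ppv, npv, accuracy)
```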
A k-space method for large-scale models of wave propagation in tissue.
Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C
2001-03-01
Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) < or = c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
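The exactness of the k-t space propagator for homogeneous media noted above can be illustrated in one dimension: after a spatial Fourier transform, each mode obeys an ordinary differential equation whose exact update over a step Δt is p̂(t+Δt) = 2cos(c₀kΔt)p̂(t) − p̂(t−Δt). The following NumPy sketch implements only this homogeneous-medium propagator (the variable sound speed and density terms of the paper are omitted), with all grid and pulse parameters chosen arbitrarily for illustration:

```python
import numpy as np

# 1-D k-space propagation of a Gaussian pulse in a homogeneous medium.
nx, L, c0, dt, nt = 512, 1.0, 1500.0, 5e-8, 400
x = np.linspace(0, L, nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)

p_prev = np.exp(-((x - 0.5 * L) ** 2) / (2 * 0.01 ** 2))  # initial pressure field
p_curr = p_prev.copy()                                     # zero initial velocity

prop = 2 * np.cos(c0 * k * dt)   # exact temporal propagator in k-space
for _ in range(nt):
    p_hat = np.fft.fft(p_curr)
    p_next = np.real(np.fft.ifft(prop * p_hat)) - p_prev
    p_prev, p_curr = p_curr, p_next
```

Because the update is exact mode by mode, the time step for a homogeneous region is limited only by how finely the waveform needs to be sampled, not by a stability condition.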
The method of planning the energy consumption for electricity market
NASA Astrophysics Data System (ADS)
Russkov, O. V.; Saradgishvili, S. E.
2017-10-01
The limitations of existing forecast models are outlined. The proposed method is based on game theory, probability theory, and forecasting of energy price relations, and serves as the basis for planning the uneven energy consumption of an industrial enterprise. The ecological aspects of the method are discussed, and a program module implementing its algorithm is described. Successful tests of the method at an industrial enterprise are presented. The method makes it possible to minimize the difference between planned and actual energy consumption for every hour of the day. Conclusions are drawn about the applicability of the method to economic and ecological challenges.
Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method
NASA Astrophysics Data System (ADS)
Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad
2018-03-01
An efficient method is proposed to approximate the solution of sixth-order boundary value problems. The method is based on Legendre wavelets constructed from Legendre polynomials. Collocation points are used to convert the differential equation into a system of algebraic equations. Two test problems are discussed for validation. The results obtained from the proposed method are accurate and close both to the exact solutions and to the results of other methods. The proposed method is computationally more efficient and leads to more accurate results than other methods from the literature.
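The collocation mechanism described above (expand in a polynomial basis, enforce the differential equation at collocation points, and solve the resulting algebraic system) can be sketched on a much simpler model problem. The Python sketch below uses a plain Legendre polynomial basis rather than the Legendre wavelet basis of the paper, and a second-order test problem instead of a sixth-order one, purely to show the structure of the approach:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Solve u'' = -pi^2 * sin(pi x) on [-1, 1] with u(-1) = u(1) = 0
# (exact solution u = sin(pi x)) by Legendre collocation.
n = 16                                                 # basis polynomials P_0 .. P_{n-1}
xc = np.cos(np.pi * np.arange(1, n - 1) / (n - 1))     # n-2 interior collocation points

A = np.zeros((n, n))
b = np.zeros(n)
for j in range(n):
    cj = np.zeros(n); cj[j] = 1.0        # coefficient vector selecting P_j
    d2 = leg.legder(cj, 2)               # second derivative of P_j
    A[:n - 2, j] = leg.legval(xc, d2)    # equation rows at the collocation points
    A[n - 2, j] = leg.legval(-1.0, cj)   # boundary row u(-1) = 0
    A[n - 1, j] = leg.legval(1.0, cj)    # boundary row u(+1) = 0
b[:n - 2] = -np.pi ** 2 * np.sin(np.pi * xc)

coef = np.linalg.solve(A, b)
err = np.max(np.abs(leg.legval(xc, coef) - np.sin(np.pi * xc)))
print("max error at collocation points:", err)
```

The sixth-order wavelet version follows the same pattern, with six boundary rows and sixth derivatives of the basis functions.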
Modifications of the PCPT method for HJB equations
NASA Astrophysics Data System (ADS)
Kossaczký, I.; Ehrhardt, M.; Günther, M.
2016-10-01
In this paper we revisit a modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification, called the piecewise predicted policy timestepping (PPPT) method, may be significantly faster if properly used. We briefly recapitulate the algorithms of the PCPT and PPPT methods and of the classical implicit method and apply them to a passport option pricing problem with a non-standard payoff. We present the modifications needed to solve this problem effectively with the PPPT method and compare its performance with the PCPT method and the classical implicit method.
Rapid Method for Sodium Hydroxide/Sodium Peroxide Fusion ...
Technical Fact Sheet. Purpose: Qualitative analysis. Technique: Alpha spectrometry. Method developed for: Plutonium-238 and plutonium-239 in water and air filters. Method selected for: SAM lists this method as a pre-treatment technique supporting analysis of refractory radioisotopic forms of plutonium in drinking water and air filters using the following qualitative techniques: rapid methods for acid or fusion digestion, and the Rapid Radiochemical Method for Plutonium-238 and Plutonium-239/240 in Building Materials for Environmental Remediation Following Radiological Incidents. A summary of the analytical method will be posted to the SAM website to allow access to the method.
The Importance of Method Selection in Determining Product Integrity for Nutrition Research
Mudge, Elizabeth M; Brown, Paula N
2016-01-01
The American Herbal Products Association estimates that there are as many as 3,000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823
ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh
2015-01-01
Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens often develop scarious margins. To address this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting or classic dehydration using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years. Photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable over this time and comparable with the classic method. There were no changes in the morphologic structures of the nematodes mounted by the double glass method, and the different organs of the nematodes remained well defined and clearly differentiated. Conclusion: This method is cost effective and fast for mounting small nematodes compared with the classic method. PMID:26811729
An evaluation of the efficiency of cleaning methods in a bacon factory
Dempster, J. F.
1971-01-01
The germicidal efficiencies of hot water (140-150° F.) under pressure (method 1), hot water + 2% (w/v) detergent solution (method 2) and hot water + detergent + 200 p.p.m. solution of available chlorine (method 3) were compared at six sites in a bacon factory. Results indicated that sites 1 and 2 (tiled walls) were satisfactorily cleaned by each method. It was therefore considered more economical to clean such surfaces routinely by method 1. However, this method was much less efficient (31% survival of micro-organisms) on site 3 (wooden surface) than methods 2 (7% survival) and 3 (1% survival). Likewise the remaining sites (dehairing machine, black scraper and table) were least efficiently cleaned by method 1. The most satisfactory results were obtained when these surfaces were treated by method 3. Pig carcasses were shown to be contaminated by an improperly cleaned black scraper. Repeated cleaning and sterilizing (method 3) of this equipment reduced the contamination on carcasses from about 70% to less than 10%. PMID:5291745
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.
Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang
2015-09-21
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.
Simplified adsorption method for detection of antibodies to Candida albicans germ tubes.
Ponton, J; Quindos, G; Arilla, M C; Mackenzie, D W
1994-01-01
Two modifications that simplify and shorten a method for adsorption of the antibodies against the antigens expressed on both blastospore and germ tube cell wall surfaces (methods 2 and 3) were compared with the original method of adsorption (method 1) to detect anti-Candida albicans germ tube antibodies in 154 serum specimens. Adsorption of the sera by both modified methods resulted in titers very similar to those obtained by the original method. Only 5.2% of serum specimens tested by method 2 and 5.8% tested by method 3 showed titer discrepancies of more than one dilution with respect to the titer observed by method 1. When a test based on method 2 was evaluated with sera from patients with invasive candidiasis, the best discriminatory results (sensitivity, 84.6%; specificity, 87.9%; positive predictive value, 75.9%; negative predictive value, 92.7%; efficiency, 86.9%) were obtained when a titer of > or = 1:160 was considered positive. PMID:8126184
A hybrid perturbation Galerkin technique with applications to slender body theory
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1989-01-01
A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.
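The three steps listed above can be stated compactly. For a problem L(u; ε) = 0 with a perturbation expansion in the small parameter ε, the hybrid method keeps the perturbation coefficient functions but replaces the fixed gauge functions εⁱ by free amplitudes fixed through a Galerkin projection; a schematic statement (our notation, not reproduced verbatim from the paper) is

\[
u_{\text{pert}}(x;\varepsilon) \approx \sum_{i=0}^{N} \varepsilon^{i} u_i(x), \qquad
u_{\text{hyb}}(x;\varepsilon) = \sum_{i=0}^{N} \delta_i(\varepsilon)\, u_i(x), \qquad
\big\langle \mathcal{L}\big(u_{\text{hyb}};\varepsilon\big),\, u_j \big\rangle = 0, \quad j = 0,\dots,N ,
\]

where the N+1 Galerkin conditions determine the amplitudes δᵢ(ε), which reduce to εⁱ in the regime where the straight perturbation expansion is accurate.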
A hybrid perturbation Galerkin technique with applications to slender body theory
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1987-01-01
A two step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods such as the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
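The "discrete Fourier transform combined with the exponential window" referred to above is an inverse Laplace transform evaluated on a line s = σ + iω, with σ chosen so that the windowed time signal is sufficiently damped at the end of the observation interval. The sketch below is a generic damped-FFT inversion (not the authors' BEM implementation), applied to F(s) = 1/(s+1)², whose inverse transform is t·e^(−t); the window length, sample count, and damping level are illustrative choices:

```python
import numpy as np

def ilt_exponential_window(F, T, N, eps=1e-8):
    """Invert a Laplace transform F(s) on [0, T) with N samples using a
    damped (exponential-window) discrete Fourier transform."""
    sigma = -np.log(eps) / T                  # damping so that exp(-sigma*T) = eps
    k = np.fft.fftfreq(N, d=T / N)            # frequencies in cycles per unit time
    s = sigma + 2j * np.pi * k                # Laplace variable along the shifted line
    t = np.arange(N) * T / N
    f = np.real(np.fft.ifft(F(s))) * N / T    # trapezoidal Bromwich integral via IFFT
    return t, f * np.exp(sigma * t)           # undo the exponential window

t, f = ilt_exponential_window(lambda s: 1.0 / (s + 1.0) ** 2, T=20.0, N=4096)
# Error stays small over the first part of the window and grows toward t = T,
# a known property of the exponential window method.
print(np.max(np.abs(f[:2048] - t[:2048] * np.exp(-t[:2048]))))
```

The convolution quadrature method evaluates the same transfer function, but at complex frequencies generated by the time-stepping rule, which is what changes the conditioning behaviour discussed in the abstract.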
Explicit methods in extended phase space for inseparable Hamiltonian problems
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2015-03-01
We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
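The construction described above (duplicate the phase space, define an extended Hamiltonian H̃(q,p,x,y) = H(q,y) + H(x,p), and split H̃ into two pieces whose flows are exactly integrable) can be sketched for a simple nonseparable test case. The Python sketch below omits the coordinate-mixing maps discussed in the paper and uses H(q,p) = ½(q²+1)(p²+1) purely as an illustrative nonseparable Hamiltonian:

```python
import numpy as np

def dHdq(q, p):  # partial H / partial q for H = 0.5*(q^2+1)*(p^2+1)
    return q * (p ** 2 + 1.0)

def dHdp(q, p):  # partial H / partial p
    return p * (q ** 2 + 1.0)

def leapfrog_extended(q, p, x, y, dt, nsteps):
    """Leapfrog in the extended phase space: H_A = H(q, y) advances (x, p) with
    (q, y) frozen, H_B = H(x, p) advances (q, y) with (x, p) frozen; each
    sub-flow is exactly integrable."""
    for _ in range(nsteps):
        p -= 0.5 * dt * dHdq(q, y)   # half step of H_A
        x += 0.5 * dt * dHdp(q, y)
        q += dt * dHdp(x, p)         # full step of H_B
        y -= dt * dHdq(x, p)
        p -= 0.5 * dt * dHdq(q, y)   # half step of H_A
        x += 0.5 * dt * dHdp(q, y)
    return q, p, x, y

q0, p0 = 0.3, 0.7
q, p, x, y = leapfrog_extended(q0, p0, q0, p0, dt=1e-3, nsteps=20000)
H = lambda q, p: 0.5 * (q ** 2 + 1.0) * (p ** 2 + 1.0)
# Energy drift of the (q, p) copy should stay small for this short, regular trajectory.
print(abs(H(q, p) - H(q0, p0)))
```

Without the mixing transformations the two phase-space copies can drift apart for chaotic systems, which is why the paper investigates projection and mixing strategies.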
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
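In matrix form, the simultaneous use of equilibrium and compatibility that defines the integrated force method can be written as a single square system for the member forces; a schematic statement in standard IFM notation (an assumption on our part, not reproduced verbatim from this paper) is

\[
\begin{bmatrix} [B] \\ [C][G] \end{bmatrix} \{F\} \;=\;
\begin{Bmatrix} \{P\} \\ \{\delta R\} \end{Bmatrix},
\]

where [B] is the m×n equilibrium matrix, [C] the r×n compatibility matrix with r = n − m, [G] the concatenated element flexibility matrix, {P} the applied loads, and {δR} the effective initial deformations (zero in the absence of thermal or lack-of-fit effects); displacements are then recovered from the forces by back-substitution.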
Du, Jian; Gay, Melvin C L; Lai, Ching Tat; Trengove, Robert D; Hartmann, Peter E; Geddes, Donna T
2017-02-15
The gravimetric method is considered the gold standard for measuring the fat content of human milk. However, it is labor intensive and requires large volumes of human milk. Other methods, such as the creamatocrit and the esterified fatty acid (EFA) assay, have also been widely used in fat analysis. However, these methods have not been compared concurrently with the gravimetric method. A comparison of the three methods was conducted with human milk of varying fat content. Correlations between the methods were high (r²=0.99). Statistical differences (P<0.001) were observed in the overall fat measurements and within each group (low, medium and high fat milk) across the three methods. Overall, the creamatocrit method showed a stronger correlation with the gravimetric method, with a lower mean difference (4.73 g/L) and percentage difference (5.16%), than the EFA method. Furthermore, the ease of operation and real-time analysis make the creamatocrit method preferable. Copyright © 2016. Published by Elsevier Ltd.
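The comparison statistics quoted above (correlation, mean difference, and percentage difference against the reference method) can be reproduced for any paired data set with a few lines of Python; the numbers below are illustrative placeholders, not data from the study:

```python
import numpy as np

# Paired fat measurements (g/L) for the same milk samples -- illustrative values only.
gravimetric  = np.array([21.0, 35.5, 48.2, 62.8, 77.4, 90.1])
creamatocrit = np.array([22.1, 36.0, 50.5, 66.9, 82.3, 96.0])

r = np.corrcoef(gravimetric, creamatocrit)[0, 1]
diff = creamatocrit - gravimetric
mean_diff = diff.mean()                               # mean difference (bias), g/L
pct_diff = 100.0 * (diff / gravimetric).mean()        # mean percentage difference
print(f"r^2 = {r**2:.3f}, mean diff = {mean_diff:.2f} g/L, % diff = {pct_diff:.2f}%")
```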
EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.
Hadinia, M; Jafari, R; Soleimani, M
2016-06-01
This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique suffers from meshing difficulties and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Using 2D circular homogeneous models, the numerical results are validated with analytical and experimental results, and the performance of the hybrid FE-EFG method is compared with that of the FE method. Results of image reconstruction are presented for a human chest experimental phantom.
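The inversion step mentioned above, an iteratively regularized Gauss-Newton update, has a generic form that does not depend on whether the forward model is FE, EFG, or hybrid. A minimal sketch for an abstract forward operator (illustrative only; the paper's complete-electrode EIT forward solver is not reproduced here) is:

```python
import numpy as np

def gauss_newton(forward, jacobian, v_meas, x0, lam=1e-3, n_iter=10):
    """Regularized Gauss-Newton: x_{k+1} = x_k + (J^T J + lam*I)^{-1} J^T (v_meas - F(x_k))."""
    x = x0.copy()
    for _ in range(n_iter):
        r = v_meas - forward(x)          # residual between measured and simulated data
        J = jacobian(x)                  # sensitivity (Jacobian) of the forward model
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
        x += dx
    return x

# Tiny synthetic example with a linear "forward model" A @ x (stand-in for the EIT solver).
A = np.array([[2.0, 0.5], [0.3, 1.5], [1.0, 1.0]])
x_true = np.array([1.2, -0.4])
x_rec = gauss_newton(lambda x: A @ x, lambda x: A, A @ x_true, np.zeros(2))
print(x_rec)   # converges toward x_true (up to the small Tikhonov bias from lam)
```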
Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora
2009-01-01
This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.
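The accuracy comparison described above rests on two standard tools: the area under the ROC curve for each classifier, and a Wilcoxon signed-rank test over paired AUC values. A short sketch with synthetic scores (using scikit-learn and SciPy; not the authors' satellite data, and the paired AUC values are placeholders) is:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                    # synthetic class labels
score_a = y_true + rng.normal(scale=0.8, size=200)       # stand-in for MARS output
score_b = y_true + rng.normal(scale=1.2, size=200)       # stand-in for ML classifier output

print("AUC A:", roc_auc_score(y_true, score_a))
print("AUC B:", roc_auc_score(y_true, score_b))

# Paired AUC values over several test cases (illustrative numbers only).
auc_a = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92]
auc_b = [0.85, 0.86, 0.90, 0.84, 0.83, 0.88]
print(wilcoxon(auc_a, auc_b))   # tests whether the paired difference is significant
```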
Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili
2013-01-01
To develop an accurate and convenient method for monitoring the production of citrus-derived bioactive 5-demethylnobiletin from the demethylation reaction of nobiletin, we compared surface enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated very well with the HPLC method. The solution method produced a lower root mean square error of calibration and a higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demethylnobiletin based on their different binding affinities to the silver dendrites. The substrate method was found simpler and faster for collecting the SERS ‘fingerprint’ spectra of the samples, as no incubation between samples and silver was needed and only trace amounts of sample were required. Our results demonstrated that the SERS methods were superior to the HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986
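The two figures of merit mentioned above, the root mean square error of calibration (RMSEC) and the correlation coefficient of the calibration line, can be computed for any response-versus-concentration calibration set in a few lines; the data below are placeholders, not values from the study:

```python
import numpy as np

# Known 5-demethylnobiletin concentrations and a SERS-derived response -- illustrative only.
conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0, 40.0])       # e.g. micrograms per mL
sers = np.array([0.02, 0.26, 0.49, 1.03, 1.98, 4.05])    # normalized peak intensity

slope, intercept = np.polyfit(conc, sers, 1)              # linear calibration
pred = (sers - intercept) / slope                          # back-predicted concentrations
rmsec = np.sqrt(np.mean((pred - conc) ** 2))               # root mean square error of calibration
r = np.corrcoef(conc, sers)[0, 1]
print(f"RMSEC = {rmsec:.3f}, r = {r:.4f}")
```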
Flow “Fine” Synthesis: High Yielding and Selective Organic Synthesis by Flow Methods
2015-01-01
The concept of flow “fine” synthesis, that is, high yielding and selective organic synthesis by flow methods, is described. Some examples of flow “fine” synthesis of natural products and APIs are discussed. Flow methods have several advantages over batch methods in terms of environmental compatibility, efficiency, and safety. However, synthesis by flow methods is more difficult than synthesis by batch methods. Indeed, it has been considered that flow methods are applicable to the production of simple gases but difficult to apply to the synthesis of complex molecules such as natural products and APIs. Therefore, organic synthesis of such complex molecules has been conducted by batch methods. On the other hand, syntheses and reactions that attain high yields and high selectivities by flow methods are increasingly reported. Flow methods are leading candidates for the next generation of manufacturing methods that can mitigate environmental concerns toward a sustainable society. PMID:26337828
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zhong-Li, E-mail: zl.liu@163.com; Zhang, Xiu-Lu; Cai, Ling-Cang
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.