Carroll, Regina A; Kodak, Tiffany
2014-01-01
The type of procedure used to measure a target behavior may directly influence the perceived treatment outcomes. In the present study, we examined the influence of different data-analysis procedures on the outcomes of two commonly used treatments for the vocal stereotypy of two children with an autism spectrum disorder. In Study 1, we compared interrupted and uninterrupted data-analysis procedures for measuring vocal stereotypy during the implementation of response interruption and redirection (RIRD). The results showed that the interrupted data-analysis procedure overestimated the effectiveness of RIRD. In Study 2, we examined the influence of different data-analysis procedures on the interpretation of the relative effects of two different treatments for vocal stereotypy. Specifically, we compared interrupted and uninterrupted data-analysis procedures during the implementation of RIRD and noncontingent reinforcement (NCR) as treatments for vocal stereotypy. The results showed that, as in Study 1, the interrupted data-analysis procedure overestimated the effectiveness of RIRD; however, this effect was not apparent with NCR. These findings suggest that different types of data analysis can influence the perceived success of a treatment.
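Whether treatment-implementation time counts toward the session denominator is the crux of the interrupted/uninterrupted distinction, and a toy calculation makes the sensitivity visible. The sketch below uses entirely hypothetical durations, not data from the study; it only shows that the same observed behavior yields different percentages under the two denominator conventions.

```python
# A minimal sketch (hypothetical numbers) of how including or excluding
# treatment-implementation time in the session denominator changes the
# measured percentage of session time with vocal stereotypy.

session_s = 300.0     # total session length (s), hypothetical
rird_s = 90.0         # time spent implementing RIRD (s), hypothetical
stereotypy_s = 30.0   # stereotypy observed outside RIRD (s), hypothetical

uninterrupted = 100.0 * stereotypy_s / session_s           # whole session counts
interrupted = 100.0 * stereotypy_s / (session_s - rird_s)  # RIRD time excluded

print(f"uninterrupted: {uninterrupted:.1f}% of session")
print(f"interrupted:   {interrupted:.1f}% of session")
```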
Procedural-support music therapy in the healthcare setting: a cost-effectiveness analysis.
DeLoach Walworth, Darcy
2005-08-01
This comparative analysis examined the cost-effectiveness of music therapy as a procedural support in the pediatric healthcare setting. Many healthcare organizations are actively attempting to reduce the amount of sedation for pediatric patients undergoing various procedures. Patients receiving music therapy-assisted computerized tomography scans (n = 57), echocardiograms (n = 92), and other procedures (n = 17) were included in the analysis. Results of music therapy-assisted procedures indicate successful elimination of patient sedation, reduction in procedural times, and decrease in the number of staff members present for procedures. Implications for nurses and music therapists in the healthcare setting are discussed.
Statistical methodology for the analysis of dye-switch microarray experiments
Mary-Huard, Tristan; Aubert, Julie; Mansouri-Attia, Nadera; Sandra, Olivier; Daudin, Jean-Jacques
2008-01-01
Background: In individually dye-balanced microarray designs, each biological sample is hybridized on two different slides, once with Cy3 and once with Cy5. While this strategy ensures an automatic correction of the gene-specific labelling bias, it also induces dependencies between log-ratio measurements that must be taken into account in the statistical analysis. Results: We present two original statistical procedures for the statistical analysis of individually balanced designs. These procedures are compared with the usual ML and REML mixed model procedures proposed in most statistical toolboxes, on both simulated and real data. Conclusion: The UP procedure we propose as an alternative to the usual mixed model procedures is more efficient and significantly faster to compute. This result provides some useful guidelines for the analysis of complex designs. PMID:18271965
Consequences of common data analysis inaccuracies in CNS trauma injury basic research.
Burke, Darlene A; Whittemore, Scott R; Magnuson, David S K
2013-05-15
The development of successful treatments for humans after traumatic brain or spinal cord injuries (TBI and SCI, respectively) requires animal research. This effort can be hampered when promising experimental results cannot be replicated because of incorrect data analysis procedures. To identify and hopefully avoid these errors in future studies, the articles in seven journals with the highest number of basic science central nervous system TBI and SCI animal research studies published in 2010 (N=125 articles) were reviewed for their data analysis procedures. After identifying the most common statistical errors, the implications of those findings were demonstrated by reanalyzing previously published data from our laboratories using the identified inappropriate statistical procedures, then comparing the two sets of results. Overall, 70% of the articles contained at least one type of inappropriate statistical procedure. The highest percentage involved incorrect post hoc t-tests (56.4%), followed by inappropriate parametric statistics (analysis of variance and t-test; 37.6%). Repeated-measures analysis was inappropriately missing in 52.0% of all articles and, among those with behavioral assessments, 58% were analyzed incorrectly. Reanalysis of our published data using the most common inappropriate statistical procedures resulted in a 14.1% average increase in significant effects compared to the original results. Specifically, an increase of 15.5% occurred with independent t-tests and 11.1% after incorrect post hoc t-tests. Utilizing proper statistical procedures can allow more definitive conclusions, facilitate replicability of research results, and enable more accurate translation of those results to the clinic.
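Since the most common error identified is the incorrect post hoc t-test, a brief illustration of the corrected workflow may help. The following sketch uses simulated data and standard scipy/statsmodels calls; the group names and sample sizes are invented, not drawn from the reviewed studies.

```python
# A minimal sketch (simulated data) contrasting uncorrected pairwise t-tests
# with an omnibus ANOVA followed by a Tukey HSD post hoc test, the kind of
# multiplicity correction whose omission the review identifies as common.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {g: rng.normal(0.0, 1.0, 12) for g in ["sham", "injury", "treated"]}
names = list(groups)

# Inappropriate: repeated uncorrected t-tests inflate the familywise error rate.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_ind(groups[names[i]], groups[names[j]])
        print(f"uncorrected {names[i]} vs {names[j]}: p = {p:.3f}")

# Appropriate: omnibus test, then a multiplicity-controlled post hoc procedure.
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")
scores = np.concatenate(list(groups.values()))
labels = np.repeat(names, [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```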
A BPM calibration procedure using TBT data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, M.J.; Crisp, J.; Prieto, P.
2007-06-01
Accurate BPM calibration is crucial for lattice analysis. It is also reassuring when the calibration can be independently verified. This paper outlines a procedure that can extract BPM calibration information from TBT orbit data. The procedure is developed as an extension to the Turn-By-Turn lattice analysis [1]. Its application to data from both the Recycler Ring and the Main Injector (MI) at Fermilab has produced very encouraging results. Some specifics of the hardware design are mentioned to provide context for the analysis results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, J.S.; Gordon, R.L.; Lessor, D.L.
1981-08-01
Alternate measurement and data analysis procedures are discussed and compared for the application of reflective Nomarski differential interference contrast microscopy for the determination of surface slopes. The discussion includes the interpretation of a previously reported iterative procedure using the results of a detailed optical model and the presentation of a new procedure based on measured image intensity extrema. Surface slope determinations from these procedures are presented and compared with results from a previously reported curve fit analysis of image intensity data. The accuracy and advantages of the different procedures are discussed.
Fuzzy Structures Analysis of Aircraft Panels in NASTRAN
NASA Technical Reports Server (NTRS)
Sparrow, Victor W.; Buehrle, Ralph D.
2001-01-01
This paper concerns an application of the fuzzy structures analysis (FSA) procedures of Soize to prototypical aerospace panels in MSC/NASTRAN, a large commercial finite element program. A brief introduction to the FSA procedures is first provided. The implementation of the FSA methods is then disclosed, and the method is validated by comparison to published results for the forced vibrations of a fuzzy beam. The results of the new implementation show excellent agreement to the benchmark results. The ongoing effort at NASA Langley and Penn State to apply these fuzzy structures analysis procedures to real aircraft panels is then described.
Analysis of half diallel mating designs I: a practical analysis procedure for ANOVA approximation.
G.R. Johnson; J.N. King
1998-01-01
Procedures to analyze half-diallel mating designs using the SAS statistical package are presented. The procedure requires two runs of PROC VARCOMP and results in estimates of additive and non-additive genetic variation. The procedures described can be modified to work on most statistical software packages which can compute variance component estimates. The...
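As a rough illustration of what variance-component estimation involves, the sketch below applies the one-way random-effects method of moments to simulated family data. This is a simplification, not the half-diallel GCA/SCA decomposition the article describes, and all numbers are hypothetical.

```python
# A simplified sketch of ANOVA-based variance-component estimation
# (one-way random effects, method of moments); the half-diallel analysis
# described above partitions variance further into additive (GCA) and
# non-additive (SCA) components, which this toy example does not do.
import numpy as np

rng = np.random.default_rng(1)
n_families, n_per = 20, 10
family_effects = rng.normal(0.0, 2.0, n_families)  # true between-family sd = 2
data = family_effects[:, None] + rng.normal(0.0, 3.0, (n_families, n_per))

grand = data.mean()
ms_between = n_per * ((data.mean(axis=1) - grand) ** 2).sum() / (n_families - 1)
ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n_families * (n_per - 1))

sigma2_within = ms_within
sigma2_between = max(0.0, (ms_between - ms_within) / n_per)  # E[MSB] = s2w + n*s2b
print(f"within-family variance:  {sigma2_within:.2f}")
print(f"between-family variance: {sigma2_between:.2f}")
```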
Tobiszewski, Marek; Orłowski, Aleksander
2015-03-27
The study presents the possibility of applying multi-criteria decision analysis (MCDA) when choosing analytical procedures with low environmental impact. A type of MCDA, the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE), was chosen as a versatile tool that meets all the requirements of analytical chemists, the decision makers. Twenty-five analytical procedures for aldrin determination in water samples (as an example) were selected as input alternatives to the MCDA analysis. Nine different criteria describing the alternatives were chosen from different groups: metrological, economic, and, most importantly, environmental impact. The weights for each criterion were obtained from questionnaires sent to experts, giving three different scenarios for the MCDA results. The results of the analysis show that PROMETHEE is a very promising tool for choosing an analytical procedure with respect to its greenness. The rankings for all three scenarios placed procedures based on solid-phase microextraction and liquid-phase microextraction high, while procedures based on liquid-liquid extraction, solid-phase extraction, and stir-bar sorptive extraction were placed low in the ranking. The results show that although some of the experts do not intentionally choose green analytical chemistry procedures, their MCDA choice is in accordance with green chemistry principles. The PROMETHEE ranking results were compared with the more widely accepted green analytical chemistry tools NEMI and Eco-Scale. As PROMETHEE involved more factors than NEMI, the assessment results were only weakly correlated. In contrast, the results of the Eco-Scale assessment were well correlated, as both methodologies involved similar assessment criteria.
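For readers unfamiliar with PROMETHEE, a minimal sketch of the core computation follows: pairwise preferences with the "usual" preference function, weighted preference indices, and net outranking flows. The alternatives, criteria, and weights are invented placeholders, not the aldrin-procedure data from the study.

```python
# A minimal PROMETHEE II sketch (hypothetical alternatives/criteria/weights).
import numpy as np

# rows = analytical procedures (alternatives), columns = criteria
scores = np.array([[0.8, 0.2, 0.9],
                   [0.5, 0.7, 0.6],
                   [0.3, 0.9, 0.4]])
weights = np.array([0.5, 0.2, 0.3])        # e.g., from expert questionnaires
maximize = np.array([True, False, True])   # criterion directions

s = np.where(maximize, scores, -scores)    # orient criteria so larger is better

n = len(s)
pi = np.zeros((n, n))                      # aggregated preference index pi(a, b)
for a in range(n):
    for b in range(n):
        if a != b:
            pi[a, b] = weights[s[a] > s[b]].sum()  # "usual" (0/1) preference

phi_plus = pi.sum(axis=1) / (n - 1)        # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)       # negative outranking flow
phi = phi_plus - phi_minus                 # net flow; higher ranks better
print("net flows:", np.round(phi, 3), "ranking:", np.argsort(-phi))
```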
Methods of determination of periods in the motion of asteroids
NASA Astrophysics Data System (ADS)
Bien, R.; Schubart, J.
Numerical techniques for the analysis of fundamental periods in asteroidal motion are evaluated. The specific techniques evaluated were: the periodogram analysis procedure of Wundt (1980); Stumpff's (1937) system of algebraic transformations; and Labrouste's procedure. It is shown that the Labrouste procedure permitted sufficient isolation of single oscillations from the quasi-periodic process of asteroidal motion. The procedure was applied to the analysis of resonance in the motion of Trojan-type and Hilda-type asteroids, and some preliminary results are discussed.
An Inexpensive Electrodeposition Device and Its Use in a Quantitative Analysis Laboratory Exercise
ERIC Educational Resources Information Center
Parker, Richard H.
2011-01-01
An experimental procedure, using an apparatus that is easy to construct, was developed to incorporate a quantitative electrogravimetric determination of the solution nickel content into an undergraduate or advanced high school quantitative analysis laboratory. This procedure produces results comparable to the procedure used for the gravimetric…
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
An Improved Qualitative Analysis Procedure for Aluminum Subgroup Cations.
ERIC Educational Resources Information Center
Kistner, C. R.; Robinson, Patricia J.
1983-01-01
Describes a procedure for the qualitative analysis of aluminum subgroup cations designed to avoid failure to obtain lead or barium chromate precipitates or failure to report aluminum hydroxide when present (due to staining). Provides a flow chart and step-by-step explanation for the new procedure, indicating significantly improved student results.…
40 CFR 258.53 - Ground-water sampling and analysis requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... include consistent sampling and analysis procedures that are designed to ensure monitoring results that... testing period. If a multiple comparisons procedure is used, the Type I experiment wise error rate for...
40 CFR 257.23 - Ground-water sampling and analysis requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... and analysis procedures that are designed to ensure monitoring results that provide an accurate... procedure is used, the Type I experiment wise error rate for each testing period shall be no less than 0.05...
Detailed analysis of CAMS procedures for phase 3 using ground truth inventories
NASA Technical Reports Server (NTRS)
Carnes, J. G.
1979-01-01
The results of a study of Procedure 1 as used during LACIE Phase 3 are presented. The study was performed by comparing the Procedure 1 classification results with digitized ground-truth inventories. The proportion estimation accuracy, dot labeling accuracy, and clustering effectiveness are discussed.
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2009-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's LabMaster data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 2001 through June 2003. Results for the quality-control samples for 19 analytical procedures were evaluated for bias and precision. Control charts indicate that data for six of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, chloride, magnesium, nitrate (ion chromatography), potassium, and sodium. The calcium procedure was biased throughout the analysis period for the high-concentration sample, but was within control limits. The total monomeric aluminum and fluoride procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The total aluminum, pH, specific conductance, and sulfate procedures were biased for the high-concentration and low-concentration samples, but were within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 16 of 18 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for the dissolved organic carbon or specific conductance procedures. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 18 of the 21 analytes. At least 90 percent of the samples met data-quality objectives for all procedures except total monomeric aluminum (83 percent of samples met objectives), total aluminum (76 percent of samples met objectives), ammonium (73 percent of samples met objectives), dissolved organic carbon (86 percent of samples met objectives), and nitrate (81 percent of samples met objectives). The data-quality objective was not met for the nitrite procedure. Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated satisfactory or above data quality over the time period, with most performance ratings for each sample in the good-to-excellent range. The N-sample (nutrient constituents) analysis had one unsatisfactory rating for the ammonium procedure in one study. The T-sample (trace constituents) analysis had one unsatisfactory rating for the magnesium procedure and one marginal rating for the potassium procedure in one study and one unsatisfactory rating for the sodium procedure in another. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 90 percent of the samples met data-quality objectives for 10 of the 14 analytes; the exceptions were acid-neutralizing capacity, ammonium, dissolved organic carbon, and sodium.
Data-quality objectives were not met in 37 percent of samples analyzed for acid-neutralizing capacity, 28 percent of samples analyzed for dissolved organic carbon, and 30 percent of samples analyzed for sodium. Results indicate a positive bias for the ammonium procedure in one study and a negative bias in another. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 90 percent of the samples analyzed for calcium, chloride, magnesium, pH, potassium, and sodium. Data-quality objectives were met by 78 percent of
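As an illustration of the triplicate-precision check described above, the following sketch computes the coefficient of variation for a few hypothetical triplicates and compares it against an assumed data-quality objective; none of the values are from the report.

```python
# A minimal sketch (hypothetical values) of the triplicate-precision check:
# the coefficient of variation (CV) of each triplicate is compared against
# a data-quality objective for the analyte.
import numpy as np

triplicates = np.array([[1.02, 0.98, 1.01],   # sample 1 (e.g., mg/L)
                        [0.45, 0.52, 0.47],   # sample 2
                        [2.10, 2.05, 2.12]])  # sample 3
dqo_cv = 0.10  # hypothetical data-quality objective: CV <= 10%

cv = triplicates.std(axis=1, ddof=1) / triplicates.mean(axis=1)
met = cv <= dqo_cv
print(f"CVs: {np.round(cv, 3)}; {100 * met.mean():.0f}% of samples met the objective")
```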
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2006-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's LabMaster data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 1999 through June 2001. Results for the quality-control samples for 18 analytical procedures were evaluated for bias and precision. Control charts indicate that data for eight of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, total aluminum, calcium, chloride, nitrate (ion chromatography and colorimetric methods), and sulfate. The total aluminum and dissolved organic carbon procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits. The calcium and specific conductance procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The magnesium procedure was biased for the high-concentration and low-concentration samples, but was within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 14 of 15 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 17 of the 18 analytes. At least 90 percent of the samples met data-quality objectives for all analytes except ammonium (81 percent of samples met objectives), chloride (75 percent of samples met objectives), and sodium (86 percent of samples met objectives). Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality over the time period, with most ratings for each sample in the good-to-excellent range. The P-sample (low-ionic-strength constituents) analysis had one satisfactory rating for the specific conductance procedure in one study. The T-sample (trace constituents) analysis had one satisfactory rating for the aluminum procedure in one study and one unsatisfactory rating for the sodium procedure in another. The remainder of the samples had good or excellent ratings for each study. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 89 percent of the samples met data-quality objectives for 10 of the 14 analytes; the exceptions were ammonium, total aluminum, dissolved organic carbon, and sodium. Results indicate a positive bias for the ammonium procedure in all studies. Data-quality objectives were not met in 50 percent of samples analyzed for total aluminum, 38 percent of samples analyzed for dissolved organic carbon, and 27 percent of samples analyzed for sodium.
Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 91 percent of the samples analyzed for calcium, chloride, fluoride, magnesium, pH, potassium, and sulfate. Data-quality objectives were met by 75 percent of the samples analyzed for sodium and 58 percent of the samples analyzed for specific conductance.
Crop Identification Technology Assessment for Remote Sensing (CITARS)
NASA Technical Reports Server (NTRS)
Bauer, M. E.; Cary, T. K.; Davis, B. J.; Swain, P. H.
1975-01-01
The results of classifications and experiments performed for the Crop Identification Technology Assessment for Remote Sensing (CITARS) project are summarized. Fifteen data sets were classified using two analysis procedures. One procedure used class weights while the other assumed equal probabilities of occurrence for all classes. In addition, 20 data sets were classified using training statistics from another segment or date. The results of both the local and non-local classifications in terms of classification and proportion estimation are presented. Several additional experiments are described which were performed to provide additional understanding of the CITARS results. These experiments investigated alternative analysis procedures, training set selection and size, effects of multitemporal registration, the spectral discriminability of corn, soybeans, and other, and analysis of aircraft multispectral data.
Vinklárková, Bára; Chromý, Vratislav; Šprongl, Luděk; Bittová, Miroslava; Rikanová, Milena; Ohnútková, Ivana; Žaludová, Lenka
2015-01-01
To select a Kjeldahl procedure suitable for the determination of total protein in reference materials used in laboratory medicine, we reviewed in our previous article the Kjeldahl methods adopted in clinical chemistry and found an indirect two-step analysis, based on total Kjeldahl nitrogen corrected for its nonprotein nitrogen, and a direct analysis made on isolated protein precipitates. In this article, we compare both procedures on various reference materials. The indirect Kjeldahl method gave falsely lower results than the direct analysis. Preliminary performance parameters qualify the direct Kjeldahl analysis as a suitable primary reference procedure for the certification of total protein in reference laboratories.
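The indirect two-step calculation reduces to a simple correction and conversion, sketched below with hypothetical concentrations; the 6.25 nitrogen-to-protein factor is the conventional default, not necessarily the factor used in the article.

```python
# A minimal sketch of the indirect two-step Kjeldahl calculation: total
# protein is total Kjeldahl nitrogen minus nonprotein nitrogen, scaled by
# a nitrogen-to-protein conversion factor. All values are hypothetical.
total_kjeldahl_n_g_l = 11.2  # total Kjeldahl nitrogen (g/L), hypothetical
nonprotein_n_g_l = 0.4       # nonprotein nitrogen (g/L), hypothetical
factor = 6.25                # conventional nitrogen-to-protein factor

protein_g_l = (total_kjeldahl_n_g_l - nonprotein_n_g_l) * factor
print(f"total protein = {protein_g_l:.1f} g/L")
```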
Analysis of helicopter noise data using international helicopter noise certification procedures
DOT National Transportation Integrated Search
1986-03-31
This report documents the results of a Federal Aviation Administration (FAA) noise measurement flight test program involving seven helicopters and established noise levels using the basic testing, reduction and analysis procedures specified by the In...
Gerson, Lauren; Stouch, Bruce; Lobonţiu, Adrian
2018-01-01
The TIF procedure has emerged as an endoscopic treatment for patients with refractory gastro-esophageal reflux disease (GERD). Previous systematic reviews of the TIF procedure conflated findings from studies with modalities that do not reflect the current 2.0 procedure technique or refined, data-backed patient selection criteria. A meta-analysis was conducted using data only from randomized studies that assessed the TIF 2.0 procedure compared to a control. The purpose of the meta-analysis was to determine the efficacy and long-term outcomes associated with performance of the TIF 2.0 procedure in patients with chronic long-term refractory GERD on optimized PPI therapy, including esophageal pH, PPI utilization, and quality of life. Methods: Three prospective research questions were predicated on the outcomes of the TIF procedure compared to patients who received PPI therapy or sham, concomitant treatment for GERD, and the patient-reported quality of life. Event rates were calculated using the random-effects model. Since the time of follow-up post-TIF procedure was variable, analysis was performed to incorporate the time of follow-up for each individual patient at the 3-year time point. Results: Results from this meta-analysis, including data from 233 patients, demonstrated that TIF subjects at 3 years had improved esophageal pH, a decrease in PPI utilization, and improved quality of life. Conclusions: In a meta-analysis of randomized, controlled trials (RCTs), the TIF procedure produced significant changes for patients with GERD refractory to PPIs, compared with sham or PPI therapy, in esophageal pH, decreased PPI utilization, and improved quality of life.
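As background on the pooling step, the sketch below implements a standard DerSimonian-Laird random-effects combination of per-study effects; the effect sizes and variances are hypothetical, and the specific estimator used in the published meta-analysis may differ.

```python
# A minimal random-effects pooling sketch (DerSimonian-Laird) with
# hypothetical per-study effect estimates and variances.
import numpy as np

y = np.array([0.40, 0.55, 0.30, 0.62])  # per-study effects (hypothetical)
v = np.array([0.02, 0.03, 0.01, 0.04])  # per-study variances (hypothetical)

w = 1.0 / v                              # fixed-effect weights
y_fixed = (w * y).sum() / w.sum()
Q = (w * (y - y_fixed) ** 2).sum()       # Cochran's Q heterogeneity statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

w_re = 1.0 / (v + tau2)                  # random-effects weights
y_re = (w_re * y).sum() / w_re.sum()
se_re = np.sqrt(1.0 / w_re.sum())
print(f"pooled effect = {y_re:.3f} "
      f"(95% CI {y_re - 1.96 * se_re:.3f} to {y_re + 1.96 * se_re:.3f})")
```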
Microbial ecology laboratory procedures manual NASA/MSFC
NASA Technical Reports Server (NTRS)
Huff, Timothy L.
1990-01-01
An essential part of the efficient operation of any microbiology laboratory involved in sample analysis is a standard procedures manual. The purpose of this manual is to provide concise and well defined instructions on routine technical procedures involving sample analysis and methods for monitoring and maintaining quality control within the laboratory. Of equal importance is the safe operation of the laboratory. This manual outlines detailed procedures to be followed in the microbial ecology laboratory to assure safety, analytical control, and validity of results.
The use of cognitive task analysis to improve instructional descriptions of procedures.
Clark, Richard E; Pugh, Carla M; Yates, Kenneth A; Inaba, Kenji; Green, Donald J; Sullivan, Maura E
2012-03-01
Surgical training relies heavily on the ability of expert surgeons to provide complete and accurate descriptions of a complex procedure. However, research from a variety of domains suggests that experts often omit critical information about the judgments, analysis, and decisions they make when solving a difficult problem or performing a complex task. In this study, we compared three methods for capturing surgeons' descriptions of how to perform the procedure for inserting a femoral artery shunt (unaided free-recall, unaided free-recall with simulation, and cognitive task analysis methods) to determine which method produced more accurate and complete results. Cognitive task analysis was approximately 70% more complete and accurate than free-recall with or without simulation of the procedure. Ten expert trauma surgeons at a major urban trauma center were interviewed separately and asked to describe how to perform an emergency shunt procedure. Four surgeons provided an unaided free-recall description of the shunt procedure, five surgeons provided an unaided free-recall description of the procedure using visual aids and surgical instruments (simulation), and one (chosen randomly) was interviewed using cognitive task analysis (CTA) methods. An 11th vascular surgeon approved the final CTA protocol. The CTA interview with only one expert surgeon resulted in significantly greater accuracy and completeness of the descriptions compared with the unaided free-recall interviews with multiple expert surgeons. Surgeons in the unaided group omitted nearly 70% of necessary decision steps. In the free-recall group, heavy use of simulation improved surgeons' completeness when describing the steps of the procedure. CTA significantly increases the completeness and accuracy of surgeons' instructional descriptions of surgical procedures. In addition, simulation during unaided free-recall interviews may improve the completeness of interview data.
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
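The stacking-and-voxel idea can be sketched in a few lines: bin the points of a cloud into a regular grid and treat occupied cells as candidate voxel elements. This toy version uses random points and an assumed voxel size, and it omits the surface handling and mesh export of the actual procedure.

```python
# A minimal voxelization sketch: points (stand-in for laser-scan sections)
# are binned into a regular grid, and occupied cells become candidate
# voxel finite elements. Solver/export steps are omitted.
import numpy as np

rng = np.random.default_rng(2)
points = rng.uniform(0.0, 10.0, size=(5000, 3))  # stand-in point cloud (m)
voxel = 0.5                                      # assumed voxel edge length (m)

origin = points.min(axis=0)
idx = np.floor((points - origin) / voxel).astype(int)
shape = idx.max(axis=0) + 1

occupancy = np.zeros(shape, dtype=bool)
occupancy[tuple(idx.T)] = True                   # mark cells containing points
print(f"{occupancy.sum()} occupied voxels of {occupancy.size} "
      f"({100 * occupancy.mean():.1f}% fill)")
```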
Characteristics of health interventions: a systematic analysis of the Austrian Procedure Catalogue.
Neururer, Sabrina B; Pfeiffer, Karl-Peter
2012-01-01
The Austrian Procedure Catalogue contains 1,500 codes for health interventions used for performance-oriented hospital financing in Austria. It offers a multiaxial taxonomy. The aim of this study is to identify characteristics of medical procedures; therefore, a definition analysis followed by a typological analysis was conducted. Search strings were generated from the code descriptions regarding the heart, large vessels, and cardiovascular system. Their definitions were looked up in the Pschyrembel Clinical Dictionary and documented. From these definitions, types that represent characteristics of health interventions were abstracted. The analysis confirmed the three axes of the Austrian Procedure Catalogue and also identified new, relevant information. The results are the foundation of a further enhancement of the Austrian Procedure Catalogue.
Risk analysis of computer system designs
NASA Technical Reports Server (NTRS)
Vallone, A.
1981-01-01
Adverse events during implementation can affect the final capabilities, schedule, and cost of a computer system even though the system was accurately designed and evaluated. Risk analysis enables the manager to forecast the impact of those events and to request design revisions or contingency plans in time, before making any decision. This paper presents a structured procedure for an effective risk analysis. The procedure identifies the required activities, separates subjective assessments from objective evaluations, and defines a risk measure to determine the analysis results. The procedure is consistent with the system design evaluation and enables a meaningful comparison among alternative designs.
A procedure is presented that uses a vacuum distillation/gas chromatography/mass spectrometry system for analysis of problematic matrices of volatile organic compounds. The procedure compensates for matrix effects and provides both analytical results and confidence intervals from...
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2009-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's LabMaster data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 2003 through June 2005. Results for the quality-control samples for 20 analytical procedures were evaluated for bias and precision. Control charts indicate that data for five of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, pH, silicon, and sodium. Seven of the analytical procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits; these procedures were: dissolved organic carbon, chloride, nitrate (ion chromatography), nitrite, silicon, sodium, and sulfate. The calcium and magnesium procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The total aluminum and specific conductance procedures were biased for the high-concentration and low-concentration samples, but were within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 17 of 18 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 18 of the 22 analytes. At least 85 percent of the samples met data-quality objectives for all analytes except total monomeric aluminum (82 percent of samples met objectives), total aluminum (77 percent of samples met objectives), chloride (80 percent of samples met objectives), fluoride (76 percent of samples met objectives), and nitrate (ion chromatography) (79 percent of samples met objectives). The ammonium and total dissolved nitrogen procedures did not meet the data-quality objectives. Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality over the time period, with ratings for each sample in the satisfactory, good, and excellent ranges or less than 10 percent error. The P-sample (low-ionic-strength constituents) analysis had one marginal and two unsatisfactory ratings for the chloride procedure. The T-sample (trace constituents) analysis had two unsatisfactory ratings and one high range percent error for the aluminum procedure. The N-sample (nutrient constituents) analysis had one marginal rating for the nitrate procedure. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 84 percent of the samples met data-quality objectives for 11 of the 14 analytes; the exceptions were ammonium, total aluminum, and acid-neutralizing capacity.
The ammonium procedure did not meet data-quality objectives in all studies. Data-quality objectives were not met in 23 percent of samples analyzed for total aluminum and 45 percent of samples analyzed for acid-neutralizing capacity. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 86 percent of the samples analyzed for calcium, chloride, fluoride, magnesium, pH, potassium, sodium, and sulfate. Data-quality objectives were not met by samples analyzed for fluoride.
Müller, Jana Annina; Wendt, Dorothea; Kollmeier, Birger; Brand, Thomas
2016-01-01
The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. In order to reduce practical challenges, we first reduced the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set on a different collective of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found using the eye tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye tracking data. The results of the growth curve analysis were compared with the results of the bootstrap procedure. Both analysis methods show similar processing durations. PMID:27764125
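The bootstrap step can be illustrated compactly: resample the per-trial duration estimates with replacement and read a confidence interval off the resampled means. The sketch below uses simulated values and is not the authors' exact algorithm.

```python
# A minimal bootstrap sketch (simulated fixation-based durations): resample
# trials with replacement and take percentiles of the resampled means, the
# general mechanism behind the paradigm's bootstrapped processing duration.
import numpy as np

rng = np.random.default_rng(3)
durations_ms = rng.normal(850.0, 120.0, 40)  # per-trial estimates (hypothetical)

boot_means = np.array([
    rng.choice(durations_ms, size=durations_ms.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"processing duration: {durations_ms.mean():.0f} ms "
      f"(95% CI {lo:.0f}-{hi:.0f} ms)")
```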
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
Model prototype utilization in the analysis of fault tolerant control and data processing systems
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Tsarev, R. Yu; Gruzenkin, D. V.; Prokopenko, A. V.; Knyazkov, A. N.; Laptenok, V. D.
2016-04-01
A procedure for assessing the profit of implementing a control and data processing system is presented in the paper. The rationale for creating and analyzing a model prototype follows from implementing the approach of providing fault tolerance through the inclusion of structural and software redundancy. The developed procedure allows finding the best ratio between the cost of developing and analyzing the model prototype and the earnings from utilizing its results and the information produced. The suggested approach is illustrated by a model example of profit assessment and analysis for a control and data processing system.
Effectiveness of internet-based affect induction procedures: A systematic review and meta-analysis.
Ferrer, Rebecca A; Grenen, Emily G; Taber, Jennifer M
2015-12-01
Procedures used to induce affect in a laboratory are effective and well validated. Given recent methodological and technological advances in Internet research, it is important to determine whether affect can be effectively induced using Internet methodology. We conducted a meta-analysis and systematic review of prior research that has used Internet-based affect induction procedures and examined potential moderators of the effectiveness of affect induction procedures. Twenty-six studies were included in the final analyses, with 89 independent effect sizes. Affect induction procedures effectively induced general positive affect, general negative affect, fear, disgust, anger, sadness, and guilt, but did not significantly induce happiness. Contamination by nontarget affect did not appear to be a major concern. Video inductions resulted in greater effect sizes. Overall, results indicate that affect can be effectively induced in Internet studies, suggesting an important venue for the acceleration of affective science.
Manolov, Rumen; Jamieson, Matthew; Evans, Jonathan J; Sierra, Vicenta
2015-09-01
Single-case data analysis still relies heavily on visual inspection, and, at the same time, it is not clear to what extent the results of different quantitative procedures converge in identifying an intervention effect and its magnitude when applied to the same data; the present study provides this type of evidence for two procedures. One of the procedures, included because of the importance of providing objective criteria to visual analysts, is a visual aid that fits and projects a split-middle trend while taking data variability into account. The other procedure converts several different metrics into probabilities, making their results comparable. In the present study, we explore to what extent these two procedures coincide regarding the magnitude of the intervention effect in a set of studies stemming from a recent meta-analysis. The procedures concur to a greater extent with the values of the indices computed and with each other and, to a lesser extent, with our own visual analysis. For distinguishing smaller from larger effects, the probability-based approach seems somewhat better suited. Moreover, the results of the field test suggest that the latter is a reasonably good mechanism for translating different metrics into similar labels. User-friendly R code is provided for promoting the use of the visual aid, together with a quantification based on nonoverlap and the label provided by the probability approach.
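A compact sketch of the split-middle component of the visual aid follows: halve the baseline, pass a line through the two half-medians, and project it forward. The data are hypothetical, and the published R implementation additionally envelopes the trend with data-variability bands, which this sketch omits.

```python
# A minimal split-middle sketch (hypothetical baseline data): the baseline
# phase is halved, a trend line passes through the two half-medians, and
# that trend is projected into the intervention phase as a visual criterion.
import numpy as np

baseline = np.array([12.0, 10.0, 13.0, 11.0, 14.0, 12.0, 15.0, 13.0])
n = len(baseline)
half = n // 2

x1, x2 = np.median(np.arange(half)), np.median(np.arange(half, n))
y1, y2 = np.median(baseline[:half]), np.median(baseline[half:])

slope = (y2 - y1) / (x2 - x1)
intercept = y1 - slope * x1

# project the split-middle trend over a subsequent intervention phase
intervention_sessions = np.arange(n, n + 6)
projected = intercept + slope * intervention_sessions
print("projected trend:", np.round(projected, 1))
```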
Medical microbiological analysis of Apollo-Soyuz test project crewmembers
NASA Technical Reports Server (NTRS)
Taylor, G. R.; Zaloguev, S. N.
1976-01-01
The procedures and results of the Microbial Exchange Experiment (AR-002) of the Apollo-Soyuz Test Project are described. Included in the discussion of procedural aspects are methods and materials, in-flight microbial specimen collection, and preliminary analysis of microbial specimens. Medically important microorganisms recovered from both Apollo and Soyuz crewmen are evaluated.
Rossi, Pierre; Gillet, François; Rohrbach, Emmanuelle; Diaby, Nouhou; Holliger, Christof
2009-01-01
The variability of terminal restriction fragment length polymorphism analysis applied to complex microbial communities was assessed statistically. Recent technological improvements were implemented in the successive steps of the procedure, resulting in a standardized procedure which provided a high level of reproducibility. PMID:19749066
Meta-Analysis of Criterion Validity for Curriculum-Based Measurement in Written Language
ERIC Educational Resources Information Center
Romig, John Elwood; Therrien, William J.; Lloyd, John W.
2017-01-01
We used meta-analysis to examine the criterion validity of four scoring procedures used in curriculum-based measurement of written language. A total of 22 articles representing 21 studies (N = 21) met the inclusion criteria. Results indicated that two scoring procedures, correct word sequences and correct minus incorrect sequences, have acceptable…
Contact stresses in meshing spur gear teeth: Use of an incremental finite element procedure
NASA Technical Reports Server (NTRS)
Hsieh, Chih-Ming; Huston, Ronald L.; Oswald, Fred B.
1992-01-01
Contact stresses in meshing spur gear teeth are examined. The analysis is based upon an incremental finite element procedure that simultaneously determines the stresses in the contact region between the meshing teeth. The teeth themselves are modeled by two-dimensional plane strain elements. Friction effects are included, with the friction forces assumed to obey Coulomb's law. The analysis assumes that the displacements are small and that the tooth materials are linearly elastic. The analysis procedure is validated by comparing its results with those for the classical problem of two contacting semicylinders obtained from the Hertz method. Agreement is excellent.
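For reference, the Hertz line-contact solution used for validation can be computed directly; the sketch below assumes two parallel steel cylinders with invented dimensions and load, not the gear-tooth geometry of the study.

```python
# A minimal Hertz line-contact sketch (two parallel steel cylinders,
# hypothetical dimensions/load), the classical solution this kind of
# finite element contact analysis is validated against.
import math

E, nu = 210e9, 0.3                    # steel modulus (Pa), Poisson ratio
R1, R2 = 0.02, 0.03                   # cylinder radii (m)
F, L = 1000.0, 0.01                   # normal load (N) over contact length (m)

w = F / L                             # load per unit length (N/m)
R = 1.0 / (1.0 / R1 + 1.0 / R2)       # effective radius
E_star = E / (2.0 * (1.0 - nu ** 2))  # contact modulus, identical materials

b = math.sqrt(4.0 * w * R / (math.pi * E_star))  # contact half-width
p0 = 2.0 * w / (math.pi * b)                     # peak contact pressure
print(f"half-width b = {b * 1e6:.1f} um, peak pressure p0 = {p0 / 1e6:.0f} MPa")
```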
Structural Analysis and Testing of an Erectable Truss for Precision Segmented Reflector Application
NASA Technical Reports Server (NTRS)
Collins, Timothy J.; Fichter, W. B.; Adams, Richard R.; Javeed, Mehzad
1995-01-01
This paper describes analysis and test results obtained at Langley Research Center (LaRC) on a doubly curved testbed support truss for precision reflector applications. Descriptions of test procedures and experimental results that expand upon previous investigations are presented. A brief description of the truss is given, and finite-element-analysis models are described. Static-load and vibration test procedures are discussed, and experimental results are shown to be repeatable and in generally good agreement with linear finite-element predictions. Truss structural performance (as determined by static deflection and vibration testing) is shown to be predictable and very close to linear. Vibration test results presented herein confirm that an anomalous mode observed during initial testing was due to the flexibility of the truss support system. Photogrammetric surveys with two 131-in. reference scales show that the root-mean-square (rms) truss-surface accuracy is about 0.0025 in. Photogrammetric measurements also indicate that the truss coefficient of thermal expansion (CTE) is in good agreement with that predicted by analysis. A detailed description of the photogrammetric procedures is included as an appendix.
Nicholls, C; Karim, K; Piletsky, S; Saini, S; Setford, S
2006-01-15
The preparation of a molecularly imprinted polymer (MIP) for pentachlorophenol is described together with two alternative reporter derivatives for use in a displacement imprinted polymer receptor analysis (DIPRA) format procedure. In this procedure, alternative reporter molecules were rebound to the synthetic receptor sites and their displacement by the target analyte was employed as the basis of a simple procedure for the measurement of chlorophenols in water and packaging material samples. Water samples were extracted using the standard procedure (EPA 528) and a detection limit of 0.5 µg/L was achieved using the DIPRA detection method, with good agreement between the displacement technique and GC-ECD analysis. A variety of packaging materials, extracted using a buffered detergent solution, were also analysed using the DIPRA procedure and showed good agreement with GC results. In addition, investigation of the cross-reactivity of a range of pesticides and materials commonly encountered in environmental analysis indicated that the procedure gave good discrimination between pesticides bearing a chlorophenolic moiety and other materials. The procedure is considered highly suitable for use as a rapid field-test method or for incorporation into a test kit device.
Actuarial analysis of surgical results: rationale and method.
Grunkemeier, G L; Starr, A
1977-11-01
The use of time-related methods of statistical analysis is essential for valid evaluation of the long-term results of a surgical procedure. Accurate comparison of two procedures or two prosthetic devices is possible only when the length of follow-up is properly accounted for. The purpose of this report is to make the technical aspects of the actuarial, or life table, method easily accessible to the surgeon, with emphasis on the motivation for and the rationale behind it. This topic is illustrated in terms of heart valve prostheses, a field that is rapidly developing. Both the authors and readers of articles must be aware that controversies surrounding the relative merits of various prosthetic designs or operative procedures can be settled only if proper time-related methods of analysis are utilized.
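The actuarial (life-table) computation itself is short: per interval, the conditional survival uses an effective number at risk that credits withdrawals with half an interval of exposure. The counts below are hypothetical.

```python
# A minimal actuarial (life-table) sketch with hypothetical yearly counts:
# deaths and withdrawals per interval, effective exposure n - c/2, and the
# cumulative survival that accounts for each patient's length of follow-up.
at_risk_start = 100
deaths =      [3, 4, 2, 5]   # events in years 1..4 (hypothetical)
withdrawals = [5, 8, 6, 10]  # censored (lost/alive at study end), hypothetical

surv = 1.0
n = at_risk_start
for year, (d, c) in enumerate(zip(deaths, withdrawals), start=1):
    effective = n - c / 2.0  # actuarial adjustment for withdrawals
    surv *= 1.0 - d / effective
    print(f"year {year}: cumulative survival = {surv:.3f}")
    n -= d + c
```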
Numerical solution of quadratic matrix equations for free vibration analysis of structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1975-01-01
This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
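A common general-purpose route for such quadratic eigenproblems, shown below for orientation, is companion linearization followed by a dense generalized eigensolver; this is not the Sturm-sequence/inverse-iteration scheme of the paper, and the 2-DOF matrices are invented.

```python
# A minimal sketch of the standard route for a quadratic eigenproblem
# (lambda^2 M + lambda C + K) x = 0: linearize to a companion form of
# twice the size and solve the generalized eigenproblem A z = lambda B z.
import numpy as np
from scipy import linalg

M = np.eye(2)                               # toy mass matrix
C = np.array([[0.1, 0.0], [0.0, 0.2]])      # toy damping matrix
K = np.array([[4.0, -1.0], [-1.0, 3.0]])    # toy stiffness matrix

n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), M]])

eigvals = linalg.eig(A, B, right=False)     # eigenvalues of the linearization
print("eigenvalues:", np.sort_complex(eigvals))
```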
Vibration Signature Analysis of a Faulted Gear Transmission System
NASA Technical Reports Server (NTRS)
Choy, F. K.; Huang, S.; Zakrajsek, J. J.; Handschuh, R. F.; Townsend, D. P.
1994-01-01
A comprehensive procedure for predicting faults in gear transmission systems under normal operating conditions is presented. Experimental data were obtained from a spiral bevel gear fatigue test rig at NASA Lewis Research Center. Time-synchronous-averaged vibration data were recorded throughout the test as the fault progressed from a small single pit to severe pitting over several teeth, and finally tooth fracture. A numerical procedure based on the Wigner-Ville distribution was used to examine the time-averaged vibration data. Results from the Wigner-Ville procedure are compared to results from a variety of signal analysis techniques which include time domain analysis methods and frequency analysis methods. Using photographs of the gear tooth at various stages of damage, the limitations and accuracy of the various techniques are compared and discussed. Conclusions are drawn from the comparison of the different approaches as well as the applicability of the Wigner-Ville method in predicting gear faults.
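For orientation, a bare-bones discrete Wigner-Ville computation follows: form the instantaneous autocorrelation and Fourier-transform it over the lag. The test signal is a synthetic chirp, not gear vibration data, and practical implementations add windowing (pseudo-WVD) to tame cross-terms.

```python
# A minimal discrete Wigner-Ville sketch: the instantaneous autocorrelation
# x[n+k] * conj(x[n-k]) is Fourier-transformed over the lag k, giving a
# joint time-frequency map of the kind used to track developing gear faults.
import numpy as np

fs = 1000.0
t = np.arange(512) / fs
x = np.exp(2j * np.pi * (50.0 * t + 80.0 * t ** 2))  # analytic test chirp

N = len(x)
wvd = np.zeros((N, N))
for n in range(N):
    kmax = min(n, N - 1 - n)
    lags = np.arange(-kmax, kmax + 1)
    r = np.zeros(N, dtype=complex)
    r[lags % N] = x[n + lags] * np.conj(x[n - lags])  # conjugate-symmetric
    wvd[:, n] = np.fft.fft(r).real  # rows: frequency bins, cols: time

mid = wvd[:, N // 4: 3 * N // 4]    # avoid short-lag edges of the record
peaks = mid.argmax(axis=0)          # dominant frequency bin per instant
print("dominant frequency bin rises from", peaks[0], "to", peaks[-1])
```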
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
Davis, Barbara J; Kahng, Sungwoo; Schmidt, Jonathan; Bowman, Lynn G; Boelter, Eric W
2012-01-01
Current research provides few suggestions for modifications to functional analysis procedures to accommodate low rate, high intensity problem behavior. This study examined the results of the extended duration functional analysis procedures of Kahng, Abt, and Schonbachler (2001) with six children admitted to an inpatient hospital for the treatment of severe problem behavior. Results of initial functional analyses (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994) were inconclusive for all children because of low levels of responding. The altered functional analyses, which changed multiple variables including the duration of the functional analysis (i.e., 6 or 7 hrs), yielded clear behavioral functions for all six participants. These results add additional support for the utility of an altered analysis of low rate, high intensity problem behavior when standard functional analyses do not yield differentiated results.
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sullivan, T. L.
1974-01-01
An approximate computational procedure is described for the analysis of angleplied laminates with residual nonlinear strains. The procedure consists of a combination of linear composite mechanics and incremental linear laminate theory. The procedure accounts for initial nonlinear strains, unloading, and in-situ matrix orthotropic nonlinear behavior. The results obtained in applying the procedure to boron/aluminum angleplied laminates show that it is a convenient means to accurately predict the initial tangent properties of angleplied laminates in which the matrix has been strained nonlinearly by the lamination residual stresses. The initial tangent properties predicted by the procedure were in good agreement with measured data obtained from boron/aluminum angleplied laminates.
Macyszyn, Luke; Attiah, Mark; Ma, Tracy S; Ali, Zarina; Faught, Ryan; Hossain, Alisha; Man, Karen; Patel, Hiren; Sobota, Rosanna; Zager, Eric L; Stein, Sherman C
2017-05-01
OBJECTIVE Moyamoya disease (MMD) is a chronic cerebrovascular disease that can lead to devastating neurological outcomes. Surgical intervention is the definitive treatment, with direct, indirect, and combined revascularization procedures currently employed by surgeons. The optimal surgical approach, however, remains unclear. In this decision analysis, the authors compared the effectiveness of revascularization procedures in both adult and pediatric patients with MMD. METHODS A comprehensive literature search was performed for studies of MMD. Using complication and success rates from the literature, the authors constructed a decision analysis model for treatment using direct and indirect revascularization techniques. Utility values for the various outcomes and complications were extracted from the literature examining preferences in similar clinical conditions. Sensitivity analysis was performed. RESULTS A structured literature search yielded 33 studies involving 4197 cases. Cases were divided into adult and pediatric populations. These were further subdivided into 3 different treatment groups: indirect, direct, and combined revascularization procedures. In the pediatric population at 5- and 10-year follow-up, there was no significant difference between indirect and combination procedures, but both were superior to direct revascularization. In adults at 4-year follow-up, indirect was superior to direct revascularization. CONCLUSIONS In the absence of factors that dictate a specific approach, the present decision analysis suggests that direct revascularization procedures are inferior in terms of quality-adjusted life years in adults at 4 years and in children at 5 and 10 years postoperatively. These findings were statistically significant (p < 0.001 in all cases), suggesting that indirect and combination procedures may offer optimal results at long-term follow-up.
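As background, a decision-analysis comparison of this kind reduces to computing expected quality-adjusted life years (QALYs) for each strategy from outcome probabilities and utility weights. The toy sketch below uses invented probabilities and utilities purely to show the mechanics; it does not reproduce the paper's model or data:

```python
# Hypothetical complication probabilities and utility weights (NOT the paper's data)
strategies = {
    "indirect": {"p_stroke": 0.05, "p_reop": 0.08},
    "direct":   {"p_stroke": 0.07, "p_reop": 0.04},
    "combined": {"p_stroke": 0.05, "p_reop": 0.06},
}
U_WELL, U_STROKE, U_REOP = 1.0, 0.50, 0.80  # assumed utility weights per year
YEARS = 5.0                                  # follow-up horizon

def expected_qalys(p_stroke, p_reop):
    # Expected utility = sum over outcomes of probability * utility, times horizon
    p_well = 1.0 - p_stroke - p_reop
    return YEARS * (p_well * U_WELL + p_stroke * U_STROKE + p_reop * U_REOP)

for name, p in strategies.items():
    print(name, round(expected_qalys(**p), 3))
```

Sensitivity analysis then repeats this computation while sweeping each probability or utility over its plausible range.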
Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi
2015-01-01
The univariate meta-analysis (UM) procedure, a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to a loss of efficiency in estimating treatment effects. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of four new approaches, zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), with respect to estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazards model coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches across all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest the use of the MMC procedure to overcome the lack of information needed for a complete covariance matrix of the coefficients.
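To make the MGLS setup concrete, the sketch below pools per-study coefficient vectors by generalized least squares under an assumed within-study correlation; setting the correlation to zero corresponds to the ZC approximation, and a fixed common value to CC. All numbers are illustrative, not from the study:

```python
import numpy as np

# Per-study coefficient vectors (e.g., two Cox model log-hazard ratios)
# and their standard errors -- illustrative numbers only.
betas = [np.array([0.40, 0.10]), np.array([0.55, 0.05]), np.array([0.30, 0.20])]
ses   = [np.array([0.15, 0.12]), np.array([0.20, 0.10]), np.array([0.10, 0.15])]

def mgls_pool(betas, ses, rho):
    """MGLS fixed-effect pooling with an assumed within-study correlation rho
    (rho = 0 reproduces the 'zero correlation' approximation)."""
    k = len(betas[0])
    R = np.full((k, k), rho) + (1 - rho) * np.eye(k)  # correlation matrix
    Wsum = np.zeros((k, k))
    Wb = np.zeros(k)
    for b, se in zip(betas, ses):
        V = np.outer(se, se) * R          # approximated within-study covariance
        W = np.linalg.inv(V)              # GLS weight = inverse covariance
        Wsum += W
        Wb += W @ b
    cov = np.linalg.inv(Wsum)
    return cov @ Wb, cov

beta_zc, _ = mgls_pool(betas, ses, rho=0.0)   # ZC approximation
beta_cc, _ = mgls_pool(betas, ses, rho=0.3)   # CC with an assumed common rho
print(beta_zc, beta_cc)
```

The EC and MMC approaches of the paper differ only in how the off-diagonal entries of V are estimated rather than assumed.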
Thermoelastic analysis of solar cell arrays and their material properties
NASA Technical Reports Server (NTRS)
Salama, M. A.; Rowe, W. M.; Yasui, R. K.
1973-01-01
A thermoelastic stress analysis procedure is reported for predicting the thermally induced stresses and failures in silicon solar cell arrays. A prerequisite for the analysis is the characterization of the temperature-dependent thermal and mechanical properties of the solar cell materials. Extensive material property testing was carried out in the temperature range -200 to +200 C for the filter glass, P- and N-type silicon, interconnector metals, solder, and several candidate silicone rubber adhesives. The analysis procedure is applied to several solar cell array design configurations. Results of the analysis indicate the optimum design configuration, with respect to compatible materials, effect of the solder coating, and effect of the interconnector geometry. Good agreement was found between results of the analysis and the test program.
Isolating the Effects of Training Using Simple Regression Analysis: An Example of the Procedure.
ERIC Educational Resources Information Center
Waugh, C. Keith
This paper provides a case example of simple regression analysis, a forecasting procedure used to isolate the effects of training from an identified extraneous variable. This case example focuses on results of a three-day sales training program to improve bank loan officers' knowledge, skill-level, and attitude regarding solicitation and sale of…
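The forecasting logic is straightforward to demonstrate: regress the outcome on time over the pre-training period, extrapolate that trend into the post-training period, and attribute the gap between actual and forecast values to training. A minimal sketch with invented sales figures (not the paper's data):

```python
import numpy as np

# Hypothetical monthly loan-sales figures (not from the paper)
pre_months  = np.arange(1, 13)                      # months before training
pre_sales   = 100 + 2.0 * pre_months + np.random.default_rng(1).normal(0, 3, 12)
post_months = np.arange(13, 19)                     # months after training
post_sales  = np.array([135.0, 139, 142, 146, 149, 153])

# Simple regression on the pre-training period captures the extraneous trend
slope, intercept = np.polyfit(pre_months, pre_sales, 1)
forecast = intercept + slope * post_months          # sales expected without training

# The isolated training effect is the gap between actual results and the forecast
effect = post_sales - forecast
print("mean isolated training effect:", effect.mean().round(2))
```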
NASA Technical Reports Server (NTRS)
Stein, M.; Housner, J. D.
1978-01-01
A numerical analysis developed for the buckling of rectangular orthotropic layered panels under combined shear and compression is described. This analysis uses a central finite difference procedure based on trigonometric functions instead of using the conventional finite differences which are based on polynomial functions. Inasmuch as the buckle mode shape is usually trigonometric in nature, the analysis using trigonometric finite differences can be made to exhibit a much faster convergence rate than that using conventional differences. Also, the trigonometric finite difference procedure leads to difference equations having the same form as conventional finite differences; thereby allowing available conventional finite difference formulations to be converted readily to trigonometric form. For two-dimensional problems, the procedure introduces two numerical parameters into the analysis. Engineering approaches for the selection of these parameters are presented and the analysis procedure is demonstrated by application to several isotropic and orthotropic panel buckling problems. Among these problems is the shear buckling of stiffened isotropic and filamentary composite panels in which the stiffener is broken. Results indicate that a break may degrade the effect of the stiffener to the extent that the panel will not carry much more load than if the stiffener were absent.
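The advantage of trigonometric differences is easy to see in one dimension: for a mode-like function sin(βx), the conventional second difference divided by h² carries an O(h²) error, whereas dividing instead by 2(1 − cos βh)/β² makes the stencil exact for that wavenumber. A small numerical check (the report's stencils are derived for the full panel equations; β plays the role of the numerical parameter that must be selected):

```python
import numpy as np

beta = 4.0          # assumed wavenumber of the (trigonometric) mode shape
h = 0.1             # grid spacing
x = np.arange(0.0, 1.0 + h, h)
y = np.sin(beta * x)                        # mode-like test function
exact = -beta**2 * np.sin(beta * x[1:-1])   # exact second derivative

second_diff = y[2:] - 2 * y[1:-1] + y[:-2]  # same stencil in both cases

d2_poly = second_diff / h**2                                     # conventional FD
d2_trig = second_diff * beta**2 / (2 * (1 - np.cos(beta * h)))   # trigonometric FD

print("max error, polynomial FD:    ", np.abs(d2_poly - exact).max())
print("max error, trigonometric FD: ", np.abs(d2_trig - exact).max())  # ~1e-13
```

Because the difference equations keep the conventional three-point form, only the divisor changes, which is why existing finite difference formulations convert readily.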
Failure mode and effects analysis: an empirical comparison of failure mode scoring procedures.
Ashley, Laura; Armitage, Gerry
2010-12-01
To empirically compare 2 different commonly used failure mode and effects analysis (FMEA) scoring procedures with respect to their resultant failure mode scores and prioritization: a mathematical procedure, where scores are assigned independently by FMEA team members and averaged, and a consensus procedure, where scores are agreed on by the FMEA team via discussion. A multidisciplinary team undertook a Healthcare FMEA of chemotherapy administration. This included mapping the chemotherapy process, identifying and scoring failure modes (potential errors) for each process step, and generating remedial strategies to counteract them. Failure modes were scored using both an independent mathematical procedure and a team consensus procedure. Almost three-fifths of the 30 failure modes generated were scored differently by the 2 procedures, and for just more than one-third of cases, the score discrepancy was substantial. Using the Healthcare FMEA prioritization cutoff score, almost twice as many failure modes were prioritized by the consensus procedure than by the mathematical procedure. This is the first study to empirically demonstrate that different FMEA scoring procedures can score and prioritize failure modes differently. It found considerable variability in individual team members' opinions on scores, which highlights the subjective and qualitative nature of failure mode scoring. A consensus scoring procedure may be most appropriate for FMEA as it allows variability in individuals' scores and rationales to become apparent and to be discussed and resolved by the team. It may also yield team learning and communication benefits unlikely to result from a mathematical procedure.
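The arithmetic difference between the two scoring procedures is easy to reproduce: the mathematical procedure averages the criticality scores that members compute independently, while the consensus procedure multiplies a single agreed score per dimension. A toy example with invented scores (not the study's data):

```python
import numpy as np

# Hypothetical 1-10 severity/occurrence/detection scores from 4 team members
# for one failure mode (invented for illustration).
member_scores = np.array([
    # S, O, D
    [8, 3, 4],
    [6, 5, 2],
    [9, 2, 6],
    [7, 4, 3],
])

# Mathematical procedure: average the members' independent criticality scores
rpn_math = np.prod(member_scores, axis=1).mean()

# Consensus procedure: one agreed score per dimension after team discussion
consensus = np.array([8, 4, 3])  # assumed outcome of the discussion
rpn_consensus = np.prod(consensus)

print(rpn_math, rpn_consensus)  # the two procedures can prioritize differently
```

Because the product is nonlinear, averaging individual products and multiplying consensus values generally disagree, which is consistent with the discrepancies the study reports.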
NASA Technical Reports Server (NTRS)
Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.
1994-01-01
Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.
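For orientation, process capability indices compare the spread of a measured characteristic with its specification limits: Cp = (USL − LSL)/6σ and Cpk = min(USL − μ, μ − LSL)/3σ. A minimal sketch with simulated measurements and assumed limits (not actual Thermal Protection System data):

```python
import numpy as np

# Hypothetical bond-strength measurements and spec limits (illustrative only)
data = np.random.default_rng(7).normal(loc=62.0, scale=2.5, size=100)
LSL, USL = 55.0, 70.0

mu, sigma = data.mean(), data.std(ddof=1)
Cp  = (USL - LSL) / (6 * sigma)               # potential capability of the process
Cpk = min(USL - mu, mu - LSL) / (3 * sigma)   # actual capability, penalizing off-center

print(f"Cp={Cp:.2f}  Cpk={Cpk:.2f}")  # Cpk >= 1.33 is a common benchmark
```

Checkpoints with low Cpk are exactly the "potential problem areas and candidates for improvement" the abstract describes.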
ERIC Educational Resources Information Center
Kirk, Emily R.; Becker, Jennifer A.; Skinner, Christopher H.; Fearrington, Jamie Yarbr; McCane-Bowling, Sara J.; Amburn, Christie; Luna, Elisa; Greear, Corinne
2010-01-01
Teacher referrals for consultation resulted in two independent teams collecting evidence that allowed for a treatment component evaluation of color wheel (CW) procedures and/or interdependent group-oriented reward (IGOR) procedures on inappropriate vocalizations in one third- and one first-grade classroom. Both studies involved the application of…
The purpose of this SOP is to outline the archive/custody guidelines used by the NHEXAS Arizona research project. This procedure was followed to maintain and locate samples, extracts, tracings and hard copy results after laboratory analysis during the Arizona NHEXAS project and ...
NASA Technical Reports Server (NTRS)
Bauer, M. E.; Cary, T. K.; Davis, B. J.; Swain, P. H.
1975-01-01
The results of classifications and experiments for the crop identification technology assessment for remote sensing are summarized. Using two analysis procedures, 15 data sets were classified. One procedure used class weights while the other assumed equal probabilities of occurrence for all classes. Additionally, 20 data sets were classified using training statistics from another segment or date. The classification and proportion estimation results of the local and nonlocal classifications are reported. Data also describe several other experiments to provide additional understanding of the results of the crop identification technology assessment for remote sensing. These experiments investigated alternative analysis procedures, training set selection and size, effects of multitemporal registration, spectral discriminability of the corn, soybeans, and 'other' classes, and analyses of aircraft multispectral data.
Schmidmaier, Ralf; Eiber, Stephan; Ebersbach, Rene; Schiller, Miriam; Hege, Inga; Holzer, Matthias; Fischer, Martin R
2013-02-22
Medical knowledge encompasses both conceptual (facts or "what" information) and procedural knowledge ("how" and "why" information). Conceptual knowledge is known to be an essential prerequisite for clinical problem solving. Primarily, medical students learn from textbooks and often struggle with the process of applying their conceptual knowledge to clinical problems. Recent studies address the question of how to foster the acquisition of procedural knowledge and its application in medical education. However, little is known about the factors which predict performance in procedural knowledge tasks. Which additional factors of the learner predict performance in procedural knowledge? Domain specific conceptual knowledge (facts) in clinical nephrology was provided to 80 medical students (3rd to 5th year) using electronic flashcards in a laboratory setting. Learner characteristics were obtained by questionnaires. Procedural knowledge in clinical nephrology was assessed by key feature problems (KFP) and problem solving tasks (PST) reflecting strategic and conditional knowledge, respectively. Results in procedural knowledge tests (KFP and PST) correlated significantly with each other. In univariate analysis, performance in procedural knowledge (sum of KFP+PST) was significantly correlated with the results in (1) the conceptual knowledge test (CKT), (2) the intended future career as hospital based doctor, (3) the duration of clinical clerkships, and (4) the results in the written German National Medical Examination Part I on preclinical subjects (NME-I). After multiple regression analysis only clinical clerkship experience and NME-I performance remained independent influencing factors. Performance in procedural knowledge tests seems independent from the degree of domain specific conceptual knowledge above a certain level. Procedural knowledge may be fostered by clinical experience. More attention should be paid to the interplay of individual clinical clerkship experiences and structured teaching of procedural knowledge and its assessment in medical education curricula. PMID:23433202
Mechanism of failure of the Cabrol procedure: A computational fluid dynamic analysis.
Poullis, M; Pullan, M
2015-12-01
Sudden failure of the Cabrol graft is common and frequently fatal. We utilised the technique of computational fluid dynamic (CFD) analysis to evaluate the mechanism of failure and potentially improve on the design of the Cabrol procedure. CFD analysis of the classic Cabrol procedure and a number of its variants was performed. Results from this analysis were utilised to generate further improved geometric options for the Cabrol procedure. These were also subjected to CFD analysis. All current variations of the Cabrol procedure are predicted by CFD analysis to be prone to graft thrombosis, secondary to stasis around the right coronary artery button. The right coronary artery flow characteristics were found to be the dominant reason for Cabrol graft failure. A simple modification of the Cabrol geometry is predicted to virtually eliminate any areas of blood stasis and graft failure. Modification of the Cabrol graft geometry on the basis of CFD analysis may help reduce the incidence of graft thrombosis. A C-shaped Cabrol graft with the right coronary button anastomosed to its side along its course from the aorta to the left coronary button is predicted to have the least thrombotic tendency. Clinical correlation is needed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Exploratory Bifactor Analysis of the WJ-III Cognitive in Adulthood via the Schmid-Leiman Procedure
ERIC Educational Resources Information Center
Dombrowski, Stefan C.
2014-01-01
The Woodcock-Johnson-III cognitive in the adult time period (age 20 to 90 plus) was analyzed using exploratory bifactor analysis via the Schmid-Leiman orthogonalization procedure. The results of this study suggested possible overfactoring, a different factor structure from that posited in the Technical Manual and a lack of invariance across both…
NASA Astrophysics Data System (ADS)
Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo
2006-08-01
Among the many GIS-based multivariate statistical methods for landslide susceptibility zonation, the so-called “Conditional Analysis method” holds a special place for its conceptual simplicity. In this method, landslide susceptibility is simply expressed as landslide density in correspondence with different combinations of instability-factor classes. To overcome the operational complexity connected to the long, tedious and error-prone sequence of commands required by the procedure, a shell script mainly based on the GRASS GIS was created. The script, starting from a landslide inventory map and a number of factor maps, automatically carries out the whole procedure, resulting in the construction of a map with five landslide susceptibility classes. A validation procedure allows assessment of the reliability of the resulting model, while the simple mean deviation of the density values in the factor-class combinations helps to evaluate the goodness of the landslide density distribution. The procedure was applied to a relatively small basin (167 km2) in the Italian Northern Apennines considering three landslide types, namely rotational slides, flows and complex landslides, for a total of 1,137 landslides, and five factors, namely lithology, slope angle and aspect, elevation and slope/bedding relations. The analysis of the resulting 31 different models obtained by combining the five factors confirms the role of lithology, slope angle and slope/bedding relations in influencing slope stability.
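The conditional-analysis computation itself is simple enough to sketch outside GRASS: cross the factor maps to obtain a class-combination identifier per cell, compute landslide density within each combination, and bin the densities into five susceptibility classes. The Python sketch below uses random stand-in rasters rather than the basin's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 100_000                                  # raster cells of the basin
litho = rng.integers(0, 4, n_cells)                # lithology class per cell
slope = rng.integers(0, 3, n_cells)                # slope-angle class per cell
slide = rng.random(n_cells) < 0.02                 # landslide inventory mask

# Conditional analysis: landslide density for each factor-class combination
combo = litho * 3 + slope                          # unique combination id (0..11)
density = np.zeros(12)
for c in range(12):
    in_combo = combo == c
    density[c] = slide[in_combo].mean() if in_combo.any() else 0.0

# Map densities into 5 susceptibility classes using quantile thresholds
thresholds = np.quantile(density, [0.2, 0.4, 0.6, 0.8])
susceptibility_class = np.digitize(density[combo], thresholds) + 1  # 1..5
print(np.bincount(susceptibility_class)[1:])       # cells per class
```

Validation then amounts to checking that an independent subset of landslides falls preferentially in the high-susceptibility classes.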
A Geometry Based Infra-structure for Computational Analysis and Design
NASA Technical Reports Server (NTRS)
Haimes, Robert
1997-01-01
The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis or using the above procedure for design become prohibitive.
The pitfalls of hair analysis for toxicants in clinical practice: three case reports.
Frisch, Melissa; Schwartz, Brian S
2002-01-01
Hair analysis is used to assess exposure to heavy metals in patients presenting with nonspecific symptoms and is a commonly used procedure in patients referred to our clinic. We are frequently called on to evaluate patients who have health-related concerns as a result of hair analysis. Three patients first presented to outside physicians with nonspecific, multisystemic symptoms. A panel of analytes was measured in hair, and one or more values were interpreted as elevated. As a result of the hair analysis and other unconventional diagnostic tests, the patients presented to us believing they suffered from metal toxicity. In this paper we review the clinical efficacy of this procedure within the context of a patient population with somatic disorders and no clear risk factors for metal intoxication. We also review limitations of hair analysis in this setting; these limitations include patient factors such as low pretest probability of disease and test factors such as the lack of validation of analytic techniques, the inability to discern between exogenous contaminants and endogenous toxicants in hair, the variability of analytic procedures, low interlaboratory reliability, and the increased likelihood of false positive test results in the measurement of panels of analytes. PMID:11940463
Applications of ERTS-1 data to landscape change in eastern Tennessee
NASA Technical Reports Server (NTRS)
Rehder, J. B. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The analysis of landscape change in eastern Tennessee from ERTS-1 data is being derived from three avenues of experimentation and analysis: (1) a multi-stage sampling procedure utilizing ground and aircraft imagery for ground truth and control; (2) a densitometric and computer analytical experiment for the analysis of gray tone signatures and comparisons for landscape change detection and monitoring; and (3) an ERTS image enhancement procedure for the detection and analysis of photomorphic regions. Significant results include: maps of strip mining changes and forest inventory, watershed identification and delimitation, and agricultural regions derived from spring plowing patterns appearing on the ERTS-1 imagery.
Domienik-Andrzejewska, Joanna; Ciraj-Bjelac, Olivera; Askounis, Panagiotis; Covens, Peter; Dragusin, Octavian; Jacob, Sophie; Farah, Jad; Gianicolo, Emilio; Padovani, Renato; Teles, Pedro; Widmark, Anders; Struelens, Lara
2018-05-21
This paper investigates over five decades of work practices in interventional cardiology, with an emphasis on radiation protection. The analysis is based on data from more than 400 cardiologists from various European countries recruited for a EURALOC study and collected in the period from 2014 to 2016. Information on the types of procedures performed and their annual mean number, fluoroscopy time, access site choice, x-ray units and radiation protection means used was collected using an occupational questionnaire. Based on the specific European data, changes in each parameter have been analysed over decades, while country-specific data analysis has allowed us to determine the differences in local practices. In particular, based on the collected data, the typical workload of a European cardiologist working in a haemodynamic room and an electrophysiology room was specified for various types of procedures. The results showed that when working in a haemodynamic room, a transparent ceiling-suspended lead shield or lead glasses are necessary in order to remain below the recommended eye lens dose limit of 20 mSv. Moreover, the analysis revealed that new, more complex cardiac procedures such as chronic total occlusion, valvuloplasty and pulmonary vein isolation for atrial fibrillation ablation might contribute substantially to annual doses, although they are relatively rarely performed. The results revealed that considerable progress has been made in the use of radiation protection tools. While their use in electrophysiology procedures is not generic, the situation in haemodynamic procedures is rather encouraging, as ceiling-suspended shields are used in 90% of cases, while the combination of ceiling shield and lead glasses is noted in more than 40% of the procedures. However, we find that still 7% of haemodynamic procedures are performed without any radiation protection tools.
ERIC Educational Resources Information Center
Dombrowski, Stefan C.; Watkins, Marley W.; Brogan, Michael J.
2009-01-01
This study investigated the factor structure of the Reynolds Intellectual Assessment Scales (RIAS) using rigorous exploratory factor analytic and factor extraction procedures. The results of this study indicate that the RIAS is a single factor test. Despite these results, higher order factor analysis using the Schmid-Leiman procedure indicates…
NASA Technical Reports Server (NTRS)
Van Dongen, H. P.; Olofsen, E.; VanHartevelt, J. H.; Kruyt, E. W.; Dinges, D. F. (Principal Investigator)
1999-01-01
Periodogram analysis of unequally spaced time-series, as part of many biological rhythm investigations, is complicated. The mathematical framework is scattered over the literature, and the interpretation of results is often debatable. In this paper, we show that the Lomb-Scargle method is the appropriate tool for periodogram analysis of unequally spaced data. A unique procedure of multiple period searching is derived, facilitating the assessment of the various rhythms that may be present in a time-series. All relevant mathematical and statistical aspects are considered in detail, and much attention is given to the correct interpretation of results. The use of the procedure is illustrated by examples, and problems that may be encountered are discussed. It is argued that, when following the procedure of multiple period searching, we can even benefit from the unequal spacing of a time-series in biological rhythm research.
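In practice the periodogram itself is available in standard libraries such as scipy.signal.lombscargle; the paper's multiple-period procedure then iteratively fits and removes each detected rhythm before searching again. A minimal single-period sketch on synthetic, unequally spaced data:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 10, 120))          # unequally spaced sample times (days)
y = 2.0 * np.sin(2 * np.pi * t / 1.0) + rng.normal(0, 0.5, t.size)  # 1-day rhythm

periods = np.linspace(0.5, 5.0, 2000)         # candidate periods to scan
omega = 2 * np.pi / periods                   # lombscargle expects angular frequencies
power = lombscargle(t, y - y.mean(), omega)   # mean-subtracted series

best = periods[np.argmax(power)]
print(f"dominant period ~ {best:.3f} days")   # ~1.0 for this synthetic rhythm
```

For multiple period searching, one would fit the sinusoid at the detected period, subtract it from the series, and repeat the scan on the residuals, assessing significance at each step.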
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
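The underlying fixed-point iteration is compact for a single degree of freedom: for a Duffing-type restoring force k(x + γx³) under white-noise excitation, force-error minimization with a Gaussian response gives k_eq = k(1 + 3γE[x²]), and E[x²] is recomputed from the linearized system until convergence. The sketch below uses standard textbook relations, not the ELSTEP/NASTRAN implementation:

```python
import numpy as np

# SDOF Duffing oscillator under white-noise excitation:
#   m x'' + c x' + k (x + g x^3) = w(t),  S0 = two-sided force PSD
m, c, k, g, S0 = 1.0, 0.05, 1.0, 0.5, 0.01

# Equivalent linearization: replace k(x + g x^3) by k_eq x, where force-error
# minimization for a Gaussian response gives k_eq = k (1 + 3 g E[x^2]).
k_eq = k
for _ in range(100):
    var = np.pi * S0 / (c * k_eq)       # E[x^2] of the linearized system
    k_new = k * (1.0 + 3.0 * g * var)
    if abs(k_new - k_eq) < 1e-12:
        break
    k_eq = k_new

var = np.pi * S0 / (c * k_eq)           # response variance at convergence
print(f"k_eq = {k_eq:.4f}, RMS response = {np.sqrt(var):.4f}")
```

ELSTEP's contribution is obtaining the nonlinear stiffness terms (the analogue of g here) numerically from a commercial finite element model, where they are otherwise inaccessible.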
Foltran, Fabiana A; Silva, Luciana C C B; Sato, Tatiana O; Coury, Helenice J C G
2013-01-01
The recording of human movement is an essential requirement for biomechanical, clinical, and occupational analysis, allowing assessment of postural variation, occupational risks, and preventive programs in physical therapy and rehabilitation. The flexible electrogoniometer (EGM), considered a reliable and accurate device, is used for dynamic recordings of different joints. Despite these advantages, the EGM is susceptible to measurement errors, known as crosstalk. There are two known types of crosstalk: crosstalk due to sensor rotation and inherent crosstalk. Correction procedures have been proposed to correct these errors; however, no study has combined both procedures in clinical measurements of wrist movements with the aim of optimizing the correction. The objective was to evaluate the effects of mathematical correction procedures on (1) crosstalk due to forearm rotation, (2) inherent sensor crosstalk, and (3) the combination of these two procedures. Forty-three healthy subjects had their maximum range of motion of wrist flexion/extension and ulnar/radial deviation recorded by EGM. The results were analyzed descriptively, and procedures were compared by differences. There was no significant difference in measurements before and after the application of correction procedures (P<0.05). Furthermore, the differences between the correction procedures were less than 5° in most cases, having little impact on the measurements. Considering the time-consuming data analysis, the specific technical knowledge involved, and the inefficient results, the correction procedures are not recommended for wrist recordings by EGM.
Contact Stress Analysis of Spiral Bevel Gears Using Finite Element Analysis
NASA Technical Reports Server (NTRS)
Bibel, G. D.; Kumar, A; Reddy, S.; Handschuh, R.
1995-01-01
A procedure is presented for performing three-dimensional stress analysis of spiral bevel gears in mesh using the finite element method. The procedure involves generating a finite element model by solving equations that identify tooth surface coordinates. Coordinate transformations are used to orientate the gear and pinion for gear meshing. Contact boundary conditions are simulated with gap elements. A solution technique for correct orientation of the gap elements is given. Example models and results are presented.
Salas, Rosa Ana; Pleite, Jorge
2013-01-01
We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283
The purpose of this SOP is to outline the archive/custody guidelines used by the Arizona Border Study. This procedure was followed to maintain and locate samples, extracts, tracings and hard copy results after laboratory analysis during the Arizona NHEXAS project and the Border ...
Siegrist, Michael; Connor, Melanie; Keller, Carmen
2012-08-01
In 2005, Swiss citizens endorsed a moratorium on gene technology, resulting in the prohibition of the commercial cultivation of genetically modified crops and the growth of genetically modified animals until 2013. However, scientific research was not affected by this moratorium, and in 2008, GMO field experiments were conducted that allowed us to examine the factors that influence their acceptance by the public. In this study, trust and confidence items were analyzed using principal component analysis. The analysis revealed the following three factors: "economy/health and environment" (value similarity based trust), "trust and honesty of industry and scientists" (value similarity based trust), and "competence" (confidence). The results of a regression analysis showed that all the three factors significantly influenced the acceptance of GM field experiments. Furthermore, risk communication scholars have suggested that fairness also plays an important role in the acceptance of environmental hazards. We, therefore, included measures for outcome fairness and procedural fairness in our model. However, the impact of fairness may be moderated by moral conviction. That is, fairness may be significant for people for whom GMO is not an important issue, but not for people for whom GMO is an important issue. The regression analysis showed that, in addition to the trust and confidence factors, moral conviction, outcome fairness, and procedural fairness were significant predictors. The results suggest that the influence of procedural fairness is even stronger for persons having high moral convictions compared with persons having low moral convictions. © 2012 Society for Risk Analysis.
Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments
Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed
2013-01-01
In recent years, RNA-Seq technologies became a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data are yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to a great variability in results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aiming at removing an inherent bias of studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Real RNA-Seq data sets analyses, performed with all the different normalization methods, show that only 50% of significantly differentially expressed genes are common. This result highlights the influence of the normalization step on the differential expression analysis. Real and simulated data sets analyses give similar results showing 3 different groups of procedures having the same behavior. The group including the novel method named “Median Ratio Normalization” (MRN) gives the lower number of false discoveries. Within this group the MRN method is less sensitive to the modification of parameters related to the relative size of transcriptomes such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with intrinsic bias resulting from relative size of studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
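The median-of-ratios idea at the heart of MRN-style normalization is brief to sketch: build a per-gene reference as the geometric mean across samples, then take each sample's size factor as the median ratio of its counts to that reference (genes with zero counts must be excluded before taking logs). This shows the generic computation, not the paper's exact MRN variant with its relative-size adjustment:

```python
import numpy as np

# counts: genes x samples matrix of raw RNA-Seq read counts (toy data,
# already filtered so that no gene has a zero count)
counts = np.array([
    [100, 200, 150],
    [ 50, 110,  70],
    [ 30,  60,  45],
    [500, 980, 740],
], dtype=float)

# Reference = per-gene geometric mean across samples (in log space)
log_counts = np.log(counts)
ref = log_counts.mean(axis=1)

# Size factor per sample = median ratio of its counts to the reference
size_factors = np.exp(np.median(log_counts - ref[:, None], axis=0))

normalized = counts / size_factors
print(size_factors.round(3))
```

The median makes the factors robust to the differentially expressed minority of genes, which is why this family of procedures is less sensitive to the number of up- and downregulated genes.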
A procedure to estimate proximate analysis of mixed organic wastes.
Zaher, U; Buffiere, P; Steyer, J P; Chen, S
2009-04-01
In waste materials, proximate analysis, which measures the total concentrations of carbohydrate, protein, and lipid contents of solid wastes, is challenging as a result of the heterogeneous and solid nature of wastes. This paper presents a new procedure that was developed to estimate such complex chemical composition of the waste using conventional practical measurements, such as chemical oxygen demand (COD) and total organic carbon. The procedure is based on mass balances of macronutrient elements (carbon, hydrogen, nitrogen, oxygen, and phosphorus [CHNOP]) (i.e., elemental continuity), in addition to the balances of COD and charge intensity that are applied in mathematical modeling of biological processes. Knowing the composition of such a complex substrate is crucial to study solid waste anaerobic degradation. The procedure was formulated to generate the detailed input required for the International Water Association (London, United Kingdom) Anaerobic Digestion Model number 1 (IWA-ADM1). The complex particulate composition estimated by the procedure was validated with several types of food wastes and animal manures. To make proximate analysis feasible for validation, the wastes were classified into 19 types to allow accurate extraction and proximate analysis. The estimated carbohydrate, protein, lipid, and inert concentrations were highly correlated to the proximate analysis; correlation coefficients were 0.94, 0.88, 0.99, and 0.96, respectively. For most of the wastes, carbohydrate was the highest fraction and was estimated accurately by the procedure over an extended range with high linearity. For wastes that are rich in protein and fiber, the procedure was even more consistent compared with the proximate analysis. The new procedure can be used for waste characterization in solid waste treatment design and optimization.
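The elemental-continuity idea can be illustrated as a small non-negative least-squares problem: with assumed generic component formulas, carbohydrate (C6H10O5), protein (taken here as C5H7NO2), and lipid (C57H104O6), the measured C, H, N, O mass fractions of the ash-free waste constrain the carbohydrate/protein/lipid split. The numbers below are generic stand-ins; the paper's full balance also uses COD, phosphorus, and charge intensity:

```python
import numpy as np
from scipy.optimize import nnls

# Assumed generic monomer formulas (not the paper's exact values).
# Rows: mass fractions of C, H, N, O in each component.
A = np.array([
    #  carb   prot   lipid
    [0.444, 0.531, 0.773],   # C
    [0.062, 0.062, 0.118],   # H
    [0.000, 0.124, 0.000],   # N
    [0.494, 0.283, 0.108],   # O
])

# Measured elemental composition of the (ash-free) waste, mass fractions
b = np.array([0.50, 0.07, 0.03, 0.40])

# Non-negative least squares yields the carbohydrate/protein/lipid split
fractions, residual = nnls(A, b)
print(dict(zip(["carbohydrate", "protein", "lipid"], fractions.round(3))))
```

A residual far from zero signals that the assumed component formulas cannot reproduce the measured composition, which is itself a useful consistency check on the data.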
Recent developments in nickel electrode analysis
NASA Technical Reports Server (NTRS)
Whiteley, Richard V.; Daman, M. E.; Kaiser, E. Q.
1991-01-01
Three aspects of nickel electrode analysis for Nickel-Hydrogen and Nickel-Cadmium battery cell applications are addressed: (1) the determination of active material; (2) charged state nickel (as NiOOH + CoOOH); and (3) potassium ion content in the electrode. Four deloading procedures are compared for completeness of active material removal, and deloading conditions for efficient active material analyses are established. Two methods for charged state nickel analysis are compared: the current NASA procedure and a new procedure based on the oxidation of sodium oxalate by the charged material. Finally, a method for determining potassium content in an electrode sample by flame photometry is presented along with analytical results illustrating differences in potassium levels from vendor to vendor and the effects of stress testing on potassium content in the electrode. The relevance of these analytical procedures to electrode performance is reviewed.
Cost Analysis of an Office-based Surgical Suite
LaBove, Gabrielle
2016-01-01
Introduction: Operating costs are a significant part of delivering surgical care. Having a system to analyze these costs is imperative for decision making and efficiency. We present an analysis of surgical supply, labor and administrative costs, and remuneration of procedures as a means for a practice to analyze their cost effectiveness; this affects the quality of care based on the ability to provide services. The costs of surgical care cannot be estimated blindly as reconstructive and cosmetic procedures have different percentages of overhead. Methods: A detailed financial analysis of office-based surgical suite costs for surgical procedures was determined based on company contract prices and average use of supplies. The average time spent on scheduling, prepping, and doing the surgery was factored using employee rates. Results: The most expensive, minor procedure supplies are suture needles. The 4 most common procedures from the most expensive to the least are abdominoplasty, breast augmentation, facelift, and lipectomy. Conclusions: Reconstructive procedures require a greater portion of collection to cover costs. Without the adjustment of both patient and insurance remuneration in the practice, the ability to provide quality care will be increasingly difficult. PMID:27536482
A triangular thin shell finite element: Nonlinear analysis. [structural analysis
NASA Technical Reports Server (NTRS)
Thomas, G. R.; Gallagher, R. H.
1975-01-01
Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
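The solution strategy described, one incremental tangent-stiffness step followed by a single Newton-Raphson correction per load increment, is easiest to see on one degree of freedom. A sketch with an invented hardening spring (the shell element formulation itself is far richer):

```python
# 1-DOF model with nonlinear internal force N(u) = k*u + a*u**3 (illustrative)
k, a = 1.0, 0.8
N  = lambda u: k * u + a * u**3         # internal force
Kt = lambda u: k + 3 * a * u**2         # tangent stiffness

u, P = 0.0, 0.0
dP = 0.05                               # load increment
for step in range(40):
    P += dP
    u += dP / Kt(u)                     # incremental (tangent-stiffness) step
    u += (P - N(u)) / Kt(u)             # one Newton-Raphson correction
print(f"P = {P:.2f}, u = {u:.4f}, residual = {P - N(u):.2e}")
```

The single correction keeps the equilibrium error from accumulating across increments, which is the point of combining the two approaches.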
Equilibrium paths analysis of materials with rheological properties by using the chaos theory
NASA Astrophysics Data System (ADS)
Bednarek, Paweł; Rządkowski, Jan
2018-01-01
Numerical equilibrium-path analysis of a material with random rheological properties using standard procedures and specialist computer programs was not successful. A proper solution for the analysed heuristic model of the material was obtained on the basis of chaos-theory elements and neural networks. The paper discusses the mathematical basis of the computer programs used and elaborates on the properties of the attractor employed in the analysis. Results of the numerical analysis are presented in both numerical and graphical form for the procedures used.
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2009-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's Lab Master data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality control samples analyzed from July 2005 through June 2007. Results for the quality-control samples for 19 analytical procedures were evaluated for bias and precision. Control charts indicate that data for eight of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: total aluminum, calcium, magnesium, nitrate (colorimetric method), potassium, silicon, sodium, and sulfate. Eight of the analytical procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits; these procedures were: total aluminum, calcium, dissolved organic carbon, chloride, nitrate (ion chromatograph), potassium, silicon, and sulfate. The magnesium and pH procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The acid-neutralizing capacity, total monomeric aluminum, nitrite, and specific conductance procedures were biased for the high-concentration and low-concentration samples, but were within control limits. Results from the filter-blank and analytical-blank analyses indicated that the procedures for 16 of 17 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 18 of the 21 analytes. At least 93 percent of the samples met data-quality objectives for all analytes except acid-neutralizing capacity (85 percent of samples met objectives), total monomeric aluminum (83 percent of samples met objectives), total aluminum (85 percent of samples met objectives), and chloride (85 percent of samples met objectives). The ammonium and total dissolved nitrogen did not meet the data-quality objectives. Results of the USGS interlaboratory Standard Reference Sample (SRS) Project met the Troy Laboratory data-quality objectives for 87 percent of the samples analyzed. The P-sample (low-ionic-strength constituents) analysis had two outliers each in two studies. The T-sample (trace constituents) analysis and the N-sample (nutrient constituents) analysis had one outlier each in two studies. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 85 percent of the samples met data-quality objectives for 11 of the 14 analytes; the exceptions were acid-neutralizing capacity, total aluminum and ammonium. Data-quality objectives were not met in 41 percent of samples analyzed for acid-neutralizing capacity, 50 percent of samples analyzed for total aluminum, and 44 percent of samples analyzed for ammonium. 
Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 86 percent of the samples analyzed for calcium, magnesium, pH, potassium, and sodium. Data-quality objectives were met by 76 percent of the samples analyzed for chloride, 80 percent of the samples analyzed for specific conductance, and 77 percent of the samples analyzed for sulfate.
Using normalization 3D model for automatic clinical brain quantitative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then the standard 3D brain model, which shows well-defined brain regions, was used in place of the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnostic evaluation scores, with less than 3% error on average. To sum up, the method obtains precise VOI information automatically from a well-defined standard 3D brain model, sparing the manual, slice-by-slice drawing of ROIs from structural medical images required in the traditional procedure. That is, the method not only provides precise analysis results but also improves the processing rate for large volumes of medical images in clinical practice.
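The mutual-information criterion used in the registration step can be estimated directly from a joint histogram of the two images; registration then searches for the transform that maximizes it. A minimal sketch with synthetic stand-in images (not the I-123 IBZM data):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of mutual information between two images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
mr = rng.random((64, 64))                        # stand-in structural image
spect_aligned = 0.7 * mr + 0.3 * rng.random((64, 64))
spect_shifted = np.roll(spect_aligned, 5, axis=0)

# Registration maximizes MI; the aligned pair should score higher
print(mutual_information(mr, spect_aligned), mutual_information(mr, spect_shifted))
```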
Pelios, L; Morren, J; Tesch, D; Axelrod, S
1999-01-01
Self-injurious behavior (SIB) and aggression have been the concern of researchers because of the serious impact these behaviors have on individuals' lives. Despite the plethora of research on the treatment of SIB and aggressive behavior, the reported findings have been inconsistent regarding the effectiveness of reinforcement-based versus punishment-based procedures. We conducted a literature review to determine whether a trend could be detected in researchers' selection of reinforcement-based procedures versus punishment-based procedures, particularly since the introduction of functional analysis to behavioral assessment. The data are consistent with predictions made in the past regarding the potential impact of functional analysis methodology. Specifically, the findings indicate that, once maintaining variables for problem behavior are identified, experimenters tend to choose reinforcement-based procedures rather than punishment-based procedures as treatment for both SIB and aggressive behavior. Results indicated an increased interest in studies on the treatment of SIB and aggressive behavior, particularly since 1988. PMID:10396771
Evaluation of flaws in carbon steel piping. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahoor, A.; Gamble, R.M.; Mehta, H.S.
1986-10-01
The objective of this program was to develop flaw evaluation procedures and allowable flaw sizes for ferritic piping used in light water reactor (LWR) power generation facilities. The program results provide relevant ASME Code groups with the information necessary to define flaw evaluation procedures, allowable flaw sizes, and their associated bases for Section XI of the code. Because there are several possible flaw-related failure modes for ferritic piping over the LWR operating temperature range, three analysis methods were employed to develop the evaluation procedures. These include limit load analysis for plastic collapse, elastic plastic fracture mechanics (EPFM) analysis for ductile tearing, and linear elastic fracture mechanics (LEFM) analysis for non ductile crack extension. To ensure the appropriate analysis method is used in an evaluation, a step by step procedure also is provided to identify the relevant acceptance standard or procedure on a case by case basis. The tensile strength and toughness properties required to complete the flaw evaluation for any of the three analysis methods are included in the evaluation procedure. The flaw evaluation standards are provided in tabular form for the plastic collapse and ductile tearing modes, where the allowable part through flaw depth is defined as a function of load and flaw length. For non ductile crack extension, linear elastic fracture mechanics analysis methods, similar to those in Appendix A of Section XI, are defined. Evaluation flaw sizes and procedures are developed for both longitudinal and circumferential flaw orientations and normal/upset and emergency/faulted operating conditions. The tables are based on margins on load of 2.77 and 1.39 for circumferential flaws and 3.0 and 1.5 for longitudinal flaws for normal/upset and emergency/faulted conditions, respectively.
Computer code for off-design performance analysis of radial-inflow turbines with rotor blade sweep
NASA Technical Reports Server (NTRS)
Meitner, P. L.; Glassman, A. J.
1983-01-01
The analysis procedure of an existing computer program was extended to include rotor blade sweep, to model the flow more accurately at the rotor exit, and to provide more detail to the loss model. The modeling changes are described and all analysis equations and procedures are presented. Program input and output are described and are illustrated by an example problem. Results obtained from this program and from a previous program are compared with experimental data.
Contact stress analysis of spiral bevel gears using nonlinear finite element static analysis
NASA Technical Reports Server (NTRS)
Bibel, G. D.; Kumar, A.; Reddy, S.; Handschuh, R.
1993-01-01
A procedure is presented for performing three-dimensional stress analysis of spiral bevel gears in mesh using the finite element method. The procedure involves generating a finite element model by solving equations that identify tooth surface coordinates. Coordinate transformations are used to orientate the gear and pinion for gear meshing. Contact boundary conditions are simulated with gap elements. A solution technique for correct orientation of the gap elements is given. Example models and results are presented.
Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.
1988-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.
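A first-order second-moment calculation of the kind this fusion enables can be sketched for a simple fracture limit state g = Kc − Yσ√(πa): linearize g at the mean values and form the reliability index β = μ_g/σ_g. The numbers below are assumed purely for illustration:

```python
import numpy as np
from scipy.stats import norm

# Limit state for a through crack: g = Kc - Y*s*sqrt(pi*a)  (failure if g < 0)
Y = 1.12
mu = {"Kc": 60.0, "s": 150.0, "a": 0.01}    # means: toughness, stress, crack size
sd = {"Kc": 6.0,  "s": 20.0,  "a": 0.002}   # standard deviations (assumed)

g_mu = mu["Kc"] - Y * mu["s"] * np.sqrt(np.pi * mu["a"])

# Mean-value first-order second-moment: linearize g at the means
dg = {
    "Kc": 1.0,
    "s": -Y * np.sqrt(np.pi * mu["a"]),
    "a": -Y * mu["s"] * np.sqrt(np.pi) / (2 * np.sqrt(mu["a"])),
}
sigma_g = np.sqrt(sum((dg[v] * sd[v])**2 for v in mu))
beta = g_mu / sigma_g
print(f"reliability index beta = {beta:.2f}, Pf ~ {norm.cdf(-beta):.2e}")
```

The PFEM's role in the paper is to supply the response statistics (the means and sensitivities above) from the finite element model itself, with the Kuhn-Tucker conditions governing the search for the most probable failure point.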
Lee, O-Sung; Ahn, Soyeon; Ahn, Jin Hwan; Teo, Seow Hui; Lee, Yong Seuk
2018-02-01
The purpose of this systematic review and meta-analysis was to evaluate the efficacy of concurrent cartilage procedures during high tibial osteotomy (HTO) for medial compartment osteoarthritis (OA) by comparing the outcomes of studies that directly compared the use of HTO plus concurrent cartilage procedures versus HTO alone. Results that could be compared across more than two articles were presented as forest plots. A 95% confidence interval was calculated for each effect size, and we calculated the I² statistic, which presents the percentage of total variation attributable to heterogeneity among studies. The random-effects model was used to calculate the effect size. Seven articles were included in the final analysis. Case groups were composed of HTO without concurrent procedures, and control groups were composed of HTO with concurrent procedures such as marrow stimulation, mesenchymal stem cell transplantation, and injection. The case group showed a higher Hospital for Special Surgery score; the mean difference was 4.10 (I² = 80.8%, 95% confidence interval [CI] −9.02 to 4.82). The mean difference in the mechanical femorotibial angle in five studies was 0.08° (I² = 0%, 95% CI −0.26 to 0.43). However, improved arthroscopic, histologic, and MRI results were reported in the control group. Our analysis supports that concurrent procedures during HTO for medial compartment OA have little beneficial effect regarding clinical and radiological outcomes. However, they might have some beneficial effects in terms of arthroscopic, histologic, and MRI findings, even though the quality of the healed cartilage is not as good as that of the original cartilage. Therefore, until now, concurrent procedures for medial compartment OA have been considered optional. Nevertheless, no conclusions can be drawn for younger patients with focal cartilage defects and concomitant varus deformity. This question needs to be addressed separately.
Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao
2015-01-01
Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods including conventional LASSO, Bolasso, stepwise and stability selection models were evaluated using intensive simulation. In addition, methods were compared by using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. In total, the two newly proposed procedures were stable with respect to various scenarios of simulation, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In empirical analysis, the proposed procedures yielding a sparse set of hepatitis B infection-relevant factors gave the best predictive performance and showed that the procedures were able to select a more stringent set of factors. The individual history of hepatitis B vaccination, family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions The newly proposed procedures improve the identification of significant variables and enable us to derive a new insight into epidemiological association analysis. PMID:26214802
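A bootstrap ranking procedure of the general kind described can be sketched with scikit-learn: refit a LASSO on bootstrap resamples and rank candidate variables by their selection frequency, keeping those above a stability threshold. This is a generic sketch; the penalty and threshold are assumptions, and it is not the paper's tuned two-stage hybrid algorithm:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
n, p = 300, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -1.0, 0.8]                # only 3 real predictors
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Bootstrap ranking: refit the LASSO on resamples and rank variables
# by how often they receive a nonzero coefficient.
B, alpha = 100, 0.1                             # assumed settings
freq = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, n)                 # bootstrap resample indices
    coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
    freq += coef != 0
freq /= B

selected = np.where(freq >= 0.8)[0]             # stability threshold (assumed)
print("selection frequencies:", freq.round(2))
print("stable variables:", selected)
```

Aggregating over resamples is what suppresses the false positives that a single LASSO fit tends to admit, which is the behavior the study's simulations quantify.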
Fort Dix Remedial Investigation/Feasibility Study for MAG-1 Area
1994-01-01
[Fragmentary excerpt: where field screening indicates high concentrations (e.g., by PID headspace results or odor), samples should be diluted to bring the target compound concentrations within the instrument calibration range. The document also covers conductivity testing; analytical procedures for field screening samples, including volatile organic compounds; a standard operating procedure for analysis of volatile organic compounds by field gas chromatography (Appendix A); and RDX explosives field test kit procedures (Appendix B).]
Shreve, Elizabeth A.; Downs, Aimee C.
2005-01-01
This report describes laboratory procedures used by the U.S. Geological Survey Kentucky Water Science Center Sediment Laboratory for the processing and analysis of fluvial-sediment samples for concentration of sand and finer material. The report details the processing of a sediment sample through the laboratory, from receipt of the sample, through the analytical process, to compilation of the results of the requested analysis. Procedures for preserving sample integrity, calibrating and maintaining laboratory and field instruments and equipment, analyzing samples, internal quality assurance and quality control, and validating the sediment-analysis results are also described. The report includes a list of references cited and a glossary of sediment and quality-assurance terms.
Sajnóg, Adam; Hanć, Anetta; Barałkiewicz, Danuta
2018-05-15
Analysis of clinical specimens by imaging techniques makes it possible to determine the content and distribution of trace elements on the surface of the examined sample. In order to obtain reliable results, the developed procedure should be based not only on properly prepared samples and properly performed calibration. It is also necessary to carry out all phases of the procedure in accordance with the principles of chemical metrology, whose main pillars are the use of validated analytical methods, the establishment of the traceability of the measurement results, and the estimation of uncertainty. This review paper discusses aspects related to the sampling, preparation, and analysis of clinical samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), with emphasis on metrological aspects, i.e., selected validation parameters of the analytical method, the traceability of the measurement result, and the uncertainty of the result. This work promotes the introduction of metrological principles for chemical measurement, with emphasis on LA-ICP-MS, a comparative method that requires a rigorous approach to the development of the analytical procedure in order to acquire reliable quantitative results. Copyright © 2018 Elsevier B.V. All rights reserved.
Garbarino, John R.; Hoffman, Gerald L.
1999-01-01
A hydrochloric acid in-bottle digestion procedure is used to partially digest whole-water samples prior to determining recoverable elements by various analytical methods. The use of hydrochloric acid is problematic for some methods of analysis because of spectral interference. The in-bottle digestion procedure has been modified to eliminate such interference by using nitric acid instead of hydrochloric acid in the digestion. Implications of this modification are evaluated by comparing results for a series of synthetic whole-water samples. Results are also compared with those obtained by using the U.S. Environmental Protection Agency (USEPA) (1994) Method 200.2 total-recoverable digestion procedure. Percentage yields obtained by using the nitric acid in-bottle digestion procedure are within 10 percent of the hydrochloric acid in-bottle yields for 25 of the 26 elements determined in two of the three synthetic whole-water samples tested. Differences in percentage yields for the third synthetic whole-water sample were greater than 10 percent for 16 of the 26 elements determined. The USEPA method was the most rigorous for solubilizing elements from particulate matter in all three synthetic whole-water samples. Nevertheless, the variability in the percentage yield obtained by using the USEPA digestion procedure was generally greater than that of the in-bottle digestion procedure, presumably because of the difficulty in controlling the digestion conditions accurately.
A spin column-free approach to sodium hydroxide-based glycan permethylation.
Hu, Yueming; Borges, Chad R
2017-07-24
Glycan permethylation was introduced as a tool to facilitate the study of glycans in 1903. Since that time, permethylation procedures have been continually modified to improve permethylation efficiency and qualitative applicability. Typically, however, either laborious preparation steps or cumbersome and uneconomical spin columns have been needed to obtain decent permethylation yields on small glycan samples. Here we describe a spin column-free (SCF) glycan permethylation procedure that is applicable to both O- and N-linked glycans and can be employed upstream to intact glycan analysis by MALDI-MS, ESI-MS, or glycan linkage analysis by GC-MS. The SCF procedure involves neutralization of NaOH beads by acidified phosphate buffer, which eliminates the risk of glycan oxidative degradation and avoids the use of spin columns. Optimization of the new permethylation procedure provided high permethylation efficiency for both hexose (>98%) and HexNAc (>99%) residues; these yields were comparable to (or better than) those of some widely-used spin column-based procedures. A light vs. heavy labelling approach was employed to compare intact glycan yields from a popular spin-column based approach to the SCF approach. Recovery of intact N-glycans was significantly better with the SCF procedure (p < 0.05), but overall yield of O-glycans was similar or slightly diminished (p < 0.05 for tetrasaccharides or smaller). When the SCF procedure was employed upstream to hydrolysis, reduction and acetylation for glycan linkage analysis of pooled glycans from unfractionated blood plasma, analytical reproducibility was on par with that from previous spin column-based "glycan node" analysis results. When applied to blood plasma samples from stage III-IV breast cancer patients (n = 20) and age-matched controls (n = 20), the SCF procedure facilitated identification of three glycan nodes with significantly different distributions between the cases and controls (ROC c-statistics > 0.75; p < 0.01). In summary, the SCF permethylation procedure expedites and economizes both intact glycan analysis and linkage analysis of glycans from whole biospecimens.
Parabolic Dish Concentrator (PDC-1)
NASA Technical Reports Server (NTRS)
Dennison, E. W.; Argoud, M. J.
1984-01-01
The design, construction, and installation of the Parabolic Dish Concentrator, Type 1 (PDC-1) constituted one of the most significant JPL concentrator projects because of the knowledge gained about this type of concentrator and the development of design, testing, and analysis procedures that are applicable to all solar concentrator projects. The need for these procedures became clearer during the testing period, which started with the prototype panel evaluation and ended with the performance characterization of the completed concentrator. For each phase of the test program, practical test procedures were required, and these procedures defined the mathematical analysis that was essential for successful concentrator development. The concentrator performance appears to be limited only by the distortions resulting from thermal gradients through the reflecting panels. Simple optical testing can be extremely effective, but comprehensive mechanical and optical analysis is essential for cost-effective solar concentrator development.
2014-01-01
Background The purpose of this analysis was to determine whether in-office diagnostic needle arthroscopy (Visionscope Imaging System [VSI]) can provide improved diagnostic assessment and more cost-effective care. Methods Data on arthroscopy procedures in the US for deep-seated pathology in the knee and shoulder were used (calendar year 2012). These procedures represent approximately 25-30% of all arthroscopic procedures performed annually. Sensitivities, specificities, and positive and negative predictive values for MRI assessment of this deep-seated pathology, taken from systematic reviews and meta-analyses, were used to assess false positive and false negative MRI findings. The costs of performing diagnostic and surgical arthroscopy procedures (using 2013 Medicare reimbursement amounts), the costs associated with false negative findings, and the costs of treating complications arising from diagnostic and therapeutic arthroscopy procedures were then assessed. Results In patients presenting with medial meniscal pathology (ICD-9-CM diagnosis 836.0; over 540,000 procedures in CY 2012), use of the VSI system in place of MRI assessment (the standard of care) resulted in a net cost savings to the system of $151 million. In patients presenting with rotator cuff pathology (ICD-9-CM 840.4; over 165,000 procedures in CY 2012), use of VSI in place of MRI similarly saved $59 million. These savings were realized along with more appropriate care, as fewer patients were exposed to higher-risk surgical arthroscopic procedures. Conclusions The use of an in-office arthroscopy system can potentially save the US healthcare system money, shorten the diagnostic odyssey for patients, better prepare clinicians for arthroscopic surgery (when needed), and eliminate unnecessary outpatient arthroscopy procedures, which commonly result in surgical intervention. PMID:24885678
NASA Technical Reports Server (NTRS)
Carnes, J. G.; Baird, J. E. (Principal Investigator)
1980-01-01
The classification procedure utilized in making crop proportion estimates for corn and soybeans using remotely sensed data was evaluated. The procedure was derived during the transition year of the Large Area Crop Inventory Experiment. Analysis of variance techniques were applied to classifications performed by 3 groups of analysts who processed 25 segments selected from 4 agrophysical units (APU's). Group and APU effects were assessed to determine factors which affected the quality of the classifications. The classification results were studied to determine the effectiveness of the procedure in producing corn and soybeans proportion estimates.
Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob
2016-09-01
Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.
A new procedure for calculating contact stresses in gear teeth
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.
1991-01-01
A numerical procedure for evaluating and monitoring contact stresses in meshing gear teeth is discussed. The procedure is intended to extend the range of applicability and to improve the accuracy of gear contact stress analysis. The procedure is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure. The method is believed to have distinct advantages over the classical Hertz method, the finite-element method, and existing approaches with the boundary element method. Unlike many classical contact stress analyses, friction effects and sliding are included. Slipping and sticking in the contact region are studied. Several examples are discussed. The results are in agreement with classical results. Applications are presented for spur gears.
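For orientation, the classical Hertz line-contact solution that serves as the comparison baseline can be computed directly. In the sketch below, the gear-tooth pair is idealized as two parallel steel cylinders; the dimensions and load are hypothetical.

```python
import math

def hertz_line_contact(force, length, r1, r2, e1, e2, nu1=0.3, nu2=0.3):
    """Classical Hertz solution for two parallel cylinders in line contact.

    Returns (contact half-width b, peak contact pressure p_max).
    This is the baseline theory the superposition procedure is compared against.
    """
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)                    # effective radius
    e_eff = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)  # effective modulus
    b = math.sqrt(4 * force * r_eff / (math.pi * length * e_eff))
    p_max = 2 * force / (math.pi * b * length)
    return b, p_max

# Hypothetical steel gear-tooth pair idealized as cylinders at the pitch point
b, p_max = hertz_line_contact(force=2000.0, length=0.025,   # N, m
                              r1=0.02, r2=0.03,             # m
                              e1=207e9, e2=207e9)           # Pa
print(f"half-width = {b*1e6:.1f} um, p_max = {p_max/1e9:.2f} GPa")
```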
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zdarek, J.; Pecinka, L.
Leak-before-break (LBB) analysis of WWER-type reactors in the Czech and Slovak Republics is summarized in this paper. Legislative bases, required procedures, and validation and verification of procedures are discussed. A list of significant issues identified during the application of LBB analysis is presented. The results of statistical evaluation of crack length characteristics are presented and compared for the WWER 440 Type 230 and 213 reactors and for the WWER 1000 Type 302, 320, and 338 reactors.
Seamans, David P.; Louka, Boshra F.; Fortuin, F. David; Patel, Bhavesh M.; Sweeney, John P.; Lanza, Louis A.; DeValeria, Patrick A.; Ezrre, Kim M.; Ramakrishna, Harish
2016-01-01
Background: The surgical and procedural specialties are continually evolving their methods to include more complex and technically difficult cases. These cases can be longer and can incorporate multiple teams in a different model of operating room synergy. Patients are frequently older, with comorbidities adding to the complexity of these cases. Recording of this environment has recently become more feasible with advancements in the video and audio capture systems often used in the simulation realm. Aims: We began using live capture to record a new procedure shortly after starting these cases in our institution. This has provided continued assessment and evaluation of live procedures, with the goal of improving human factors and addressing situational challenges through review and debriefing. Setting and Design: B-Line Medical's LiveCapture video system was used to record successive transcatheter aortic valve replacement (TAVR) procedures in our cardiac catheterization laboratory. An illustrative case is used to discuss analysis and debriefing of the case using this system. Results and Conclusions: An illustrative case is presented that resulted in long-term changes to our approach to these cases. The video capture documented rare events during one of our TAVR procedures. Analysis and debriefing led to definitive changes in our practice. While there are hurdles to the use of this technology in every institution, the ongoing use of video capture, analysis, and debriefing may play an important role in the future of patient safety and human factors analysis in the operating environment. PMID:27762242
NASA Astrophysics Data System (ADS)
Ivković, M.; Konjević, N.
2017-05-01
In this work we summarize, analyze, and critically evaluate experimental procedures and results of LIBS electron number density plasma characterization, using Stark-broadened Si I and Si II line profiles as examples. The selected publications cover the time period from the very beginning of silicon LIBS studies until the end of 2015. To perform the analysis of experimental LIBS data, the available semiclassical theoretical Stark broadening parameters for Si I and Si II lines were tested first. This is followed by a description of the experimental setups, results, and details of experimental procedure relevant to the line shape analysis of spectral lines used for plasma characterization. Although most of the results and conclusions of this analysis relate to the application of silicon lines for LIBS characterization, they are of general importance and may be applied to other elements and different low-temperature plasma sources. The analysis of experimental procedures used for LIBS diagnostics from emission profiles of non-hydrogenic spectral lines is carried out in the following order: the influence of laser ablation and crater formation; spatial and temporal plasma observation; line self-absorption and experimental profile deconvolution; the contribution of ion broadening in comparison with electron-impact contributions to the line width in the case of neutral atom lines; and some other aspects of line shape analysis. The application of the Stark shift for LIBS diagnostics is demonstrated and discussed. Finally, recommendations for improving experimental procedures for LIBS electron number density plasma characterization are offered.
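The basic density retrieval these procedures feed into can be sketched simply: for non-hydrogenic lines dominated by electron-impact broadening, the Lorentzian FWHM scales approximately linearly with n_e. In the sketch below, the reference half-width and the measured width are hypothetical placeholders for a tabulated Stark parameter and a deconvolved profile width.

```python
def electron_density_from_stark(fwhm_lorentz_nm, w_ref_nm, n_ref=1e16):
    """Estimate n_e (cm^-3) from the Lorentzian FWHM of a non-hydrogenic line.

    Assumes the electron-impact width dominates and scales linearly with n_e:
        FWHM = 2 * w_ref * (n_e / n_ref)
    where w_ref is the tabulated half-width at the reference density n_ref.
    The measured profile must first be deconvolved (e.g., via a Voigt fit)
    to remove instrumental and Doppler broadening.
    """
    return n_ref * fwhm_lorentz_nm / (2.0 * w_ref_nm)

# Hypothetical numbers: 0.05 nm Lorentzian FWHM, tabulated w_ref = 0.005 nm
print(f"n_e ~ {electron_density_from_stark(0.05, 0.005):.2e} cm^-3")
```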
Advanced superposition methods for high speed turbopump vibration analysis
NASA Technical Reports Server (NTRS)
Nielson, C. E.; Campany, A. D.
1981-01-01
The small, high pressure Mark 48 liquid hydrogen turbopump was analyzed and dynamically tested to determine the cause of high speed vibration at an operating speed of 92,400 rpm. This approaches the design point operating speed of 95,000 rpm. The initial dynamic analysis in the design stage and subsequent further analysis of the rotor only dynamics failed to predict the vibration characteristics found during testing. An advanced procedure for dynamics analysis was used in this investigation. The procedure involves developing accurate dynamic models of the rotor assembly and casing assembly by finite element analysis. The dynamically instrumented assemblies are independently rap tested to verify the analytical models. The verified models are then combined by modal superposition techniques to develop a completed turbopump model where dynamic characteristics are determined. The results of the dynamic testing and analysis obtained are presented and methods of moving the high speed vibration characteristics to speeds above the operating range are recommended. Recommendations for use of these advanced dynamic analysis procedures during initial design phases are given.
Error analysis of multi-needle Langmuir probe measurement technique.
Barjatya, Aroh; Merritt, William
2018-04-01
Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
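The density retrieval underlying the technique can be sketched as follows: under orbital-motion-limited (OML) theory, the square of each needle's electron saturation current is linear in bias voltage, so a straight-line fit across the needles yields n_e from the slope alone, independent of plasma potential and electron temperature. The probe geometry and measurements below are hypothetical, and the paper's proposed error adjustment is not reproduced.

```python
import numpy as np

E = 1.602e-19      # elementary charge [C]
ME = 9.109e-31     # electron mass [kg]

def mnlp_density(bias_volts, currents, probe_area):
    """Electron density from a multi-needle Langmuir probe (OML assumption).

    In the OML regime the square of the electron saturation current collected
    by a cylindrical needle grows linearly with bias voltage:
        I^2 = (2 n_e^2 e^3 A^2 / (pi^2 m_e)) * V + const.
    Fitting a line to I^2 vs. V across the needles therefore yields n_e
    from the slope alone.
    """
    slope, _ = np.polyfit(bias_volts, np.asarray(currents) ** 2, 1)
    return (np.pi / probe_area) * np.sqrt(slope * ME / (2 * E ** 3))

# Hypothetical four-needle measurement (biases in V), consistent currents in A
biases = np.array([2.0, 4.0, 6.0, 8.0])
area = 2 * np.pi * 0.000255 * 0.025      # cylindrical needle, r=0.255 mm, l=25 mm
n_true = 1e11                            # m^-3, assumed density
i2 = (2 * n_true**2 * E**3 * area**2 / (np.pi**2 * ME)) * biases
print(f"n_e ~ {mnlp_density(biases, np.sqrt(i2), area):.2e} m^-3")
```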
A novel procedure on next generation sequencing data analysis using text mining algorithm.
Zhao, Weizhong; Chen, James J; Perkins, Roger; Wang, Yuping; Liu, Zhichao; Hong, Huixiao; Tong, Weida; Zou, Wen
2016-05-13
Next-generation sequencing (NGS) technologies have provided researchers with vast possibilities in various biological and biomedical research areas. Efficient data mining strategies are in high demand for large-scale comparative and evolutionary studies to be performed on the large amounts of data derived from NGS projects. Topic modeling is an active research field in machine learning and has mainly been used as an analytical tool to structure large textual corpora for data mining. We report a novel procedure for analysing NGS data using topic modeling. It consists of four major steps: NGS data retrieval, preprocessing, topic modeling, and data mining using Latent Dirichlet Allocation (LDA) topic outputs. An NGS data set of Salmonella enterica strains was used as a case study to show the workflow of this procedure. The perplexity measurement of the topic numbers and the convergence efficiencies of Gibbs sampling were calculated and discussed for achieving the best result from the proposed procedure. The topics output by the LDA algorithm can be treated as features of Salmonella strains that accurately describe the genetic diversity of the fliC gene in various serotypes. The results of a two-way hierarchical clustering and data matrix analysis on LDA-derived matrices successfully classified Salmonella serotypes based on the NGS data. The implementation of topic modeling in the NGS data analysis procedure provides a new way to elucidate genetic information from NGS data and to identify gene-phenotype relationships and biomarkers, especially in the era of biological and medical big data.
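To make the four-step pipeline concrete, the sketch below tokenizes sequences into overlapping k-mers and fits an LDA model whose document-topic matrix serves as strain features. It uses scikit-learn's variational LDA rather than the Gibbs sampling the authors discuss, and the toy sequences are stand-ins for real NGS reads.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for NGS-derived sequences (the real pipeline would start
# from retrieved and preprocessed reads, e.g. of the fliC gene)
sequences = [
    "ATGGCACAGGTCATTAATACCAACAGCC",
    "ATGGCACAGGTTATCAATACCAACAGCC",
    "ATGAGCCTGAGCCTGCTGACCCAGAACA",
    "ATGAGCCTGAGTCTGCTGACCCAAAACA",
]

# Treat overlapping k-mers as the "words" of each sequence document
vectorizer = CountVectorizer(analyzer="char", ngram_range=(5, 5), lowercase=False)
doc_term = vectorizer.fit_transform(sequences)

# Fit LDA; each topic is a distribution over k-mers, and each sequence a
# distribution over topics, usable as features for clustering/classification
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_matrix = lda.fit_transform(doc_term)
print(topic_matrix.round(2))   # rows: sequences, columns: topic weights
```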
NASA Astrophysics Data System (ADS)
Girault, Isabelle; d'Ham, Cedric; Ney, Muriel; Sanchez, Eric; Wajeman, Claire
2012-04-01
Many studies have stressed students' lack of understanding of experiments in laboratories. Some researchers suggest that having students design all or part of an entire experiment, as part of an inquiry-based approach, would overcome certain difficulties. This requires that a procedure be written for the experimental design. The aim of this paper is to describe the characteristics of a procedure in science laboratories, in an educational context. As a starting point, this paper proposes a model in the form of a hierarchical task diagram that gives the general structure of any procedure. This model allows both the analysis of existing procedures and the design of a new inquiry-based approach. The obtained characteristics are further organized into criteria that can help both teachers and students assess a procedure during and after its writing. These results are obtained through two different sets of data. First, the characteristics of procedures are established by analysing laboratory manuals. This allows the organization and type of information in procedures to be defined. This analysis reveals that students are seldom asked to write a full procedure, but sometimes have to specify tasks within a procedure. Secondly, iterative interviews are undertaken with teachers. This leads to a list of criteria for evaluating a procedure.
Do Mouthwashes Really Kill Bacteria?
ERIC Educational Resources Information Center
Corner, Thomas R.
1984-01-01
Errors in determining the effectiveness of mouthwashes, disinfectants, and other household products as antibacterial agents may result from using broth cultures and/or irregularly shaped bits of filter paper. Presents procedures for a better technique and, for advanced students, two additional procedures for introducing quantitative analysis into…
Evaluation of modal pushover-based scaling of one component of ground motion: Tall buildings
Kalkan, Erol; Chopra, Anil K.
2012-01-01
Nonlinear response history analysis (RHA) is now increasingly used for performance-based seismic design of tall buildings. Nonlinear RHA requires a set of ground motions selected and scaled appropriately so that analysis results are accurate (unbiased) and efficient (having relatively small dispersion). This paper evaluates the accuracy and efficiency of the recently developed modal pushover-based scaling (MPS) method for scaling ground motions for tall buildings. The procedure presented explicitly considers structural strength and is based on the standard intensity measure (IM) of spectral acceleration, in a form convenient for evaluating existing structures or proposed designs for new structures. Based on results presented for two actual buildings (19 and 52 stories, respectively), it is demonstrated that the MPS procedure provided a highly accurate estimate of the engineering demand parameters (EDPs), accompanied by significantly reduced record-to-record variability of the responses. In addition, the MPS procedure is shown to be superior to the scaling procedure specified in the ASCE/SEI 7-05 document.
Kawaguchi, Migaku; Takatsu, Akiko
2009-08-01
A candidate reference measurement procedure involving isotope dilution coupled with gas chromatography-mass spectrometry (GC-MS) has been developed and critically evaluated. An isotopically labeled internal standard, cortisol-d(2), was added to a serum sample. After equilibration, solid-phase extraction (SPE) for sample preparation and derivatization with heptafluorobutyric anhydride (HFBA) were performed for GC-MS analysis. The limit of detection (LOD) and the limit of quantification (LOQ) were 5 and 20 ng g(-1), respectively. The recovery of the added cortisol ranged from 99.8 to 101.0%. Excellent precision was obtained, with a within-day variation (RSD) of 0.7% for the GC-MS analysis. The accuracy of the measurement was evaluated by comparing results of this reference measurement procedure on lyophilized human serum reference materials for cortisol (European Reference Materials (ERM)-DA 192) as Certified Reference Materials (CRMs). The results of this method for total cortisol agreed with the certified values within the uncertainty. This method, which is simple and easy to perform, has good accuracy and high precision, and is free from interference by structural analogues, qualifies as a reference measurement procedure.
Two Paradoxes in Linear Regression Analysis.
Feng, Ge; Peng, Jing; Tu, Dongke; Zheng, Julia Z; Feng, Changyong
2016-12-25
Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection.
Mital, A
1999-01-01
Manual handling of materials continues to be a hazardous activity, leading to a very significant number of severe overexertion injuries. Designing jobs that are within the physical capabilities of workers is one approach ergonomists have adopted to redress this problem. As a result, several job design procedures have been developed over the years. However, these procedures are limited to designing or evaluating only pure lifting jobs or only the lifting aspect of a materials handling job. This paper describes a general procedure that may be used to design or analyse materials handling jobs that involve several different kinds of activities (e.g., lifting, lowering, carrying, and pushing). The job design/analysis procedure utilizes an elemental approach (breaking the job into elements) and relies on databases provided in A Guide to Manual Materials Handling to compute associated risk factors. The use of the procedure is demonstrated with the help of two case studies.
Bio-Oil Analysis Laboratory Procedures | Bioenergy | NREL
NREL develops standard laboratory analytical procedures for bio-oil; these procedures have been validated and allow for reliable bio-oil analysis. They include determination of different hydroxyl groups (-OH) in pyrolysis bio-oil: aliphatic-OH, phenolic-OH, and carboxylic-OH.
Core, Cynthia; Brown, Janean W; Larsen, Michael D; Mahshie, James
2014-01-01
The objectives of this research were to determine whether an adapted version of a Hybrid Visual Habituation procedure could be used to assess speech perception of phonetic and prosodic features of speech (vowel height, lexical stress, and intonation) in individual pre-school-age children who use cochlear implants. Nine children ranging in age from 3;4 to 5;5 participated in this study. The children were prelingually deaf, used cochlear implants, and had no other known disabilities. Children received two speech feature tests using an adaptation of a Hybrid Visual Habituation procedure. Based on results from a Bayesian linear regression analysis, seven of the nine children demonstrated perception of at least one speech feature using this procedure. At least one child demonstrated perception of each speech feature using this assessment procedure. An adapted version of the Hybrid Visual Habituation procedure, with an appropriate statistical analysis, provides a way to assess phonetic and prosodic aspects of speech in pre-school-age children who use cochlear implants.
Neutron radiative capture methods for surface elemental analysis
Trombka, J.I.; Senftle, F.; Schmadebeck, R.
1970-01-01
Both an accelerator and a 252Cf neutron source have been used to induce characteristic gamma radiation from extended soil samples. To demonstrate the method, measurements of the neutron-induced radiative capture and activation gamma rays have been made with both Ge(Li) and NaI(Tl) detectors. Because of the possible application to space flight geochemical analysis, it is believed that NaI(Tl) detectors must be used. Analytical procedures have been developed to obtain both qualitative and semiquantitative results from an interpretation of the measured NaI(Tl) pulse-height spectrum. Experimental results and the analytic procedure are presented. © 1970.
Optimisation of nasal swab analysis by liquid scintillation counting.
Dai, Xiongxin; Liblong, Aaron; Kramer-Tremblay, Sheila; Priest, Nicholas; Li, Chunsheng
2012-06-01
When responding to an emergency radiological incident, rapid methods are needed to provide physicians and radiation protection personnel with an early estimate of possible internal dose resulting from the inhalation of radionuclides. This information is needed so that appropriate medical treatment and radiological protection control procedures can be implemented. Nasal swab analysis, which employs swabs swiped inside a nostril followed by liquid scintillation counting of the alpha and beta activity on the swab, can provide valuable information to quickly identify contamination of the affected population. In this study, various parameters (such as alpha/beta discrimination, swab materials, counting time, and volume of scintillation cocktail) were evaluated in order to optimise the effectiveness of the nasal swab analysis method. An improved nasal swab procedure was developed by replacing cotton swabs with polyurethane-tipped swabs. Liquid scintillation counting was performed using a Hidex 300SL counter with alpha/beta pulse shape discrimination capability. Results show that the new method is more reliable than existing methods using cotton swabs and effectively meets the analysis requirements for screening personnel in an emergency situation. This swab analysis procedure is also applicable to wipe tests of surface contamination, minimising the source self-absorption effect on liquid scintillation counting.
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2006-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance/quality-control data for the time period addressed in this report were stored in the laboratory's SAS data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 1997 through June 1999. Results for the quality-control samples for 18 analytical procedures were evaluated for bias and precision. Control charts indicate that data for eight of the analytical procedures were occasionally biased for high-concentration and (or) low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, total aluminum, ammonium, calcium, chloride, specific conductance, and sulfate. The data from the potassium and sodium analytical procedures are insufficient for evaluation. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 11 of 13 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. Blank analysis results showed that 22 percent of blanks analyzed for chloride and 31 percent of blanks analyzed for dissolved organic carbon did not meet data-quality objectives. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 14 of the 18 analytes. At least 90 percent of the samples met data-quality objectives for all analytes except total aluminum (70 percent of samples met objectives) and potassium (83 percent of samples met objectives). Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality for most constituents over the time period. The P-sample (low-ionic-strength constituents) analysis had good ratings in two of these studies and a satisfactory rating in the third. The results of the T-sample (trace constituents) analysis indicated high data quality, with good ratings in all three studies. The N-sample (nutrient constituents) studies had one each of excellent, good, and satisfactory ratings. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 80 percent of the samples met data-quality objectives for 9 of the 13 analytes; the exceptions were dissolved organic carbon, ammonium, chloride, and specific conductance. Data-quality objectives were not met for dissolved organic carbon in two NWRI studies, but all of the samples were within control limits in the last study. Data-quality objectives were not met in 41 percent of samples analyzed for ammonium, 25 percent of samples analyzed for chloride, and 30 percent of samples analyzed for specific conductance. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 84 percent of the samples analyzed for calcium, chloride, magnesium, pH, and potassium, and by 73 percent of those analyzed for sulfate. The data-quality objective was not met for sodium, and the data are insufficient for evaluation of the specific conductance results.
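The triplicate-based precision evaluation reduces to a coefficient-of-variation calculation per analyte. A minimal sketch follows; the 10 percent objective below is an assumed, illustrative threshold, not the laboratory's stated one.

```python
import numpy as np

def triplicate_cv(replicates):
    """Coefficient of variation (percent) for a set of replicate analyses."""
    replicates = np.asarray(replicates, dtype=float)
    return 100.0 * replicates.std(ddof=1) / replicates.mean()

# Hypothetical triplicate results for one analyte (mg/L), checked against a
# hypothetical data-quality objective of CV <= 10 percent
values = [2.41, 2.48, 2.44]
cv = triplicate_cv(values)
print(f"CV = {cv:.1f}%  ->  {'meets' if cv <= 10 else 'fails'} objective")
```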
Kawashiri, Masa-aki; Sakata, Kenji; Uchiyama, Katsuharru; Konno, Tetsuo; Namura, Masanobu; Mizuno, Sumio; Tatami, Ryozo; Kanaya, Honin; Nitta, Yutaka; Michishita, Ichiro; Hirase, Hiroaki; Ueda, Kosei; Aoyama, Takashi; Okeie, Kazuyasu; Haraki, Tatsuo; Mori, Kiyoo; Araki, Tsutomu; Minamoto, Masaharu; Oiwake, Hisanori; Ino, Hidekazu; Hayashi, Kenshi; Yamagishi, Masakazu
2014-04-01
Whether the lesion morphology and associated interventional procedures for left main coronary artery disease (LMCA) affect clinical outcome is still controversial. Therefore, we examined the impact of lesion morphology and associated procedures on the clinical and angiographic outcomes of stenting for the LMCA. Among 7,660 patients with coronary intervention registered, we analyzed early angiographic results of 228 patients (179 men, mean age 69.4 years) with LMCA lesions. In 121 of the 228 patients with long-term angiographic results, we examined the occurrence of major adverse coronary events (MACE), particularly in relation to the presence of acute coronary syndrome (ACS), the type of stent (bare-metal or drug-eluting), the lesion morphology, and the associated procedures. The early angiographic success rate of LMCA stenting was 100%, and the clinical success rate was 94.3%. During the 3-year follow-up period, MACE was observed in 17 patients. Under these conditions, multiple stenting (p < 0.01) and complicated procedures such as Y-stent, T-stent, and crush stent (p < 0.01) were identified as risk factors for MACE, although there was no statistically significant difference between stent types. Multivariate analysis demonstrated a significant disadvantage of complicated procedures using bare-metal stents with respect to the occurrence of MACE (p < 0.01). These results demonstrate that complicated procedures have a great impact on clinical and angiographic outcomes after stenting for LMCA lesions, and favor a simple procedure with a single stent for LMCA lesions in the present cohort. Whether the presence of ACS affects prognosis should be investigated further.
A combined qualitative and quantitative procedure for the chemical analysis of urinary calculi
Hodgkinson, A.
1971-01-01
A better understanding of the physico-chemical principles underlying the formation of calculus has led to a need for more precise information on the chemical composition of stones. A combined qualitative and quantitative procedure for the chemical analysis of urinary calculi which is suitable for routine use is presented. The procedure involves five simple qualitative tests followed by the quantitative determination of calcium, magnesium, inorganic phosphate, and oxalate. These data are used to calculate the composition of the stone in terms of calcium oxalate, apatite, and magnesium ammonium phosphate. Analytical results and derived values for five representative types of calculi are presented. PMID:5551382
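The derivation of phase composition from the four measured ions is a stoichiometric allocation. The sketch below illustrates the kind of calculation involved; the specific allocation rules and the example amounts are assumptions for illustration, not Hodgkinson's published scheme.

```python
# Molar masses (g/mol)
M = {"CaC2O4": 128.1, "MgNH4PO4": 137.3, "Ca5(PO4)3OH": 502.3}

def stone_composition(ca_mmol, mg_mmol, po4_mmol, ox_mmol):
    """Allocate measured ions (mmol) to the three stone phases.

    Assumed allocation rules (an illustration, not the paper's exact scheme):
      - all oxalate is paired with calcium as calcium oxalate,
      - all magnesium forms magnesium ammonium phosphate (struvite),
      - remaining phosphate forms apatite, three phosphates per formula unit.
    Returns approximate milligrams of each phase.
    """
    ca_ox = min(ox_mmol, ca_mmol)                 # CaC2O4
    struvite = min(mg_mmol, po4_mmol)             # MgNH4PO4
    apatite_p = max(po4_mmol - struvite, 0.0)     # phosphate left for apatite
    apatite = apatite_p / 3.0                     # formula units (3 PO4 each)
    return {
        "calcium oxalate (mg)": ca_ox * M["CaC2O4"],
        "struvite (mg)": struvite * M["MgNH4PO4"],
        "apatite (mg)": apatite * M["Ca5(PO4)3OH"],
    }

# Hypothetical analysis of a mixed stone (mmol per stone)
print(stone_composition(ca_mmol=1.8, mg_mmol=0.3, po4_mmol=0.9, ox_mmol=1.2))
```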
Comparison of Traditional and Trial-Based Methodologies for Conducting Functional Analyses
ERIC Educational Resources Information Center
LaRue, Robert H.; Lenard, Karen; Weiss, Mary Jane; Bamond, Meredith; Palmieri, Mark; Kelley, Michael E.
2010-01-01
Functional analysis represents a sophisticated and empirically supported functional assessment procedure. While these procedures have garnered considerable empirical support, they are often underused in clinical practice. Safety risks resulting from the evocation of maladaptive behavior and the length of time required to conduct functional…
Contact stresses in gear teeth: A new method of analysis
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.
1991-01-01
A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure with distinct advantages over the classical Hertz method, the finite element method, and existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms. Several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.
Rhee, Peter C; Fischer, Michelle M; Rhee, Laura S; McMillan, Ha; Johnson, Anthony E
2017-03-01
Wide-awake, local anesthesia, no tourniquet (WALANT) hand surgery was developed to improve access to hand surgery care while optimizing medical resources. Hand surgery in the clinic setting may result in substantial cost savings for the United States Military Health Care System (MHS) and provide a safe alternative to performing similar procedures in the operating room. A prospective cohort study was performed on the first 100 consecutive clinic-based WALANT hand surgery procedures performed at a military medical center from January 2014 to September 2015 by a single hand surgeon. Cost savings analysis was performed by using the Medical Expense and Performance Reporting System, the standard cost accounting system for the MHS, to compare procedures performed in the clinic versus the operating room during the study period. A study specific questionnaire was obtained for 66 procedures to evaluate the patient's experience. For carpal tunnel release (n = 34) and A1 pulley release (n = 33), there were 85% and 70% cost savings by having the procedures performed in clinic under WALANT compared with the main operating room, respectively. During the study period, carpal tunnel release, A1 pulley release, and de Quervain release performed in the clinic instead of the operating room amounted to $393,100 in cost savings for the MHS. There were no adverse events during the WALANT procedure. A clinic-based WALANT hand surgery program at a military medical center results in considerable cost savings for the MHS. Economic/Decision Analysis IV. Copyright © 2017 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Analysis of Slug Tests in Formations of High Hydraulic Conductivity
Butler, J.J.; Garnett, E.J.; Healey, J.M.
2003-01-01
A new procedure is presented for the analysis of slug tests performed in partially penetrating wells in formations of high hydraulic conductivity. This approach is a simple, spreadsheet-based implementation of existing models that can be used for analysis of tests from confined or unconfined aquifers. Field examples of tests exhibiting oscillatory and nonoscillatory behavior are used to illustrate the procedure and to compare results with estimates obtained using alternative approaches. The procedure is considerably simpler than recently proposed methods for this hydrogeologic setting. Although the simplifications required by the approach can introduce error into hydraulic-conductivity estimates, this additional error becomes negligible when appropriate measures are taken in the field. These measures are summarized in a set of practical field guidelines for slug tests in highly permeable aquifers.
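For the oscillatory case, the core of such a spreadsheet-style analysis is fitting a damped sinusoid to the normalized head record; the fitted frequency and damping then enter the published high-conductivity slug-test relations (not reproduced here) that yield hydraulic conductivity. A minimal curve-fitting sketch with synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_response(t, omega_d, gamma, phi):
    """Underdamped normalized head response H(t)/H0 for a high-K slug test."""
    return np.exp(-gamma * t) * np.cos(omega_d * t + phi)

# Hypothetical normalized head record from an oscillatory slug test
t = np.linspace(0, 10, 60)                       # seconds
rng = np.random.default_rng(0)
h = damped_response(t, 2.0, 0.35, 0.0) + rng.normal(0, 0.01, t.size)

# Fit frequency and damping; these feed the published high-K slug-test
# relations (not reproduced here) that convert them to hydraulic conductivity
(omega_d, gamma, phi), _ = curve_fit(damped_response, t, h, p0=[1.5, 0.2, 0.0])
print(f"damped frequency = {omega_d:.2f} rad/s, damping rate = {gamma:.2f} 1/s")
```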
Jędrkiewicz, Renata; Tsakovski, Stefan; Lavenu, Aurore; Namieśnik, Jacek; Tobiszewski, Marek
2018-02-01
A novel methodology for grouping and ranking, applying self-organizing maps and multicriteria decision analysis, is presented. The dataset consists of 22 objects, analytical procedures applied to furan determination in food samples, described by 10 variables relating to their analytical performance and environmental and economic aspects. Multivariate statistical analysis makes it possible to limit the amount of input data for the ranking analysis. The assessment results show that the most beneficial procedures are based on microextraction techniques with GC-MS final determination. We show how the information obtained from the two tools complements each other. The applicability of combining grouping and ranking is also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
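The multicriteria ranking step can be illustrated with TOPSIS, one common MCDA method; the paper's exact MCDA technique is not reproduced here, so the criteria, weights, and scores below are hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS (one common multicriteria method).

    matrix  : alternatives x criteria scores
    weights : criterion weights summing to 1
    benefit : True where larger is better, False where smaller is better
    Returns closeness scores in [0, 1]; higher is better.
    """
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalize columns
    m = m * weights
    ideal = np.where(benefit, m.max(axis=0), m.min(axis=0))
    anti = np.where(benefit, m.min(axis=0), m.max(axis=0))
    d_pos = np.linalg.norm(m - ideal, axis=1)     # distance to ideal point
    d_neg = np.linalg.norm(m - anti, axis=1)      # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)

# Hypothetical procedures scored on recovery (%), solvent use (mL), and cost
scores = np.array([[92.0, 5.0, 120.0],
                   [85.0, 0.5, 200.0],
                   [98.0, 20.0, 90.0]])
closeness = topsis(scores, weights=np.array([0.5, 0.3, 0.2]),
                   benefit=np.array([True, False, False]))
print("ranking (best first):", np.argsort(closeness)[::-1])
```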
AGR-1 Thermocouple Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Einerson
2012-05-01
This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R&D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, the AGR-1 test configuration and test procedure, an overview of AGR-1 measured data, and an overview of the physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of the statistical analysis methods developed in (Pham and Einerson 2010) for AGR-1 measured data qualification within the NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties resulting from the combined analysis is identified. This includes demonstration that such a combined analysis led to important insights for reducing uncertainty in the presentation of AGR-1 measured data (Chapter 2) and the interpretation of simulation results (Chapter 3). The statistics-based, simulation-aided experimental control procedure for future AGR tests is developed and demonstrated in Chapter 4. The procedure for controlling the target fuel temperature (capsule peak or average) is based on regression functions of thermocouple readings and other relevant parameters, and accounts for possible changes in both physical and thermal conditions and in instrument performance.
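As a toy illustration of such a regression-based control rule (with wholly hypothetical coefficients and data; the report's actual regression functions are not reproduced here), one can fit fuel temperature against thermocouple readings and a control variable such as the sweep-gas helium fraction, then invert the fit for the setting that holds a target temperature:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: two thermocouple readings (C), helium fraction
# of the sweep-gas mixture, and simulated peak fuel temperature (C)
rng = np.random.default_rng(0)
tc = rng.uniform(900, 1000, size=(200, 2))            # two TC readings
he_frac = rng.uniform(0.2, 1.0, size=(200, 1))        # control variable
fuel_t = (1.05 * tc[:, :1] + 0.15 * tc[:, 1:] - 180 * he_frac + 120
          + rng.normal(0, 3, size=(200, 1)))

X = np.hstack([tc, he_frac])
model = LinearRegression().fit(X, fuel_t)

# Invert the fitted relation: helium fraction needed to hold the target
# temperature given the current thermocouple readings
target, tc_now = 1100.0, np.array([960.0, 940.0])
b0 = model.intercept_[0]
b_tc, b_he = model.coef_[0, :2], model.coef_[0, 2]
he_needed = (target - b0 - b_tc @ tc_now) / b_he
print(f"set helium fraction to ~{he_needed:.2f}")
```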
da Costa, Márcia Gisele Santos; Santos, Marisa da Silva; Sarti, Flávia Mori; Senna, Kátia Marie Simões e.; Tura, Bernardo Rangel; Goulart, Marcelo Correia
2014-01-01
Objectives The study performs a cost-effectiveness analysis of procedures for atrial septal defect occlusion, comparing conventional surgery to percutaneous septal implant. Methods An analytical decision model with symmetric branches was structured to estimate the cost-effectiveness ratio between the procedures. The decision tree model was based on evidence gathered through a meta-analysis of the literature and validated by a panel of specialists. The smaller number of surgical procedures performed for atrial septal defect occlusion in each branch was taken as the effectiveness outcome. Direct medical costs and probabilities for each event were inserted into the model using data available from the Brazilian public sector database system and information extracted from the literature review, applying a micro-costing technique. Sensitivity analysis included price variations of the percutaneous implant. Results The decision model demonstrated that the percutaneous implant was more cost-effective, at a cost of US$8,936.34, with a reduction in the probability of surgery in 93% of the cases. The probability of atrial septal defect occlusion and the cost of the implant are the determinant factors of the cost-effectiveness ratio. Conclusions The proposed decision model seeks to fill a void in the academic literature. It includes the outcomes with the greatest impact on the overall costs of the procedure. Atrial septal defect occlusion using a percutaneous implant reduces the physical and psychological distress to patients relative to conventional surgery, which represents intangible costs in the context of economic evaluation. PMID:25302806
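The mechanics of such a symmetric decision-tree comparison reduce to expected-cost arithmetic on each arm plus an incremental cost-effectiveness ratio (ICER). A minimal sketch with hypothetical probabilities and costs, not the study's Brazilian data:

```python
def expected_cost(branches):
    """Expected cost of one arm of a decision tree: sum of p_i * cost_i."""
    return sum(p * c for p, c in branches)

# Hypothetical branch probabilities and costs (US$) for each strategy
surgery = [(0.90, 12000.0), (0.10, 20000.0)]      # success vs. complication
implant = [(0.93, 8000.0), (0.07, 18000.0)]       # occlusion vs. surgery rescue

c_surg, c_impl = expected_cost(surgery), expected_cost(implant)
e_surg, e_impl = 0.0, 0.93     # effectiveness: probability surgery is avoided

# Incremental cost-effectiveness ratio of the implant vs. conventional surgery
icer = (c_impl - c_surg) / (e_impl - e_surg)
print(f"expected costs: {c_surg:.0f} vs {c_impl:.0f}; ICER = {icer:.0f} per surgery avoided")
```

In this toy example the implant arm is dominant (cheaper and more effective), so the negative ICER simply signals dominance.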
32 CFR 989.37 - Procedures for analysis abroad.
Code of Federal Regulations, 2011 CFR
2011-07-01
Section 989.37 of 32 CFR part 989 (Environmental Protection: Environmental Impact Analysis Process (EIAP)) provides that procedures for analysis of environmental actions abroad are contained in 32 CFR part 187; that directive provides the applicable requirements.
A study for high accuracy measurement of residual stress by deep hole drilling technique
NASA Astrophysics Data System (ADS)
Kitano, Houichi; Okano, Shigetaka; Mochizuki, Masahito
2012-08-01
The deep hole drilling technique (DHD) has received much attention in recent years as a method for measuring through-thickness residual stresses. However, some accuracy problems arise when residual stress evaluation is performed by the DHD technique. One reason is that the traditional DHD evaluation formula applies to the plane stress condition. The second is that the effects of the plastic deformation produced in the drilling process and of the deformation produced in the trepanning process are ignored. In this study, a modified evaluation formula that applies to the plane strain condition is proposed. In addition, a new procedure is proposed that accounts for the effects of the deformation produced in the DHD process, based on a detailed investigation of those effects by finite element (FE) analysis. The evaluation results obtained by the new procedure are then compared, by FE analysis, with those obtained by the traditional DHD procedure. As a result, the new procedure evaluates the residual stress fields better than the traditional DHD procedure when the measured object is thick enough that the stress state can be assumed to be plane strain, as in the model used in this study.
A Simultaneous Analysis Problem for Advanced General Chemistry Laboratories.
ERIC Educational Resources Information Center
Leary, J. J.; Gallaher, T. N.
1983-01-01
Oxidation of magnesium metal in air has been used as an introductory experiment for determining the formula of a compound. The experiment described employs essentially the same laboratory procedure but is significantly more advanced in terms of information sought. Procedures and sample calculations/results are provided. (JN)
An analysis of ratings: A guide to RMRATE
Thomas C. Brown; Terry C. Daniel; Herbert W. Schroeder; Glen E. Brink
1990-01-01
This report describes RMRATE, a computer program for analyzing rating judgments. RMRATE scales ratings using several scaling procedures, and compares the resulting scale values. The scaling procedures include the median and simple mean, standardized values, scale values based on Thurstone's Law of Categorical Judgment, and regression-based values. RMRATE also...
Inferential Procedures for Correlation Coefficients Corrected for Attenuation.
ERIC Educational Resources Information Center
Hakstian, A. Ralph; And Others
1988-01-01
A model and computation procedure based on classical test score theory are presented for determination of a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from correction for attenuation. (TJH)
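For reference, the classical disattenuation formula underlying such procedures (a standard result of classical test theory, not the report's specific model) is

$$ \hat{\rho} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}} $$

where $r_{xy}$ is the observed correlation and $r_{xx}$, $r_{yy}$ are the reliabilities of the two measures. For example, with hypothetical values $r_{xy} = 0.42$, $r_{xx} = 0.80$, and $r_{yy} = 0.70$, the corrected correlation is $0.42/\sqrt{0.56} \approx 0.56$.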
Kinetic Fluorescence Experiment for the Determination of Thiamine.
ERIC Educational Resources Information Center
Bower, Nathan W.
1982-01-01
Background information, procedures, and typical results are provided for an experiment which integrates principles of fluorescence and kinetic analysis. In the procedure, mercuric chloride is used as a selective oxidizing agent for converting thiamine to thiochrome. The experiment can be completed in a two-hour laboratory period. (Author/JN)
Element-by-element Solution Procedures for Nonlinear Structural Analysis
NASA Technical Reports Server (NTRS)
Hughes, T. J. R.; Winget, J. M.; Levit, I.
1984-01-01
Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.
Inter-laboratory comparison of the in vivo comet assay including three image analysis systems.
Plappert-Helbig, Ulla; Guérard, Melanie
2015-12-01
To compare the extent of potential inter-laboratory variability and the influence of different comet image analysis systems, in vivo comet experiments were conducted using the genotoxicants ethyl methanesulfonate and methyl methanesulfonate. Tissue samples from the same animals were processed and analyzed, including independent slide evaluation by image analysis, in two laboratories with extensive experience in performing the comet assay. The analysis revealed low inter-laboratory experimental variability. Neither the use of different image analysis systems nor the DNA staining procedure (propidium iodide vs. SYBR® Gold) considerably impacted the results or sensitivity of the assay. In addition, relatively high stability of the staining intensity was found for propidium iodide-stained slides that had been refrigerated for over 3 months. In conclusion, following a thoroughly defined protocol and standardized routine procedures ensures that the comet assay is robust and generates comparable results between different laboratories. © 2015 Wiley Periodicals, Inc.
Preparation And Analysis Of Specimens Of Ablative Materials
NASA Technical Reports Server (NTRS)
Solomon, William C.
1994-01-01
Procedure for chemical analysis of specimens of silicone-based ablative thermal-insulation materials SLA-561 and MA25 involves acid digestion of specimens to prepare them for analysis by inductively-coupled-plasma/atomic-emission spectroscopy (ICP/AES), which is faster and more accurate than atomic-absorption spectroscopy (AAS). Results of analyses are stored in a database and used to trace variations in concentrations of chemical elements in materials during long-term storage, as well as in timely investigations of failures. The acid-digestion portion of the procedure can be applied to other thermal-insulation materials containing room-temperature-vulcanizing silicones and enables instrumental analysis of these materials.
Effects of computer-based training on procedural modifications to standard functional analyses.
Schnell, Lauren K; Sidener, Tina M; DeBar, Ruth M; Vladescu, Jason C; Kahng, SungWoo
2018-01-01
Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to training materials using interactive software during a 1-day session. Following the training, mean scores on the posttest, novel cases probe, and maintenance probe increased for all participants. These results replicate previous findings during a 1-day session and include a measure of participant acceptability of the training. Recommendations for future research on computer-based training and functional analysis are discussed. © 2017 Society for the Experimental Analysis of Behavior.
Modal-pushover-based ground-motion scaling procedure
Kalkan, Erol; Chopra, Anil K.
2011-01-01
Earthquake engineering is increasingly using nonlinear response history analysis (RHA) to demonstrate the performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. This paper presents a modal-pushover-based scaling (MPS) procedure to scale ground motions for use in a nonlinear RHA of buildings. In the MPS method, the ground motions are scaled to match, to a specified tolerance, a target value of the inelastic deformation of the first-mode inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-mode pushover analysis. Appropriate for first-mode dominated structures, this approach is extended to structures with significant higher-mode contributions by considering the elastic deformation of second-mode SDF systems in selecting a subset of the scaled ground motions. Based on results presented for three actual buildings (4-, 6-, and 13-story), the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.
2001-01-01
This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results on an Engine Data Processor using the implemented procedure, along with comparisons against standard anechoic-chamber results, are presented. The comparison shows experimentally that the susceptibility thresholds found with the mode-stirred method are consistently higher than the anechoic thresholds. This is consistent with a recent statistical analysis by NIST finding that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, the agreement with the anechoic results is excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several improvements to the current procedure are also identified and implemented.
Quantitative trait Loci analysis using the false discovery rate.
Benjamini, Yoav; Yekutieli, Daniel
2005-10-01
False discovery rate control has become an essential tool in any study that has a very large multiplicity problem. False discovery rate-controlling procedures have also been found to be very effective in QTL analysis, ensuring reproducible results with few falsely discovered linkages and offering increased power to discover QTL, although their acceptance has been slower than in microarray analysis, for example. The reason is partly that the methodological aspects of applying the false discovery rate to QTL mapping are not well developed. Our aim in this work is to lay a solid foundation for the use of the false discovery rate in QTL mapping. We review the false discovery rate criterion, the appropriate interpretation of the FDR, and alternative formulations of the FDR that have appeared in the statistical and genetics literature. We discuss important features of the FDR approach, some stemming from new developments in FDR theory and methodology, which make it especially useful in linkage analysis. We review false discovery rate-controlling procedures (the BH procedure, the resampling procedure, and the adaptive two-stage procedure) and discuss the validity of these procedures in single- and multiple-trait QTL mapping. Finally, we argue that control of the false discovery rate has an important role in suggesting, indicating the significance of, and confirming QTL, and we present guidelines for its use.
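As a concrete reference point, the BH step-up procedure named above can be stated in a few lines. The following Python sketch is a generic implementation, not the authors' code, and the toy p-values are hypothetical.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean array marking p-values rejected by the BH step-up
    procedure at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    # BH: find the largest k with p_(k) <= (k/m) * q, reject hypotheses 1..k.
    thresholds = (np.arange(1, m + 1) / m) * q
    below = sorted_p <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # last index meeting the bound
        reject[order[:k + 1]] = True
    return reject

# Toy linkage-scan p-values; at q = 0.05 the two smallest are rejected.
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))
```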
NASA Technical Reports Server (NTRS)
Hashemi-Kia, Mostafa; Toossi, Mostafa
1990-01-01
A computational procedure for the reduction of large finite element models was developed. This procedure is used to obtain a significantly reduced model while retaining the essential global dynamic characteristics of the full-size model. This reduction procedure is applied to the airframe finite element model of AH-64A Attack Helicopter. The resulting reduced model is then validated by application to a vibration reduction study.
[Evidence based medicine and cost-effectiveness analysis in ophthalmology].
Nováková, D; Rozsíval, P
2004-09-01
To familiarize the reader with the term evidence-based medicine (EBM), to explain the principle of cost-effectiveness (cost-benefit) analysis, and to show its usefulness for comparing the effectiveness of different medical procedures. Using a few examples, this article explains the relevance and calculation of the important parameters of cost-effectiveness (CE) analysis, such as utility value (UV) and quality-adjusted life years (QALY). In addition, the calculation of UV and QALY for cataract surgery, including its complications, is provided. By this measure, highly effective procedures include laser photocoagulation and cryocoagulation for the early stages of retinopathy of prematurity, treatment of amblyopia, cataract surgery of one or both eyes, and, among vitreoretinal procedures, early vitrectomy for hemophthalmus in proliferative diabetic retinopathy and grid laser photocoagulation for diabetic macular edema or for visual loss due to branch retinal vein occlusion. Procedures with low cost-effectiveness, on the other hand, include treatment of central retinal artery occlusion with anterior chamber paracentesis or CO2 inhalation, and photodynamic therapy for choroidal neovascularization in age-related macular degeneration when the visual acuity of the better eye is 20/200. Cost-effectiveness analysis is a promising new method for evaluating the success of a medical procedure by comparing the final effect with the financial costs. In evaluating the effectiveness of individual procedures, three main aspects are considered: the patient's subjective experience of the disease's influence on life, the objective results of clinical examination, and the financial costs of the procedure. By this measure, cataract surgery and procedures in pediatric ophthalmology are among the most effective surgical methods.
Qualitative Amino Acid Analysis of Small Peptides by GC/MS.
ERIC Educational Resources Information Center
Mabbott, Gary A.
1990-01-01
Experiments designed to help undergraduate students gain experience operating instruments and interpreting gas chromatography and mass spectrometry data are presented. Experimental reagents, procedures, analysis, and probable results are discussed. (CW)
The SCALE Verified, Archived Library of Inputs and Data - VALID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.
Prabhu, Kristel Lobo; Okrainec, Allan; Maeda, Azusa; Saskin, Refik; Urbach, David; Bell, Chaim M; Jackson, Timothy D
2018-06-16
Laparoscopic adjustable gastric band (LAGB) placement remains a common bariatric procedure. While the LAGB procedure is performed in private clinics in most Canadian provinces, public health care is often utilized for LAGB-related reoperations. We identified 642 gastric band removal procedures performed in Ontario from 2011 to 2014 using population-level administrative data. The number of procedures performed increased annually, from 101 in 2011 to 220 in 2014. Notably, 54.7% of the patients required laparotomy, and 17.6% of patients underwent a subsequent bariatric surgery. Our findings demonstrated that LAGB placement in private clinics resulted in a large number of band removal procedures performed within the public system. This represents a significant public health concern that may result in substantial health care utilization and patient morbidity.
Stepwise Iterative Fourier Transform: The SIFT
NASA Technical Reports Server (NTRS)
Benignus, V. A.; Benignus, G.
1975-01-01
A program, designed specifically to study the respective effects of some common data problems on results obtained through stepwise iterative Fourier transformation of synthetic data with known waveform composition, was outlined. Included in this group were the problems of gaps in the data, different time-series lengths, periodic but nonsinusoidal waveforms, and noisy (low signal-to-noise) data. Results on sinusoidal data were also compared with results obtained on narrow-band noise with similar characteristics. The findings showed that the analytic procedure under study can reliably analyze data consisting of (1) sinusoids in noise, (2) asymmetric but periodic waves in noise, and (3) sinusoids in noise with substantial gaps in the data. The program was also able to analyze narrow-band noise well, but with increased interpretational problems. The procedure was shown to be a powerful technique for the analysis of periodicities in comparison with classical spectrum analysis techniques. However, informed use of the stepwise procedure nevertheless requires some background knowledge concerning the characteristics of the biological processes under study.
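The report's procedure is specific to its program, but the core idea of stepwise iterative Fourier analysis (fit the dominant sinusoid by least squares, subtract it, and repeat, which also tolerates gaps in the record) can be illustrated with a short sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record: two sinusoids plus noise, with a gap cut from it.
t = np.arange(0.0, 100.0, 0.5)
x = 2.0 * np.sin(2 * np.pi * 0.05 * t) + np.sin(2 * np.pi * 0.13 * t + 0.7)
x += rng.normal(0.0, 0.3, t.size)
keep = (t < 40) | (t > 55)
t, x = t[keep], x[keep]

def best_sine(t, x, freqs):
    """Least-squares sine/cosine fit at each trial frequency; return the
    frequency with the smallest residual and its fitted component.
    Works on unevenly spaced data, so gaps are handled naturally."""
    best = None
    for f in freqs:
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
        coef = np.linalg.lstsq(A, x, rcond=None)[0]
        rss = np.sum((x - A @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, f, A @ coef)
    return best[1], best[2]

freqs = np.linspace(0.01, 0.25, 241)
residual = x.copy()
for step in range(2):              # stepwise: extract, subtract, repeat
    f, component = best_sine(t, residual, freqs)
    print(f"step {step}: dominant frequency ~ {f:.3f} cycles/unit")
    residual -= component
```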
Gariepy, Aileen M.; Creinin, Mitchell D.; Schwarz, Eleanor B.; Smith, Kenneth J.
2011-01-01
OBJECTIVE To estimate the probability of successful sterilization after a hysteroscopic or laparoscopic sterilization procedure. METHODS An evidence-based clinical decision analysis using a Markov model was performed to estimate the probability of a successful sterilization procedure using laparoscopic sterilization, hysteroscopic sterilization in the operating room, and hysteroscopic sterilization in the office. Procedure and follow-up testing probabilities for the model were estimated from published sources. RESULTS In the base-case analysis, the proportion of women having a successful sterilization procedure on the first attempt is 99% for laparoscopic, 88% for hysteroscopic in the operating room, and 87% for hysteroscopic in the office. The probability of having a successful sterilization procedure within one year is 99% for laparoscopic, 95% for hysteroscopic in the operating room, and 94% for hysteroscopic in the office. These estimates for hysteroscopic success include approximately 6% of women who attempt hysteroscopic sterilization but are ultimately sterilized laparoscopically. Approximately 5% of women who have a failed hysteroscopic attempt decline further sterilization attempts. CONCLUSIONS Women choosing laparoscopic sterilization are more likely than those choosing hysteroscopic sterilization to have a successful sterilization procedure within one year. However, the risk of failed sterilization and subsequent pregnancy must be considered when choosing a method of sterilization. PMID:21775842
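The headline numbers combine several pathways to sterilization. A back-of-the-envelope sketch follows; the figures are illustrative only (the published Markov model uses literature-derived probabilities, and the retry probability below is a hypothetical value back-solved to land near the abstract's 95%):

```python
# Illustrative arithmetic, not the authors' model inputs.
p_first_attempt = 0.88           # hysteroscopic success on first attempt (OR)
p_retry_success = 0.10           # hypothetical: success of a repeat attempt
p_crossover_laparoscopic = 0.06  # per abstract, ~6% finish laparoscopically

p_within_one_year = (
    p_first_attempt
    + (1 - p_first_attempt) * p_retry_success
    + p_crossover_laparoscopic
)
print(f"~{p_within_one_year:.0%} sterilized within one year")  # ~95%
```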
Ebbers, Hans C; Langedijk, Joris; Bouvy, Jacoline C; Hoekman, Jarno; Boon, Wouter P C; de Jong, Jean Philippe; De Bruin, Marie L
2015-10-01
The aim of this study is to provide a comprehensive overview of the outcomes of marketing authorisation applications via the mutual recognition and decentralised procedures (MRP/DCP) and to assess determinants of licensing failure during CMDh referral procedures. All MRP/DCP procedures referred to the Co-ordination Group for Mutual Recognition and Decentralised Procedures-Human (CMDh) during the period from January 2006 to December 2013 were analysed. Reasons for starting referral procedures were scored. In addition, a survey among pharmaceutical companies was performed to estimate the frequency of licensing failure prior to CMDh referrals. During the study period, 10392 MRP/DCP procedures were finalized. Three hundred seventy-seven (3.6%) resulted in a referral procedure, of which 70 (19%) resulted in licensing failure, defined as refusal or withdrawal of the application. The frequency of CMDh referrals decreased from 14.5% in 2006 to 1.6% in 2013. Of all referrals, 272 (72%) were resolved through consensus within the CMDh; the remaining 105 (28%) were resolved at the level of the CHMP. Most referrals were started because of objections raised about the clinical development program. Study design issues and objections about the demonstration of equivalence were most likely to result in licensing failure. An estimated 11% of all MRP/DCP procedures resulted in licensing failure prior to CMDh referral. Whereas the absolute number of MRP/DCP procedures resulting in a referral has decreased substantially over the past years, no specific time trend could be observed in the frequency of referrals resulting in licensing failure. Increased knowledge at the level of companies and regulators has reduced the frequency of late-stage failure of marketing applications via the MRP/DCP.
Kopcinovic, Lara Milevoj; Vogrinc, Zeljka; Kocijan, Irena; Culej, Jelena; Aralica, Merica; Jokic, Anja; Antoncic, Dragana; Bozovic, Marija
2016-01-01
Introduction We hypothesized that extravascular body fluid (EBF) analysis in Croatia is not harmonized and aimed to investigate the preanalytical, analytical and postanalytical procedures used in EBF analysis in order to identify key aspects that should be addressed in future harmonization attempts. Materials and methods An anonymous online survey created to explore laboratory testing of EBF was sent to secondary, tertiary and private health care Medical Biochemistry Laboratories (MBLs) in Croatia. Statements were designed to address preanalytical, analytical and postanalytical procedures for cerebrospinal, pleural, peritoneal (ascites), pericardial, seminal, synovial and amniotic fluid and sweat. Participants were asked to declare their strength of agreement with the proposed statements using a Likert scale. Mean scores for the corresponding statements, divided according to health care setting, were calculated and compared. Results The survey response rate was 0.64 (58/90). None of the participating private MBLs declared that they analyse EBF. We report a mean score of 3.45 for all statements evaluated. Deviations from desirable procedures were demonstrated in all EBF testing phases. Minor differences in procedures used for EBF analysis were found between secondary and tertiary health care MBLs. The lowest scores were obtained for statements regarding quality control procedures in EBF analysis, participation in proficiency testing programmes and provision of interpretative comments on EBF test reports. Conclusions Although good laboratory EBF practice is present in Croatia, procedures for EBF analysis should be further harmonized to improve the quality of EBF testing and patient safety. PMID:27812307
Development of Test-Analysis Models (TAM) for correlation of dynamic test and analysis results
NASA Technical Reports Server (NTRS)
Angelucci, Filippo; Javeed, Mehzad; Mcgowan, Paul
1992-01-01
The primary objective of structural analysis of aerospace applications is to obtain a verified finite element model (FEM). The verified FEM can be used for loads analysis, evaluate structural modifications, or design control systems. Verification of the FEM is generally obtained as the result of correlating test and FEM models. A test analysis model (TAM) is very useful in the correlation process. A TAM is essentially a FEM reduced to the size of the test model, which attempts to preserve the dynamic characteristics of the original FEM in the analysis range of interest. Numerous methods for generating TAMs have been developed in the literature. The major emphasis of this paper is a description of the procedures necessary for creation of the TAM and the correlation of the reduced models with the FEM or the test results. Herein, three methods are discussed, namely Guyan, Improved Reduced System (IRS), and Hybrid. Also included are the procedures for performing these analyses using MSC/NASTRAN. Finally, application of the TAM process is demonstrated with an experimental test configuration of a ten bay cantilevered truss structure.
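Of the three reduction methods named, Guyan reduction is the simplest to state: partition the degrees of freedom into masters and slaves and condense the slaves statically. A minimal numpy sketch follows (a generic illustration, not the MSC/NASTRAN implementation); the 3-DOF chain is a toy example.

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Static (Guyan) condensation of stiffness K and mass M onto the
    master DOFs: slaves follow u_s = -Kss^-1 Ksm u_m."""
    n = K.shape[0]
    slave = np.setdiff1d(np.arange(n), master)
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    # Transformation T maps master DOFs to all DOFs: u = T u_m.
    T = np.zeros((n, len(master)))
    T[master, np.arange(len(master))] = 1.0
    T[np.ix_(slave, np.arange(len(master)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T

# 3-DOF spring-mass chain reduced to DOFs 0 and 2.
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
M = np.eye(3)
Kr, Mr = guyan_reduce(K, M, master=np.array([0, 2]))
print(Kr)
print(Mr)
```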
Huang, Shou-Guo; Chen, Bo; Lv, Dong; Zhang, Yong; Nie, Feng-Feng; Li, Wei; Lv, Yao; Zhao, Huan-Li; Liu, Hong-Mei
2017-01-01
Purpose Using a network meta-analysis approach, our study aims to develop a ranking of six surgical procedures, that is, Plate, titanium elastic nail (TEN), tension band wire (TBW), hook plate (HP), reconstruction plate (RP) and Knowles pin, by comparing the post-surgery Constant shoulder scores in patients with clavicular fracture (CF). Methods A comprehensive search of electronic scientific literature databases was performed to retrieve publications investigating surgical procedures in CF, with stringent eligibility criteria, and clinical experimental studies of high quality and relevance to our area of interest were selected for network meta-analysis. Statistical analyses were conducted using Stata 12.0. Results A total of 19 studies that met our inclusion criteria were enrolled in our network meta-analysis, representing 1164 patients who had undergone surgical procedures for CF (TEN group = 240; Plate group = 164; TBW group = 180; RP group = 168; HP group = 245; Knowles pin group = 167). The network meta-analysis results revealed that RP significantly improved the Constant shoulder score in patients with CF when compared with TEN, whereas the postoperative Constant shoulder scores in patients with CF after Plate, TBW, HP, Knowles pin and TEN were similar, with no statistically significant differences. The relative treatment ranking of predictive probabilities of Constant shoulder scores after surgery revealed that the surface under the cumulative ranking curve (SUCRA) value is highest for RP. Conclusion The current network meta-analysis suggests that RP may be the optimum surgical treatment among the six interventions for patients with CF, and it can improve the shoulder score of patients with CF. Implications for Rehabilitation RP improves shoulder joint function after the surgical procedure. RP achieves stability with minimal complications after surgery. RP may be the optimum surgical treatment for rehabilitation of patients with CF.
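SUCRA values are computed from the rank-probability matrix produced by the network meta-analysis: each treatment's SUCRA is the mean of its cumulative ranking probabilities over the first a-1 ranks, where a is the number of treatments. A small sketch with made-up probabilities (three arms only, for brevity; the paper's matrix differs):

```python
import numpy as np

# Hypothetical rank-probability matrix: rows = treatments, columns =
# probability of being ranked 1st, 2nd, 3rd (best to worst).
treatments = ["RP", "Plate", "TEN"]
rank_probs = np.array([
    [0.60, 0.30, 0.10],   # RP
    [0.30, 0.45, 0.25],   # Plate
    [0.10, 0.25, 0.65],   # TEN
])

# SUCRA = mean cumulative ranking probability over ranks 1..a-1.
a = rank_probs.shape[1]
cum = np.cumsum(rank_probs, axis=1)[:, : a - 1]
sucra = cum.sum(axis=1) / (a - 1)
for name, s in zip(treatments, sucra):
    print(f"{name}: SUCRA = {s:.2f}")   # RP scores highest here
```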
Evaluation of the Utility of a Discrete-Trial Functional Analysis in Early Intervention Classrooms
ERIC Educational Resources Information Center
Kodak, Tiffany; Fisher, Wayne W.; Paden, Amber; Dickes, Nitasha
2013-01-01
We evaluated a discrete-trial functional analysis implemented by regular classroom staff in a classroom setting. The results suggest that the discrete-trial functional analysis identified a social function for each participant and may require fewer staff than standard functional analysis procedures.
Neural networks for structural design - An integrated system implementation
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han
1992-01-01
The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.
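As a toy version of the workflow described (generate random analysis "experiences", train a net, then predict instantly), the following sketch trains a small scikit-learn network on a closed-form stand-in for structural analysis, the cantilever tip deflection P*L^3/(3*E*I). Everything here, from the surrogate problem to the parameter ranges, is an illustrative assumption, not the paper's system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Stand-in "structural analysis": cantilever tip deflection
# delta = P * L**3 / (3 * E * I), with E fixed at 200 GPa. The net is
# trained on randomly generated (P, L, I) instances.
E = 200e9
X = rng.uniform([1e3, 1.0, 1e-6], [1e4, 5.0, 1e-5], size=(2000, 3))
y = X[:, 0] * X[:, 1] ** 3 / (3 * E * X[:, 2])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0),
)
model.fit(np.log(X), np.log(y))          # log-log makes the map nearly linear

test = np.array([[5e3, 3.0, 5e-6]])
print("net estimate:", np.exp(model.predict(np.log(test)))[0])
print("exact:       ", test[0, 0] * test[0, 1] ** 3 / (3 * E * test[0, 2]))
```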
NASA Technical Reports Server (NTRS)
Cassarino, S.; Sopher, R.
1982-01-01
User instructions and software descriptions for the base program of the coupled rotor/airframe vibration analysis are provided. The functional capabilities and procedures for running the program are described. Interfaces with external programs are discussed. The procedure for synthesizing a dynamic system and the various solution methods are described. Input data and output results are presented. Detailed information is provided on the program structure. Sample test-case results for five representative dynamic configurations are provided and discussed. System responses are plotted to demonstrate the available plotting capabilities. Instructions to install and execute SIMVIB on the CDC computer system are provided.
An efficient solution procedure for the thermoelastic analysis of truss space structures
NASA Technical Reports Server (NTRS)
Givoli, D.; Rand, O.
1992-01-01
A solution procedure is proposed for the thermal and thermoelastic analysis of truss space structures in periodic motion. In this method, the spatial domain is first discretized using a consistent finite element formulation. Then the resulting semi-discrete equations in time are solved analytically by using Fourier decomposition. Full advantage is taken of geometrical symmetry. An algorithm is presented for the calculation of heat flux distribution. The method is demonstrated via a numerical example of a cylindrically shaped space structure.
Modeling and analysis of the space shuttle nose-gear tire with semianalytic finite elements
NASA Technical Reports Server (NTRS)
Kim, Kyun O.; Noor, Ahmed K.; Tanner, John A.
1990-01-01
A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The Space Shuttle Orbiter nose gear tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell. Numerical results of the Space Shuttle Orbiter nose gear tire model are compared with experimental measurements of the tire subjected to inflation loading.
Brock Stewart; Chris J. Cieszewski; Michal Zasada
2005-01-01
This paper presents a sensitivity analysis of the impact of various definitions and inclusions of different variables in the Forest Inventory and Analysis (FIA) inventory on data compilation results. FIA manuals have been changing recently to make the inventory consistent between all the States. Our analysis demonstrates the importance (or insignificance) of different...
Cook-Cunningham, Sheri L; Grady, Melissa L
2018-03-01
The purpose of this investigation was to assess the effects of three warm-up procedures (vocal-only, physical-only, physical/vocal combination) on acoustic and perceptual measures of choir sound. The researchers tested three videotaped, 5-minute choral warm-up procedures on three university choirs. After participating in a warm-up procedure, each choir was recorded singing a folk song for long-term average spectra and pitch analysis. Singer participants responded to a questionnaire about preferences after each warm-up procedure. Warm-up procedures and recording sessions occurred during each choir's regular rehearsal time and in each choir's regular rehearsal space during three consecutive rehearsals. Long-term average spectra results demonstrated more resonant singing after the physical/vocal warm-up for two of the three choirs. Pitch analysis results indicate that all three choirs sang "in tune," or with the least pitch deviation, after participating in the physical/vocal warm-up. Singer questionnaire responses showed a general preference for the physical/vocal combination warm-up, and singer ranking of the three procedures indicated the physical/vocal warm-up as the most favored for readiness to sing. In the context of this study with these three university choir participants, it seems that a combination choral warm-up that includes physical and vocal aspects is preferred by singers and enables more resonant and more in-tune singing. Findings from this study could provide teachers and choral directors with important information as they structure and experiment with their choral warm-up procedures. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
76 FR 78015 - Revised Analysis and Mapping Procedures for Non-Accredited Levees
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-15
...] Revised Analysis and Mapping Procedures for Non-Accredited Levees AGENCY: Federal Emergency Management... comments on the proposed solution for Revised Analysis and Mapping Procedures for Non-Accredited Levees. This document proposes a revised procedure for the analysis and mapping of non-accredited levees on...
ERIC Educational Resources Information Center
Beaver, Rodney W.; And Others
1983-01-01
Describes an experiment on the qualitative analysis of several over-the-counter analgesic tablets. Background information, procedures used (including high pressure liquid chromatography), and typical student results are included. (JN)
Job Analysis: A Local Government's Experience.
ERIC Educational Resources Information Center
Urbanek, Steve J.
1997-01-01
A county personnel department undertook reclassification of all positions by collecting and using job analysis data to rewrite job descriptions. External pay equity and validated selection procedures resulted with only a modest increase in payroll costs. (SK)
Landslide risk models for decision making.
Bonachea, Jaime; Remondo, Juan; de Terán, José Ramón Díaz; González-Díez, Alberto; Cendrero, Antonio
2009-11-01
This contribution presents a quantitative procedure for landslide risk analysis and zoning considering hazard, exposure (or value of elements at risk), and vulnerability. The method provides the means to obtain landslide risk models (expressing expected damage due to landslides on material elements and economic activities in monetary terms, according to different scenarios and periods) useful to identify areas where mitigation efforts will be most cost effective. It allows identifying priority areas for the implementation of actions to reduce vulnerability (elements) or hazard (processes). The procedure proposed can also be used as a preventive tool, through its application to strategic environmental impact analysis (SEIA) of land-use plans. The underlying hypothesis is that reliable predictions about hazard and risk can be made using models based on a detailed analysis of past landslide occurrences in connection with conditioning factors and data on past damage. The results show that the approach proposed and the hypothesis formulated are essentially correct, providing estimates of the order of magnitude of expected losses for a given time period. Uncertainties, strengths, and shortcomings of the procedure and results obtained are discussed and potential lines of research to improve the models are indicated. Finally, comments and suggestions are provided to generalize this type of analysis.
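At its core, the risk model multiplies, cell by cell, a hazard probability, the value of the exposed elements, and a vulnerability (damage) fraction to obtain expected losses. A minimal sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

# Toy 2x2 raster of map cells. Real models derive hazard from past
# landslide occurrence and conditioning factors.
hazard = np.array([[0.02, 0.10],       # annual probability of a landslide
                   [0.01, 0.05]])
exposure = np.array([[5e5, 2e6],       # value of elements at risk (EUR)
                     [1e5, 8e5]])
vulnerability = np.array([[0.3, 0.6],  # expected damage fraction if hit
                          [0.2, 0.4]])

# Expected annual loss per cell, and the total for the zone; cells with
# the largest products are where mitigation is most cost effective.
risk = hazard * exposure * vulnerability
print(risk)           # EUR/year per cell
print(risk.sum())     # EUR/year for the whole area
```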
A Bayesian Multinomial Probit MODEL FOR THE ANALYSIS OF PANEL CHOICE DATA.
Fong, Duncan K H; Kim, Sunghoon; Chen, Zhe; DeSarbo, Wayne S
2016-03-01
A new Bayesian multinomial probit model is proposed for the analysis of panel choice data. Using a parameter expansion technique, we are able to devise a Markov Chain Monte Carlo algorithm to compute our Bayesian estimates efficiently. We also show that the proposed procedure enables the estimation of individual level coefficients for the single-period multinomial probit model even when the available prior information is vague. We apply our new procedure to consumer purchase data and reanalyze a well-known scanner panel dataset that reveals new substantive insights. In addition, we delineate a number of advantageous features of our proposed procedure over several benchmark models. Finally, through a simulation analysis employing a fractional factorial design, we demonstrate that the results from our proposed model are quite robust with respect to differing factors across various conditions.
Which Procedural Parts of the IEP Process Are the Most Judicially Vulnerable?
ERIC Educational Resources Information Center
Zirkel, Perry A.; Hetrick, Allyse
2017-01-01
To provide a missing piece to the legal foundation of professional development and practice for the individualized education program (IEP) process, the authors report the results of a comprehensive systematic analysis of court decisions specific to IEP-related procedural violations after the 2004 amendments of the Individuals With Disabilities…
A summary and evaluation of semi-empirical methods for the prediction of helicopter rotor noise
NASA Technical Reports Server (NTRS)
Pegg, R. J.
1979-01-01
Existing prediction techniques are compiled and described. The descriptions include input and output parameter lists, required equations and graphs, and the range of validity for each part of the prediction procedures. Examples are provided illustrating the analysis procedure and the degree of agreement with experimental results.
NASA Astrophysics Data System (ADS)
Jafarian, Yaser; Ghorbani, Ali; Ahmadi, Omid
2014-09-01
Lateral deformation of liquefiable soil is a cause of much damage during earthquakes, reportedly more than other forms of liquefaction-induced ground failure. Researchers have presented studies in which the liquefied soil is treated as a viscous fluid. In this view, the liquefied soil behaves as a non-Newtonian fluid whose viscosity decreases as the shear strain rate increases. The current study incorporates computational fluid dynamics to propose a simplified dynamic analysis for the liquefaction-induced lateral deformation of earth slopes. The numerical procedure involves a quasi-linear elastic model for small to moderate strains and a Bingham fluid model for large strain states during liquefaction. An iterative procedure is considered to estimate the strain-compatible shear stiffness of the soil. The post-liquefaction residual strength of the soil is taken as the initial Bingham viscosity. Performance of the numerical procedure is examined by using the results of centrifuge model and shaking table tests together with some field observations of lateral ground deformation. The results demonstrate that the proposed procedure predicts the time history of lateral ground deformation with a reasonable degree of precision.
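The Bingham idealization makes the strain-rate dependence explicit: apparent viscosity is the plastic viscosity plus the yield stress divided by the strain rate, so it falls as shearing accelerates. A small sketch (parameter values are illustrative, not the paper's):

```python
import numpy as np

def bingham_apparent_viscosity(gamma_dot, tau_y, mu_p):
    """Apparent viscosity of a Bingham fluid: eta = mu_p + tau_y / gamma_dot.
    Here tau_y plays the role of the post-liquefaction residual strength."""
    gamma_dot = np.asarray(gamma_dot, dtype=float)
    return mu_p + tau_y / gamma_dot

# Illustrative values: residual strength 5 kPa, plastic viscosity 2 kPa*s.
# The apparent viscosity drops as the shear strain rate grows.
rates = np.array([0.01, 0.1, 1.0, 10.0])                       # 1/s
print(bingham_apparent_viscosity(rates, tau_y=5e3, mu_p=2e3))  # Pa*s
```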
Yao, Xiyang; Ma, Junwei; Li, Haiying; Shen, Haitao; Lu, Xiaojun; Chen, Gang
2017-02-01
Background We evaluated the safety and efficiency of flow diverters (FDs) in treating small intracranial aneurysms (IAs). Materials and Methods We reviewed the literature published in PubMed and EMBASE. The R statistical software was used to calculate the complete aneurysm occlusion rates, procedure-related neurologic mortality, procedure-related neurologic morbidity and procedure-related permanent morbidity. Results Ten observational studies were included in this analysis. The complete aneurysm occlusion rate was 84.23% (80.34%-87.76%), the procedure-related neurologic mortality was 0.87% (0.29%-1.74%), the procedure-related neurologic morbidity rate was 5.22% (3.62%-7.1%), the intracerebral haemorrhage rate was 1.42% (0.64%-2.49%), the ischemic rate was 2.35% (1.31%-3.68%), the subarachnoid haemorrhage rate was 0.03% (0%-0.32%) and the procedure-related permanent morbidity was 2.41% (0.81%-4.83%). Conclusions Treatment of small IAs with FDs may be correlated with high complete occlusion rates and low complication rates. Future long-term follow-up randomized trials will determine the optimal treatment for small IAs.
Analysis of Trihalomethanes in Soft Drinks: An Instrumental Analysis Experiment.
ERIC Educational Resources Information Center
Graham, Richard C.; Robertson, John K.
1988-01-01
Describes an experimental procedure for determining trihalomethanes (THMs) in liquids by gas chromatography. Provides recommendations for reactants and supplies to obtain acceptable results. Discusses the analysis of water from various sources: pools, lakes, and drinking water; compares these to three cola drinks. (ML)
Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary
2012-01-01
Rationale Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical approach of curve fitting has not been reported for the within-session threshold procedure. Objectives We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement with graphical Pmax than conventional methods. Conclusion The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
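A sketch of the curve-fitting step under the exponential demand equation of Hursh and Silberberg, log10 Q = log10 Q0 + k*(exp(-alpha*Q0*C) - 1), with hypothetical session data and k fixed at 2; Pmax is located numerically as the price at peak expenditure. This is a generic illustration, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

K = 2.0  # range constant (log10 units); fixing k across fits is common

def log_demand(C, Q0, alpha):
    """Exponential demand: log10 consumption as a function of price C."""
    return np.log10(Q0) + K * (np.exp(-alpha * Q0 * C) - 1)

# Hypothetical (price, consumption) points from one threshold session,
# after points at unstable brain cocaine concentrations were dropped.
price = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])        # responses/mg
consumption = np.array([1.0, 0.95, 0.85, 0.60, 0.25, 0.05])   # mg

popt, _ = curve_fit(log_demand, price, np.log10(consumption),
                    p0=[1.0, 1e-3], bounds=([1e-6, 1e-6], [10.0, 1.0]))
Q0, alpha = popt

# Pmax: the price at which expenditure C * Q(C) peaks (unit elasticity).
grid = np.logspace(0, 2.5, 2000)
expenditure = grid * 10 ** log_demand(grid, Q0, alpha)
Pmax = grid[np.argmax(expenditure)]
print(f"Q0 ~ {Q0:.2f} mg, alpha ~ {alpha:.1e}, Pmax ~ {Pmax:.0f} responses/mg")
```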
US line-ups outperform UK line-ups
Seale-Carlisle, Travis M.
2016-01-01
In the USA and the UK, many thousands of police suspects are identified by eyewitnesses every year. Unfortunately, many of those suspects are innocent, which becomes evident when they are exonerated by DNA testing, often after having been imprisoned for years. It is, therefore, imperative to use identification procedures that best enable eyewitnesses to discriminate innocent from guilty suspects. Although police investigators in both countries often administer line-up procedures, the details of how line-ups are presented are quite different, and an important direct comparison has yet to be conducted. We investigated whether these two line-up procedures differ in terms of (i) discriminability (using receiver operating characteristic analysis) and (ii) reliability (using confidence–accuracy characteristic analysis). A total of 2249 participants watched a video of a crime and were later tested using either a six-person simultaneous photo line-up procedure (USA) or a nine-person sequential video line-up procedure (UK). The US line-up procedure yielded significantly higher discriminability and significantly higher reliability. The results do not pinpoint the reason for the observed difference between the two procedures, but they do suggest that there is much room for improvement with the UK line-up. PMID:27703695
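The ROC construction used in such studies sweeps a confidence criterion from strict to lenient and plots cumulative correct-ID rate against false-ID rate; discriminability is then compared via partial area under the curve. A minimal sketch with hypothetical counts (real studies use more confidence bins):

```python
import numpy as np

# Hypothetical suspect-ID counts by confidence level (high -> low) from
# target-present (TP) and target-absent (TA) line-ups.
guilty_ids = np.array([150, 60, 30])    # correct IDs in 500 TP line-ups
innocent_ids = np.array([10, 25, 40])   # false IDs in 500 TA line-ups
n_tp = n_ta = 500

# Sweep the confidence criterion from strict to lenient: each ROC point
# is a cumulative (false-ID rate, correct-ID rate) pair.
hit = np.concatenate([[0.0], np.cumsum(guilty_ids) / n_tp])
fa = np.concatenate([[0.0], np.cumsum(innocent_ids) / n_ta])

# Partial area under the ROC curve (trapezoid rule); a higher value means
# better discriminability over the same false-ID range.
pauc = np.sum(np.diff(fa) * (hit[1:] + hit[:-1]) / 2)
print(list(zip(fa.round(2), hit.round(2))), f"pAUC = {pauc:.3f}")
```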
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.
An artificial viscosity method for the design of supercritical airfoils
NASA Technical Reports Server (NTRS)
Mcfadden, G. B.
1979-01-01
A numerical technique is presented for the design of two-dimensional supercritical wing sections with low wave drag. The method is a design mode of the analysis code H, which gives excellent agreement with experimental results and is widely used in the aircraft industry. Topics covered include the partial differential equations of transonic flow; the computational procedure and results; the design procedure; a convergence theorem; and a description of the code.
NASA Astrophysics Data System (ADS)
Giardina, G.; Mandaglio, G.; Nasirov, A. K.; Anastasi, A.; Curciarello, F.; Fazio, G.
2018-02-01
Experimental and theoretical results for the fusion probability P_CN of reactants in the entrance channel and the survival probability W_sur against fission at deexcitation of the compound nucleus formed in heavy-ion collisions are discussed. The theoretical results for a set of nuclear reactions leading to the formation of compound nuclei (CNs) with charge number Z = 102-122 reveal a strong sensitivity of P_CN to the characteristics of the colliding nuclei in the entrance channel, the dynamics of the reaction mechanism, and the excitation energy of the system. We discuss the validity of assumptions and procedures for the analysis of experimental data, and also the limits of validity of theoretical results obtained by the use of phenomenological models. The comparison of results obtained in many investigated reactions reveals serious limits of validity of the data analysis and calculation procedures.
Description of data on the Nimbus 7 LIMS map archive tape: Water vapor and nitrogen dioxide
NASA Technical Reports Server (NTRS)
Haggard, Kenneth V.; Marshall, B. T.; Kurzeja, Robert J.; Remsberg, Ellis E.; Russell, James M., III
1988-01-01
Described is the process by which analysis of the Limb Infrared Monitor of the Stratosphere (LIMS) experiment data was used to produce estimates of synoptic maps of water vapor and nitrogen dioxide. In addition to a detailed description of the analysis procedure, several interesting features in the data are discussed and used to demonstrate how the analysis procedure produced the final maps and how one can estimate the uncertainties in the maps. Also noted are features of the analysis that would influence how one might use, or interpret, the results, including such subjects as smoothing and the interpretation of wave components.
Brownian Motion--a Laboratory Experiment.
ERIC Educational Resources Information Center
Kruglak, Haym
1988-01-01
Introduces an experiment involving the observation of Brownian motion for college students. Describes the apparatus, experimental procedures, data analysis and results, and error analysis. Lists experimental techniques used in the experiment. Provides a circuit diagram, typical data, and graphs. (YP)
Coexistence Analysis of Civil Unmanned Aircraft Systems at Low Altitudes
NASA Astrophysics Data System (ADS)
Zhou, Yuzhe
2016-11-01
The demand for unmanned aircraft systems in civil applications is growing. However, ensuring the flight efficiency and safety of unmanned aircraft places critical requirements on wireless communication spectrum resources. Current research mainly focuses on spectrum availability. In this paper, unmanned aircraft system communication models, including a coverage model and a data rate model, and two coexistence analysis procedures, i.e., the interference-to-noise ratio criterion and the frequency-distance-direction criterion, are proposed to analyze spectrum requirements and interference results of civil unmanned aircraft systems at low altitudes. In addition, explicit explanations are provided. The proposed coexistence analysis criteria are applied to assess unmanned aircraft systems' uplink and downlink interference performance and to support corresponding spectrum planning. Numerical results demonstrate that the proposed assessments and analysis procedures satisfy requirements of flexible spectrum access and safe coexistence among multiple unmanned aircraft systems.
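An interference-to-noise (I/N) criterion can be made concrete with a link-budget sketch: compute the interfering power at the victim receiver under a propagation model and compare I/N with a protection threshold. The free-space model, the -6 dB threshold, and all parameter values below are assumptions for illustration, not the paper's:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def interference_to_noise_db(eirp_dbm, distance_m, freq_hz,
                             noise_floor_dbm=-110.0, rx_gain_dbi=0.0):
    """I/N at the victim receiver under free-space propagation."""
    i_dbm = eirp_dbm - fspl_db(distance_m, freq_hz) + rx_gain_dbi
    return i_dbm - noise_floor_dbm

# Illustrative check of an I/N <= -6 dB coexistence criterion between a
# UAS control link (1 W EIRP near 5 GHz) and a victim receiver 10 km away.
inr = interference_to_noise_db(eirp_dbm=30.0, distance_m=10_000, freq_hz=5.06e9)
print(f"I/N = {inr:.1f} dB ->", "OK" if inr <= -6.0 else "co-channel conflict")
```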
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.
1994-01-01
This research program deals with the application of high-performance computing methods to the analysis of complete jet engines. We initiated the program by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition, and solution capabilities were successfully tested. We then focused attention on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion that results from these structural displacements. This is treated by a new arbitrary Lagrangian-Eulerian (ALE) technique that models the fluid mesh motion as that of a fictitious mass-spring network. New partitioned analysis procedures to treat this coupled three-component problem are developed. These procedures involve delayed corrections and subcycling. Preliminary results on stability, accuracy, and MPP computational efficiency are reported.
Comparing preference assessments: selection- versus duration-based preference assessment procedures.
Kodak, Tiffany; Fisher, Wayne W; Kelley, Michael E; Kisamore, April
2009-01-01
In the current investigation, the results of a selection- and a duration-based preference assessment procedure were compared. A Multiple Stimulus With Replacement (MSW) preference assessment [Windsor, J., Piché, L. M., & Locke, P. A. (1994). Preference testing: A comparison of two presentation methods. Research in Developmental Disabilities, 15, 439-455] and a variation of a Free-Operant (FO) preference assessment procedure [Roane, H. S., Vollmer, T. R., Ringdahl, J. E., & Marcus, B. A. (1998). Evaluation of a brief stimulus preference assessment. Journal of Applied Behavior Analysis, 31, 605-620] were conducted with four participants. A reinforcer assessment was conducted to determine which preference assessment procedure identified the item that produced the highest rates of responding. The items identified as most highly preferred were different across preference assessment procedures for all participants. Results of the reinforcer assessment showed that the MSW identified the item that functioned as the most effective reinforcer for two participants.
Cost Utility Analysis of Cervical Therapeutic Medial Branch Blocks in Managing Chronic Neck Pain
Manchikanti, Laxmaiah; Pampati, Vidyasagar; Kaye, Alan D.; Hirsch, Joshua A.
2017-01-01
Background: Controlled diagnostic studies have established the prevalence of cervical facet joint pain to range from 36% to 67% based on the criterion standard of ≥ 80% pain relief. Treatment of cervical facet joint pain has been described with Level II evidence of effectiveness for therapeutic facet joint nerve blocks and radiofrequency neurotomy and with no significant evidence for intraarticular injections. However, there have not been any cost-effectiveness or cost utility analysis studies performed in managing chronic neck pain with or without headaches with cervical facet joint interventions. Study Design: Cost utility analysis based on the results of a double-blind, randomized, controlled trial of cervical therapeutic medial branch blocks in managing chronic neck pain. Objectives: To assess the cost utility of therapeutic cervical medial branch blocks in managing chronic neck pain. Methods: A randomized trial was conducted in a specialty referral private practice interventional pain management center in the United States. This trial assessed the clinical effectiveness of therapeutic cervical medial branch blocks with or without steroids for an established diagnosis of cervical facet joint pain by means of controlled diagnostic blocks. Cost utility analysis was performed with direct payment data for the procedures for a total of 120 patients over a period of 2 years from this trial, based on reimbursement rates of 2016. The payment data provided direct procedural costs without inclusion of drug treatments. An additional 40% was added to procedural costs, with multiplication by a factor of 1.67, to provide estimated total costs including direct and indirect costs, based on highly regarded surgical literature. Outcome measures included significant improvement, defined as at least a 50% improvement with reduction in pain and disability status, with a combined 50% or more reduction in pain on Neck Disability Index (NDI) scores. Results: The results showed direct procedural costs of USD $2,552 per one-year improvement in quality-adjusted life years (QALY), and overall costs of USD $4,261. Overall, each patient on average received 5.7 ± 2.2 procedures over a period of 2 years. Average significant improvement per procedure was 15.6 ± 12.3 weeks, and average significant improvement over 2 years per patient was 86.0 ± 24.6 weeks. Limitations: The limitations of this cost utility analysis are that the data are based on a single-center evaluation. Only costs of therapeutic interventional procedures and physician visits were included, with extrapolation of indirect costs. Conclusion: The cost utility analysis of therapeutic cervical medial branch blocks in the treatment of chronic neck pain non-responsive to conservative management demonstrated clinical effectiveness and cost utility at USD $4,261 per one year of QALY. PMID:29200944
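The cost figures reported follow directly from the stated 1.67 multiplier (apparently reflecting indirect costs at 40% of the total), as this two-line check shows:

```python
# Reproducing the abstract's arithmetic: direct procedural cost per QALY,
# then total (direct + indirect) cost via the stated 1.67 multiplier.
direct_cost_per_qaly = 2552                         # USD, from payment data
total_cost_per_qaly = direct_cost_per_qaly * 1.67
print(f"~USD {total_cost_per_qaly:,.0f} per QALY")  # 4,262; abstract: 4,261
```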
Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun
2015-02-01
Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014.
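The standard random-effects pooling that such reviews start from (and whose assumptions the small-study alternatives relax) is the DerSimonian-Laird estimator. A generic sketch with hypothetical repeatability data, not the paper's FDG-PET dataset:

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Random-effects pooling of a performance metric (e.g., a repeatability
    coefficient) across studies via the DerSimonian-Laird estimator."""
    y = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
    k = y.size
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical within-subject CVs (%) from five small test-retest studies.
est = [11.2, 14.8, 9.5, 13.1, 17.0]
var = [4.0, 6.3, 2.2, 5.1, 9.8]
pooled, se, tau2 = dersimonian_laird(est, var)
print(f"pooled CV ~ {pooled:.1f}% (SE {se:.1f}), tau^2 = {tau2:.2f}")
```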
Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette
2013-06-01
High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
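A minimal sketch of what the second proposed procedure could look like in practice, assuming a hypothetical file layout and column names (the paper's actual programme and the ACuteTox templates are not reproduced here); Python/pandas stands in for the unspecified implementation language:

```python
# Hedged sketch: harmonize pre-existing Excel files into one standardized table.
# File layout, column names, and the output format are illustrative assumptions.
from pathlib import Path
import pandas as pd

def standardise(raw_dir: str, out_file: str) -> None:
    frames = []
    for xlsx in sorted(Path(raw_dir).glob("*.xlsx")):
        df = pd.read_excel(xlsx)  # one concentration-response experiment per file
        df.columns = [c.strip().lower() for c in df.columns]   # unify headers
        df = df.rename(columns={"conc": "concentration", "resp": "response"})
        df["compound"] = xlsx.stem                             # track provenance
        frames.append(df[["compound", "concentration", "response"]])
    pd.concat(frames, ignore_index=True).to_csv(out_file, index=False)

standardise("raw_excel_files", "standardised_data.csv")
```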
Modified application of HS-SPME for quality evaluation of essential oil plant materials.
Dawidowicz, Andrzej L; Szewczyk, Joanna; Dybowski, Michal P
2016-01-01
The main limitation in the standard application of headspace analysis employing solid-phase microextraction (HS-SPME) for the evaluation of plants as sources of essential oils (EOs) is that the quantitative relations of EO components differ from those obtained by direct analysis of the EO produced by steam distillation (SD) from the same plant (EO/SD). The results presented in the paper for thyme, mint, sage, basil, savory, and marjoram prove that the quantitative relations of EO components established by the HS-SPME procedure and by direct analysis of EO/SD are similar when the plant material in the HS-SPME process is replaced by its suspension in an oil of the same physicochemical character as that of the SPME fiber coating. The observed differences in the thyme EO composition estimated by both procedures are insignificant (F(exp)
NASA Astrophysics Data System (ADS)
Bergen, H. Robert, III; Benson, Linda M.; Naylor, Stephen
2000-10-01
Mass spectrometry has undergone considerable changes in the past decade. The advent of "soft ionization" techniques such as electrospray ionization (ESI) affords the direct analysis of very polar molecules without need for the complex inefficient derivatization procedures often required in GC-MS. These ionization techniques make possible the direct mass spectral analysis of polar nonvolatile molecules such as DNA and proteins, which previously were difficult or impossible to analyze by MS. Compounds that readily take on a charge (acids and bases) lend themselves to ESI-MS analysis, whereas compounds that do not readily accept a charge (e.g. sugars) are often not seen or are seen only as inefficient adducts (e.g., M+Na+). To gain exposure to this state-of-the-art analytical procedure, high school students utilize ESI-MS in an analysis of aspartame and caffeine. They dilute a beverage sample and inject the diluted sample into the ESI-MS. The lab is procedurally simple and the results clearly demonstrate the potential and limitations of ESI-coupled mass spectrometry. Depending upon the instructional goals, the outlined procedures can be used to quantify the content of caffeine and aspartame in beverages or to understand the capabilities of electrospray ionization.
48 CFR 5242.9000 - Requests for refunds.
Code of Federal Regulations, 2010 CFR
2010-10-01
... request or pricing adjustment: (1) A technical or engineering analysis results in a determination that the... equipment, except those contracts awarded as a result of competitive small purchase procedures and orders...
Waltregny, David; de Leval, Jean
2009-03-01
Six years ago, the inside-out transobturator tape (TVT-O) procedure was developed for the surgical treatment of female stress urinary incontinence (SUI), with the aim of minimizing the risk of urethra and bladder injuries and ensuring minimal tissue dissection. Initial feasibility and efficacy studies suggested that the TVT-O procedure is associated with high SUI cure rates and low morbidity in the short term. A recent analysis of medium-term results indicated that the TVT-O procedure is effective, with cure rates maintained after a minimum 3-year follow-up that compare favorably with those reported for TVT. No late complications were observed. As of July 2008, more than 35 clinical papers, including ten randomized trials and two national registries, have been published on the outcome of TVT-O surgery. Results from these studies have confirmed that the TVT-O procedure is safe and as effective as the TVT procedure, at least in the short/medium term.
Alignment of high-throughput sequencing data inside in-memory databases.
Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias
2014-01-01
In times of high-throughput DNA sequencing techniques, performance-capable analysis of DNA sequences is of high importance. Computer-supported DNA analysis is still an intensive, time-consuming task. In this paper we explore the potential of a new in-memory database technology by using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which means that there is high potential within the new in-memory concepts, leading to further developments of DNA analysis procedures in the future.
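The stored procedures themselves are not published in the abstract; as an illustration of the exact-matching step that was benchmarked, here is a plain-Python sketch of Burrows-Wheeler backward search, the core of BWA's exact mode (a naive, memory-hungry index for demonstration only, not the authors' SQL code):

```python
# Build a toy BWT index (suffix array, C table, Occ table) and run
# backward search to find all exact occurrences of a read.
def bwt_index(text: str):
    text += "$"
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    bwt = "".join(text[i - 1] for i in sa)
    alphabet = sorted(set(text))
    C, total = {}, 0                      # C[c]: # characters smaller than c
    for c in alphabet:
        C[c] = total
        total += text.count(c)
    Occ = {c: [0] * (len(bwt) + 1) for c in alphabet}  # Occ[c][i]: c's in bwt[:i]
    for i, ch in enumerate(bwt):
        for c in alphabet:
            Occ[c][i + 1] = Occ[c][i] + (ch == c)
    return sa, C, Occ

def exact_match(read: str, sa, C, Occ):
    lo, hi = 0, len(sa)                   # current suffix-array interval
    for c in reversed(read):              # extend the read right-to-left
        if c not in C:
            return []
        lo = C[c] + Occ[c][lo]
        hi = C[c] + Occ[c][hi]
        if lo >= hi:
            return []
    return sorted(sa[lo:hi])              # start positions of the read

sa, C, Occ = bwt_index("GATTACAGATCA")
print(exact_match("GAT", sa, C, Occ))     # -> [0, 7]
```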
NASA Astrophysics Data System (ADS)
Hong, JaeSub; van den Berg, Maureen; Schlegel, Eric M.; Grindlay, Jonathan E.; Koenig, Xavier; Laycock, Silas; Zhao, Ping
2005-12-01
We describe the X-ray analysis procedure of the ongoing Chandra Multiwavelength Plane (ChaMPlane) Survey and report the initial results from the analysis of 15 selected anti-Galactic center observations (90deg
Confocal laser endomicroscopy: in vivo endoscopic tissue analysis.
Smith, Christine; Ogilvie, Jeanette; McClelland, Laurie
2008-01-01
In today's fast-paced world of instant messaging, high-speed Internet, and cell phones, patients want results of procedures in the same high-speed fashion. The development of the new technique of confocal laser endomicroscopy and the restructuring of the endoscope may enable quick procedure results to be delivered. First used in Germany and Australia for research and now available for clinical use, confocal laser endomicroscopy has been approved by the Food and Drug Administration for marketing and clinical use in the United States. This article provides the gastroenterology nurse with information about how the confocal laser endomicroscope works, assisting with the procedure, and pre- and postprocedure patient instructions.
A Hands-On Experience of English Language Teachers as Researchers
ERIC Educational Resources Information Center
Yayli, Demet
2012-01-01
This study presents the results of a teacher research project. The analysis aimed to explore both the four teacher researchers' interpretations of conducting research in English language teaching and the nature of their collaboration with their supervisor in the procedure. The results showed that qualitative data analysis and interpreting the…
NASA Technical Reports Server (NTRS)
Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.
1974-01-01
The methods and procedures used in the analysis and testing of the scale model are reported together with the correlation of the analytical and experimental results. The model, the NASTRAN finite element analysis, and results are discussed. Tests and analytical investigations are also reported.
Random analysis of bearing capacity of square footing using the LAS procedure
NASA Astrophysics Data System (ADS)
Kawa, Marek; Puła, Wojciech; Suska, Michał
2016-09-01
In the present paper, a three-dimensional problem of the bearing capacity of a square footing on a random soil medium is analyzed. The random fields of the strength parameters c and φ are generated using the LAS procedure (Local Average Subdivision, Fenton and Vanmarcke 1990). The procedure was re-implemented by the authors in the Mathematica environment in order to combine it with a commercial program. Since the procedure is still being tested, the random field has been assumed to be one-dimensional: the strength properties of the soil are random in the vertical direction only. Individual realizations of the bearing-capacity boundary problem, with the strength parameters of the medium defined by the above procedure, are solved using FLAC3D software. The analysis is performed for two qualitatively different cases, namely for purely cohesive and cohesive-frictional soils. For the latter case the friction angle and cohesion have been assumed to be independent random variables. For these two cases the random square-footing bearing-capacity results have been obtained for a range of fluctuation scales from 0.5 m to 10 m. Each time 1000 Monte Carlo realizations have been performed. The obtained results allow not only the mean and variance but also the probability density function to be estimated. An example of the application of this function to a reliability calculation is presented in the final part of the paper.
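The FLAC3D boundary-value solves cannot be reproduced here; the following hedged Python sketch only shows the shape of the Monte Carlo loop, with a correlated Gaussian generator standing in for LAS and a closed-form estimate standing in for the finite-difference solve (all names and numbers are illustrative):

```python
# Hedged sketch of the Monte Carlo loop over 1D lognormal cohesion fields.
import numpy as np

rng = np.random.default_rng(0)

def lognormal_field(n, dz, theta, mean, cov):
    # Markov (exponential) correlation with scale of fluctuation theta
    z = np.arange(n) * dz
    corr = np.exp(-2.0 * np.abs(z[:, None] - z[None, :]) / theta)
    g = rng.multivariate_normal(np.zeros(n), corr)   # standard Gaussian field
    s2 = np.log(1.0 + cov**2)                        # lognormal transform
    return np.exp(np.log(mean) - 0.5 * s2 + np.sqrt(s2) * g)

def bearing_capacity(c_profile):
    # stand-in for the FLAC3D solve: purely cohesive square footing,
    # q_u ~ 6.17 * (average cohesion over the influence depth)
    return 6.17 * c_profile.mean()

samples = np.array([
    bearing_capacity(lognormal_field(n=20, dz=0.25, theta=2.0, mean=50e3, cov=0.3))
    for _ in range(1000)                             # 1000 MC realizations
])
print(samples.mean(), samples.std())
```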
A close examination of double filtering with fold change and t test in microarray analysis
2009-01-01
Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight on the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance, while the t statistic assumes gene-specific variances. The two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples, that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
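A toy illustration of the criticized double filter on simulated data (the 2-fold and p < 0.05 cutoffs are arbitrary choices for the sketch, not values from the paper):

```python
# A gene is called only if it passes BOTH a fold-change and a t-test cutoff.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_genes, n_rep = 1000, 5
a = rng.normal(0.0, 1.0, (n_genes, n_rep))    # log2 expression, condition A
b = rng.normal(0.0, 1.0, (n_genes, n_rep))    # condition B
b[:50] += 1.5                                 # 50 truly shifted genes

log_fc = b.mean(axis=1) - a.mean(axis=1)      # log2 fold change (common-variance logic)
t, p = stats.ttest_ind(b, a, axis=1)          # t test (gene-specific-variance logic)

called = (np.abs(log_fc) > 1.0) & (p < 0.05)  # the double filter
print(f"{called.sum()} genes pass both filters")
```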
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1989-01-01
A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.
A comparison of vowel normalization procedures for language variation research.
Adank, Patti; Smits, Roel; van Hout, Roeland
2004-11-01
An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels ("vowel-extrinsic" information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself ("vowel-intrinsic" information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., "formant-extrinsic" F2-F1).
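For concreteness, one widely used member of the best-performing class (vowel-extrinsic, formant-intrinsic) is Lobanov's z-score normalization, sketched below; the column names are illustrative and the snippet is not the authors' evaluation code:

```python
# Lobanov normalization: standardize each formant per talker across all of
# that talker's vowel tokens (vowel-extrinsic, formant-intrinsic).
import pandas as pd

def lobanov(df: pd.DataFrame, formants=("F1", "F2", "F3")) -> pd.DataFrame:
    out = df.copy()
    for f in formants:
        g = out.groupby("talker")[f]
        out[f + "_norm"] = (out[f] - g.transform("mean")) / g.transform("std")
    return out

tokens = pd.DataFrame({
    "talker": ["t1", "t1", "t1", "t2", "t2", "t2"],
    "vowel":  ["i", "a", "u", "i", "a", "u"],
    "F1": [310, 780, 350, 360, 850, 400],
    "F2": [2200, 1300, 800, 2400, 1450, 900],
    "F3": [3000, 2500, 2300, 3200, 2700, 2500],
})
print(lobanov(tokens))
```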
Capsule Endoscopy in the Assessment of Obscure Gastrointestinal Bleeding: An Economic Analysis
Palimaka, S; Blackhouse, Gord; Goeree, Ron
2015-01-01
Background Small-bowel capsule endoscopy is a tool used to visualize the small bowel to identify the location of bleeds in obscure gastrointestinal bleeding (OGIB). Capsule endoscopy is currently funded in Ontario in cases where there has been a failure to identify a source of bleeding via conventional diagnostic procedures. In Ontario, capsule endoscopy is a diagnostic option for patients whose findings on esophagogastroduodenoscopy, colonoscopy, and push enteroscopy have been negative (i.e., the source of bleeding was not found). Objectives This economic analysis aims to estimate the budget impact of different rates of capsule endoscopy use as a complement to push enteroscopy procedures in patients aged 18 years and older. Data Sources Population-based administrative databases for Ontario were used to identify patients receiving push enteroscopy and small-bowel capsule endoscopy in the fiscal years 2008 to 2012. Review Methods A systematic literature search was performed to identify economic evaluations of capsule endoscopy for the investigation of OGIB. Studies were assessed for their methodological quality and their applicability to the Ontarian setting. An original budget impact analysis was performed using data from Ontarian administrative sources and published literature. The budget impact was estimated for different levels of use of capsule endoscopy as a complement to push enteroscopy due to the uncertain clinical utility of the capsule based on current clinical evidence. The analysis was conducted from the provincial public payer perspective. Results With varying rates of capsule endoscopy use, the budgetary impact spans from savings of $510,000 when no (0%) push enteroscopy procedures are complemented with capsule endoscopy, to $2,036,000 when all (100%) push enteroscopy procedures are complemented with capsule endoscopy. A scenario where 50% of push enteroscopy procedures are complemented with capsule endoscopy (expected use based on expert opinion) would result in additional expenditure of about $763,000. Limitations In the literature on OGIB, estimates of rebleeding rates after endoscopic procedures or spontaneous cessation rates are unreliable, with a lack of data. Rough estimates from expert consultation can provide an indication of expected additional use of capsule endoscopy; however, a wide range of capsule uses was explored. Conclusions The budgetary impact in the first year in Ontario of capsule endoscopy use to complement push enteroscopy procedures ranges from $510,000 in savings to an additional expenditure of $2,036,000 (at 0% and 100% push enteroscopy procedures complemented, respectively). The expected scenario of 50% of push enteroscopy procedures likely to benefit from the use of capsule endoscopy, based on expert opinion, would result in additional expenditures of $763,000 in the first year. PMID:26355732
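The three reported figures are mutually consistent with a budget impact that is linear in the complemented share; a one-line check (a reading of the abstract's numbers, not a model from the report):

```python
# Budget impact as a linear function of the share s of push enteroscopies
# complemented by capsule endoscopy, anchored at the 0% and 100% scenarios.
def budget_impact(s, b0=-510_000, b100=2_036_000):
    return b0 + s * (b100 - b0)

print(budget_impact(0.5))   # -> 763000.0, the reported 50%-uptake scenario
```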
An Examination of the Test Scores of the Folger and Konovsky Measure of Procedural Justice.
ERIC Educational Resources Information Center
DeConinck, James B.; King, Wesley C., Jr.
2002-01-01
Examined the validity of the measure of procedural justice developed by R. Folger and M. Konovsky (1989) through confirmatory factor analysis of data from 416 bank employees and 221 marketing managers. Results indicate that an underlying construct for the feedback and planning subscales is the communication relationship between manager and…
Fine fuel moisture measured and estimated in dead Andropogon virginicus in Hawaii
Francis M. Fujioka
1976-01-01
Fuel moisture estimates generated by the National Fire-Danger Rating System procedure were compared with actual fuel moisture measurements determined from laboratory analysis. Meteorological data required for the NFDRS procedure were collected at two heights to assess the effect of temperature and humidity lapse rates. Standard measurements gave the best results, but...
Operator's manual on the visual-accumulation tube method for sedimentation analysis of sands
Colby, V.C.; Witzgman, F.W.
1958-01-01
The personnel who will be operating these units may have little or no previous knowledge of either the principles involved or the details of operating procedure. This manual is intended as an aid to them in setting up the apparatus, learning the analytical procedure, interpreting the results, and understanding the primary principles encountered.
ICAP - An Interactive Cluster Analysis Procedure for analyzing remotely sensed data
NASA Technical Reports Server (NTRS)
Wharton, S. W.; Turner, B. J.
1981-01-01
An Interactive Cluster Analysis Procedure (ICAP) was developed to derive classifier training statistics from remotely sensed data. ICAP differs from conventional clustering algorithms by allowing the analyst to optimize the cluster configuration by inspection, rather than by manipulating process parameters. Control of the clustering process alternates between the algorithm, which creates new centroids and forms clusters, and the analyst, who can evaluate and elect to modify the cluster structure. Clusters can be deleted, or lumped together pairwise, or new centroids can be added. A summary of the cluster statistics can be requested to facilitate cluster manipulation. The principal advantage of this approach is that it allows prior information (when available) to be used directly in the analysis, since the analyst interacts with ICAP in a straightforward manner, using basic terms with which he is more likely to be familiar. Results from testing ICAP showed that an informed use of ICAP can improve classification, as compared to an existing cluster analysis procedure.
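ICAP itself is not available here; the sketch below only illustrates the alternating-control idea the abstract describes, with the algorithm's assignment/update passes and one analyst intervention (the lump operation). All names are illustrative:

```python
# Alternating control: the algorithm clusters, then the analyst edits centroids.
import numpy as np

def assign(X, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def update(X, labels, centroids):
    new = centroids.copy()
    for i in range(len(centroids)):
        members = X[labels == i]
        if len(members):                  # keep the old centroid if a cluster empties
            new[i] = members.mean(axis=0)
    return new

def lump(centroids, i, j):                # analyst merges clusters i and j pairwise
    merged = (centroids[i] + centroids[j]) / 2.0
    rest = np.delete(centroids, [i, j], axis=0)
    return np.vstack([rest, merged])

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))             # 500 pixels x 4 spectral bands
centroids = X[rng.choice(len(X), 5, replace=False)]
for _ in range(3):                        # the algorithm's turn: form clusters
    centroids = update(X, assign(X, centroids), centroids)
centroids = lump(centroids, 0, 1)         # the analyst's turn: inspect and modify
centroids = update(X, assign(X, centroids), centroids)
print(len(centroids), "clusters after analyst intervention")
```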
Sun, Jianguo; Feng, Yanqin; Zhao, Hui
2015-01-01
Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
Salinas, Maria; Lopez-Garrigos, Maite; Flores, Emilio; Leiva-Salinas, Carlos
2018-06-01
The aim was to study urinalysis requests, pre-analytical sample conditions, and analytical procedures. Laboratories were asked to provide the number of primary care urinalyses requested, and to fill out a questionnaire regarding pre-analytical conditions and analytical procedures. 110 laboratories participated in the study. 232.5 urinalyses/1,000 inhabitants were reported. 75.4% used the first morning urine. The sample reached the laboratory in less than 2 hours in 18.8%, between 2 and 4 hours in 78.3%, and between 4 and 6 hours in the remaining 2.9%. 92.5% combined the use of a test strip and particle analysis, and only 7.5% used the strip exclusively. All participants except one performed automated particle analysis depending on strip results; in 16.2% the procedure was only manual. Urinalysis was highly requested. There was a lack of compliance with guidelines regarding the time between micturition and analysis, which usually involved the combination of a strip test followed by particle analysis.
Comparison of VFA titration procedures used for monitoring the biogas process.
Lützhøft, Hans-Christian Holten; Boe, Kanokwan; Fang, Cheng; Angelidaki, Irini
2014-05-01
Titrimetric determination of volatile fatty acid (VFA) contents is a common way to monitor a biogas process. However, digested manure from co-digestion biogas plants has a complex matrix with high concentrations of interfering components, resulting in varying results when different titration procedures are used. Currently, no standardized procedure is in use and it is therefore difficult to compare performance among plants. The aim of this study was to evaluate four titration procedures (for determination of the VFA levels of digested manure samples) and compare the results with gas chromatographic (GC) analysis. Two of the procedures are commonly used in biogas plants and two are discussed in the literature. The results showed that optimal titration results were obtained when 40 mL of four-times-diluted digested manure was gently stirred (200 rpm). Results from samples with different VFA concentrations (1-11 g/L) showed a linear correlation between titration results and GC measurements. However, determination of VFA by titration generally overestimated the VFA contents compared with GC measurements when samples had low VFA concentrations, i.e., around 1 g/L. The accuracy of titration increased when samples had high VFA concentrations, i.e., around 5 g/L. It was further found that the studied ionisable interfering components had the lowest effect on titration when the sample had a high VFA concentration. In contrast, bicarbonate, phosphate and lactate had a significant effect on titration accuracy at low VFA concentrations. An extended 5-point titration procedure with pH correction was best at handling interferences from bicarbonate, phosphate and lactate at low VFA concentrations. Conversely, the simplest titration procedure, with only two pH end-points, showed the highest accuracy among all titration procedures at high VFA concentrations. All in all, if the composition of the digested manure sample is not known, the procedure with only two pH end-points should be the procedure of choice, due to its simplicity and accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.
Office-based deep sedation for pediatric ophthalmologic procedures using a sedation service model.
Lalwani, Kirk; Tomlinson, Matthew; Koh, Jeffrey; Wheeler, David
2012-01-01
Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62-100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.
Inverse Thermal Analysis of Titanium GTA Welds Using Multiple Constraints
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.; Shabaev, A.; Huang, L.
2015-06-01
Inverse thermal analysis of titanium gas-tungsten-arc welds using multiple constraint conditions is presented. This analysis employs a methodology based on numerical-analytical basis functions for inverse thermal analysis of steady-state energy deposition in plate structures. The results of this type of analysis provide parametric representations of weld temperature histories that can be adopted as input data to various types of computational procedures, such as those for prediction of solid-state phase transformations. In addition, these temperature histories can be used to construct parametric function representations for inverse thermal analysis of welds corresponding to other process parameters or welding processes whose process conditions are within similar regimes. The present study applies an inverse thermal analysis procedure that provides for the inclusion of constraint conditions associated with both solidification and phase transformation boundaries.
Analysis of aircraft tires via semianalytic finite elements
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Kim, Kyun O.; Tanner, John A.
1990-01-01
A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell.
A SAS(®) macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.
Elliott, Alan C; Hynan, Linda S
2011-04-01
The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data are from a suspected non-normal population. The KW omnibus procedure tests for some differences between groups, but provides no specific post hoc pairwise comparisons. This paper provides a SAS(®) macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
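The macro is not reproduced here; an analogous workflow in Python is sketched below, using pairwise Mann-Whitney tests at a Bonferroni-adjusted alpha as one common post hoc choice (the macro's specific rank-based statistic may differ):

```python
# Omnibus Kruskal-Wallis, then post hoc pairwise comparisons if significant.
from itertools import combinations
from scipy import stats

groups = {
    "A": [12, 15, 14, 10, 13],
    "B": [22, 25, 19, 24, 21],
    "C": [11, 14, 13, 12, 15],
}

h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")

if p < 0.05:                              # post hoc only after a significant omnibus
    pairs = list(combinations(groups, 2))
    alpha = 0.05 / len(pairs)             # Bonferroni adjustment
    for g1, g2 in pairs:
        u, p_pair = stats.mannwhitneyu(groups[g1], groups[g2])
        print(g1, "vs", g2, f"p={p_pair:.4f}", "sig" if p_pair < alpha else "ns")
```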
Nonlinear filtering properties of detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-11-01
Detrended fluctuation analysis (DFA) has been widely used for quantifying long-range correlation and fractal scaling behavior. In DFA, to avoid spurious detection of scaling behavior caused by a nonstationary trend embedded in the analyzed time series, a detrending procedure using piecewise least-squares fitting has been applied. However, it has been pointed out that the nonlinear filtering properties involved with detrending may induce instabilities in the scaling exponent estimation. To understand this issue, we investigate the adverse effects of the DFA detrending procedure on the statistical estimation. We show that the detrending procedure using piecewise least-squares fitting results in the nonuniformly weighted estimation of the root-mean-square deviation and that this property could induce an increase in the estimation error. In addition, for comparison purposes, we investigate the performance of a centered detrending moving average analysis with a linear detrending filter and sliding window DFA and show that these methods have better performance than the standard DFA.
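For reference, a minimal implementation of the piecewise least-squares detrending step whose filtering properties the paper analyzes (first-order DFA; the scales and test data are illustrative):

```python
# DFA-1: integrate, detrend each window with an OLS line, take RMS, fit slope.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        ms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # piecewise linear fit (the detrend)
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

rng = np.random.default_rng(3)
x = rng.normal(size=4096)                      # white noise: expected slope ~0.5
scales = 2 ** np.arange(4, 10)                 # window sizes 16 .. 512
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(f"estimated scaling exponent alpha = {alpha:.2f}")
```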
Development of a thermal and structural analysis procedure for cooled radial turbines
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Deanna, Russell G.
1988-01-01
A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is presented. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. The inviscid, quasi-three-dimensional code computes the external free-stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous three-dimensional internal flow code for the momentum and energy equations. These boundary conditions are input to a three-dimensional heat conduction code for the calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results are given.
Results of the first provisional technical secretariat interlaboratory comparison test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuff, J.R.; Hoffland, L.
1995-06-01
The principal task of this laboratory in the first Provisional Technical Secretariat (PTS) Interlaboratory Comparison Test was to verify and test the extraction and preparation procedures outlined in the Recommended Operating Procedures for Sampling and Analysis in the Verification of Chemical Disarmament, in addition to our laboratory extraction methods and our laboratory analysis methods. Sample preparation began on 16 May 1994 and analysis was completed on 12 June 1994. The analytical methods used included NMR (¹H and ³¹P), GC/AED, GC/MS (EI and methane CI), GC/IRD, HPLC/IC, HPLC/TSP/MS, MS/MS (electrospray), and CZE.
Lee, Woo Jin; Lee, Won Kyung
2016-01-01
Because of the remarkable developments in robotics in recent years, technological convergence has been active in this area. We focused on finding patterns of convergence within robot technology using network analysis of patents in both the USPTO and KIPO. To identify the variables that affect convergence, we used quadratic assignment procedures (QAP). From our analysis, we observed the patent network ecology related to convergence and found technologies that have great potential to converge with other robotics technologies. The results of our study are expected to contribute to setting up convergence-based R&D policies for robotics, which can lead to new innovation. PMID:27764196
The Thermal Decomposition of Basic Copper(II) Sulfate.
ERIC Educational Resources Information Center
Tanaka, Haruhiko; Koga, Nobuyoshi
1990-01-01
Discussed is the preparation of synthetic brochantite from solution and a thermogravimetric-differential thermal analysis study of the thermal decomposition of this compound. Other analyses included are chemical analysis and IR spectroscopy. Experimental procedures and results are presented. (CW)
Availability Analysis of Dual Mode Systems
DOT National Transportation Integrated Search
1974-04-01
The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...
3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.
Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef
2016-11-01
In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and has a much lower number of data points on which the fitting can be based, which in addition is biased to the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
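A hedged sketch of the central idea, an algebraic least-squares sphere fit over all valid 3D points followed by the cap geometry, omitting the paper's image-analysis steps (masking, truncation handling); the substrate is assumed at z = 0:

```python
# Fit x^2+y^2+z^2 = 2ax + 2by + 2cz + d in least squares, then recover the
# contact angle of the cap from the fitted center height and radius.
import numpy as np

def fit_sphere(pts):
    A = np.c_[2 * pts, np.ones(len(pts))]
    f = (pts ** 2).sum(axis=1)
    (a, b, c, d), *_ = np.linalg.lstsq(A, f, rcond=None)
    r = np.sqrt(d + a * a + b * b + c * c)
    return np.array([a, b, c]), r

def contact_angle_deg(center, r):
    # substrate at z = 0; angle measured through the cap
    return np.degrees(np.arccos(-center[2] / r))

# synthetic cap: radius 10, center 8 below the substrate -> theta = arccos(0.8)
rng = np.random.default_rng(4)
phi = rng.uniform(0, 2 * np.pi, 2000)
cos_t = rng.uniform(0.8, 1.0, 2000)            # keep only the part above z = 0
sin_t = np.sqrt(1 - cos_t ** 2)
pts = np.c_[10 * sin_t * np.cos(phi), 10 * sin_t * np.sin(phi), 10 * cos_t - 8]
center, r = fit_sphere(pts)
print(r, contact_angle_deg(center, r))         # ~10 and ~36.9 degrees
```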
An Isotopic Dilution Experiment Using Liquid Scintillation: A Simple Two-System, Two-Phase Analysis.
ERIC Educational Resources Information Center
Moehs, Peter J.; Levine, Samuel
1982-01-01
A simple isotopic dilution analysis whose principles apply to methods of more complex radioanalyses is described. Suitable for students of clinical and instrumental analytical chemistry, experimental manipulations are kept to a minimum, involving only aqueous extraction before counting. Background information, procedures, and results are discussed.…
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
Environmental control and life support system: Analysis of STS-1
NASA Technical Reports Server (NTRS)
Steines, G.
1980-01-01
The capability of the orbiter environmental control and life support system (ECLSS) to support vehicle cooling requirements in the event of cabin pressure reduction to 9 psia was evaluated, using the Orbiter version of the shuttle environmental consumables usage requirement evaluation (SECURE) program and heat load input data developed by the spacecraft electrical power simulator (SEPS) program. The SECURE model used in the analysis, the timeline and ECLSS configuration used in formulating the analysis, and the results of the analysis are presented. The conclusion which may be drawn from these results is summarized: there are no significant thermal problems with the proposed mission. There are, however, several procedures which could be optimized for better performance: setting the cabin HX air bypass and the interchanger water bypass to the zero flow position is of questionable efficacy; the cabin air pressure monitoring procedure should be re-evaluated; and the degree of equipment power-down specified for this analysis should be re-examined. No other problems were noted.
Markov chain decision model for urinary incontinence procedures.
Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha
2017-03-13
Purpose Urinary incontinence (UI) is a common chronic health condition, a problem specifically among elderly women that negatively impacts quality of life. However, UI is usually viewed as a likely result of old age, and as such is generally not evaluated or managed appropriately. Many treatments are available to manage incontinence, such as bladder training, and numerous surgical procedures such as Burch colposuspension and Sling for UI, which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is more effective. Design/methodology/approach This research employs randomized, prospective studies to obtain robust cost and utility data used in the Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI, based on two measures: number of quality adjusted life years (QALY) and cost per QALY. TreeAge Pro Healthcare software was employed in the Markov decision analysis. Findings Results showed the Sling procedure is a more effective surgical intervention than the Burch. However, if a utility greater than a certain value, at which both procedures are equally effective, is assigned to persistent incontinence, the Burch procedure is more effective than the Sling procedure. Originality/value This paper demonstrates the efficacy of a Markov chain decision modeling approach to study the comparative effectiveness of available treatments for patients with UI, an important public health issue widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
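For readers unfamiliar with the method, a toy two-state Markov cohort model of the kind described is sketched below; every probability, utility, and cost is invented for illustration and none comes from the paper:

```python
# Illustrative Markov cohort model: accumulate discounted QALYs per strategy.
import numpy as np

def evaluate(p_cure, cost_proc, cycles=10, disc=0.03):
    # states: 0 = continent, 1 = persistent incontinence (all inputs assumed)
    P = np.array([[0.98, 0.02],                # small annual relapse risk
                  [0.00, 1.00]])
    utility = np.array([0.95, 0.75])           # QALY weight per state per cycle
    state = np.array([p_cure, 1 - p_cure])     # cohort distribution after surgery
    qaly, cost = 0.0, cost_proc
    for t in range(cycles):
        qaly += (state @ utility) / (1 + disc) ** t
        state = state @ P                      # one Markov transition per cycle
    return qaly, cost

for name, p_cure, cost in [("Sling", 0.85, 12_000), ("Burch", 0.80, 14_000)]:
    q, c = evaluate(p_cure, cost)
    print(f"{name}: {q:.2f} QALYs, ${c / q:,.0f} per QALY")
```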
Efficient runner safety assessment during early design phase and root cause analysis
NASA Astrophysics Data System (ADS)
Liang, Q. W.; Lais, S.; Gentner, C.; Braun, O.
2012-11-01
Fatigue related problems in Francis turbines, especially high head Francis turbines, have been published several times in the last years. During operation the runner is exposed to various steady and unsteady hydraulic loads. Therefore the analysis of the forced response of the runner structure requires a combined approach of fluid dynamics and structural dynamics. Due to the high complexity of the phenomena and the limitations of computer power, numerical prediction was in the past too expensive and not feasible for use as a standard design tool. However, due to continuous improvement of the knowledge and the simulation tools, such complex analysis has become part of the design procedure at ANDRITZ HYDRO. This article describes the application of the most advanced analysis techniques in the runner safety check (RSC), including steady-state CFD analysis, transient CFD analysis considering rotor-stator interaction (RSI), static FE analysis and modal analysis in water considering the added-mass effect, in the early design phase. This procedure allows a very efficient interaction between the hydraulic designer and the mechanical designer during the design phase, such that a risk of failure can be detected and avoided in an early design stage. The RSC procedure can also be applied in a root cause analysis (RCA), both to find the cause of failure and to quickly define a technical solution that meets the safety criteria. An efficient application to an RCA of cracks in a Francis runner is presented in this article as an example. The results of the RCA are presented together with an efficient and inexpensive solution whose effectiveness could be proven by applying the described RSC techniques. It is shown that, with the RSC procedure developed and applied as a standard procedure at ANDRITZ HYDRO, such a failure is excluded in an early design phase. Moreover, the RSC procedure is compatible with different commercial and open-source codes and can easily be adapted to other types of turbines, such as pump turbines and Pelton runners.
Preclinical Feasibility of a Technology Framework for MRI-guided Iliac Angioplasty
Rube, Martin A.; Fernandez-Gutierrez, Fabiola; Cox, Benjamin F.; Holbrook, Andrew B.; Houston, J. Graeme; White, Richard D.; McLeod, Helen; Fatahi, Mahsa; Melzer, Andreas
2015-01-01
Purpose Interventional MRI has significant potential for image guidance of iliac angioplasty and related vascular procedures. A technology framework with in-room image display, control, communication and MRI-guided intervention techniques was designed and tested for its potential to provide safe, fast and efficient MRI-guided angioplasty of the iliac arteries. Methods A 1.5T MRI scanner was adapted for interactive imaging during endovascular procedures using new or modified interventional devices such as guidewires and catheters. A perfused vascular phantom was used for testing. Pre-, intra- and post-procedural visualization and measurement of vascular morphology and flow was implemented. A detailed analysis of X-Ray fluoroscopic angiography workflow was conducted and applied. Two interventional radiologists and one physician in training performed 39 procedures. All procedures were timed and analyzed. Results MRI-guided iliac angioplasty procedures were successfully performed with progressive adaptation of techniques and workflow. The workflow, setup and protocol enabled a reduction in table time for a dedicated MRI-guided procedure to 6 min 33 s with a mean procedure time of 9 min 2 s, comparable to the mean procedure time of 8 min 42 s for the standard X-Ray guided procedure. Conclusions MRI-guided iliac vascular interventions were found to be feasible and practical using this framework and optimized workflow. In particular the real-time flow analysis was found to be helpful for pre- and post-interventional assessments. Design optimization of the catheters and in vivo experiments are required before clinical evaluation. PMID:25102933
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
In this report, the scope of the tests, the method of analysis, the results, and the conclusions are discussed. The first test indicated that the requirements generated by the Standard procedures and formulae appear to yield reasonable results, although some of the cost data provided as defaults in the Standard should be reevaluated. The second test provided experience that was useful in modifying the points compliance format, but did not uncover any procedural issues that would lead to unreasonable results. These conclusions are based on analysis using the Automated Residential Energy Standard (ARES) computer program, developed to simplify the process of standards generation.
Dynamic response of a monorail steel bridge under a moving train
NASA Astrophysics Data System (ADS)
Lee, C. H.; Kawatani, M.; Kim, C. W.; Nishimura, N.; Kobayashi, Y.
2006-06-01
This study proposes a dynamic response analysis procedure for traffic-induced vibration of a monorail bridge and train. Each car in the monorail train is idealized as a dynamic system of 15-degrees-of-freedom. The governing equations of motion for a three-dimensional monorail bridge-train interaction system are derived using Lagrange's formulation for monorail trains, and a finite-element method for modal analysis of monorail bridges. Analytical results on dynamic response of the monorail train and bridge are compared with field-test data in order to verify the validity of the proposed analysis procedure, and a positive correlation is found. An interesting feature of the monorail bridge response is that sway motion is caused by torsional behavior resulting from eccentricity between the shear center of the bridge section and the train load.
NASA Astrophysics Data System (ADS)
Yi, Dake; Wang, TzuChiang
2018-06-01
In the paper, a new procedure is proposed to investigate three-dimensional fracture problems of a thin elastic plate with a long through-the-thickness crack under remote uniform tensile loading. The new procedure includes a new analytical method and highly accurate finite element simulations. In the theoretical analysis, three-dimensional Maxwell stress functions are employed in order to derive the three-dimensional crack-tip fields. Based on the theoretical analysis, an equation describing the relationship among the three-dimensional J-integral J(z), the stress intensity factor K(z), and the tri-axial stress constraint level Tz(z) is derived first. In the finite element simulations, a fine mesh of 153360 elements is constructed to compute the stress field near the crack front, J(z) and Tz(z). Numerical results show that in the plane very close to the free surface, the K field solution is still valid for the in-plane stresses. Comparison with the numerical results shows that the analytical results are valid.
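The derived equation itself is not given in the abstract; one candidate form, consistent with the classical plane-stress and plane-strain limits, is the following (an assumption for orientation only, not necessarily the paper's exact result):

```latex
% A through-thickness generalization of J = K^2/E':
J(z) \;=\; \bigl(1-\nu\,T_z(z)\bigr)\,\frac{K^2(z)}{E},
\qquad T_z(z)=\frac{\sigma_{33}}{\sigma_{11}+\sigma_{22}}
% Plane stress:  T_z = 0    =>  J = K^2/E
% Plane strain:  T_z = \nu  =>  J = (1-\nu^2)\,K^2/E
```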
ERIC Educational Resources Information Center
Sarkis, Vahak D.
1974-01-01
Describes a method (involving a Hach Colorimeter and simplified procedures) that can be used for the analysis of up to 56 different chemical constituents of water. Presents the results of student analysis of waters of Fulton and Montgomery counties in New York. (GS)
Ozel, Bora; Sezgin, Billur; Guney, Kirdar; Latifoglu, Osman; Celebi, Cemallettin
2015-02-01
Although aesthetic procedures are known to have a higher impact on women, men have become more inclined toward such procedures over the last decade. To determine the reason behind the increase in demand for male aesthetic procedures and to learn about the expectations and concerns related to body contouring surgery, a prospective questionnaire study was conducted on 200 Turkish males from January 1, 2011 to May 31, 2012. Demographic information, previous aesthetic procedures, and thoughts on body contouring procedures with given reasons were questioned. The results of the study showed that 53% of all participants considered undergoing body contouring surgery, with the given reason that they believed their current body structure required it. For those who did not consider contouring operations, 92.5% said they felt that they did not need such a procedure. The results of the statistical analysis showed that BMI was a significant factor in the decision-making process for wanting to undergo body contouring procedures. The results of the study showed that men's consideration of aesthetic operations depends mainly on perceived necessity and that the most considered region for contouring was the abdominal zone. We can conclude that men are becoming more interested in body contouring operations and therefore different surgical procedures should be refined and re-defined according to the expectations of this new patient group.
Willard, Melissa A Bodnar; McGuffin, Victoria L; Smith, Ruth Waddell
2012-01-01
Salvia divinorum is a hallucinogenic herb that is internationally regulated. In this study, salvinorin A, the active compound in S. divinorum, was extracted from S. divinorum plant leaves using a 5-min extraction with dichloromethane. Four additional Salvia species (Salvia officinalis, Salvia guaranitica, Salvia splendens, and Salvia nemorosa) were extracted using this procedure, and all extracts were analyzed by gas chromatography-mass spectrometry. Differentiation of S. divinorum from other Salvia species was successful based on visual assessment of the resulting chromatograms. To provide a more objective comparison, the total ion chromatograms (TICs) were subjected to principal components analysis (PCA). Prior to PCA, the TICs were subjected to a series of data pretreatment procedures to minimize non-chemical sources of variance in the data set. Successful discrimination of S. divinorum from the other four Salvia species was possible based on visual assessment of the PCA scores plot. To provide a numerical assessment of the discrimination, a series of statistical procedures such as Euclidean distance measurement, hierarchical cluster analysis, Student's t tests, Wilcoxon rank-sum tests, and Pearson product moment correlation were also applied to the PCA scores. The statistical procedures were then compared to determine the advantages and disadvantages for forensic applications.
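A compact sketch of the chemometric core (normalization, mean-centering, SVD-based PCA scores); the pretreatment here is deliberately simplified relative to the paper's full procedure, and the data are synthetic:

```python
# Project pretreated total ion chromatograms onto principal components.
import numpy as np

def pca_scores(tics, n_components=2):
    X = tics / tics.sum(axis=1, keepdims=True)     # total-area normalization
    X = X - X.mean(axis=0)                         # mean-centering
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * S)[:, :n_components]               # rows of U*S are the scores

rng = np.random.default_rng(5)
tics = np.abs(rng.normal(10, 1, size=(15, 500)))   # 15 extracts x 500 scan points
tics[:3, 120:140] += 40                            # a distinguishing peak in 3 extracts
print(pca_scores(tics))                            # those 3 separate on PC1
```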
Pricing strategy for aesthetic surgery: economic analysis of a resident clinic's change in fees.
Krieger, L M; Shaw, W W
1999-02-01
The laws of microeconomics explain how prices affect consumer purchasing decisions and thus overall revenues and profits. These principles can easily be applied to the behavior of aesthetic plastic surgery patients. The UCLA Division of Plastic Surgery resident aesthetics clinic recently offered a radical price change for its services. The effects of this change on demand for services and revenue were tracked. Economic analysis was applied to see if this price change resulted in the maximization of total revenues, or if additional price changes could further optimize them. Economic analysis of pricing involves several steps. The first step is to assess demand. The number of procedures performed by a given practice at different price levels can be plotted to create a demand curve. From this curve, price sensitivities of consumers can be calculated (price elasticity of demand). This information can then be used to determine the pricing level that creates demand for the exact number of procedures that yield optimal revenues. In economic parlance, revenues are maximized by pricing services such that elasticity is equal to 1 (the point of unit elasticity). At the UCLA resident clinic, average total fees per procedure were reduced by 40 percent. This resulted in a 250-percent increase in procedures performed for representative 4-month periods before and after the price change. Net revenues increased by 52 percent. Economic analysis showed that the price elasticity of demand before the price change was 6.2. After the price change it was 1. We conclude that the magnitude of the price change resulted in a fee schedule that yielded the highest possible revenues from the resident clinic. These results show that changes in price do affect total revenue and that the nature of these effects can be understood, predicted, and maximized using the tools of microeconomics.
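A worked arc-elasticity example using the abstract's headline figures (a 40 percent fee cut and a 250 percent increase in volume); the fee level is invented, and arc elasticity is only a crude stand-in for the point elasticities the authors estimate from their fitted demand curve:

```python
# Arc (midpoint) elasticity: percentage change in quantity over percentage
# change in price, each relative to the midpoint of the two observations.
def arc_elasticity(p0, q0, p1, q1):
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return abs(dq / dp)

p0, q0 = 1000.0, 40.0      # hypothetical fee and volume before the change
p1, q1 = 600.0, 100.0      # 40% lower fee, 2.5x the procedures (as reported)
print(arc_elasticity(p0, q0, p1, q1))   # ~1.7: demand is elastic, so revenue rises
print(p0 * q0, p1 * q1)                 # revenue 40,000 -> 60,000 (+50%)
```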
SUS in nuclear medicine in Brazil: analysis and comparison of data provided by Datasus and CNEN*
Pozzo, Lorena; Coura Filho, George; Osso Júnior, João Alberto; Squair, Peterson Lima
2014-01-01
Objective To investigate the outpatient access to nuclear medicine procedures by means of the Brazilian Unified Health System (SUS), analyzing the correspondence between data provided by this system and those from Comissão Nacional de Energia Nuclear (CNEN) (National Commission of Nuclear Energy). Materials and Methods Data provided by Datasus regarding number of scintillation chambers, outpatient procedures performed from 2008 to 2012, administrative responsibility for such procedures, type of service providers and outsourced services were retrieved and evaluated. Also, such data were compared with those from institutions certified by CNEN. Results The present study demonstrated that the system still lacks maturity in terms of correct data input, particularly regarding equipment available. It was possible to list the most common procedures and check the growth of the specialty along the study period. Private centers are responsible for most of the procedures covered and reimbursed by SUS. However, many healthcare facilities are not certified by CNEN. Conclusion Datasus provides relevant data for analysis as done in the present study, although some issues still require attention. The present study has quantitatively depicted the Brazilian reality regarding access to nuclear medicine procedures offered by/for SUS. PMID:25741070
Developing an approach for teaching and learning about Lewis structures
NASA Astrophysics Data System (ADS)
Kaufmann, Ilana; Hamza, Karim M.; Rundgren, Carl-Johan; Eriksson, Lars
2017-08-01
This study explores first-year university students' reasoning as they learn to draw Lewis structures. We also present a theoretical account of the formal procedure commonly taught for drawing these structures. Students' discussions during problem-solving activities were video recorded and detailed analyses of the discussions were made through the use of practical epistemology analysis (PEA). Our results show that the formal procedure was central for drawing Lewis structures, but its use varied depending on situational aspects. Commonly, the use of individual steps of the formal procedure was contingent on experiences of chemical structures, and other information such as the characteristics of the problem given. The analysis revealed a number of patterns in how students constructed, checked and modified the structure in relation to the formal procedure and the situational aspects. We suggest that explicitly teaching the formal procedure as a process of constructing, checking and modifying might be helpful for students learning to draw Lewis structures. By doing so, the students may learn to check the accuracy of the generated structure not only in relation to the octet rule and formal charge, but also to other experiences that are not explicitly included in the formal procedure.
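Two of the checking steps in the commonly taught formal procedure reduce to simple arithmetic, sketched here (the element list is abbreviated; this is an illustration, not the study's materials):

```python
# Step 1 of the formal procedure: count total valence electrons.
# A later check: formal charge FC = V - (nonbonding e-) - (bonding e-)/2.
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "S": 6, "F": 7, "Cl": 7}

def total_valence_electrons(atoms, charge=0):
    return sum(VALENCE[a] for a in atoms) - charge

def formal_charge(element, lone_electrons, bonding_electrons):
    return VALENCE[element] - lone_electrons - bonding_electrons // 2

print(total_valence_electrons(["C", "O", "O"]))   # CO2 -> 16
print(formal_charge("O", 4, 4))                   # O, 2 lone pairs, 2 bonds -> 0
```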
Using GOMS models and hypertext to create representations of medical procedures for online display
NASA Technical Reports Server (NTRS)
Gugerty, Leo; Halgren, Shannon; Gosbee, John; Rudisill, Marianne
1991-01-01
This study investigated two methods to improve the organization and presentation of computer-based medical procedures. A literature review suggested that the GOMS (goals, operators, methods, and selection rules) model can assist in rigorous task analysis, which can then help generate initial design ideas for the human-computer interface. GOMS models are hierarchical in nature, so this study also investigated the effect of hierarchical, hypertext interfaces. We used a 2 x 2 between-subjects design, including the following independent variables: procedure organization - GOMS model based vs. medical-textbook based; navigation type - hierarchical vs. linear (booklike). After naive subjects studied the online procedures, measures were taken of their memory for the content and the organization of the procedures. This design was repeated for two medical procedures. For one procedure, subjects who studied GOMS-based and hierarchical procedures remembered more about the procedures than other subjects. The results for the other procedure were less clear. However, data for both procedures showed a 'GOMSification effect'. That is, when asked to do a free recall of a procedure, subjects who had studied a textbook procedure often recalled key information in a location inconsistent with the procedure they actually studied, but consistent with the GOMS-based procedure.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
Reising, Deanna L; Carr, Douglas E; Gindling, Sally; Barnes, Roxie; Garletts, Derrick; Ozdogan, Zulfukar
Interprofessional team performance is believed to be dependent on the development of effective team communication skills. Yet, little evidence exists in undergraduate nursing programs on whether team communication skills affect team performance. A secondary analysis of a larger study on interprofessional student teams in simulations was conducted to determine if there is a relationship between team communication and team procedure performance. The results showed a positive, significant correlation between interprofessional team communication ratings and procedure accuracy in the simulation. Interprofessional team training in communication skills for nursing and medical students improves the procedure accuracy in a simulated setting.
Thomas Harless; Francis G. Wagner; Phillip Steele; Fred Taylor; Vikram Yadama; Charles W. McMillin
1991-01-01
A precise research methodology is described by which internal log-defect locations may help select hardwood log orientation and sawing procedure to improve lumber value. Procedures for data collection, data handling, simulated sawing, and data analysis are described. A single test log verified the methodology. Results from this log showed significant differences in...
NASA Technical Reports Server (NTRS)
Seshadri, B. R.; Smith, S. W.; Johnston, W. M.
2008-01-01
This viewgraph presentation describes residual strength analysis of integral structures fabricated using different manufacturing procedures. The topics include: 1) Built-up and Integral Structures; 2) Development of Prediction Methodology for Integral Structures Fabricated using different Manufacturing Procedures; 3) Testing Facility; 4) Fracture Parameters Definition; 5) Crack Branching in Integral Structures; 6) Results and Discussion; and 7) Concluding Remarks.
Calibration Of Partial-Pressure-Of-Oxygen Sensors
NASA Technical Reports Server (NTRS)
Yount, David W.; Heronimus, Kevin
1995-01-01
Report and analysis of, and discussion of improvements in, the procedure for calibrating partial-pressure-of-oxygen sensors to satisfy Spacelab calibration requirements. The sensors exhibit fast drift, which results in a calibration period too short to be suitable for Spacelab. By assessing the complete process of determining the total drift range available, the calibration procedure was modified to eliminate errors while still satisfying requirements without compromising the integrity of the system.
Uncertainty Analysis for DAM Projects.
1987-09-01
overwhelming majority of articles published on the use of statistical methodology for geotechnical engineering focus on performance predictions and design ...Results of the present study do not support the adoption of more esoteric statistical procedures except on a special case basis or in research ...influence that recommended statistical procedures might have had on the Carters Project, had they been applied during planning and design phases
Rapid microfluidic analysis of a Y-STR multiplex for screening of forensic samples.
Gibson-Daw, Georgiana; Albani, Patricia; Gassmann, Marcus; McCord, Bruce
2017-02-01
In this paper, we demonstrate a rapid analysis procedure for use with a small set of rapidly mutating Y chromosomal short tandem repeat (Y-STR) loci that combines both rapid polymerase chain reaction (PCR) and microfluidic separation elements. The procedure involves a high-speed polymerase and a rapid cycling protocol to permit PCR amplification in 16 min. The resultant amplified sample is next analysed using a short 1.8-cm microfluidic electrophoresis system that permits a four-locus Y-STR genotype to be produced in 80 s. The entire procedure takes less than 25 min from sample collection to result. This paper describes the rapid amplification protocol as well as studies of the reproducibility and sensitivity of the procedure and its optimisation. The amplification process utilises a small high-speed thermocycler, microfluidic device and compact laptop, making it portable and potentially useful for rapid, inexpensive on-site genotyping. The four loci used for the multiplex were selected due to their rapid mutation rates and should prove useful in preliminary screening of samples and suspects. Overall, this technique provides a method for rapid sample screening of suspect and crime scene samples in forensic casework.
Identifying Human Factors Issues in Aircraft Maintenance Operations
NASA Technical Reports Server (NTRS)
Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)
1995-01-01
Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986 and 1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g., wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not) as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding what variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factor issues involved.
Lum, Jarrad A.G.; Ullman, Michael T.; Conti-Ramsden, Gina
2013-01-01
A number of studies have investigated procedural learning in dyslexia using serial reaction time (SRT) tasks. Overall, the results have been mixed, with evidence of both impaired and intact learning reported. We undertook a systematic search of studies that examined procedural learning using SRT tasks, and synthesized the data using meta-analysis. A total of 14 studies were identified, representing data from 314 individuals with dyslexia and 317 typically developing control participants. The results indicate that, on average, individuals with dyslexia have worse procedural learning abilities than controls, as indexed by sequence learning on the SRT task. The average weighted standardized mean difference (the effect size) was found to be 0.449 (CI95: .204, .693), and was significant (p < .001). However, moderate levels of heterogeneity were found between study-level effect sizes. Meta-regression analyses indicated that studies with older participants that used SRT tasks with second order conditional sequences, or with older participants that used sequences that were presented a large number of times, were associated with smaller effect sizes. These associations are discussed with respect to compensatory and delayed memory systems in dyslexia. PMID:23920029
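For readers less familiar with the aggregation behind the reported effect size, the standard random-effects weighting is shown below (a generic textbook formulation; the authors' exact model is not restated here):

```latex
\bar{d} = \frac{\sum_i w_i\, d_i}{\sum_i w_i}, \qquad
w_i = \frac{1}{v_i + \tau^2}
```

Here d_i is the standardized mean difference from study i, v_i its within-study variance, and τ² the between-study variance that captures the heterogeneity the authors report.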
Plazas-Nossa, Leonardo; Torres, Andrés
2014-01-01
The objective of this work is to introduce a forecasting method for UV-Vis spectrometry time series that combines principal component analysis (PCA) and discrete Fourier transform (DFT), and to compare the results obtained with those obtained by using DFT. Three time series for three different study sites were used: (i) Salitre wastewater treatment plant (WWTP) in Bogotá; (ii) Gibraltar pumping station in Bogotá; and (iii) San Fernando WWTP in Itagüí (in the south part of Medellín). Each of these time series had an equal number of samples (1051). In general terms, the results obtained are hardly generalizable, as they seem to be highly dependent on specific water system dynamics; however, some trends can be outlined: (i) for UV range, DFT and PCA/DFT forecasting accuracy were almost the same; (ii) for visible range, the PCA/DFT forecasting procedure proposed gives systematically lower forecasting errors and variability than those obtained with the DFT procedure; and (iii) for short forecasting times the PCA/DFT procedure proposed is more suitable than the DFT procedure, according to processing times obtained.
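A minimal sketch of the PCA/DFT idea described above: compress the spectra to a few principal-component scores, extrapolate each score with a truncated Fourier series, and map back to the spectral domain. The synthetic data, component count, and harmonic count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from numpy.fft import fft
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic UV-Vis time series: 1051 samples x 200 wavelengths
t = np.arange(1051)
spectra = (np.sin(2 * np.pi * t / 96)[:, None]
           * np.linspace(1.0, 0.3, 200)[None, :]
           + 0.05 * rng.standard_normal((1051, 200)))

n_train, horizon = 1000, 51
pca = PCA(n_components=3)
scores = pca.fit_transform(spectra[:n_train])   # spectra -> PC scores

def dft_extrapolate(x, horizon, n_harmonics=10):
    """Extend a 1-D series by keeping only its dominant Fourier harmonics."""
    n = len(x)
    coeffs = fft(x)
    # Keep the largest-amplitude harmonics (conjugate pairs included)
    keep = np.argsort(np.abs(coeffs))[-2 * n_harmonics:]
    filtered = np.zeros_like(coeffs)
    filtered[keep] = coeffs[keep]
    # Evaluate the truncated Fourier series beyond the training window
    k = np.arange(n)
    future = np.arange(n, n + horizon)
    basis = np.exp(2j * np.pi * np.outer(future, k) / n)
    return np.real(basis @ filtered) / n

# Forecast each PC score, then map back to the spectral domain
future_scores = np.column_stack(
    [dft_extrapolate(scores[:, j], horizon) for j in range(scores.shape[1])])
forecast = pca.inverse_transform(future_scores)
print(forecast.shape)  # (51, 200)
```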
Environmental Risk Assessment of dredging processes - application to Marin harbour (NW Spain)
NASA Astrophysics Data System (ADS)
Gómez, A. G.; García Alba, J.; Puente, A.; Juanes, J. A.
2014-04-01
A methodological procedure to estimate the environmental risk of dredging operations in aquatic systems has been developed. Environmental risk estimations are based on numerical model results, which provide an appropriate spatio-temporal framework of analysis to guarantee an effective decision-making process. The methodological procedure has been applied to a real dredging operation in the port of Marin (NW Spain). Results from Marin harbour confirmed the suitability of the developed methodology and its conceptual approaches as a comprehensive and practical management tool.
Workload Trend Analysis for the Military Graduate Medical Education Program in San Antonio
2005-05-25
[Extraction residue: fragments of the report's table of contents and list of figures, covering sections on procedures and craniotomy (Introduction and Methodology; Results and Discussion) and figures comparing WHMC and BAMC craniotomies and major vascular procedures by age group for FY 00-04, including average craniotomies by age group relative to the RRC-required average.]
Improvements in estimating proportions of objects from multispectral data
NASA Technical Reports Server (NTRS)
Horwitz, H. M.; Hyde, P. D.; Richardson, W.
1974-01-01
Methods for estimating proportions of objects and materials imaged within the instantaneous field of view of a multispectral sensor were developed further. Improvements in the basic proportion estimation algorithm were devised as well as improved alien object detection procedures. Also, a simplified signature set analysis scheme was introduced for determining the adequacy of signature set geometry for satisfactory proportion estimation. Averaging procedures used in conjunction with the mixtures algorithm were examined theoretically and applied to artificially generated multispectral data. A computationally simpler estimator was considered and found unsatisfactory. Experiments conducted to find a suitable procedure for setting the alien object threshold yielded few definitive results. Mixtures procedures were used on a limited amount of ERTS data to estimate wheat proportion in selected areas. Results were unsatisfactory, partly because of the ill-conditioned nature of the pure signature set.
NASA Technical Reports Server (NTRS)
Mohler, R. R. J.; Palmer, W. F.; Smyrski, M. M.; Baker, T. C.; Nazare, C. V.
1982-01-01
A number of methods which can provide information concerning crop acreages on the basis of a utilization of multispectral scanner (MSS) data require for their implementation a comparatively large amount of labor. The present investigation is concerned with a project designed to improve the efficiency of analysis through increased automation. The Caesar technique was developed to realize this objective. The processability rates of the Caesar procedure versus the historical state-of-the-art proportion estimation procedures were determined in an experiment. Attention is given to the study site, the aggregation technology, the results of the aggregation test, and questions of error characterization. It is found that the Caesar procedure, which has been developed for the spring small grains region of North America, is highly efficient and provides accurate results.
Critical analysis of radiologist-patient interaction.
Morris, K J; Tarico, V S; Smith, W L; Altmaier, E M; Franken, E A
1987-05-01
A critical incident interview technique was used to identify features of radiologist-patient interactions considered effective and ineffective by patients. During structured interviews with 35 radiology patients and five patients' parents, three general categories of physician behavior were described: attention to patient comfort, explanation of procedure and results, and interpersonal sensitivity. The findings indicated that patients are sensitive to physicians' interpersonal styles and that they want physicians to explain procedures and results in an understandable manner and to monitor their well-being during procedures. The sample size of the study is small; thus further confirmation is needed. However, the implications for training residents and practicing radiologists in these behaviors are important in the current competitive medical milieu.
The geometry of structural equilibrium
2017-01-01
Building on a long tradition from Maxwell, Rankine, Klein and others, this paper puts forward a geometrical description of structural equilibrium which contains a procedure for the graphic analysis of stress resultants within general three-dimensional frames. The method is a natural generalization of Rankine’s reciprocal diagrams for three-dimensional trusses. The vertices and edges of dual abstract 4-polytopes are embedded within dual four-dimensional vector spaces, wherein the oriented areas of generalized polygons give all six components (axial and shear forces with torsion and bending moments) of the stress resultants. The relevant quantities may be readily calculated using four-dimensional Clifford algebra. As well as giving access to frame analysis and design, the description resolves a number of long-standing problems with the incompleteness of Rankine’s description of three-dimensional trusses. Examples are given of how the procedure may be applied to structures of engineering interest, including an outline of a two-stage procedure for addressing the equilibrium of loaded gridshell roofs. PMID:28405361
Modeling Woven Polymer Matrix Composites with MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M. (Technical Monitor)
2000-01-01
NASA's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) is used to predict the elastic properties of plain weave polymer matrix composites (PMCs). The traditional one-step three-dimensional homogenization procedure that has been used in conjunction with MAC/GMC for modeling woven composites in the past is inaccurate due to the lack of shear coupling inherent to the model. However, by performing a two-step homogenization procedure in which the woven composite repeating unit cell is homogenized independently in the through-thickness direction prior to homogenization in the plane of the weave, MAC/GMC can now accurately model woven PMCs. This two-step procedure is outlined and implemented, and predictions are compared with results from the traditional one-step approach and other models and experiments from the literature. Full coupling of this two-step technique with MAC/GMC will result in a widely applicable, efficient, and accurate tool for the design and analysis of woven composite materials and structures.
Gough, H; Luke, G A; Beeley, J A; Geddes, D A
1996-02-01
The aim of this project was to develop an analytical procedure with the required level of sensitivity for the determination of glucose concentrations in small volumes of unstimulated fasting whole saliva. The technique involves high-performance ion-exchange chromatography at high pH and pulsed amperometric detection. It has a high level of reproducibility, a sensitivity as low as 0.1 µmol/l and requires only 50-microliter samples (sensitivity = 0.002 pmol). Inhibition of glucose metabolism, by procedures such as collection into 0.1% (w/v) sodium fluoride, was shown to be essential if accurate results are to be obtained. Collection onto ice followed by storage at -20 degrees C was shown to be unsuitable and resulted in glucose loss by degradation. There were inter- and intraindividual variations in the glucose concentration in unstimulated mixed saliva (range: 0.02-0.4 mmol/l). The procedure can be used for the analysis of other salivary carbohydrates and for monitoring the clearance of dietary carbohydrates from the mouth.
Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Cheng, Ron-Bin
2010-01-01
A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Mario E.
An area in earthquake risk reduction that needs an urgent examination is the selection of earthquake records for nonlinear dynamic analysis of structures. An often-mentioned shortcoming from results of nonlinear dynamic analyses of structures is that these results are limited to the type of records that these analyses use as input data. This paper proposes a procedure for selecting earthquake records for nonlinear dynamic analysis of structures. This procedure uses a seismic damage index evaluated using the hysteretic energy dissipated by a Single Degree of Freedom (SDOF) system representing a multi-degree-of-freedom structure responding to an earthquake record, and the plastic work capacity of the system at collapse. The type of structural system is considered using simple parameters. The proposed method is based on the evaluation of the damage index for a suite of earthquake records and a selected type of structural system. A set of 10 strong ground motion records is analyzed to show an application of the proposed procedure for selecting earthquake records for structural design.
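In the notation below (ours, not necessarily the paper's), the damage index described in the abstract can be written compactly as a ratio of energy demand to energy capacity:

```latex
I_D = \frac{E_H}{W_p}
```

where E_H is the hysteretic energy dissipated by the equivalent SDOF system under a given record and W_p is the plastic work capacity of the system at collapse; records can then be ranked or selected by the damage they impose.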
Token Economy: A Systematic Review of Procedural Descriptions.
Ivy, Jonathan W; Meindl, James N; Overley, Eric; Robson, Kristen M
2017-09-01
The token economy is a well-established and widely used behavioral intervention. A token economy comprises six procedural components: the target response(s), a token that functions as a conditioned reinforcer, backup reinforcers, and three interconnected schedules of reinforcement. Despite decades of applied research, the extent to which the procedures of a token economy are described in complete and replicable detail has not been evaluated. Given the inherent complexity of a token economy, an analysis of the procedural descriptions may benefit future token economy research and practice. Articles published between 2000 and 2015 that included implementation of a token economy within an applied setting were identified and reviewed with a focus on evaluating the thoroughness of procedural descriptions. The results show that token economy components are regularly omitted or described in vague terms. Of the articles included in this analysis, only 19% (18 of 96 articles reviewed) included replicable and complete descriptions of all primary components. Missing or vague component descriptions could negatively affect future research or applied practice. Recommendations are provided to improve component descriptions.
The development of a purification procedure for saxitoxin-induced protein.
Smith, D S; Kitts, D D; Fenske, B; Owen, T G; Shyng, S
1995-02-01
A simple economical procedure for purifying saxitoxin-induced protein (SIP) from crude extracts of the small shore crab, Hemigrapsus oregenesis, was developed. (NH4)2SO4 precipitation, chymotrypsin digestion, heat treatment, gel filtration and ion-exchange-chromatography procedures were evaluated in purifying SIP. An enzyme immunoassay was used to determine the SIP yield and relative purity at each step of three procedures, thus permitting an assessment of the conditions required for maximum recovery. Response surface analysis was used in an attempt to determine the optimum temperature and exposure time for the heat treatment. A 20 min incubation at 65 degrees C was confirmed by electrophoretic analysis to be the best combination of time and temperature for achieving both an acceptable yield and purity of SIP. SIP in desalted concentrate was shown to be resistant to chymotrypsin proteolysis; however, this enzyme had deleterious effects on SIP purification at later stages of the procedure. The omission of the chymotrypsin digestion, and the inclusion of gel-filtration chromatography in the final clean-up step, resulted in the purification of SIP comparable with that achieved with affinity chromatography.
Situational Analysis of Essential Surgical Care Management in Iran Using the WHO Tool
Kalhor, Rohollah; Keshavarz Mohamadi, Nastaran; Khalesi, Nader; Jafari, Mehdi
2016-01-01
Background: Surgery is an essential component of health care, yet it has usually been overlooked in public health across the world. Objectives: This study aimed to perform a situational analysis of essential surgical care management at district hospitals in Iran. Materials and Methods: This research was a descriptive and cross-sectional study performed at 42 first-referral district hospitals of Iran in 2013. The World Health Organization (WHO) Tool for the situational analysis of emergency and essential care was used for data collection in four domains of facilities and equipment, human resources, surgical interventions, and infrastructure. Data analysis was conducted using simple descriptive statistical methods. Results: In this study, 100% of the studied hospitals had oxygen cylinders, running water, electricity, anesthesia machines, emergency departments, archives of medical records, and X-ray machines. In 100% of the surveyed hospitals, specialists in surgery, anesthesia, and obstetrics and gynecology were available as full-time staff. Life-saving procedures were performed in the majority of the hospitals. Among urgent procedures, neonatal surgeries were conducted in 14.3% of the hospitals. Regarding non-urgent procedures, acute burn management was conducted in 38.1% of the hospitals. Also, a few other procedures such as cricothyrotomy and foreign body removal were performed in 85.7% of the hospitals. Conclusions: The results indicated that suitable facilities and equipment, human resources, and infrastructure were available in the district hospitals in Iran. These findings showed that there is potential for the district hospitals to provide care in a wider spectrum. PMID:27437121
ERIC Educational Resources Information Center
Kleppinger, E. W.; And Others
1984-01-01
Although determination of phosphorus is important in biology, physiology, and environmental science, traditional gravimetric and colorimetric methods are cumbersome and lack the requisite sensitivity. Therefore, a derivative activation analysis method is suggested. Background information, procedures, and results are provided. (JN)
Determination of Reaction Stoichiometries by Flow Injection Analysis.
ERIC Educational Resources Information Center
Rios, Angel; And Others
1986-01-01
Describes a method of flow injection analysis intended for calculation of complex-formation and redox reaction stoichiometries based on a closed-loop configuration. The technique is suitable for use in undergraduate laboratories. Information is provided for equipment, materials, procedures, and sample results. (JM)
Separation and Analysis of Citral Isomers.
ERIC Educational Resources Information Center
Sacks, Jeff; And Others
1983-01-01
Provides background information, procedures, and results of an experiment designed to introduce undergraduates to the technique of steam distillation as a means of isolating thermally sensitive compounds. Chromatographic techniques (HPLC) and mass spectrometric analysis are used in the experiment, which requires three laboratory periods. (JN)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-18
... Act; Analysis and Sampling Procedures; Extension of Comment Period AGENCY: Environmental Protection..., 2010, EPA proposed changes to analysis and sampling test procedures in wastewater regulations. These...
Marine stratocumulus cloud characteristics from multichannel satellite measurements
NASA Technical Reports Server (NTRS)
Durkee, Philip A.; Mineart, Gary M.
1990-01-01
Understanding the effects of aerosols on the microphysical characteristics of marine stratocumulus clouds, and the resulting influence on cloud radiative properties, is a primary goal of FIRE. The potential for observing variations of cloud characteristics that might be related to variations of available aerosols is studied. Some results from theoretical estimates of cloud reflectance are presented. Also presented are the results of comparisons between aircraft-measured microphysical characteristics and satellite-detected radiative properties of marine stratocumulus clouds. These results are extracted from Mineart, where the analysis procedures and a full discussion of the observations are presented. Only a brief description of the procedures and the composite results are presented here.
Modal Analysis for Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
The MANGO software provides a solution for improving small-signal stability of power systems by adjusting operator-controllable variables using PMU measurements. System oscillation problems are one of the major threats to grid stability and reliability in California and the Western Interconnection. These problems result in power fluctuations and lower grid operation efficiency, and may even lead to large-scale grid breakup and outages. The MANGO software aims to solve this problem by automatically generating recommended operation procedures, termed Modal Analysis for Grid Operation (MANGO), to improve damping of inter-area oscillation modes. The MANGO procedure includes three steps: recognizing small-signal stability problems, implementing operating point adjustment using modal sensitivity, and evaluating the effectiveness of the adjustment. The MANGO software package is designed to help implement the MANGO procedure.
Crack cause analysis of a graphite nozzle throat insert
NASA Astrophysics Data System (ADS)
Sun, Lin; Bao, Futing; Zhao, Yu; Hou, Lian; Hui, Weihua; Zhang, Ning; Shi, Wei
2017-08-01
With the objective of determining the cause of a through crack at an angle of 45° and a breach during a firing test, a simplified analysis procedure with consideration of the structure gap was established to simulate the thermo-structural response of a nozzle. By neglecting erosion and pyrolysis of the insulating materials and establishing temperature-dependent or anisotropic material models, ANSYS Parametric Design Language codes were written to perform the fully coupled thermal-structural simulation. A quasi-1D flow was calculated to supply boundary conditions. A study of mesh independence and time-step independence was also conducted to evaluate the simulated results. It was found that shortly after ignition, compressive stress in the x direction and tensile stress in the y direction contributed to the anomalies. Through contact status analysis, inappropriate gap design was identified as the origin of the excessive stress, which was the primary cause of these anomalies during the firing test. Simulation results were in good agreement with firing test results. In addition, the simplified analysis procedure was proven effective. Gap size should be carefully addressed in future designs.
Seamans, David P; Louka, Boshra F; Fortuin, F David; Patel, Bhavesh M; Sweeney, John P; Lanza, Louis A; DeValeria, Patrick A; Ezrre, Kim M; Ramakrishna, Harish
2016-10-01
The surgical and procedural specialties are continually evolving their methods to include more complex and technically difficult cases. These cases can be longer and incorporate multiple teams in a different model of operating room synergy. Patients are frequently older, with comorbidities adding to the complexity of these cases. Recording of this environment has recently become more feasible with advances in video and audio capture systems often used in the simulation realm. We began using live capture to record a new procedure shortly after starting these cases in our institution. This has provided continued assessment and evaluation of live procedures. The goal was to improve human factors and address situational challenges through review and debriefing. B-Line Medical's LiveCapture video system was used to record successive transcatheter aortic valve replacement (TAVR) procedures in our cardiac catheterization laboratory. An illustrative case is used to discuss analysis and debriefing with this system; this case resulted in long-term changes to our approach to these cases. The video capture documented rare events during one of our TAVR procedures, and analysis and debriefing led to definitive changes in our practice. While there are hurdles to the use of this technology in every institution, the ongoing use of video capture, analysis, and debriefing may play an important role in the future of patient safety and human factors analysis in the operating environment.
Rail-highway crossing accident prediction analysis
DOT National Transportation Integrated Search
1987-04-01
This report contains technical results that have been produced in a study to revise and update the DOT rail-highway crossing resource allocation procedure. This work has resulted in new accident prediction and severity formulas, a modified and ...
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Steinmetz, G. G.
1979-01-01
A recent modification of the methodology of profile analysis, which allows testing for differences between two functions as a whole with a single test, rather than point by point with multiple tests, is discussed. The modification is applied to the examination of the issue of motion/no-motion conditions as shown by the lateral deviation curve as a function of engine-cut speed of a piloted 737-100 simulator. The results of this application are presented along with those of more conventional statistical test procedures on the same simulator data.
Human Factors Analysis to Improve the Processing of Ares-1 Launch Vehicle
NASA Technical Reports Server (NTRS)
Dippolito, Gregory M.; Stambolian, Damon B.
2011-01-01
The Constellation Program (CxP) is composed of an array of vehicles intended for missions to the Moon and Mars. The Ares vehicle, one of the components of CxP, goes through several stages of processing before it is launched at the Kennedy Space Center. In order to have efficient and effective ground processing inside and outside the vehicle, all of the ground processing activities should be analyzed. The analysis for this program was performed by engineers, technicians, and human factors experts with spacecraft processing experience. Data were gathered by observing human activities within physical mockups. The paper will focus on the procedures, analysis, and results from these observations.
Completely automated modal analysis procedure based on the combination of different OMA methods
NASA Astrophysics Data System (ADS)
Ripamonti, Francesco; Bussini, Alberto; Resta, Ferruccio
2018-03-01
In this work a completely automated output-only modal analysis procedure is presented and its benefits are listed. Based on the merging of different Operational Modal Analysis methods and a statistical approach, the identification process has been improved, becoming more robust and returning only the real natural frequencies, damping ratios, and mode shapes of the system. The effect of temperature can be taken into account as well, leading to a better tool for automated Structural Health Monitoring. The algorithm has been developed and tested on a numerical model of a scaled three-story steel building in the laboratories of Politecnico di Milano.
A strategy for selecting data mining techniques in metabolomics.
Banimustafa, Ahmed Hmaidan; Hardy, Nigel W
2012-01-01
There is a general agreement that the development of metabolomics depends not only on advances in chemical analysis techniques but also on advances in computing and data analysis methods. Metabolomics data usually requires intensive pre-processing, analysis, and mining procedures. Selecting and applying such procedures requires attention to issues including justification, traceability, and reproducibility. We describe a strategy for selecting data mining techniques which takes into consideration the goals of data mining techniques on the one hand, and the goals of metabolomics investigations and the nature of the data on the other. The strategy aims to ensure the validity and soundness of results and promote the achievement of the investigation goals.
NASA Astrophysics Data System (ADS)
Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya
2018-03-01
Simplified and powerful image processing procedures to separate paddy of the Thai jasmine rice variety KHAW DOK MALI 105 from paddy of the sticky rice variety RD6 were proposed. The procedures consist of image thresholding, image chain coding, and curve fitting using a polynomial function. From the fitting, three parameters (perimeter, area, and eccentricity) were calculated for each variety. Finally, the overall parameters were determined using principal component analysis. The results show that these procedures can effectively separate the two varieties.
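A minimal sketch of this kind of pipeline, assuming scikit-image and scikit-learn as stand-ins for the authors' unspecified tooling. The built-in 'coins' sample image substitutes for real paddy photographs, and the thresholding/region-properties route replaces the explicit chain-code step while yielding the same three features (perimeter, area, eccentricity):

```python
import numpy as np
from skimage import data, filters, measure
from sklearn.decomposition import PCA

def grain_features(image):
    """Per-object shape features via Otsu thresholding and connected
    components (the labelled object boundaries play the role of the
    chain code): perimeter, area and eccentricity."""
    mask = image > filters.threshold_otsu(image)
    labels = measure.label(mask)
    feats = [(r.perimeter, r.area, r.eccentricity)
             for r in measure.regionprops(labels)
             if r.area > 50]              # discard specks of noise
    return np.array(feats)

# Stand-in image: skimage's 'coins' sample plays the role of a paddy
# photograph here; in practice one image per rice variety would be used.
X = grain_features(data.coins())
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize the three features

# Project onto principal components; grains from different varieties
# would separate along the leading components.
scores = PCA(n_components=2).fit_transform(X)
print(scores[:5])
```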
Application of the differential decay-curve method to γ-γ fast-timing lifetime measurements
NASA Astrophysics Data System (ADS)
Petkov, P.; Régis, J.-M.; Dewald, A.; Kisyov, S.
2016-10-01
A new procedure for the analysis of delayed-coincidence lifetime experiments focused on the Fast-timing case is proposed following the approach of the Differential decay-curve method. Examples of application of the procedure on experimental data reveal its reliability for lifetimes even in the sub-nanosecond range. The procedure is expected to improve both precision/reliability and treatment of systematic errors and scarce data as well as to provide an option for cross-check with the results obtained by means of other analyzing methods.
Polarographic study on the presence of antibiotics in food.
Bottari, Emilio; Colombi, Massimiliano; De Bernardis, Chiara; Festa, Maria Rosa; Rampino, Vittorio
2018-07-01
EU and Italian laws dealing with the presence of antibiotics or, more generally, drugs in food establish limits for different kinds of food. Suitable rules exist concerning the medical treatment of cattle in relation to the production of milk and meat. The adoption of a procedure to verify compliance with the legal limits is therefore necessary. In this paper, the presence of different classes of antibiotics in milk and in homogenised meat is investigated. Generally, HPLC methods are applied for this purpose. Here, the application of polarographic analysis is studied and the results are compared with the chromatographic ones. The comparison covers all phases of the analysis, including sample preparation. The results show the advantage of the proposed procedure.
Design and Optimization of Composite Gyroscope Momentum Wheel Rings
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2007-01-01
Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.
An overall decline both in recollection and familiarity in healthy aging.
Pitarque, Alfonso; Sales, Alicia; Meléndez, Juan C; Mayordomo, Teresa; Satorres, Encar
2015-01-01
In the area of recognition memory, the experimental data have been inconsistent about whether or not familiarity declines in healthy aging. A recent meta-analysis concluded that familiarity is impaired when estimated with the remember-know procedure, but not with the process-dissociation procedure. We present an associative recognition experiment with remember-know judgments that allow us to estimate both recollection and familiarity using both procedures in the same task and with the same participants (a sample of healthy older people and another sample of young people). Moreover, we performed a within-subjects manipulation of the type of materials (pairs of words or pairs of pictures), and the repetition or not of the pairs during the study phase. The results show that familiarity, estimated using both estimation procedures, declines significantly with age, although the effect size obtained with the process-dissociation procedure is significantly smaller than the one obtained with the remember-know procedure. Our results show that aging is associated with significant decreases both in recollection and, to a lesser extent, familiarity.
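For concreteness, the standard estimating equations behind the two procedures named above (the textbook formulations, not necessarily the exact variants used in this experiment) are, for the process-dissociation procedure,

```latex
R = p(\text{inclusion}) - p(\text{exclusion}), \qquad
F = \frac{p(\text{exclusion})}{1 - R}
```

while the remember-know procedure, under the usual independence assumption, estimates familiarity as F = p(know) / (1 - p(remember)).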
Recent developments in the Dorfman-Berbaum-Metz procedure for multireader ROC study analysis.
Hillis, Stephen L; Berbaum, Kevin S; Metz, Charles E
2008-05-01
The Dorfman-Berbaum-Metz (DBM) method has been one of the most popular methods for analyzing multireader receiver-operating characteristic (ROC) studies since it was proposed in 1992. Despite its popularity, the original procedure has several drawbacks: it is limited to jackknife accuracy estimates, it is substantially conservative, and it is not based on a satisfactory conceptual or theoretical model. Recently, solutions to these problems have been presented in three papers. Our purpose is to summarize and provide an overview of these recent developments. We present and discuss the recently proposed solutions for the various drawbacks of the original DBM method. We compare the solutions in a simulation study and find that they result in improved performance for the DBM procedure. We also compare the solutions using two real data studies and find that the modified DBM procedure that incorporates these solutions yields more significant results and clearer interpretations of the variance component parameters than the original DBM procedure. We recommend using the modified DBM procedure that incorporates the recent developments.
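As background, the jackknife accuracy estimates on which the original DBM method is built are pseudovalues of an ROC accuracy measure; in generic notation (ours, not any one paper's):

```latex
Y_{ijk} = c\,\hat{\theta}_{ij} - (c-1)\,\hat{\theta}_{ij(k)}
```

where θ̂_ij is the accuracy (e.g., the area under the ROC curve) for modality i and reader j computed from all c cases, and θ̂_ij(k) is the same quantity with case k removed; the pseudovalues are then analyzed with a mixed-effects ANOVA.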
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling that included the fictional perfect procedure resulted in a more cost-effective combination when it was used with the lower cost/higher variance representative of the existing procedures.
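The cost/variance trade-off that drives such designs can be seen in the classical double-sampling (regression estimator) formulas; this is the generic textbook version, not the specific multi-crop estimator of the abstract:

```latex
V\!\left(\bar{y}_{\mathrm{lr}}\right) \approx
S_y^2\left[\frac{1-\rho^2}{n} + \frac{\rho^2}{n'}\right],
\qquad C = c'\,n' + c\,n
```

where n' is the size of the large, cheap first-phase sample (cost c' per unit), n the size of the expensive second-phase subsample (cost c per unit), and ρ the correlation between the two measurement procedures; minimizing V subject to the linear cost constraint C gives the optimal allocation between the two phases.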
Analytical methods of the U.S. Geological Survey's New York District Water-Analysis Laboratory
Lawrence, Gregory B.; Lincoln, Tricia A.; Horan-Ross, Debra A.; Olson, Mark L.; Waldron, Laura A.
1995-01-01
The New York District of the U.S. Geological Survey (USGS) in Troy, N.Y., operates a water-analysis laboratory for USGS watershed-research projects in the Northeast that require analyses of precipitation and of dilute surface water and soil water for major ions; it also provides analyses of certain chemical constituents in soils and soil-gas samples. This report presents the methods for chemical analyses of water samples, soil-water samples, and soil-gas samples collected in watershed-research projects. The introduction describes the general materials and techniques for each method and explains the USGS quality-assurance program and data-management procedures; it also explains the use of cross references to the three most commonly used methods manuals for analysis of dilute waters. The body of the report describes the analytical procedures for (1) solution analysis, (2) soil analysis, and (3) soil-gas analysis. The methods are presented in alphabetical order by constituent. The method for each constituent is preceded by (1) reference codes for pertinent sections of the three manuals mentioned above, (2) a list of the method's applications, and (3) a summary of the procedure. The methods section for each constituent contains the following categories: instrumentation and equipment, sample preservation and storage, reagents and standards, analytical procedures, quality control, maintenance, interferences, safety considerations, and references. Sufficient information is presented for each method to allow the resulting data to be appropriately used in environmental investigations.
NASA Astrophysics Data System (ADS)
Walaszek, Damian; Senn, Marianne; Wichser, Adrian; Faller, Markus; Wagner, Barbara; Bulska, Ewa; Ulrich, Andrea
2014-09-01
This work describes an evaluation of a strategy for multi-elemental analysis of typical ancient bronzes (copper, lead bronze and tin bronze) by means of laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS). The samples, originating from archeological experiments on ancient metal smelting processes using direct reduction in a ‘bloomery’ furnace as well as historical casting techniques, were investigated with the use of the previously proposed analytical procedure, including metallurgical observation and preliminary visual estimation of the homogeneity of the samples. The results of LA-ICPMS analysis were compared to the results of bulk composition obtained by X-ray fluorescence spectrometry (XRF) and by inductively coupled plasma mass spectrometry (ICPMS) after acid digestion. These results were coherent for most of the elements, confirming the usefulness of the proposed analytical procedure; however, the reliability of the quantitative information about the content of the most heterogeneously distributed elements is also discussed in more detail.
40 CFR 246.201-7 - Recommended procedures: Cost analysis.
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 246.201-7 Recommended procedures: Cost analysis. After potential markets have been located (but prior... residual solid waste have been established, an analysis should be conducted which compares the costs of the...
40 CFR 246.202-6 - Recommended procedures: Cost analysis.
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 246.202-6 Recommended procedures: Cost analysis. After potential markets have been identified (but... residual solid waste have been established, an analysis should be conducted which compares the costs of the...
40 CFR 246.200-8 - Recommended procedures: Cost analysis.
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 246.200-8 Recommended procedures: Cost analysis. After potential markets have been located (but prior... paper and residual solid waste have been established, an analysis should be conducted which compares the...
Filimberti, E; Degl'Innocenti, S; Borsotti, M; Quercioli, M; Piomboni, P; Natali, I; Fino, M G; Caglieresi, C; Criscuoli, L; Gandini, L; Biggeri, A; Maggi, M; Baldi, E
2013-05-01
We report the results of the first three trials of an external quality control (EQC) programme performed by 71 laboratories carrying out semen analysis in the Tuscany Region (Italy). At the end of the second trial, participants were invited to attend a teaching course illustrating the procedures recommended by WHO (5th edition) and encouraging adherence to them. Results of the first three trials of the EQC documented a huge variability in both procedures and results. The highest variability was found for morphology (CV above 80% for all the trials), followed by count (CV of about 60% for all the trials) and motility (CV below 30% for all the trials). When results of sperm count and morphology were divided according to the method used, mean CV values did not show significant differences. The CV for morphology dropped significantly at the third trial for most methods, indicating the usefulness of the teaching course for morphology assessment. Conversely, no differences were observed after the course for motility or for most methods of evaluating count, although CV values were lower at the second and third trials for the laboratories using the Burker cytometer. When results were divided according to tertiles of activity, the lowest mean bias values (difference between each laboratory result and the median value of the results) for count and morphology were observed for laboratories in the third tertile (performing over 200 semen analyses per year). Of interest, mean bias values for concentration dropped significantly at the third trial for low-activity laboratories. In conclusion, the lack of agreement in semen analysis results in Tuscany is mainly attributable to the activity level and experience of the laboratory. Our study points out the importance of participating in EQC programmes and periodic teaching courses as well as the use of WHO-recommended standardized procedures to increase precision and to allow the use of WHO reference values. © 2013 American Society of Andrology and European Academy of Andrology.
Baniya, Ramkaji; Upadhaya, Sunil; Subedi, Subash Chandra; Khan, Jahangir; Sharma, Prabin; Mohammed, Tabrez Shaik; Bachuwa, Ghassan; Jamil, Laith H
2017-12-01
Two novel enteroscopic procedures, balloon enteroscopy and spiral enteroscopy, have revolutionized the diagnostic and therapeutic approach to small-bowel disorders. These disorders, which historically required surgical interventions, are now investigated and managed nonsurgically. Only a few weakly powered studies have compared the outcomes of spiral enteroscopy and balloon enteroscopy. We conducted a systematic review and meta-analysis to compare the efficacy and safety of these 2 procedures. PubMed, Cochrane Library, Scopus, and clinicaltrials.gov databases were searched for all studies published up to January 12, 2017 comparing the efficacy and safety of balloon enteroscopy (single or double) and spiral enteroscopy. Primary outcomes of interest were diagnostic and therapeutic success rates. Other outcomes included procedure length, depth of maximal insertion (DMI), rate of complete enteroscopy, and adverse events. We calculated odds ratios (ORs) for categorical variables and mean differences (MDs) for continuous variables. The Mantel-Haenszel method was used to analyze the data. Fixed and random effect models were used for <50% heterogeneity and >50% heterogeneity, respectively. Eight studies met the inclusion criteria for this meta-analysis. A total of 615 procedures were analyzed, which included 394 balloon enteroscopy and 221 spiral enteroscopy procedures. There were no significant differences in diagnostic and therapeutic success rates (OR, 1.27; 95% confidence interval [CI], .86-1.88; P = .22; and OR, 1.23; 95% CI, .82-1.84; P = .32, respectively) between the 2 procedures. Similarly, DMI was not significantly different between the 2 groups (MD, 26.29; 95% CI, 20.92-73.49; P = .28). However, the procedure time was significantly shorter for the spiral enteroscopy group compared with the balloon enteroscopy group (MD, 11.26; 95% CI, 2.72-19.79; P = .010). A subgroup analysis comparing double balloon enteroscopy with spiral enteroscopy yielded similar results. Both procedures achieved similar diagnostic and therapeutic outcomes with similar depths of insertion. Spiral enteroscopy has the benefit of a shorter procedure time. Copyright © 2017 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
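For reference, the Mantel-Haenszel pooled odds ratio used in such analyses has the standard form (generic notation; the per-study 2 x 2 cell counts are a_i, b_i, c_i, d_i with total n_i):

```latex
\widehat{OR}_{\mathrm{MH}} =
\frac{\sum_i a_i d_i / n_i}{\sum_i b_i c_i / n_i}
```

where the sums run over the included studies; this is a weighted average of the per-study odds ratios with weights b_i c_i / n_i, which is why sparse studies contribute little to the pooled estimate.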
An Alternative View of Some FIA Sample Design and Analysis Issues
Paul C. Van Deusen
2005-01-01
Sample design and analysis decisions are the result of compromises and inputs from many sources. The end result would likely change if different individuals or groups were involved in the planning process. Discussed here are some alternatives to the procedures that are currently being used for the annual inventory. The purpose is to indicate that alternatives exist and...
Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizi, Stephen A.
2012-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
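A minimal sketch of the proper orthogonal decomposition step common to all three basis-selection methods, implemented via the SVD on synthetic snapshot data (the smooth orthogonal decomposition variants are not shown, and the energy threshold is an illustrative assumption):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a snapshot matrix
    (rows = time steps, columns = physical degrees of freedom).
    Returns the smallest basis capturing `energy` of the variance."""
    q = snapshots - snapshots.mean(axis=0)        # remove mean response
    _, s, vt = np.linalg.svd(q, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return vt[:r].T, s[:r]

# Synthetic snapshots: 2000 time steps of a 120-DOF response dominated
# by three underlying modes (a stand-in for simulation output).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((2000, 3)) @ rng.standard_normal((3, 120))
basis, sv = pod_basis(snapshots)

reduced = snapshots @ basis     # generalized (modal) coordinates
print(basis.shape, reduced.shape)   # e.g. (120, 3) (2000, 3)
```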
NASA Technical Reports Server (NTRS)
Nelson, C. C.; Nguyen, D. T.
1987-01-01
A new analysis procedure has been presented which solves for the flow variables of an annular pressure seal in which the rotor has a large static displacement (eccentricity) from the centered position. The present paper incorporates the solutions to investigate the effect of eccentricity on the rotordynamic coefficients. The analysis begins with a set of governing equations based on a turbulent bulk-flow model and Moody's friction factor equation. Perturbations of the flow variables yields a set of zeroth- and first-order equations. After integration of the zeroth-order equations, the resulting zeroth-order flow variables are used as input in the solution of the first-order equations. Further integration of the first order pressures yields the eccentric rotordynamic coefficients. The results from this procedure compare well with available experimental and theoretical data, with accuracy just as good or slightly better than the predictions based on a finite-element model.
van der Vorm, Lisa N; Hendriks, Jan C M; Laarakkers, Coby M; Klaver, Siem; Armitage, Andrew E; Bamberg, Alison; Geurts-Moespot, Anneke J; Girelli, Domenico; Herkert, Matthias; Itkonen, Outi; Konrad, Robert J; Tomosugi, Naohisa; Westerman, Mark; Bansal, Sukhvinder S; Campostrini, Natascia; Drakesmith, Hal; Fillet, Marianne; Olbina, Gordana; Pasricha, Sant-Rayn; Pitts, Kelly R; Sloan, John H; Tagliaro, Franco; Weykamp, Cas W; Swinkels, Dorine W
2016-07-01
Absolute plasma hepcidin concentrations measured by various procedures differ substantially, complicating interpretation of results and rendering reference intervals method dependent. We investigated the degree of equivalence achievable by harmonization and the identification of a commutable secondary reference material to accomplish this goal. We applied technical procedures to achieve harmonization developed by the Consortium for Harmonization of Clinical Laboratory Results. Eleven plasma hepcidin measurement procedures (5 mass spectrometry based and 6 immunochemical based) quantified native individual plasma samples (n = 32) and native plasma pools (n = 8) to assess analytical performance and current and achievable equivalence. In addition, 8 types of candidate reference materials (3 concentrations each, n = 24) were assessed for their suitability, most notably in terms of commutability, to serve as secondary reference material. Absolute hepcidin values and reproducibility (intrameasurement procedure CVs 2.9%-8.7%) differed substantially between measurement procedures, but all were linear and correlated well. The limited current equivalence between methods (intermeasurement procedure CV 28.6%) was mainly attributable to differences in calibration and could thus be improved by harmonization with a common calibrator. Linear regression analysis and standardized residuals showed that a candidate reference material consisting of native lyophilized plasma with cryolyoprotectant was commutable for all measurement procedures. Mathematically simulated harmonization with this calibrator resulted in a maximum achievable equivalence of 7.7%. The secondary reference material identified in this study has the potential to substantially improve equivalence between hepcidin measurement procedures and contributes to the establishment of a traceability chain that will ultimately allow standardization of hepcidin measurement results. © 2016 American Association for Clinical Chemistry.
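A toy recalibration sketch of the harmonization idea described above (purely illustrative; the per-method biases, sample values, and calibrator levels are all invented) showing how a common commutable calibrator can shrink the intermeasurement CV:

```python
import numpy as np

rng = np.random.default_rng(4)
true = rng.uniform(1.0, 20.0, size=32)               # "true" hepcidin values
slopes = rng.uniform(0.6, 1.6, size=11)              # per-method calibration bias
offsets = rng.uniform(-0.5, 0.5, size=11)
measured = slopes[:, None] * true + offsets[:, None] + rng.normal(0, 0.2, (11, 32))

def inter_method_cv(x):
    """Mean across samples of the between-method CV (%)."""
    return np.mean(x.std(axis=0, ddof=1) / x.mean(axis=0)) * 100

# Harmonization: each method is linearly recalibrated against a common
# (commutable) calibrator measured at known target values.
cal_true = np.array([2.0, 8.0, 16.0])
cal_meas = slopes[:, None] * cal_true + offsets[:, None]
harmonized = np.empty_like(measured)
for m in range(11):
    b, a = np.polyfit(cal_meas[m], cal_true, 1)      # map method scale to common scale
    harmonized[m] = b * measured[m] + a

print(f"CV before: {inter_method_cv(measured):.1f}%, after: {inter_method_cv(harmonized):.1f}%")
```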
Evaluation of Second-Level Inference in fMRI Analysis
Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs
2016-01-01
We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and (2) data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take first-level (within-subjects) variability into account and models that do not. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, false discovery rate (FDR) correction, and a two-step procedure with a minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with a minimal cluster size yields the most stable results, followed by the familywise error rate correction. FDR correction yields the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578
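For the third phase described above, a minimal sketch of two of the named multiple-testing procedures, Bonferroni familywise control and Benjamini-Hochberg FDR control, applied to hypothetical voxelwise p-values:

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Familywise error rate control: reject p < alpha / m."""
    return pvals < alpha / len(pvals)

def benjamini_hochberg(pvals, alpha=0.05):
    """FDR step-up: find largest k with p_(k) <= k*alpha/m; reject all up to k."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = np.nonzero(pvals[order] <= thresh)[0]
    reject = np.zeros(m, dtype=bool)
    if passed.size:
        reject[order[:passed[-1] + 1]] = True
    return reject

# Hypothetical voxelwise p-values: 95% null, 5% true signal
rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(size=9500), rng.uniform(0, 1e-4, size=500)])
print("FWER rejections:", bonferroni(p).sum())
print("FDR  rejections:", benjamini_hochberg(p).sum())
```

As the abstract's findings suggest, the FDR procedure rejects more liberally than familywise control, which is exactly what makes its results more variable across datasets.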
DOT National Transportation Integrated Search
1994-02-01
This report describes the data collection procedures, the data analysis methods, and the results gained from the on-site evaluations. The content of the report is as follows: Chapter 2 - State Profiles. This chapter includes descriptions of the organ...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-24
...; Analysis and Sampling Procedures AGENCY: Environmental Protection Agency (EPA). ACTION: Final rule. SUMMARY... Contaminants Under the Safe Drinking Water Act; Analysis and Sampling Procedures. 75 FR 32295. June 8, 2010...
Description of data on the Nimbus 7 LIMS map archive tape: Temperature and geopotential height
NASA Technical Reports Server (NTRS)
Haggard, K. V.; Remsberg, E. E.; Grose, W. L.; Russell, J. M., III; Marshall, B. T.; Lingenfelser, G.
1986-01-01
The process by which analyses of the Limb Infrared Monitor of the Stratosphere (LIMS) experiment data were used to produce estimates of synoptic maps of temperature and geopotential height is described. In addition to a detailed description of the analysis procedure, several interesting features in the data are discussed, and these features are used to demonstrate how the analysis procedure produced the final maps and how one can estimate the uncertainties in the maps. In addition, features of the analysis are noted that would influence how one might use, or interpret, the results, including subjects such as smoothing and the interpretation of wave components. While some suggestions are made for an improved analysis of the data, it is shown that, in general, the maps are an excellent estimate of the synoptic fields.
Thermal-stress analysis for a wood composite blade
NASA Technical Reports Server (NTRS)
Fu, K. C.; Harb, A.
1984-01-01
A thermal-stress analysis of a wind turbine blade made of wood composite material is reported. First, the governing partial differential equation for heat conduction is derived; then a finite element procedure using a variational approach is developed for the solution of the governing equation, determining the temperature distribution throughout the blade. Next, based on the temperature distribution, a finite element procedure using a potential energy approach is applied to determine the thermal-stress distribution. The computed results are considered satisfactory. All computer programs are contained in the report.
[A new HPLC procedure for cyclamate in food with pre-chromatographic derivatization].
Schwedt, G; Hauck, M
1988-08-01
A high-pressure liquid chromatography (HPLC) procedure for the detection of cyclamate in liquid and solid samples is presented, which depends on oxidation and the reaction of cyclohexylamine with o-phthaldialdehyde to form a condensation product. The results of the HPLC analysis, using an RP-C18 separation system with UV detection at 242 nm, are reported. Concentrations from 2 to 400 mg/L can be determined in less than 2 h (HPLC analysis within 20 min) with relative standard deviations of 4%. Only for cucumber infusions were incomplete recoveries of 68% obtained.
Accuracy of remotely sensed data: Sampling and analysis procedures
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Oderwald, R. G.; Mead, R. A.
1982-01-01
A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, together with a listing of the computer program written to implement them. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, along with the resulting error matrices from the mapping effort of the San Juan National Forest. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is described, and a method is proposed for determining the reliability of change detection between two maps of the same area produced at different times.
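The error-matrix side of such an accuracy assessment reduces to a short computation; the following sketch (with invented counts) derives overall accuracy and the KHAT (kappa) statistic commonly used in this discrete multivariate framework:

```python
import numpy as np

# Hypothetical error (confusion) matrix: rows = map classes, cols = reference classes
cm = np.array([[65,  4,  2],
               [ 6, 81,  5],
               [ 0,  8, 29]], dtype=float)

n = cm.sum()
overall_accuracy = np.trace(cm) / n

# KHAT (kappa): agreement corrected for the agreement expected by chance
row, col = cm.sum(axis=1), cm.sum(axis=0)
expected = np.sum(row * col) / n ** 2
kappa = (overall_accuracy - expected) / (1 - expected)
print(f"overall accuracy = {overall_accuracy:.3f}, KHAT = {kappa:.3f}")
```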
Review of ESOC re-entry prediction results of Salyut-7/Kosmos-1686
NASA Technical Reports Server (NTRS)
Klinkrad, H.
1991-01-01
An overview is presented of activities at ESA/ESOC during the follow-up of the Salyut-7/Kosmos-1686 decay, and of related cooperative efforts with space agencies, research institutes, and national bodies within the ESA Member States, the U.S., and the USSR. A postflight analysis indicated areas for improvement in the forecast procedures, especially during the last day of the orbital lifetime. Correspondingly revised decay predictions are presented for Salyut-7/Kosmos-1686, and the improved procedures are verified by an analysis of the reentries of Kosmos-1402A and Kosmos-1402C.
Periodic response of nonlinear systems
NASA Technical Reports Server (NTRS)
Nataraj, C.; Nelson, H. D.
1988-01-01
A procedure is developed to determine approximate periodic solutions of autonomous and non-autonomous systems. The trigonometric collocation method (TCM) is formalized to allow for the analysis of relatively small order systems directly in physical coordinates. The TCM is extended to large order systems by utilizing modal analysis in a component mode synthesis strategy. The procedure was coded and verified by several check cases. Numerical results for two small order mechanical systems and one large order rotor dynamic system are presented. The method allows for the possibility of approximating periodic responses for large order forced and self-excited nonlinear systems.
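A minimal single-degree-of-freedom sketch of trigonometric collocation (not the paper's code; the Duffing parameters are arbitrary) that solves for the Fourier coefficients of a periodic response by forcing the equation residual to vanish at collocation points:

```python
import numpy as np
from scipy.optimize import fsolve

# Duffing oscillator x'' + c x' + x + eps x^3 = F cos(w t); seek a periodic
# solution as a truncated trigonometric series, collocating the residual.
c, eps, F, w = 0.1, 0.5, 1.0, 1.2
H = 5                                                  # harmonics retained
N = 2 * H + 1                                          # collocation points = unknowns
t = np.linspace(0.0, 2 * np.pi / w, N, endpoint=False)
k = np.arange(1, H + 1)
C, S = np.cos(np.outer(k * w, t)), np.sin(np.outer(k * w, t))

def residual(coeffs):
    a0, a, b = coeffs[0], coeffs[1:H + 1], coeffs[H + 1:]
    x = a0 + a @ C + b @ S                             # series and its derivatives
    xd = (b * k * w) @ C - (a * k * w) @ S
    xdd = -(a * (k * w) ** 2) @ C - (b * (k * w) ** 2) @ S
    return xdd + c * xd + x + eps * x ** 3 - F * np.cos(w * t)

coeffs = fsolve(residual, np.zeros(N))                 # solve the collocated system
amp1 = np.hypot(coeffs[1], coeffs[1 + H])              # fundamental harmonic amplitude
print(f"fundamental harmonic amplitude: {amp1:.3f}")
```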
Continuation of advanced crew procedures development techniques
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.; Evans, M. E.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.; Tatum, I. C.
1976-01-01
An operational computer program, the Procedures and Performance Program (PPP), which operates in conjunction with the Phase I Shuttle Procedures Simulator to provide a procedures recording and crew/vehicle performance monitoring capability, was developed. A technical synopsis of each task resulting in the development of the Procedures and Performance Program is provided. Conclusions and recommendations for action leading to improvements in the production of crew procedures development and crew training support are included. The PPP provides real-time CRT displays and post-run hardcopy output of procedures, difference procedures, performance data, parametric analysis data, and training script/training status data. During post-run analysis, the program is designed to support evaluation through the reconstruction of displays to any point in time. A permanent record of the simulation exercise can be obtained via hardcopy output of the display data and via transfer to the Generalized Documentation Processor (GDP). Reference procedures data may be transferred from the GDP to the PPP. An interface is provided with the all-digital trajectory program, the Space Vehicle Dynamics Simulator (SVDS), to support initial procedures timeline development.
Richman, David M; Grubb, Laura; Thompson, Samuel
2018-01-01
Strategic Incremental Rehearsal (SIR) is an effective method for teaching sight-word acquisition, but it has not been evaluated for use with adults with an intellectual disability, nor directly compared to ongoing instruction in the natural environment. An experimental analysis of sight-word acquisition via an alternating treatment design was conducted with a 23-year-old woman with Down syndrome. SIR was compared to the current reading instruction (CRI) in a classroom for young adults with intellectual disabilities. CRI procedures included non-contingent praise, receptive touch prompts ("touch the word bat"), echoic prompts ("say bat"), textual prompts ("read the word"), and pre-determined introduction of new words. SIR procedures included textual prompts on flash cards, contingent praise, corrective feedback, and mastery-based introduction of new words. The results indicated that SIR was associated with more rapid acquisition of sight words than CRI. Directions for future research could include systematic comparisons to other procedures, and evaluations of procedural permutations of SIR.
Kluge, Annette; Grauel, Britta; Burkolter, Dina
2013-03-01
Two studies are presented in which the design of a procedural aid and the impact of an additional decision aid for process control were assessed. In Study 1, a procedural aid was developed that avoids imposing unnecessary extraneous cognitive load on novices when controlling a complex technical system. This newly designed procedural aid positively affected germane load, attention, satisfaction, motivation, knowledge acquisition and diagnostic speed for novel faults. In Study 2, the effect of a decision aid for use before the procedural aid was investigated, which was developed based on an analysis of diagnostic errors committed in Study 1. Results showed that novices were able to diagnose both novel faults and practised faults, and were even faster at diagnosing novel faults. This research contributes to the question of how to optimally support novices in dealing with technical faults in process control. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Aiello, Francesco; Durgin, Jonathan; Daniel, Vijaya; Messina, Louis; Doucet, Danielle; Simons, Jessica; Jenkins, James; Schanzer, Andres
2017-10-01
Fenestrated endovascular aneurysm repair (FEVAR) allows endovascular treatment of thoracoabdominal and juxtarenal aneurysms previously outside the indications for use of standard devices. However, because of considerable device costs and increased procedure time, FEVAR is thought to result in financial losses for medical centers and physicians. We hypothesized that surgeon leadership in the coding, billing, and contractual negotiations for FEVAR procedures would increase medical center contribution margin (CM) and physician reimbursement. At the UMass Memorial Center for Complex Aortic Disease, a vascular surgeon with experience in medical finances is supported to manage the billing and coding of FEVAR procedures for medical center and physician reimbursement. A comprehensive financial analysis was performed for all FEVAR procedures (2011-2015), independent of insurance status, patient presentation, or type of device used. Medical center CM (actual reimbursement minus direct costs) was determined for each index FEVAR procedure and for all related subsequent procedures, inpatient or outpatient, 3 months before and 1 year subsequent to the index FEVAR procedure. Medical center CM for outpatient clinic visits, radiology examinations, vascular laboratory studies, and cardiology and pulmonary evaluations related to FEVAR was also determined. Surgeon reimbursement for the index FEVAR procedure, related adjunct procedures, and assistant surgeon reimbursement were also calculated. All financial analyses were performed and adjudicated by the UMass Department of Finance. The index hospitalization for 63 FEVAR procedures incurred $2,776,726 of direct costs and generated $3,027,887 in reimbursement, resulting in a positive CM of $251,160. Subsequent related hospital procedures (n = 26) generated a CM of $144,473. Outpatient clinic visits, radiologic examinations, and vascular laboratory studies generated an additional CM of $96,888. Direct cost analysis revealed that grafts accounted for the largest proportion of costs (55%), followed by supplies (12%), bed (12%), and operating room (10%). Total medical center CM for all FEVAR services was $492,521. Average surgeon reimbursement per FEVAR from 2011 to 2015 increased from $1601 to $2480 while the surgeon payment denial rate declined from 50% to 0%. Surgeon-led negotiations with the Centers for Medicare & Medicaid Services during 2015 resulted in a 27% increase in physician reimbursement for the remainder of 2015 ($2480 vs $3068/case) and a 91% increase in reimbursement from 2011 ($1601 vs $3068). Assistant surgeon reimbursement also increased ($266 vs $764). Physician leadership in the coding, billing, and contractual negotiations for FEVAR results in a positive medical center CM and increased physician reimbursement. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Rüter, Anders; Vikstrom, Tore
2009-01-01
Good staff procedure skills in a management group during incidents and disasters are believed to be a prerequisite for good management of the situation. However, this has not been demonstrated scientifically. Templates for evaluating results from performance indicators during simulation exercises have previously been tested. The aim of this study was to demonstrate that these indicators can be used as a tool for studying the relationship between good management skills and good staff procedure skills. The hypothesis was that good and structured work (staff procedure skills) in a hospital management group during simulation exercises in disaster medicine is related to good and timely decisions (good management skills). Results from 29 consecutive simulation exercises in which staff procedure skills and management skills were evaluated using quantitative measurements were included. The statistical analysis method used was simple linear regression with staff procedure skills as the response variable and management skills as the predictor variable. An overall significant relationship was identified between staff procedure skills and management skills (p < 0.05). This study suggests that there is a relationship between staff procedure skills and management skills in the educational setting used. Future studies are needed to demonstrate whether this can also be observed during actual incidents.
Hanna, Andrew N; Datta, Jashodeep; Ginzberg, Sara; Dasher, Kevin; Ginsberg, Gregory G; Dempsey, Daniel T
2018-04-01
Although laparoscopic Heller myotomy (LHM) has been the standard of care for achalasia, peroral endoscopic myotomy (POEM) has gained popularity as a viable alternative. This retrospective study aimed to compare patient-reported outcomes between LHM and POEM in a consecutive series of achalasia patients with more than 1 year of follow-up. We reviewed demographic and procedure-related data for patients who underwent either LHM or POEM for achalasia between January 2011 and May 2016. Phone interviews were conducted assessing post-procedure achalasia symptoms via the Eckardt score and achalasia severity questionnaire (ASQ). Demographics, disease factors, and survey results were compared between LHM and POEM patients using univariate analysis. Significant predictors of procedure failure were analyzed using univariate and multivariate analysis. There were no serious complications in 110 consecutive patients who underwent LHM or POEM during the study period, and 96 (87%) patients completed phone surveys. There was a nonsignificant trend toward better patient-reported outcomes with POEM. There were significant differences in patient characteristics, including sex, achalasia type, mean residual lower esophageal sphincter pressure (rLESP), and follow-up time. The only univariate predictors of an unsatisfactory Eckardt score or ASQ were longer follow-up and lower rLESP, with follow-up length being the only predictor on multivariate analysis. There were significant demographic and clinical differences in patient selection for POEM vs LHM in our group. Although the 2 procedures have similar patient-reported effectiveness, subjective outcomes seem to decline as a result of time rather than procedure type. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas
NASA Astrophysics Data System (ADS)
Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.
In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters, and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainty are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated flood extent to the observed extent, taken from airborne pictures), the uncertainty of the essential input data set (the digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties in the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum DEM resolution required for flood simulation and the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainty. Especially socio-economic information and monetary transfer functions required for a damage risk analysis show high uncertainty. Therefore this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
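The Monte Carlo step can be sketched compactly; the following toy example (grid, water level, and error magnitude are invented, and DEM errors are assumed spatially uncorrelated, unlike a full analysis) derives per-cell flooding probabilities from repeated perturbations of the elevation model:

```python
import numpy as np

rng = np.random.default_rng(5)
dem = rng.uniform(48.0, 55.0, size=(100, 100))   # hypothetical elevation grid (m)
water_level = 51.0                                # planar water surface (m)
sigma_dem = 0.5                                   # assumed DEM vertical error (m)

# Monte Carlo over DEM error: each realization adds noise and re-evaluates
# the flood mask; the cell-wise mean is a flooding probability.
n_runs = 500
flooded = np.zeros_like(dem)
for _ in range(n_runs):
    noisy = dem + rng.normal(0.0, sigma_dem, size=dem.shape)
    flooded += (noisy < water_level)
prob = flooded / n_runs                           # per-cell probability of inundation
print(f"cells with P(flooded) > 0.5: {(prob > 0.5).sum()}")
```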
NASA Astrophysics Data System (ADS)
Błażejewski, Paweł; Marcinowski, Jakub
2017-06-01
Existing provisions leading to the assessment of the buckling resistance of pressurised spherical shells were published in the European Design Recommendations (EDR) [
Adding results to a meta-analysis: Theory and example
NASA Astrophysics Data System (ADS)
Willson, Victor L.
Meta-analysis has been used as a research method to describe bodies of research data. It promotes hypothesis formation and the development of science education laws. A function overlooked, however, is the role it plays in updating research. Methods to integrate new research with meta-analysis results need explication. A procedure is presented using Bayesian analysis. Research on the correlation of science education attitudes with achievement has been published since a recent meta-analysis of the topic. The results show how new findings complement the previous meta-analysis and extend its conclusions. Additional methodological questions addressed are how studies are to be weighted, which variables are to be examined, and how often meta-analyses are to be updated.
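The Bayesian updating idea can be illustrated with a conjugate normal-normal sketch (not Willson's procedure verbatim; the effect sizes and variances below are invented): the meta-analytic estimate serves as the prior, and a newly published study is folded in by precision weighting.

```python
def bayes_update(prior_mean, prior_var, new_mean, new_var):
    """Posterior for a normal mean with known variances (conjugate update)."""
    w_prior, w_new = 1.0 / prior_var, 1.0 / new_var
    post_var = 1.0 / (w_prior + w_new)
    post_mean = post_var * (w_prior * prior_mean + w_new * new_mean)
    return post_mean, post_var

# Hypothetical: attitude-achievement correlation on the Fisher-z scale
prior = (0.20, 0.002)      # from the earlier meta-analysis
new_study = (0.32, 0.010)  # a newly published result
mean, var = bayes_update(*prior, *new_study)
print(f"updated effect: {mean:.3f} (variance {var:.4f})")
```

Note how the posterior mean moves only modestly toward the new study, since the accumulated meta-analytic evidence carries far more precision.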
46 CFR 4.06-50 - Specimen analysis and follow-up procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... to develop all relevant information and to produce a complete analysis report. (b) Reports shall be... 46 Shipping 1 2011-10-01 2011-10-01 false Specimen analysis and follow-up procedures. 4.06-50... Involving Vessels in Commercial Service § 4.06-50 Specimen analysis and follow-up procedures. (a) Each...
46 CFR 4.06-50 - Specimen analysis and follow-up procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... to develop all relevant information and to produce a complete analysis report. (b) Reports shall be... 46 Shipping 1 2012-10-01 2012-10-01 false Specimen analysis and follow-up procedures. 4.06-50... Involving Vessels in Commercial Service § 4.06-50 Specimen analysis and follow-up procedures. (a) Each...
Risk Factors Analysis for Occurrence of Asymptomatic Bacteriuria After Endourological Procedures
Junuzovic, Dzelaludin; Hasanbegovic, Munira
2014-01-01
Introduction: Endourological procedures are performed according to the principles of aseptic technique, yet urinary tract infections may still occur in a certain number of patients. Considering the risk of urinary tract infection, there is no unified opinion about the prophylactic use of antibiotics in endourological procedures. Goal: The objective of this study was to determine the connection between endourological procedures and the occurrence of urinary infections and to analyze the risk factors for urinary infection in patients who were hospitalized at the Urology Clinic of the Clinical Center University of Sarajevo (CCUS). Materials and Methods: The research was conducted as a prospective study on a sample of 208 patients of both genders who were hospitalized at the Urology Clinic of the CCUS and for whom an endourological procedure was indicated for diagnostic or therapeutic purposes. We analyzed data from patients' histories of illness, laboratory tests taken at admission and after endourological procedures, and surgical programs for endoscopic procedures. All patients were clinically examined prior to endoscopic procedures, while after treatment attention was focused on symptoms of urinary tract infections. Results: Statistical analysis indicates that there is no significant difference in the presence of postoperative compared to preoperative bacteriuria, which implies that endourological procedures are safe in terms of urinary tract infections. Preoperatively, the most commonly isolated bacterium was Escherichia coli (30.9%); postoperatively, Enterococcus faecalis (25%). Preoperative bacteriuria, duration of postoperative catheterization, and duration of hospitalization had a statistically significant effect on the occurrence of postoperative bacteriuria. Conclusion: In everyday urological practice, it is very important to identify and control risk factors for the development of urinary infection after endourological procedures, with the main objective of minimizing infectious complications. PMID:25568546
Cognitive Task Analysis: Implications for the Theory and Practice of Instructional Design.
ERIC Educational Resources Information Center
Dehoney, Joanne
Cognitive task analysis grew out of efforts by cognitive psychologists to understand problem-solving in a lab setting. It has proved a useful tool for describing expert performance in complex problem solving domains. This review considers two general models of cognitive task analysis and examines the procedures and results of analyses in three…
On the Extraction of Components and the Applicability of the Factor Model.
ERIC Educational Resources Information Center
Dziuban, Charles D.; Harris, Chester W.
A reanalysis of Shaycroft's matrix of intercorrelations of 10 test variables plus 4 random variables is discussed. Three different procedures were used in the reanalysis: (1) Image Component Analysis, (2) Uniqueness Rescaling Factor Analysis, and (3) Alpha Factor Analysis. The results of these analyses are presented in tables. It is concluded from…
Niedzielski, P; Kozak, L; Wachelka, M; Jakubowski, K; Wybieralska, J
2015-01-01
The article presents the optimisation, validation and application of microwave induced plasma optical emission spectrometry (MIP-OES) for the routine determination of Ag, Al, B, Ba, Bi, Ca, Cd, Cr, Cu, Fe, Ga, In, K, Li, Mg, Mn, Mo, Na, Ni, Pb, Sr, Tl and Zn in geological samples. Three sample preparation procedures are proposed: digestion with hydrofluoric acid for determination of total element concentrations, extraction by aqua regia for determination of quasi-total element concentrations, and extraction by hydrochloric acid solution to determine element contents in the acid-leachable fraction. The detection limits were 0.001-0.121 mg/L (from 0.010-0.10 to 1.2-12 mg/kg, depending on the sample preparation procedure); precision was 0.20-1.37% and accuracy 85-115%, assessed by recovery from certified reference materials and by parallel analysis with independent analytical techniques: X-ray fluorescence (XRF) and flame atomic absorption spectrometry (FAAS). The agreement of the results obtained by the MIP-OES analytical procedures with those obtained by XRF and FAAS supports the proposed procedures for studies of the elemental composition of geological sample fractions. Additionally, the MIP-OES technique is much less expensive than ICP techniques and much less time-consuming than AAS techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
Extractive procedure for uranium determination in water samples by liquid scintillation counting.
Gomez Escobar, V; Vera Tomé, F; Lozano, J C; Martín Sánchez, A
1998-07-01
An extractive procedure for uranium determination using liquid scintillation counting with the URAEX cocktail is described. Interference from radon and a strong influence of nitrate ion were detected in this procedure. Interference from radium, thorium and polonium emissions was very low when optimal operating conditions were reached. Quenching effects were considered, and the minimum detectable activity was evaluated for different sample volumes. Isotopic analysis of samples can be performed using the proposed method. Comparisons with the results obtained with the general procedure used in alpha spectrometry with passivated implanted planar silicon detectors showed good agreement. The proposed procedure is thus suitable for uranium determination in water samples and can be considered an alternative to the laborious conventional chemical preparations needed for alpha spectrometry methods using semiconductor detectors.
Evaluation of a training manual for the acquisition of behavioral assessment interviewing skills.
Miltenberger, R G; Fuqua, R W
1985-01-01
Two procedures were used to teach behavioral assessment interviewing skills: a training manual and one-to-one instruction that included modeling, rehearsal, and feedback. Two graduate students and two advanced undergraduates were trained with each procedure. Interviewing skills were recorded in simulated assessment interviews conducted by each student across baseline and treatment conditions. Each training procedure was evaluated in a multiple baseline across students design. The results showed that both procedures were effective for training behavioral interviewing skills, with all students reaching a level of 90%-100% correct responding. Finally, a group of experts in behavior analysis rated each interviewing skill as relevant to the conduct of an assessment interview and a group of behavioral clinicians socially validated the outcomes of the two procedures. PMID:4086413
2012-01-01
Background Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, several other saccharide materials can actually be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples; however, the chromatographic profiles are often so different that it is impossible to compare them and reliably identify the paint binder. Results This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works of art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. Conclusions The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from centuries-old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database also presents data from those materials that contain only a minor saccharide fraction. This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in paint samples. PMID:23050842
Nahas, Samar; Yi, Johnny; Magrina, Javier
2013-01-01
To evaluate the surgical outcome and the anatomic and sexual function in 10 women with Rokitansky syndrome who underwent the laparoscopic Vecchietti procedure at our center. Retrospective analysis. Data were analyzed on the basis of short-term and long-term surgical outcome and sexual function. All patients underwent clinical follow-up at 1, 2, and 6 months after surgery. In all 10 patients, the procedure produced anatomic and functional success. The laparoscopic Vecchietti technique is safe, simple, and effective for treatment of vaginal agenesis. Results are comparable to those of all European studies, and the procedure should gain more popularity in North America. Copyright © 2013 AAGL. All rights reserved.
NASA Technical Reports Server (NTRS)
Rogallo, Vernon L; Yaggy, Paul F; Mccloud, John L , III
1956-01-01
A simplified procedure is shown for calculating the once-per-revolution oscillating aerodynamic thrust loads on propellers of tractor airplanes at zero yaw. The only flow field information required for the application of the procedure is a knowledge of the upflow angles at the horizontal center line of the propeller disk. Methods are presented whereby these angles may be computed without recourse to experimental survey of the flow field. The loads computed by the simplified procedure are compared with those computed by a more rigorous method and the procedure is applied to several airplane configurations which are believed typical of current designs. The results are generally satisfactory.
Wang, Guoqing; Hou, Zhenyu; Peng, Yang; Wang, Yanjun; Sun, Xiaoli; Sun, Yu-an
2011-11-07
An adaptive kernel independent component analysis (AKICA) algorithm is proposed that determines the number of absorptive chemical components (ACCs) in mixtures using median absolute deviation (MAD) analysis and extracts the spectral profiles of the ACCs using kernel independent component analysis (KICA). The proposed AKICA algorithm was used to characterize the procedure for processing prepared rhubarb roots by resolving the measured mixed raw UV spectra of rhubarb samples collected at different steaming intervals. The results show that the spectral features of ACCs in the mixtures can be estimated directly, without chemical and physical pre-separation or other prior information. The three estimated independent components (ICs) represent different chemical components in the mixtures: mainly polysaccharides (IC1), tannin (IC2), and anthraquinone glycosides (IC3). The variations in the relative concentrations of the ICs can account for the chemical and physical changes during the processing procedure: IC1 increases significantly before the first 5 h and is nearly invariant after 6 h; IC2 shows no significant change, or decreases slightly, during the processing procedure; IC3 decreases significantly before the first 5 h and decreases slightly after 6 h. The changes in IC1 can explain why the colour became black and darkened during the processing procedure, and the changes in IC3 can explain why the processing procedure can reduce the bitter and dry taste of the rhubarb roots. The endpoint of the processing procedure can be determined as 5-6 h, when the increasing or decreasing trends of the estimated ICs become insignificant. The AKICA-UV method provides an alternative approach for characterizing the processing procedure of rhubarb root preparation, and a novel way to determine the endpoint of traditional Chinese medicine (TCM) processing by inspecting the trends of the ICs.
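As a rough illustration of the unmixing step, the following sketch uses ordinary FastICA from scikit-learn as a stand-in for the kernel ICA stage of AKICA (the component spectra, abundances, and noise below are simulated, not the paper's data):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical mixed UV spectra: rows = samples at different steaming times,
# columns = wavelengths; three latent component spectra with varying abundance.
rng = np.random.default_rng(2)
wl = np.linspace(250, 450, 300)
comps = np.vstack([np.exp(-((wl - c) / 25.0) ** 2) for c in (290, 340, 410)])
conc = rng.uniform(0.1, 1.0, size=(20, 3))          # per-sample abundances
mixed = conc @ comps + rng.normal(0, 0.01, (20, 300))

# Linear ICA stands in here for the kernel ICA step of AKICA.
ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(mixed.T)                 # estimated component spectra
abundances = ica.mixing_                             # per-sample contributions
print(sources.shape, abundances.shape)               # (300, 3) (20, 3)
```

Tracking the columns of `abundances` across steaming time is the analogue of following the relative IC concentrations to locate the processing endpoint.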
A CAD Approach to Integrating NDE With Finite Element
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Downey, James; Ghosn, Louis J.; Baaklini, George Y.
2004-01-01
Nondestructive evaluation (NDE) is one of several technologies applied at NASA Glenn Research Center to determine atypical deformities, cracks, and other anomalies experienced by structural components. NDE consists of applying high-quality imaging techniques (such as x-ray imaging and computed tomography (CT)) to discover hidden manufactured flaws in a structure. Efforts are in progress to integrate NDE with the finite element (FE) computational method to perform detailed structural analysis of a given component. This report presents the core outlines for an in-house technical procedure that incorporates this combined NDE-FE interrelation. An example is presented to demonstrate the applicability of this analytical procedure. FE analysis of a test specimen is performed, and the resulting von Mises stresses and the stress concentrations near the anomalies are observed, which indicates the fidelity of the procedure. Additional information elaborating on the steps needed to perform such an analysis is clearly presented in the form of mini step-by-step guidelines.
Ricci, L; Formica, D; Tamilia, E; Taffoni, F; Sparaci, L; Capirci, O; Guglielmelli, E
2013-01-01
Motion capture based on magneto-inertial sensors is a technology enabling data collection in unstructured environments, allowing "out of the lab" motion analysis. This technology is a good candidate for motion analysis of children thanks to its reduced weight and size, as well as the use of wireless communication, which has improved its wearability and reduced its obtrusiveness. A key issue in the application of such technology for motion analysis is its calibration, i.e., a process that allows mapping orientation information from each sensor to a physiological reference frame. To date, although several calibration procedures are available for adults, no specific calibration procedures have been developed for children. This work addresses this issue by presenting a calibration procedure for motion capture of the thorax and upper limbs in healthy children. Reported results suggest performance comparable with similar studies on adults and highlight some critical issues, opening the way to further improvements.
English in the Ecuadorian Commercial Context.
ERIC Educational Resources Information Center
Alm, Cecilia Ovesdotter
2003-01-01
Presents a study completed in Quito, Ecuador that investigates the attitudinal perceptions toward English in advertising in the Ecuadorian commercial context. Findings are the result of four data collection procedures; a questionnaire administered to advertising experts, an analysis of business names in ten shopping centers, an analysis of…
ERIC Educational Resources Information Center
Boone, Harry N., Jr.; Boone, Deborah A.
2012-01-01
This article provides information for Extension professionals on the correct analysis of Likert data. The analyses of Likert-type and Likert scale data require unique data analysis procedures, and as a result, misuses and/or mistakes often occur. This article discusses the differences between Likert-type and Likert scale data and provides…
Publish unexpected results that conflict with assumptions
USDA-ARS?s Scientific Manuscript database
Some widely held scientific assumptions have been discredited, whereas others are just inappropriate for many applications. Sometimes, a widely-held analysis procedure takes on a life of its own, forgetting the original purpose of the analysis. The peer-reviewed system makes it difficult to get a pa...
ERIC Educational Resources Information Center
Bailey, Leonard
1978-01-01
The experiment described was developed for the third-year course in inorganic and analytical pharmaceutical chemistry to provide students with "hands-on" experience with high pressure liquid chromatography. Assay procedures are given along with experimental parameters and student results. (LBH)
Functional Analysis and Treatment of Noncompliance by Preschool Children
ERIC Educational Resources Information Center
Wilder, David A.; Harris, Carelle; Reagan, Renee; Rasey, Amy
2007-01-01
A functional analysis showed that noncompliance occurred most often for 2 preschoolers when it resulted in termination of a preferred activity, suggesting that noncompliance was maintained by positive reinforcement. A differential reinforcement procedure, which involved contingent access to coupons that could be exchanged for uninterrupted access…
An Analysis of Students' Mistakes on Routine Slope Tasks
ERIC Educational Resources Information Center
Cho, Peter; Nagle, Courtney
2017-01-01
This study extends past research on students' understanding of slope by analyzing college students' mistakes on routine tasks involving slope. We conduct quantitative and qualitative analysis of students' mistakes to extract information regarding slope conceptualizations described in prior research. Results delineate procedural proficiencies and…
Measuring Nitrification: A Laboratory Approach to Nutrient Cycling.
ERIC Educational Resources Information Center
Hicks, David J.
1990-01-01
Presented is an approach to the study of nutrient cycling in the school laboratory. Discussed are obtaining, processing, and incubating samples; extraction of ions from soil; procedures for nitrate and ammonium analysis; data analysis; an example of results; and other aspects of the nitrogen cycle. (CW)
How Multiple Interventions Influenced Employee Turnover: A Case Study.
ERIC Educational Resources Information Center
Hatcher, Timothy
1999-01-01
A 3-year study of 46 textile industry workers identified causes of employee turnover (supervision, training, organizational communication) using performance analysis. A study of multiple interventions based on the analysis resulted in changes in orientation procedures, organizational leadership, and climate, reducing turnover by 24%. (SK)
Generalized Appended Product Indicator Procedure for Nonlinear Structural Equation Analysis.
ERIC Educational Resources Information Center
Wall, Melanie M.; Amemiya, Yasuo
2001-01-01
Considers the estimation of polynomial structural models and shows a limitation of an existing method. Introduces a new procedure, the generalized appended product indicator procedure, for nonlinear structural equation analysis. Addresses statistical issues associated with the procedure through simulation. (SLD)
Ogawa, Yasushi; Fawaz, Farah; Reyes, Candice; Lai, Julie; Pungor, Erno
2007-01-01
Parameter settings of a parallel line analysis procedure were defined by applying statistical analysis procedures to the absorbance data from a cell-based potency bioassay for a recombinant adenovirus, Adenovirus 5 Fibroblast Growth Factor-4 (Ad5FGF-4). The parallel line analysis was performed with a commercially available software package, PLA 1.2. The software performs a Dixon outlier test on replicates of the absorbance data, performs linear regression analysis to define the linear region of the absorbance data, and tests parallelism between the linear regions of standard and sample. The width of the Fiducial limit, expressed as a percentage of the measured potency, was developed as a criterion for rejecting assay data, significantly improving the reliability of the assay results. With the linear range-finding criteria of the software set to a minimum of 5 consecutive dilutions and best statistical outcome, and in combination with the Fiducial limit width acceptance criterion of <135%, 13% of the assay results were rejected. With these criteria applied, the assay was linear over the range of 0.25 to 4 relative potency units, defined as the potency of the sample normalized to the potency of an Ad5FGF-4 standard containing 6 × 10^6 adenovirus particles/mL. The overall precision of the assay was estimated to be 52%. Without the application of the Fiducial limit width criterion, the assay results were not linear over this range, and an overall precision of 76% was calculated from the data. An absolute unit of potency for the assay was defined by using the parallel line analysis procedure as the amount of Ad5FGF-4 that results in an absorbance value that is 121% of the average absorbance readings of the wells containing cells not infected with the adenovirus.
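The core of a parallel line analysis is a common-slope fit whose horizontal offset between standard and sample gives the log relative potency; a minimal sketch (not PLA 1.2; the dilution responses below are invented) follows:

```python
import numpy as np

# Hypothetical log2 dilution series and absorbance responses in the linear region
logdose = np.array([0., 1., 2., 3., 4.])
std = np.array([0.42, 0.61, 0.83, 1.01, 1.22])    # standard
test = np.array([0.35, 0.55, 0.74, 0.95, 1.15])   # sample

# Parallel-line model: separate intercepts, one shared slope,
# solved as a single least-squares system  y = a_grp + b * x.
X = np.zeros((10, 3))
X[:5, 0] = 1                                       # standard intercept column
X[5:, 1] = 1                                       # sample intercept column
X[:, 2] = np.concatenate([logdose, logdose])       # shared slope column
y = np.concatenate([std, test])
(a_s, a_t, b), *_ = np.linalg.lstsq(X, y, rcond=None)

# Horizontal shift between the parallel lines is the log relative potency
log_rp = (a_t - a_s) / b
print(f"relative potency = {2 ** log_rp:.2f} (doses on a log2 scale)")
```

In a full analysis, fiducial (confidence) limits on this potency estimate would be computed and their width used as the acceptance criterion described above.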
NASA Astrophysics Data System (ADS)
Durocher, M.; Mostofi Zadeh, S.; Burn, D. H.; Ashkar, F.
2017-12-01
Floods are one of the most costly hazards, and frequency analysis of river discharges is an important part of the tools at our disposal to evaluate their inherent risks and to provide an adequate response. In comparison to the common examination of annual streamflow maxima, peaks over threshold (POT) is an interesting alternative that makes better use of the available information by including more than one flood event per year (on average). However, a major challenge is the selection of a satisfactory threshold above which peaks are assumed to respect certain conditions necessary for an adequate estimation of the risk. Additionally, studies have shown that POT is also a valuable approach to investigate the evolution of flood regimes in the context of climate change. Recently, automatic procedures for threshold selection were suggested to guide that important choice, which otherwise relies on graphical tools and expert judgment. Furthermore, having an objective automatic procedure allows the analysis to be quickly repeated on a large number of samples, which is useful in the context of large databases or for uncertainty analysis based on a resampling approach. This study investigates the impact of considering such procedures in a case study including many sites across Canada. A simulation study is conducted to evaluate the bias and predictive power of the automatic procedures in similar conditions, as well as investigating the power of derived nonstationarity tests. The results obtained are also evaluated in the light of expert judgments established in a previous study. Ultimately, this study provides a thorough examination of the considerations that need to be addressed when conducting POT analysis using automatic threshold selection.
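A compact sketch of the POT machinery with one simple automatic threshold rule (a fixed high quantile; real selection procedures are more elaborate, and declustering of dependent daily peaks is ignored here). The discharge series and all parameters are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
flows = rng.gamma(shape=2.0, scale=50.0, size=40 * 365)  # hypothetical daily discharge

def pot_quantile(series, threshold, events_per_year, return_period_yr):
    """Fit a generalized Pareto to exceedances; return the T-year flood estimate."""
    exc = series[series > threshold] - threshold
    shape, _, scale = stats.genpareto.fit(exc, floc=0.0)
    # Exceedance probability of the T-year event given the POT event rate
    p = 1.0 / (return_period_yr * events_per_year)
    return threshold + stats.genpareto.ppf(1 - p, shape, loc=0.0, scale=scale)

u = np.quantile(flows, 0.995)            # one simple automatic threshold choice
lam = (flows > u).sum() / 40             # average exceedances per year
print(f"100-yr flood estimate: {pot_quantile(flows, u, lam, 100):.1f}")
```

Because the rule is fully objective, the same computation can be repeated over hundreds of gauging sites or bootstrap resamples, which is precisely the advantage the abstract emphasizes.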
Poch, G K; Klette, K L; Anderson, C
2000-04-01
This paper compares the potential forensic application of two sensitive and rapid procedures (liquid chromatography-mass spectrometry and liquid chromatography-ion trap mass spectrometry) for the detection and quantitation of 2-oxo-3-hydroxy lysergic acid diethylamide (O-H-LSD), a major LSD metabolite. O-H-LSD calibration curves for both procedures were linear over the concentration range 0-8,000 pg/mL, with correlation coefficients (r^2) greater than 0.99. The observed limit of detection (LOD) and limit of quantitation (LOQ) for O-H-LSD were 400 pg/mL in both procedures. Sixty-eight human urine specimens that had previously been found to contain LSD by gas chromatography-mass spectrometry were reanalyzed by both procedures for LSD and O-H-LSD. These specimens contained a mean concentration of O-H-LSD approximately 16 times higher than the LSD concentration. Because both LC methods produce similar results, either procedure can be readily adapted to O-H-LSD analysis for use in high-volume drug-testing laboratories. In addition, the possibility of significantly increasing the LSD detection time window by targeting this major LSD metabolite for analysis may influence other drug-free workplace programs to test for LSD.
Shan, Chen Jen; Lucon, Antonio Marmo; Pagani, Rodrigo; Srougi, Miguel
2011-01-01
To evaluate the success rates of sclerotherapy of the tunica vaginalis with alcohol for the treatment of hydroceles and/or spermatoceles, as well as to evaluate pain, formation of hematomas, infection, and effects on spermatogenesis. A total of 69 patients with offspring and a diagnosis of hydrocele and/or spermatocele were treated during the period from April 2003 to June 2007. Semen analysis was obtained from patients who were able to provide samples. Sclerotherapy with alcohol at 99.5% was undertaken as an outpatient procedure. The average volume drained pre-sclerotherapy was 279.82 mL (27 to 1145). The median follow-up was 43 months (9 to 80). A total of 114 procedures were performed on 84 units, with an average of 1.35 procedures/unit and an overall success rate of 97.62%. Of the 69 patients, 7 (10.14%) reported minor pain immediately after the procedure, 3 (4.35%) moderate pain and 2 (2.89%) intense pain. Post-sclerotherapy spermograms revealed reduced sperm concentration, motility and morphology up to 6 months after the procedure, with a return to normal parameters 12 months after the procedure. Sclerotherapy of hydroceles and spermatoceles with 99.5% alcohol is an efficient, cost-effective procedure that can be performed without difficulty, has few side effects, and may be performed in patients who wish to preserve fertility.
On Statistical Analysis of Neuroimages with Imperfect Registration
Kim, Won Hwa; Ravi, Sathya N.; Johnson, Sterling C.; Okonkwo, Ozioma C.; Singh, Vikas
2016-01-01
A variety of studies in neuroscience/neuroimaging seek to perform statistical inference on the acquired brain image scans for diagnosis as well as understanding the pathological manifestation of diseases. To do so, an important first step is to register (or co-register) all of the image data into a common coordinate system. This permits meaningful comparison of the intensities at each voxel across groups (e.g., diseased versus healthy) to evaluate the effects of the disease and/or use machine learning algorithms in a subsequent step. But errors in the underlying registration make this problematic: they either decrease the statistical power or make the follow-up inference tasks less effective/accurate. In this paper, we derive a novel algorithm which offers immunity to local errors in the underlying deformation field obtained from registration procedures. By deriving a deformation invariant representation of the image, the downstream analysis can be made more robust, as if one had access to a (hypothetical) far superior registration procedure. Our algorithm is based on recent work on the scattering transform. Using this as a starting point, we show how results from harmonic analysis (especially non-Euclidean wavelets) yield strategies for designing deformation and additive noise invariant representations of large 3-D brain image volumes. We present a set of results on synthetic and real brain images where we achieve robust statistical analysis even in the presence of substantial deformation errors; here, standard analysis procedures significantly under-perform and fail to identify the true signal. PMID:27042168
A procedure to determine the radiation isocenter size in a linear accelerator.
González, A; Castro, I; Martínez, J A
2004-06-01
Measurement of the radiation isocenter is a fundamental part of commissioning and quality assurance (QA) for a linear accelerator (linac). In this work we present an automated procedure for the analysis of the star shots employed in radiation isocenter determination. Once the star-shot film has been developed and digitized, the resulting image is analyzed by scanning concentric circles centered on the intersection of the lasers previously marked on the film. The center and the radius of the minimum circle intersecting the central rays are determined with an accuracy and precision better than 1% of the pixel size. The procedure is applied to determining the position and size of the radiation isocenter by analyzing star shots placed in different planes with respect to the gantry, couch and collimator rotation axes.
Modal analysis of the thermal conductivity of nanowires: examining unique thermal transport features
NASA Astrophysics Data System (ADS)
Samaraweera, Nalaka; Larkin, Jason M.; Chan, Kin L.; Mithraratne, Kumar
2018-06-01
In this study, unique thermal transport features of nanowires over bulk materials are investigated using a combined analysis based on lattice dynamics and equilibrium molecular dynamics (EMD). The evaluation of the thermal conductivity (TC) of Lennard–Jones nanowires becomes feasible due to the multi-step normal mode decomposition (NMD) procedure implemented in the study. A convergence issue of the TC of nanowires is addressed by the NMD implementation for two case studies, which employ pristine nanowires (PNWs) and superlattice nanowires. Interestingly, mode relaxation times at low frequencies of acoustic branches exhibit signs of approaching constant values, thus indicating the convergence of TC. The TC evaluation procedure is further verified by implementing EMD-based Green–Kubo analysis, which is based on a fundamentally different physical perspective. Having verified the NMD procedure, the non-monotonic trend of the TC of nanowires is addressed. It is shown that the principal cause of the observed trend is the competing effects of long-wavelength phonons and phonon–surface scattering as the nanowire's cross-sectional width is changed. A computational procedure is developed to decompose the different modal contributions to the TC of shell alloy nanowires (SANWs) using virtual crystal NMD and the Allen–Feldman theory. Several important conclusions can be drawn from the results. A boundary between propagons and non-propagons appears, resulting in a cut-off frequency (ω_cut); moreover, as the alloy atomic mass is increased, ω_cut shifts to lower frequencies. The existence of non-propagons partly causes the low TC of SANWs. Modes with low frequencies demonstrate behavior similar to the corresponding modes of PNWs. Moreover, lower group velocities associated with higher alloy atomic mass result in a lower TC of SANWs.
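The Green–Kubo verification step rests on integrating the heat-flux autocorrelation function. A minimal sketch follows (the flux series here is random noise standing in for EMD output, and the volume, temperature, and timestep are placeholder values; the flux-per-volume convention is assumed):

```python
import numpy as np

def green_kubo_tc(heat_flux, dt, volume, temperature, kB=1.380649e-23):
    """Running thermal conductivity from the Green-Kubo relation:
    k = V / (kB * T^2) * integral of <J(0) J(t)> dt  (one Cartesian component,
    with J the heat flux per unit volume)."""
    n = len(heat_flux)
    j = heat_flux - heat_flux.mean()
    # Autocorrelation via FFT (biased estimator, adequate for a sketch)
    f = np.fft.rfft(j, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n] / n
    running_integral = np.cumsum(acf) * dt
    return volume / (kB * temperature ** 2) * running_integral

# Hypothetical EMD output: one heat-flux component time series (SI units)
rng = np.random.default_rng(7)
flux = rng.standard_normal(100_000) * 1e9
k_running = green_kubo_tc(flux, dt=1e-15, volume=1e-25, temperature=300.0)
print(f"plateau estimate: {k_running[-1]:.3e} W/(m K)")
```

In practice one inspects the running integral for a plateau rather than taking the final value, since the long-time tail of the autocorrelation is noise dominated.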
Analysis of free and bound chlorophenoxy acids in cereals.
Lokke, H
1975-06-01
Extraction of the chlorophenoxy acids 2,4-D and dichlorprop from cereals has been examined by analyzing barley from spraying experiments. A procedure was set up combining acid hydrolysis and enzymatic degradation, followed by extraction and clean-up on either silica gel or basic aluminum oxide. The final determination is based on reaction with diazomethane and subsequent GLC with ECD. This procedure was compared with two different extraction procedures previously described in the literature. The first comparison procedure, using a mixture of 50% diethyl ether/hexane in the presence of sulphuric acid, gave residues up to ten times lower than those found after the combined acid hydrolysis/enzymatic degradation procedure. In the second comparison, a direct extraction was made with a mixture of 65% (v/v) acetonitrile in water. No differences were found between this and the combined acid hydrolysis/enzymatic degradation procedure.
New test techniques and analytical procedures for understanding the behavior of advanced propellers
NASA Technical Reports Server (NTRS)
Stefko, G. L.; Bober, L. J.; Neumann, H. E.
1983-01-01
Analytical procedures and experimental techniques were developed to improve the capability to design advanced high speed propellers. Some results from the propeller lifting line and lifting surface aerodynamic analysis codes are compared with propeller force data, probe data and laser velocimeter data. In general, the code comparisons with data indicate good qualitative agreement. A rotating propeller force balance demonstrated good accuracy and reduced test time by 50 percent. Results from three propeller flow visualization techniques are shown which illustrate some of the physical phenomena occurring on these propellers.
A Quantitative Review of Functional Analysis Procedures in Public School Settings
ERIC Educational Resources Information Center
Solnick, Mark D.; Ardoin, Scott P.
2010-01-01
Functional behavioral assessments can consist of indirect, descriptive and experimental procedures, such as a functional analysis. Although the research contains numerous examples demonstrating the effectiveness of functional analysis procedures, experimental conditions are often difficult to implement in classroom settings and analog conditions…
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.
1979-01-01
A model for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer, and which can be conveniently incorporated into standard computer-based circuit analysis programs is presented. This formulation results from measurements which may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.
Kepler AutoRegressive Planet Search
NASA Astrophysics Data System (ADS)
Feigelson, Eric
NASA's Kepler mission is the source of more exoplanets than any other instrument, but the discoveries depend on complex statistical analysis procedures embedded in the Kepler pipeline. A particular challenge is mitigating irregular stellar variability without loss of sensitivity to faint periodic planetary transits. This proposal presents a two-stage alternative analysis procedure. First, parametric autoregressive ARFIMA models, commonly used in econometrics, remove most of the stellar variations. Second, a novel matched filter is used to create a periodogram from which transit-like periodicities are identified. This analysis procedure, the Kepler AutoRegressive Planet Search (KARPS), is confirming most of the Kepler Objects of Interest and is expected to identify additional planetary candidates. The proposed research will complete application of the KARPS methodology to the prime Kepler mission light curves of 200,000 stars, and compare the results with Kepler Objects of Interest obtained with the Kepler pipeline. We will then conduct a variety of astronomical studies based on the KARPS results. Important subsamples will be extracted, including Habitable Zone planets, hot super-Earths, grazing-transit hot Jupiters, and multi-planet systems. Ground-based spectroscopy of poorly studied candidates will be performed to better characterize the host stars. Studies of stellar variability will then be pursued based on KARPS analysis. The autocorrelation function and nonstationarity measures will be used to identify spotted stars at different stages of autoregressive modeling. Periodic variables with folded light curves inconsistent with planetary transits will be identified; they may be eclipsing or mutually illuminating binary star systems. Classification of stellar variables with KARPS-derived statistical properties will be attempted. KARPS procedures will then be applied to archived K2 data to identify planetary transits and characterize stellar variability.
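As a rough illustration of the two-stage idea (not the KARPS code): a plain AR(p) fit from statsmodels stands in for the ARFIMA stage, and a simple box-depth statistic stands in for the matched-filter periodogram. The light curve, cadence, and injected transit below are all fabricated:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Stage 1: an autoregressive fit removes most of the stellar variability.
rng = np.random.default_rng(1)
t = np.arange(6000) * 0.0204                   # days, Kepler-like cadence
flux = 1.0 + 0.01 * np.sin(t / 3.0) + rng.normal(0, 1e-3, t.size)
flux[(t % 4.7) < 0.1] -= 5e-4                  # injected box transit, P = 4.7 d
fit = AutoReg(flux, lags=20).fit()
resid, t_res = fit.resid, t[20:]               # residuals start at index `lags`

# Stage 2: a crude box-shaped matched filter scanned over trial periods.
def box_score(t, y, period, duration):
    in_box = (t % period) < duration
    return y[~in_box].mean() - y[in_box].mean()   # transit-depth statistic

periods = np.linspace(1.0, 10.0, 500)
scores = [box_score(t_res, resid, P, 0.1) for P in periods]
print(periods[int(np.argmax(scores))])         # expected to peak near 4.7 d
```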
Boriani, Filippo; Villani, Riccardo; Morselli, Paolo Giovanni
2014-10-01
Obesity is increasingly frequent in our society and is associated closely with metabolic disorders. As some studies have suggested, removal of fat tissue through liposuction and dermolipectomies may be of some benefit in the improvement of metabolic indices. This article aimed to review the published literature on this topic and to evaluate metabolic variations meta-analytically after liposuction, dermolipectomy, or both. Through a literature search with the PubMed/Medline database, 14 studies were identified. All articles were analyzed, and several metabolic variables were chosen in the attempt to meta-analyze the effect of adipose tissue removal through the various studies. All statistical calculations were performed with Review Manager (RevMan), version 5.0. Several cardiovascular and metabolic variables are described as prone to variations after body-contouring procedures when a significant amount of adipose tissue has been excised. Four of the studies included in the analysis reported improvements in all the parameters examined. Seven articles showed improvement in some variables and no improvement in others, whereas three studies showed no beneficial variation in any of the considered indicators after body-contouring procedures. Fasting plasma insulin was identified as the only variable for which a meta-analysis of five included studies was possible. The meta-analysis showed a statistically significant reduction in fasting plasma insulin resulting from large-volume liposuction in obese healthy women. Many beneficial metabolic effects resulting from dermolipectomy and liposuction procedures are described in the literature. In particular, fasting plasma insulin and thus insulin sensitivity seem to be positively influenced. Further research, including prospective clinical studies, is necessary for better exploration of the effects that body-contouring plastic surgery procedures have on metabolic parameters.
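For readers unfamiliar with the pooling step such a meta-analysis performs, a minimal inverse-variance fixed-effect sketch follows; the per-study effects and variances are hypothetical placeholders, not the values of the five included studies:

```python
import numpy as np

def fixed_effect_meta(effects, variances):
    """Inverse-variance fixed-effect pooling of per-study effect sizes."""
    w = 1.0 / np.asarray(variances)
    pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))              # standard error of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical mean changes in fasting plasma insulin (uU/mL) and variances.
effects = [-3.1, -1.8, -2.6, -0.9, -2.2]
variances = [0.8, 1.1, 0.6, 1.4, 0.9]
pooled, ci = fixed_effect_meta(effects, variances)
print(pooled, ci)                              # pooled change and 95% CI
```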
2007-03-01
self-reporting. The interview process and resulting data analysis may be impacted by research bias, since both were conducted by the same individual. [The remainder of this record is fragmentary: interview-protocol headings for MAJCOM and base-level NCC contacts responsible for general TCNO procedures, each beginning with interviewee information and a request for a general job description.]
Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment
NASA Technical Reports Server (NTRS)
Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.
2012-01-01
Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anisovich, V. V., E-mail: anisovic@thd.pnpi.spb.ru; Sarantsev, A. V.
We present technical aspects of the fitting procedure given in the paper by V.V. Anisovich and A.V. Sarantsev, 'The analysis of reactions πN → two mesons + N within reggeon exchanges. Fit and results.'
Differences in metabolite profiles caused by pre-analytical blood processing procedures.
Nishiumi, Shin; Suzuki, Makoto; Kobayashi, Takashi; Yoshida, Masaru
2018-05-01
Recently, the use of metabolomic analysis of human serum and plasma for biomarker discovery and disease diagnosis in clinical studies has been increasing. The feasibility of using a metabolite biomarker for disease diagnosis is strongly dependent on the metabolite's stability during pre-analytical blood processing procedures, such as serum or plasma sampling and sample storage prior to centrifugation. However, the influence of blood processing procedures on the stability of metabolites has not been fully characterized. In the present study, we compared the levels of metabolites in matched human serum and plasma samples using gas chromatography coupled with mass spectrometry and liquid chromatography coupled with mass spectrometry. In addition, we evaluated the changes in plasma metabolite levels induced by storage at room temperature or at a cold temperature prior to centrifugation. As a result, it was found that 76 metabolites exhibited significant differences between their serum and plasma levels. Furthermore, the pre-centrifugation storage conditions significantly affected the plasma levels of 45 metabolites. These results highlight the importance of blood processing procedures during metabolome analysis, which should be considered during biomarker discovery and the subsequent use of biomarkers for disease diagnosis. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Concentration measurement of lysosome enzymes in blood by fluorimetric analysis method
NASA Astrophysics Data System (ADS)
Strinadko, Marina M.; Strinadko, Elena M.
2002-02-01
The diagnostics of a series of heritable diseases, as well as of diabetes mellitus, myocardial infarction, collagenosis, and kidney diseases, widely uses the measurement of lysosomal enzymes in blood. In the present research work, a procedure for determining the concentration of β-glucuronidase with the help of fluorimetric analysis is offered, which allows the use of microamounts of biological fluids and of samples with low enzyme activity, which is especially important in paediatric practice. Due to the high sensitivity of fluorimetric analysis and the high speed of luminescent reactions, the procedure makes it possible to obtain results in minimal time while using small amounts of reaction mixture. Incubation at large dilution thereby eliminates the influence of endogenous inhibitors and activators.
Initial Data Analysis Results for ATD-2 ISAS HITL Simulation
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2017-01-01
To evaluate the operational procedures and information requirements for the core functional capabilities of the ATD-2 project, such as the tactical surface metering tool, the APREQ-CFR procedure, and data element exchanges between ramp and tower, human-in-the-loop (HITL) simulations were performed in March 2017. This presentation shows the initial data analysis results from the HITL simulations. With respect to the different runway configurations and metering values in the tactical surface scheduler, various airport performance metrics were analyzed and compared. These metrics include gate holding time, taxi-out time, runway throughput, queue size and wait time in queue, and TMI flight compliance. In addition to the metering value, other factors affecting airport performance in the HITL simulation, including run duration, runway changes, and TMI constraints, are also discussed.
Vermeir, Lien; Sabatino, Paolo; Balcaen, Mathieu; Declerck, Arnout; Dewettinck, Koen; Martins, José C; Guthausen, Gisela; Van der Meeren, Paul
2016-08-01
The accuracy of inner water droplet size determination of W/O/W emulsions from water diffusion measurement by diffusion NMR was evaluated. The resulting droplet size data were compared to the results acquired from the diffusion measurement of a highly water-soluble marker compound with low permeability in the oil layer of a W/O/W emulsion, which provides a closer representation of the actual droplet size. Differences in droplet size data obtained from water and the marker were ascribed to extra-droplet water diffusion. The diffusion data of the tetramethylammonium cation marker were measured using high-resolution pulsed field gradient NMR, whereas the water diffusion was measured using both low-resolution and high-resolution NMR. Different data analysis procedures were evaluated to correct for the effect of extra-droplet water diffusion on the accuracy of water droplet size analysis. Using the water diffusion data, the use of a low measurement temperature and diffusion delay Δ could reduce the droplet size overestimation resulting from extra-droplet water diffusion, but this undesirable effect was inevitable. Detailed analysis of the diffusion data revealed that the extra-droplet diffusion effect was due to exchange between the inner water phase and the oil phase, rather than exchange between the internal and external aqueous phases. A promising data analysis procedure for retrieving reliable size data consisted of the application of Einstein's diffusion law to the experimentally determined diffusion distances. This simple procedure allowed determination of the inner water droplet size of W/O/W emulsions from measurement of water diffusion by low-resolution NMR at or even above room temperature. Copyright © 2016 Elsevier Inc. All rights reserved.
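Einstein's diffusion law, invoked above, gives the root-mean-square displacement via ⟨r²⟩ = 6DΔ; once this distance approaches the droplet radius, the measurement reports the droplet size rather than free diffusion. A small sketch, assuming the free-water diffusion coefficient near room temperature:

```python
import numpy as np

D_WATER = 2.3e-9   # m^2/s, free-water self-diffusion near 25 degrees C

def rms_displacement(D, delta):
    """Einstein's law in three dimensions: <r^2> = 6 D t."""
    return np.sqrt(6.0 * D * delta)

# Diffusion looks free while r_rms is well below the droplet radius and
# becomes restricted (apparent distance ~ droplet size) once it is not.
for delta in (5e-3, 20e-3, 100e-3):            # diffusion delays, s
    r = rms_displacement(D_WATER, delta)
    print(f"delta = {delta * 1e3:5.1f} ms -> r_rms = {r * 1e6:5.2f} um")
```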
Simultaneous Classification and Multidimensional Scaling with External Information
ERIC Educational Resources Information Center
Kiers, Henk A. L.; Vicari, Donatella; Vichi, Maurizio
2005-01-01
For the exploratory analysis of a matrix of proximities or (dis)similarities between objects, one often uses cluster analysis (CA) or multidimensional scaling (MDS). Solutions resulting from such analyses are sometimes interpreted using external information on the objects. Usually the procedures of CA, MDS and using external information are…
Hypnosis as an Adjunct to Cognitive-Behavioral Psychotherapy: A Meta-Analysis.
ERIC Educational Resources Information Center
Kirsch, Irving; And Others
1995-01-01
Performed a meta-analysis on 18 studies in which a cognitive-behavioral therapy was compared with the same therapy supplemented by hypnosis. Results indicated that hypnosis substantially enhanced treatment outcome, even though there were few procedural differences between the hypnotic and nonhypnotic treatments. Effects seemed particularly…
Troubleshooting 101: An Instrumental Analysis Experiment
ERIC Educational Resources Information Center
Vitt, Joseph E.
2008-01-01
An experiment is described where students troubleshoot a published procedure for the analysis of ethanol. UV-vis spectroscopy is used to measure the change in absorbance upon reaction of dichromate with ethanol. The experiment requires the students to critically evaluate their experimental results to correct a fundamental flaw in the original…
The Circumplex Pattern of the Life Styles Inventory: A Reanalysis.
ERIC Educational Resources Information Center
Levin, Joseph
1991-01-01
A reanalysis of the intercorrelation matrix from a principal components analysis of the Life Styles Inventory was conducted using a Canadian sample. Using nonmetric multidimensional scaling, analyses show an almost perfect circumplex pattern. Results illustrate the inadequacy of factor analytic procedures for the analysis and representation of a…
ERIC Educational Resources Information Center
Haddad, Paul; And Others
1983-01-01
Background information, procedures, and results are provided for an experiment demonstrating techniques of solvent selection, gradient elution, pH control, and ion-pairing in the analysis of an analgesic mixture using reversed-phase liquid chromatography on an octadecylsilane column. Although developed using sophisticated/expensive equipment, less…
AN APPROACH TO WATER RESOURCES EVALUATION OF NON-POINT SILVICULTURAL SOURCES (A PROCEDURAL HANDBOOK)
This handbook provides an analysis methodology that can be used to describe and evaluate changes to the water resource resulting from non-point silvicultural activities. This state-of-the-art approach for analysis and prediction of pollution from non point silvicultural activitie...
RANDOMIZATION PROCEDURES FOR THE ANALYSIS OF EDUCATIONAL EXPERIMENTS.
ERIC Educational Resources Information Center
COLLIER, RAYMOND O.
Certain specific aspects of hypothesis tests used for analysis of results in randomized experiments were studied: (1) the development of the theoretical factor, that of providing information on statistical tests for certain experimental designs, and (2) the development of the applied element, that of supplying the experimenter with machinery for…
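A randomization (permutation) test of the kind this work concerns can be sketched in a few lines; the group data here are fabricated:

```python
import numpy as np

def randomization_test(a, b, n_perm=10000, seed=0):
    """Two-sample randomization test on the difference of group means."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                    # re-randomize group assignment
        diff = pooled[:a.size].mean() - pooled[a.size:].mean()
        hits += abs(diff) >= abs(observed)
    return observed, hits / n_perm             # effect and two-sided p-value

rng = np.random.default_rng(1)
treatment = rng.normal(0.4, 1.0, 25)           # fabricated class scores
control = rng.normal(0.0, 1.0, 25)
print(randomization_test(treatment, control))
```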
da Costa, Márcia Gisele Santos; Santos, Marisa da Silva; Sarti, Flávia Mori; Simões e Senna, Kátia Marie; Tura, Bernardo Rangel; Correia, Marcelo Goulart
2014-01-01
The study performs a cost-effectiveness analysis of procedures for atrial septal defect occlusion, comparing conventional surgery to a percutaneous septal implant. An analytical decision model was structured with symmetric branches to estimate the cost-effectiveness ratio between the procedures. The decision tree model was based on evidence gathered through meta-analysis of the literature and validated by a panel of specialists. The lower number of surgical procedures performed for atrial septal defect occlusion at each branch was considered as the effectiveness outcome. Direct medical costs and probabilities for each event were inserted into the model using data available from the Brazilian public sector database system and information extracted from the literature review, using a micro-costing technique. Sensitivity analysis included price variations of the percutaneous implant. The results obtained from the decision model demonstrated that the percutaneous implant was more cost-effective, at a cost of US$8,936.34, with a reduction in the probability of surgery occurrence in 93% of the cases. The probability of atrial septal communication occlusion and the cost of the implant are the determinant factors of the cost-effectiveness ratio. The proposed decision model seeks to fill a void in the academic literature and includes the outcomes that have the greatest impact on the overall costs of the procedure. Atrial septal defect occlusion using a percutaneous implant reduces the physical and psychological distress to patients relative to conventional surgery, which represents an intangible cost in the context of economic evaluation.
NASA Astrophysics Data System (ADS)
Mortezaei, A.; Ronagh, H. R.
2013-06-01
Near-fault ground motions with long-period pulses have been identified as being critical in the design of structures. These motions, which have caused severe damage in recent disastrous earthquakes, are characterized by a short-duration impulsive motion that transmits large amounts of energy into the structures at the beginning of the earthquake. In nearly all of the past near-fault earthquakes, significant higher mode contributions have been evident in building structures near the fault rupture, resulting in the migration of dynamic demands (i.e. drifts) from the lower to the upper stories. Due to this, the static nonlinear pushover analysis (which utilizes a load pattern proportional to the shape of the fundamental mode of vibration) may not produce accurate results when used in the analysis of structures subjected to near-fault ground motions. The objective of this paper is to improve the accuracy of the pushover method in these situations by introducing a new load pattern into the common pushover procedure. Several pushover analyses are performed for six existing reinforced concrete buildings that possess a variety of natural periods. Then, a comparison is made between the pushover analyses' results (with four new load patterns) and those of FEMA (Federal Emergency Management Agency)-356 with reference to nonlinear dynamic time-history analyses. The comparison shows that, generally, the proposed pushover method yields better results than all FEMA-356 pushover analysis procedures for all investigated response quantities and is a closer match to the nonlinear time-history responses. In general, the method is able to reproduce the essential response features providing a reasonable measure of the likely contribution of higher modes in all phases of the response.
Colon Capsule Endoscopy for the Detection of Colorectal Polyps: An Economic Analysis
Palimaka, Stefan; Blackhouse, Gord; Goeree, Ron
2015-01-01
Background Colorectal cancer is a leading cause of mortality and morbidity in Ontario. Most cases of colorectal cancer are preventable through early diagnosis and the removal of precancerous polyps. Colon capsule endoscopy is a non-invasive test for detecting colorectal polyps. Objectives The objectives of this analysis were to evaluate the cost-effectiveness and the impact on the Ontario health budget of implementing colon capsule endoscopy for detecting advanced colorectal polyps among adult patients who have been referred for computed tomographic (CT) colonography. Methods We performed an original cost-effectiveness analysis to assess the additional cost of CT colonography and colon capsule endoscopy resulting from misdiagnoses. We generated diagnostic accuracy data from a clinical evidence-based analysis (reported separately), and we developed a deterministic Markov model to estimate the additional long-term costs and life-years lost due to false-negative results. We then also performed a budget impact analysis using data from Ontario administrative sources. One-year costs were estimated for CT colonography and colon capsule endoscopy (replacing all CT colonography procedures, and replacing only those CT colonography procedures in patients with an incomplete colonoscopy within the previous year). We conducted this analysis from the payer perspective. Results Using the point estimates of diagnostic accuracy from the head-to-head study between colon capsule endoscopy and CT colonography, we found the additional cost of false-positive results for colon capsule endoscopy to be $0.41 per patient, while additional false-negatives for the CT colonography arm generated an added cost of $116 per patient, with 0.0096 life-years lost per patient due to cancer. This results in an additional cost of $26,750 per life-year gained for colon capsule endoscopy compared with CT colonography. The total 1-year cost to replace all CT colonography procedures with colon capsule endoscopy in Ontario is about $2.72 million; replacing only those CT colonography procedures in patients with an incomplete colonoscopy in the previous year would cost about $740,600 in the first year. Limitations The difference in accuracy between colon capsule endoscopy and CT colonography was not statistically significant for the detection of advanced adenomas (≥ 10 mm in diameter), according to the head-to-head clinical study from which the diagnostic accuracy was taken. This leads to uncertainty in the economic analysis, with results highly sensitive to changes in diagnostic accuracy. Conclusions The cost-effectiveness of colon capsule endoscopy for use in patients referred for CT colonography is $26,750 per life-year, assuming an increased sensitivity of colon capsule endoscopy. Replacement of CT colonography with colon capsule endoscopy is associated with moderate costs to the health care system. PMID:26366240
Input Files and Procedures for Analysis of SMA Hybrid Composite Beams in MSC.Nastran and ABAQUS
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Patel, Hemant D.
2005-01-01
A thermoelastic constitutive model for shape memory alloys (SMAs) and SMA hybrid composites (SMAHCs) was recently implemented in the commercial codes MSC.Nastran and ABAQUS. The model is implemented and supported within the core of the commercial codes, so no user subroutines or external calculations are necessary. The model and resulting structural analysis has been previously demonstrated and experimentally verified for thermoelastic, vibration and acoustic, and structural shape control applications. The commercial implementations are described in related documents cited in the references, where various results are also shown that validate the commercial implementations relative to a research code. This paper is a companion to those documents in that it provides additional detail on the actual input files and solution procedures and serves as a repository for ASCII text versions of the input files necessary for duplication of the available results.
Kataoka, Heloneida C; Cole, Nina D; Flint, Douglas A
2006-12-01
In a laboratory study, 318 student participants (148 male, 169 female, and one who did not report sex; M age 25.0, SD = 6.0) in introductory organizational behavior classes responded to scenarios in which performance appraisal resulted in either employee promotion or termination. Each scenario had varying levels of three procedural justice criteria for performance appraisal. For both promotion and termination outcomes, analysis showed that, as the number of criteria increased, perceptions of procedural fairness increased. A comparison between the two outcomes showed that perceptions of fairness were significantly stronger for the promotion outcome than for termination.
Applications of remote sensing, volume 3
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator)
1977-01-01
The author has identified the following significant results. Of the four change detection techniques (post-classification comparison, delta data, spectral/temporal, and layered spectral/temporal), the post-classification comparison was selected for further development. This selection was based upon the test performances of the four change detection methods, the straightforwardness of the procedures, and the output products desired. A standardized, modified supervised classification procedure for analyzing the Texas coastal zone data was compiled. This procedure was developed so that all quadrangles in the study area would be classified using similar analysis techniques to allow for meaningful comparisons and evaluations of the classifications.
Reflectance of vegetation, soil, and water
NASA Technical Reports Server (NTRS)
Wiegand, C. L. (Principal Investigator)
1973-01-01
There are no author-identified significant results in this report. This report deals with the selection of the best channels from the 24-channel aircraft data to represent crop and soil conditions. A three-step procedure has been developed that involves using univariate statistics and an F-ratio test to indicate the best 14 channels. From the 14, the 10 best channels are selected by a multivariate stochastic process. The third step involves the pattern recognition procedures developed in the data analysis plan. Indications are that the procedures in use are satisfactory and will extract the desired information from the data.
NASA Technical Reports Server (NTRS)
Dailey, C. L.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The development and testing of an analysis procedure developed to improve the consistency and objectivity of crop identification using Landsat data is described. The procedure was developed to identify corn and soybean crops in the U.S. corn belt region. It consists of a series of decision points arranged in a tree-like structure, the branches of which lead an analyst to crop labels. The specific decision logic is designed to maximize the objectivity of the identification process and to promote the possibility of future automation. Significant results are summarized.
Analysis of vibrational load influence upon passengers in trains with a compulsory body tilt
NASA Astrophysics Data System (ADS)
Antipin, D. Ya; Kobishchanov, V. V.; Lapshin, V. F.; Mitrakov, A. S.; Shorokhov, S. G.
2017-02-01
A procedure is offered for forecasting the influence of vibrational load upon passengers of rolling stock equipped with a system of compulsory body tilt on railroad curves. The procedure is based on the use of computer simulation methods and the application of solid-state models of anthropometric mannequins. As a result of the investigations carried out, criteria for estimating the comfort level of passengers in the rolling stock under consideration are substantiated. The procedure is demonstrated using the example of a promising domestic rolling stock with compulsory body tilt on railroad curves.
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
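A rough sketch of the five scatterplot tests named above, using scipy.stats; Levene's test stands in for the variance/interquartile-range statistics of item (4), and the chi-square test is applied to grid-cell counts as a simplified version of item (5). The data are fabricated:

```python
import numpy as np
from scipy import stats

def scatterplot_tests(x, y, n_bins=5):
    """p-values for tests (1)-(5) applied to one (input, output) scatterplot."""
    p = {"linear": stats.pearsonr(x, y)[1],          # (1) linear relationship
         "monotonic": stats.spearmanr(x, y)[1]}      # (2) monotonic relationship
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    groups = [y[idx == k] for k in range(n_bins)]
    p["central_tendency"] = stats.kruskal(*groups)[1]   # (3) trend in medians
    p["variability"] = stats.levene(*groups)[1]         # (4) trend in spread
    counts, _, _ = np.histogram2d(x, y, bins=n_bins)    # (5) non-randomness
    p["randomness"] = stats.chisquare(counts.ravel())[1]
    return p

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 300)                           # e.g., a sampled input
y = x**2 + rng.normal(0, 0.05, x.size)               # model output vs input
print({k: round(v, 4) for k, v in scatterplot_tests(x, y).items()})
```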
Pontis, Alessandro; Sedda, Federica; Mereu, Liliana; Podda, Mauro; Melis, Gian Benedetto; Pisanu, Adolfo; Angioni, Stefano
2016-09-01
To critically appraise published randomized controlled trials (RCTs) comparing laparo-endoscopic single-site (LESS) and multi-port laparoscopic (MPL) approaches in gynecologic operative surgery; the aim was to assess the feasibility, safety, and potential benefits of LESS in comparison to MPL. A systematic review and meta-analysis of eleven RCTs. Women undergoing operative LESS and MPL gynecologic procedures (hysterectomy, cystectomy, salpingectomy, salpingo-oophorectomy, myomectomy). Outcomes evaluated were as follows: postoperative overall morbidity; postoperative pain at 6, 12, 24, and 48 h; cosmetic patient satisfaction; conversion rate; body mass index (BMI); operative time; blood loss; hemoglobin drop; and postoperative hospital stay. Eleven RCTs comprising 956 women with gynecologic surgical disease randomized to either LESS (477) or MPL procedures (479) were analyzed systematically. The LESS approach had a longer operative time and better cosmetic results than MPL, but neither difference reached statistical significance. Operative outcomes, postoperative recovery, postoperative morbidity, and patient satisfaction are similar for LESS and MPL. LESS may be considered an alternative to MPL with comparable feasibility and safety in gynecologic operative procedures. However, it does not offer the expected advantages in terms of postoperative pain and cosmetic satisfaction.
Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela
2013-05-01
The present study provides a novel MATLAB-based parameter estimation procedure for the individual assessment of the hepatic insulin degradation (HID) process from standard frequently sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of the HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst estimated by SAAM II and maintained all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
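The Gauss-Newton/Levenberg-Marquardt alternation is specific to the authors' MATLAB code; as a loose illustration of the underlying fitting step, the sketch below fits a hypothetical mono-exponential stand-in model with SciPy's MINPACK-based LM solver (Gauss-Newton being the zero-damping limit of LM):

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, t):
    """Hypothetical mono-exponential stand-in for the HID kinetics model."""
    a, k = params
    return a * np.exp(-k * t)

def residuals(params, t, y):
    return model(params, t) - y

rng = np.random.default_rng(3)
t = np.linspace(0, 180, 30)                    # minutes, FSIGTT-like grid
y = model((10.0, 0.05), t) + rng.normal(0, 0.2, t.size)

# method="lm" selects MINPACK's Levenberg-Marquardt implementation.
fit = least_squares(residuals, x0=(5.0, 0.01), args=(t, y), method="lm")
print(fit.x)                                   # recovers ~ (10.0, 0.05)
```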
Cassetta, Michele; Perrotti, Vittoria; Calasso, Sabrina; Piattelli, Adriano; Sinjari, Bruna; Iezzi, Giovanna
2015-10-01
The aim of this study was to perform a 2 months clinical and histological comparison of autologous bone, porcine bone, and a 50 : 50 mixture in maxillary sinus augmentation procedures. A total of 10 consecutive patients, undergoing two-stage sinus augmentation procedures using 100% autologous bone (Group A), 100% porcine bone (Group B), and a 50 : 50 mixture of autologous and porcine bone (Group C) were included in this study. After a 2-month healing period, at the time of implant insertion, clinical evaluation was performed and bone core biopsies were harvested and processed for histological analysis. The postoperative healing was uneventful regardless of the materials used for the sinus augmentation procedures. The histomorphometrical analysis revealed comparable percentages of newly formed bone, marrow spaces, and residual grafted material in the three groups. The clinical and histological results of this study indicated that porcine bone alone or in combination with autologous bone are biocompatible and osteoconductive materials and can be successfully used in sinus augmentation procedures. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Makhni, Eric C; Lamba, Nayan; Swart, Eric; Steinhaus, Michael E; Ahmad, Christopher S; Romeo, Anthony A; Verma, Nikhil N
2016-09-01
To compare the cost-effectiveness of arthroscopic revision instability repair and Latarjet procedure in treating patients with recurrent instability after initial arthroscopic instability repair. An expected-value decision analysis of revision arthroscopic instability repair compared with Latarjet procedure for recurrent instability followed by failed repair attempt was modeled. Inputs regarding procedure cost, clinical outcomes, and health utilities were derived from the literature. Compared with revision arthroscopic repair, Latarjet was less expensive ($13,672 v $15,287) with improved clinical outcomes (43.78 v 36.76 quality-adjusted life-years). Both arthroscopic repair and Latarjet were cost-effective compared with nonoperative treatment (incremental cost-effectiveness ratios of 3,082 and 1,141, respectively). Results from sensitivity analyses indicate that under scenarios of high rates of stability postoperatively, along with improved clinical outcome scores, revision arthroscopic repair becomes increasingly cost-effective. Latarjet procedure for failed instability repair is a cost-effective treatment option, with lower costs and improved clinical outcomes compared with revision arthroscopic instability repair. However, surgeons must still incorporate clinical judgment into treatment algorithm formation. Level IV, expected value decision analysis. Copyright © 2016. Published by Elsevier Inc.
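The comparison above reduces to an incremental cost-effectiveness ratio (ICER); a minimal sketch using the figures quoted in the abstract:

```python
def icer(cost_a, eff_a, cost_b, eff_b):
    """Incremental cost-effectiveness ratio of strategy A over strategy B,
    in dollars per quality-adjusted life-year (QALY) gained."""
    return (cost_a - cost_b) / (eff_a - eff_b)

# Latarjet: $13,672 and 43.78 QALYs; revision arthroscopic repair:
# $15,287 and 36.76 QALYs (values quoted in the abstract).
print(icer(13672, 43.78, 15287, 36.76))
# Negative ICER: Latarjet is dominant (less costly and more effective),
# so no cost-per-QALY trade-off has to be priced.
```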
Cheng, Keding; Sloan, Angela; McCorrister, Stuart; Peterson, Lorea; Chui, Huixia; Drebot, Mike; Nadon, Celine; Knox, J David; Wang, Gehua
2014-12-01
The need for rapid and accurate H typing is evident during Escherichia coli outbreak situations. This study explores the transition of MS-H, a method originally developed for rapid H-antigen typing of E. coli using LC-MS/MS of flagella digests of reference strains and some clinical strains, to E. coli isolates in a clinical scenario through quantitative analysis and method validation. Motile and nonmotile strains were examined in batches to simulate a clinical sample scenario. Various LC-MS/MS batch run procedures and MS-H typing rules were compared and summarized through quantitative analysis of the MS-H data output to develop a standard method. Label-free quantitative analysis of MS-H typing data proved very useful for examining the quality of MS-H results and the effects of sample carryover from motile E. coli isolates. Based on this, a refined procedure and a protein identification rule specific for clinical MS-H typing were established and validated. With an LC-MS/MS batch run procedure and database search parameters unique to E. coli MS-H typing, the standard procedure maintained high accuracy and specificity in clinical situations, and its potential to be used in a clinical setting was clearly established. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Surgeon specialization and operative mortality in United States: retrospective analysis
Dalton, Maurice; Cutler, David M; Birkmeyer, John D; Chandra, Amitabh
2016-01-01
Objective To measure the association between a surgeon’s degree of specialization in a specific procedure and patient mortality. Design Retrospective analysis of Medicare data. Setting US patients aged 66 or older enrolled in traditional fee for service Medicare. Participants 25 152 US surgeons who performed one of eight procedures (carotid endarterectomy, coronary artery bypass grafting, valve replacement, abdominal aortic aneurysm repair, lung resection, cystectomy, pancreatic resection, or esophagectomy) on 695 987 patients in 2008-13. Main outcome measure Relative risk reduction in risk adjusted and volume adjusted 30 day operative mortality between surgeons in the bottom quarter and top quarter of surgeon specialization (defined as the number of times the surgeon performed the specific procedure divided by his/her total operative volume across all procedures). Results For all four cardiovascular procedures and two out of four cancer resections, a surgeon’s degree of specialization was a significant predictor of operative mortality independent of the number of times he or she performed that procedure: carotid endarterectomy (relative risk reduction between bottom and top quarter of surgeons 28%, 95% confidence interval 0% to 48%); coronary artery bypass grafting (15%, 4% to 25%); valve replacement (46%, 37% to 53%); abdominal aortic aneurysm repair (42%, 29% to 53%); lung resection (28%, 5% to 46%); and cystectomy (41%, 8% to 63%). In five procedures (carotid endarterectomy, valve replacement, lung resection, cystectomy, and esophagectomy), the relative risk reduction from surgeon specialization was greater than that from surgeon volume for that specific procedure. Furthermore, surgeon specialization accounted for 9% (coronary artery bypass grafting) to 100% (cystectomy) of the relative risk reduction otherwise attributable to volume in that specific procedure. Conclusion For several common procedures, surgeon specialization was an important predictor of operative mortality independent of volume in that specific procedure. When selecting a surgeon, patients, referring physicians, and administrators assigning operative workload may want to consider a surgeon’s procedure specific volume as well as the degree to which a surgeon specializes in that procedure. PMID:27444190
Fobil, Julius N.; Kumoji, Robert; Armah, Henry B.; Aryee, Eunice; Bilson, Francis; Carboo, Derick; Rodrigues, Frederick K.; Meyer, Christian G.; May, Juergen; Kraemer, Alexander
2011-01-01
The study of cause of death certification remains a largely neglected field in many developing countries, including Ghana. Yet, mortality information is crucial for establishing mortality patterns over time and for estimating mortality attributed to specific causes. In Ghana, autopsies remain the appropriate option for determining the cause of deaths occurring in homes and those occurring within 48 hours after admission into health facilities. Although these organ-based autopsies may generate convincing results and are considered the gold standard tools for ascertainments of causes of death, procedural and practical constraints could limit the extent to which autopsy results can be accepted and/or trusted. The objective of our study was to identify and characterise the procedural and practical constraints as well as to assess their potential effects on autopsy outcomes in Ghana. We interviewed 10 Ghanaian pathologists and collected and evaluated procedural manuals and operational procedures for the conduct of autopsies. A characterisation of the operational constraints and the Delphi analysis of their potential influence on the quality of mortality data led to a quantification of the validity threats as moderate (average expert panel score = 1) in the generality of the autopsy operations in Ghana. On the basis of the impressions of the expert panel, it was concluded that mortality data generated from autopsies in urban settings in Ghana were of sufficiently high quality to guarantee valid use in health analysis. PMID:28299049
A Finite Element Procedure for Calculating Fluid-Structure Interaction Using MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Chargin, Mladen; Gartmeier, Otto
1990-01-01
This report is intended to serve two purposes. The first is to present a survey of the theoretical background of the dynamic interaction between an inviscid, compressible fluid and an elastic structure. Section one presents a short survey of the application of the finite element method (FEM) to the area of fluid-structure interaction (FSI). Section two describes the mathematical foundation of the structure and fluid with special emphasis on the fluid. The main steps in establishing the finite element (FE) equations for the fluid-structure coupling are discussed in section three. The second purpose is to demonstrate the application of MSC/NASTRAN to the solution of FSI problems. Some specific topics, such as the fluid-structure analogy, acoustic absorption, and acoustic contribution analysis, are described in section four. Section five deals with the organization of the acoustic procedure flowchart. Section six includes the most important information that a user needs for applying the acoustic procedure to practical FSI problems. Beginning with some rules concerning the FE modeling of the coupled system, the NASTRAN USER DECKs for the different steps are described. The goal of section seven is to demonstrate the use of the acoustic procedure with some examples. This demonstration includes an analytic verification of selected FE results. The analytical description considers only some aspects of FSI and is not intended to be mathematically complete. Finally, section eight presents an application of the acoustic procedure to vehicle interior acoustic analysis with selected results.
Khanna, Rajesh; Handa, Aashish; Virk, Rupam Kaur; Ghai, Deepika; Handa, Rajni Sharma; Goel, Asim
2017-01-01
Background: The process of cleaning and shaping the canal is not an easy goal to attain, as canal curvature plays a significant role during the instrumentation of curved canals. Aim: The present in vivo study was conducted to evaluate procedural errors during the preparation of curved root canals using hand Nitiflex and rotary K3XF instruments. Materials and Methods: Procedural errors such as ledge formation, instrument separation, and perforation (apical, furcal, strip) were determined in sixty patients, divided into two groups. In Group I, thirty teeth in thirty patients were prepared using the hand Nitiflex system, and in Group II, thirty teeth in thirty patients were prepared using the K3XF rotary system. The evaluation was done clinically as well as radiographically. The results recorded from both groups were compiled and subjected to statistical analysis. Statistical Analysis: The chi-square test was used to compare the procedural errors (instrument separation, ledge formation, and perforation). Results: In the present study, both hand Nitiflex and rotary K3XF showed ledge formation and instrument separation, although both errors were less frequent with the rotary K3XF file system than with hand Nitiflex. No perforation was seen in either instrument group. Conclusion: Canal curvature plays a significant role during the instrumentation of curved canals. Procedural errors such as ledge formation and instrument separation were less frequent with the rotary K3XF file system than with hand Nitiflex. PMID:29042727
SU-E-T-635: Process Mapping of Eye Plaque Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huynh, J; Kim, Y
Purpose: To apply a risk-based assessment and analysis technique (AAPM TG 100) to eye plaque brachytherapy treatment of ocular melanoma. Methods: The roles and responsibilities of personnel involved in eye plaque brachytherapy are defined for the retinal specialist, radiation oncologist, nurse, and medical physicist. The entire procedure was examined carefully. First, major processes were identified and then details for each major process were followed. Results: Seventy-one total potential modes were identified. The eight major processes (and corresponding detailed numbers of modes) are patient consultation (2 modes), pretreatment tumor localization (11), treatment planning (13), seed ordering and calibration (10), eye plaque assembly (10), implantation (11), removal (11), and deconstruction (3), respectively. Half of the total modes (36 modes) are related to the physicist, although the physicist is not involved in processes such as the actual procedures of suturing and removing the plaque. Conclusion: Not only can failure modes arise from physicist-related procedures such as treatment planning and source activity calibration, but they can also exist in more clinical procedures performed by other medical staff. Improving accurate communication for non-physicist-related clinical procedures could potentially be an approach to preventing human errors. More rigorous physics double checks would reduce the errors in physicist-related procedures. Eventually, based on this detailed process map, failure mode and effects analysis (FMEA) will identify the top tiers of modes by ranking all possible modes with a risk priority number (RPN). For those high-risk modes, fault tree analysis (FTA) will provide possible preventive action plans.
The Gap Procedure: for the identification of phylogenetic clusters in HIV-1 sequence data.
Vrbik, Irene; Stephens, David A; Roger, Michel; Brenner, Bluma G
2015-11-04
In the context of infectious disease, sequence clustering can be used to provide important insights into the dynamics of transmission. Cluster analysis is usually performed using a phylogenetic approach whereby clusters are assigned on the basis of sufficiently small genetic distances and high bootstrap support (or posterior probabilities). The computational burden involved in this phylogenetic threshold approach is a major drawback, especially when a large number of sequences are being considered. In addition, this method requires a skilled user to specify the appropriate threshold values, which may vary widely depending on the application. This paper presents the Gap Procedure, a distance-based clustering algorithm for the classification of DNA sequences sampled from individuals infected with the human immunodeficiency virus type 1 (HIV-1). Our heuristic algorithm bypasses the need for phylogenetic reconstruction, thereby supporting the quick analysis of large genetic data sets. Moreover, this fully automated procedure relies on data-driven gaps in sorted pairwise distances to infer clusters, so no user-specified threshold values are required. The clustering results obtained by the Gap Procedure on both real and simulated data closely agree with those found using the threshold approach, while requiring only a fraction of the time to complete the analysis. Apart from the dramatic gains in computational time, the Gap Procedure is highly effective in finding distinct groups of genetically similar sequences and obviates the need for subjective user-specified values. The clusters of genetically similar sequences returned by this procedure can be used to detect patterns in HIV-1 transmission and thereby aid in the prevention, treatment and containment of the disease.
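A toy sketch of the core idea, not the published algorithm: take the largest gap in the sorted pairwise distances as a data-driven threshold, then cut single-linkage clusters at that threshold. Euclidean points stand in here for genetic distances between HIV-1 sequences:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def gap_threshold(dists):
    """Data-driven cut: the largest gap in the sorted pairwise distances
    separates within-cluster from between-cluster distances."""
    d = np.sort(dists)
    i = int(np.argmax(np.diff(d)))
    return 0.5 * (d[i] + d[i + 1])

def gap_clusters(points):
    dists = pdist(points)                      # Euclidean here; the paper
    t = gap_threshold(dists)                   # uses genetic distances
    return fcluster(linkage(dists, method="single"), t, criterion="distance")

rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
print(gap_clusters(pts))                       # two clear clusters expected
```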
Duncan, James R; Kline, Benjamin; Glaiberman, Craig B
2007-04-01
To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.
What the Tweets Say: A Critical Analysis of Twitter Research in Language Learning from 2009 to 2016
ERIC Educational Resources Information Center
Hattem, David; Lomicka, Lara
2016-01-01
This study presents an overview and critical analysis of the literature related to Twitter and language learning published from 2009 to 2016. Seventeen studies were selected for inclusion based on a four-phase identification procedure, which helped us to identify published studies that resulted in a content analysis of themes in the articles…
Self-referred whole-body CT imaging: current implications for health care consumers.
Illes, Judy; Fan, Ellen; Koenig, Barbara A; Raffin, Thomas A; Kann, Dylan; Atlas, Scott W
2003-08-01
To conduct an empirical analysis of self-referred whole-body computed tomography (CT) and develop a profile of the geographic and demographic distribution of centers, types of services and modalities, costs, and procedures for reporting results. An analysis was conducted of Web sites for imaging centers accepting self-referred patients identified by two widely used Internet search engines with large indexes. These Web sites were analyzed for geographic location, type of screening center, services, costs, and procedures for managing imaging results. Demographic data were extrapolated for analysis on the basis of center location. Descriptive statistics, such as frequencies, means, SDs, ranges, and CIs, were generated to describe the characteristics of the samples. Data were compared with national norms by using a distribution-free method for calculating a 95% CI (P <.05) for the median. Eighty-eight centers identified with the search methods were widely distributed across the United States, with a concentration on both coasts. Demographic analysis further situated them in areas of the country characterized by a population that consisted largely of European Americans (P <.05) and individuals of higher education (P <.05) and socioeconomic status (P <.05). Forty-seven centers offered whole-body screening; heart and lung examinations were most frequently offered. Procedures for reporting results were highly variable. The geographic distribution of the centers suggests target populations of educated health-conscious consumers who can assume high out-of-pocket costs. Guidelines developed from within the profession and further research are needed to ensure that benefits of these services outweigh risks to individuals and the health care system. Copyright RSNA, 2003.
1983-10-13
[Fragmentary record: excerpts from the report's reference list (e.g., determination of acid, tannin, and lignin in natural waters; a short course in quantitative analysis) and its table of contents, covering the experimental procedure, results of the preliminary investigation of the SDI, before-and-after membrane filtration analysis, and a permanganate demand test literature review (permanganate to predict fouling; detection and analysis of permanganate by spectrophotometry).]
Nonlinear probabilistic finite element models of laminated composite shells
NASA Technical Reports Server (NTRS)
Engelstad, S. P.; Reddy, J. N.
1993-01-01
A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed, and results are presented in the form of the mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation from the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data are compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.
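The first-order second-moment (FOSM) technique propagates input means and covariance through a response function via its gradient: E[g] ≈ g(μ) and Var[g] ≈ ∇gᵀ Σ ∇g. A generic sketch, with a hypothetical beam-deflection response standing in for the shell model:

```python
import numpy as np

def fosm(g, mu, cov, h=1e-6):
    """First-order second-moment estimate of the mean and variance of g(X)
    for inputs X with mean vector mu and covariance matrix cov."""
    mu = np.asarray(mu, dtype=float)
    grad = np.empty_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu)
        e[i] = h * max(1.0, abs(mu[i]))        # scaled central difference
        grad[i] = (g(mu + e) - g(mu - e)) / (2.0 * e[i])
    return g(mu), grad @ np.asarray(cov) @ grad

# Hypothetical response: midspan deflection P L^3 / (48 E I) of a beam with
# random load P and modulus E; ply-level properties would enter similarly.
L, I = 2.0, 8.0e-6
deflection = lambda x: x[0] * L**3 / (48.0 * x[1] * I)
mean, var = fosm(deflection, mu=[1.0e4, 7.0e10], cov=np.diag([1.0e6, 1.0e18]))
print(mean, np.sqrt(var))                      # mean and s.d. of the response
```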
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-01-01
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in basically the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example, we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
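A minimal sketch of the extended-likelihood idea, assuming a linear model with a common Gaussian Type B offset ν as the nuisance parameter; all names, noise levels, and data are fabricated:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_ext_lik(params, x, y, sigma, sigma_b):
    """-log extended likelihood: the Gaussian data term plus the -log of a
    Gaussian PDF constraining the common Type B offset nu (the nuisance)."""
    a, b, nu = params
    resid = y - (a + b * x + nu)
    return 0.5 * np.sum((resid / sigma) ** 2) + 0.5 * (nu / sigma_b) ** 2

rng = np.random.default_rng(5)
x = np.linspace(0.0, 10.0, 25)
y = 1.0 + 0.5 * x + 0.05 + rng.normal(0.0, 0.1, x.size)   # 0.05: true offset

# Joint fit; re-minimizing over nu at each fixed (a, b) would trace the
# profile likelihood from which nu-free interval estimates are read off.
fit = minimize(neg_log_ext_lik, x0=np.zeros(3), args=(x, y, 0.1, 0.05))
print(fit.x)   # (a, b, nu); intercept and offset are only jointly identified
```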
Dynamic variational asymptotic procedure for laminated composite shells
NASA Astrophysics Data System (ADS)
Lee, Chang-Yong
Unlike previously published shell theories, the two main parts of this thesis are devoted to the asymptotic construction of a refined theory for composite laminated shells valid over a wide range of frequencies and wavelengths. The resulting theory is applicable to shells in which each layer is made of materials with monoclinic symmetry. It enables one to analyze shell dynamic responses within both long-wavelength, low- and high-frequency vibration regimes. It also leads to energy functionals that are both positive definite and sufficiently simple for all wavelengths. This whole procedure was first performed analytically. From the insight gained from the procedure, a finite element version of the analysis was then developed, and a corresponding computer program, DVAPAS, was written. DVAPAS can obtain the generalized 2-D constitutive law and accurately recover the 3-D results for stress and strain in composite shells. Some independent work will be needed to develop the corresponding 2-D surface analysis associated with the present theory and to continue towards full verification and validation of the present process by comparison with available published works.
Criado-Fornelio, A; Buling, A; Barba-Carretero, J C
2009-02-01
We developed and validated a real-time polymerase chain reaction (PCR) assay using fluorescent hybridization probes and melting curve analysis to identify the PKD1 exon 29 (C→A) mutation, which is implicated in polycystic kidney disease of cats. DNA was isolated from peripheral blood of 20 Persian cats. The use of the new real-time PCR and melting curve analysis on these samples indicated that 13 cats (65%) were wild-type homozygotes and seven cats (35%) were heterozygotes. Both PCR-RFLP and sequencing procedures were in full agreement with the real-time PCR test results. Sequence analysis showed that the mutant gene had the expected base change compared to the wild-type gene. The new procedure is not only very reliable but also faster than the techniques currently applied for diagnosis of the mutation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillespie, B.M.; Stromatt, R.W.; Ross, G.A.
This data package contains the results obtained by Pacific Northwest Laboratory (PNL) staff in the characterization of samples for the 101-SY Hydrogen Safety Project. The samples were submitted for analysis by Westinghouse Hanford Company (WHC) under the Technical Project Plan (TPP) 17667 and the Quality Assurance Plan MCS-027. They came from a core taken during Window 'C' after the May 1991 gas release event. The analytical procedures required for analysis were defined in the Test Instructions (TI) prepared by the PNL 101-SY Analytical Chemistry Laboratory (ACL) Project Management Office in accordance with the TPP and the QA Plan. The requested analysis for these samples was volatile organic analysis. The quality control (QC) requirements for each sample are defined in the Test Instructions for each sample. The QC requirements outlined in the procedures and requested in the WHC statement of work were followed.
Brief functional analysis and treatment of a vocal tic.
Watson, T S; Sterling, H E
1998-01-01
This study sought to extend functional methodology to the assessment and treatment of habits. After a descriptive assessment indicated that coughing occurred while eating, a brief functional analysis suggested that social attention was the maintaining variable. Results demonstrated that treatment, derived from the assessment and analysis data, rapidly eliminated the cough. We discuss the appropriateness of using functional analysis procedures for deriving treatments for habits in a clinical setting.
2008-03-01
…investigated, as well as the methodology used. Chapter IV presents the data collection and analysis procedures, and the resulting analysis and… interpolate the data, although a non-interpolating model is possible. For this research, Design and Analysis of Computer Experiments (DACE) is used… followed by the analysis. 4.1. Testing Approach: The initial SMOMADS algorithm used for this research was acquired directly from Walston [70]. The…
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Hilburger, Mark W.
2013-01-01
This document outlines a Modeling and Analysis Plan (MAP) to be followed by the SBKF analysts. It includes instructions on modeling and analysis formulation and execution, model verification and validation, identifying sources of error and uncertainty, and documentation. The goal of this MAP is to provide a standardized procedure that ensures uniformity and quality of the results produced by the project and corresponding documentation.
Matalon, Shanna A; Chikarmane, Sona A; Yeh, Eren D; Smith, Stacy E; Mayo-Smith, William W; Giess, Catherine S
2018-03-19
Increased attention to quality and safety has led to a re-evaluation of the classic apprenticeship model for procedural training. Many have proposed simulation as a supplementary teaching tool. The purpose of this study was to assess radiology resident exposure to procedural training and procedural simulation. An IRB-exempt online survey was distributed to current radiology residents in the United States by e-mail. Survey results were summarized using frequency and percentages. Chi-square tests were used for statistical analysis where appropriate. A total of 353 current residents completed the survey. 37% (n = 129/353) of respondents had never used procedure simulation. Of the residents who had used simulation, most did not do so until after having already performed procedures on patients (59%, n = 132/223). The presence of a dedicated simulation center was reported by over half of residents (56%, n = 196/353) and was associated with prior simulation experience (P = 0.007). Residents who had not had procedural simulation were somewhat likely or highly likely (3 and 4 on a 4-point Likert scale) to participate if it were available (81%, n = 104/129). Simulation training was associated with higher comfort levels in performing procedures (P < 0.001). Although procedural simulation training is associated with higher comfort levels when performing procedures, there is variable use in radiology resident training and its use is not currently optimized. Given the increased emphasis on patient safety, these results suggest the need to increase procedural simulation use during residency, including an earlier introduction to simulation before patient exposure. Copyright © 2018 Elsevier Inc. All rights reserved.
Traeger, Adrian C; Skinner, Ian W; Hübscher, Markus; Lee, Hopin; Moseley, G Lorimer; Nicholas, Michael K; Henschke, Nicholas; Refshauge, Kathryn M; Blyth, Fiona M; Main, Chris J; Hush, Julia M; Pearce, Garry; Lo, Serigne; McAuley, James H
Statistical analysis plans increase the transparency of decisions made in the analysis of clinical trial results. The purpose of this paper is to detail the planned analyses for the PREVENT trial, a randomized, placebo-controlled trial of patient education for acute low back pain. We report the pre-specified principles, methods, and procedures to be adhered to in the main analysis of the PREVENT trial data. The primary outcome analysis will be based on Mixed Models for Repeated Measures (MMRM), which can test treatment effects at specific time points, and the assumptions of this analysis are outlined. We also outline the treatment of secondary outcomes and planned sensitivity analyses. We provide decisions regarding the treatment of missing data, handling of descriptive and process measure data, and blinded review procedures. Making public the pre-specified statistical analysis plan for the PREVENT trial minimizes the potential for bias in the analysis of trial data, and in the interpretation and reporting of trial results. ACTRN12612001180808 (https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?ACTRN=12612001180808). Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Published by Elsevier Editora Ltda. All rights reserved.
Use of a Modified Chaining Procedure with Textual Prompts to Establish Intraverbal Storytelling.
Valentino, Amber L; Conine, Daniel E; Delfs, Caitlin H; Furlow, Christopher M
2015-06-01
Echoic, tact, and textual transfer procedures have proven successful in establishing simple intraverbals (Braam and Poling, Applied Research in Mental Retardation, 4, 279-302, 1983; Luciano, Applied Research in Mental Retardation, 102, 346-357, 1986; Watkins et al., The Analysis of Verbal Behavior, 7, 69-81, 1989). However, these strategies may be ineffective for some children due to the complexity of the targeted intraverbals. The current study investigated the use of a novel procedure that combined a modified chaining procedure with textual prompts to establish intraverbal behavior in the form of telling short stories. Visual prompts and rule statements were used with some of the participants in order to produce the desired behavior change. Results indicated that the procedure was effective for teaching the retelling of short stories to three children with autism.
User's operating procedures. Volume 2: Scout project financial analysis program
NASA Technical Reports Server (NTRS)
Harris, C. G.; Harris, D. K.
1985-01-01
A review is presented of the user's operating procedures for the Scout Project Automatic Data System, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project Office tasks, including engineering, financial, managerial, and clerical support. This volume, the second of three, provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user-friendly menu drivers.
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled-diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objectives and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.
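A toy sketch of the coupled optimizer/analysis-package arrangement described above, using Powell's conjugate-direction method from SciPy as a stand-in for the locally unconstrained optimizer; the three chained "analysis programs" and their formulas are purely illustrative assumptions, not the codes from the study.

```python
from scipy.optimize import minimize

def blade_geometry(p):
    # stand-in for the geometry program: two shape parameters (hypothetical)
    camber, thickness = p
    return {"camber": camber, "thickness": thickness}

def inviscid_flow(geom):
    # stand-in for the inviscid-flow program: peak surface velocity rises
    # with camber (hypothetical surrogate formula)
    return 1.0 + 0.8 * geom["camber"] ** 2

def boundary_layer_loss(geom, peak_velocity):
    # stand-in for the boundary-layer program: diffusion loss relative to a
    # target peak velocity plus a thickness trade-off (all hypothetical)
    return (peak_velocity - 1.1) ** 2 + geom["thickness"] + 0.01 / geom["thickness"]

def objective(p):
    geom = blade_geometry(p)
    if geom["thickness"] <= 0:        # keep the search physically meaningful
        return float("inf")
    return boundary_layer_loss(geom, inviscid_flow(geom))

# Powell = conjugate directions, matching the unconstrained case named above.
result = minimize(objective, x0=[0.5, 0.1], method="Powell")
print(result.x)
```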
Data on DNA gel sample load, gel electrophoresis, PCR and cost analysis.
Kuhn, Ramona; Böllmann, Jörg; Krahl, Kathrin; Bryant, Isaac Mbir; Martienssen, Marion
2018-02-01
The data presented in this article provide supporting information to the related research article "Comparison of ten different DNA extraction procedures with respect to their suitability for environmental samples" (Kuhn et al., 2017) [1]. In that article, we compared the suitability of ten selected DNA extraction methods based on DNA quality, purity, quantity and applicability to universal PCR. Here we provide the data on the specific DNA gel sample load, all unreported gel images of crude DNA and PCR results, and the complete cost analysis for all tested extraction procedures and in addition two commercial DNA extraction kits for soil and water.
Experimental and numerical research on forging with torsion
NASA Astrophysics Data System (ADS)
Petrov, Mikhail A.; Subich, Vadim N.; Petrov, Pavel A.
2017-10-01
Increasing the efficiency of technological blank-production operations is closely tied to computer-aided technologies (CAx). On the one hand, a practical result represents reality exactly; on the other hand, developing a new process demands unrestricted resources, which are limited in SMEs. CAx tools were successfully applied to the development of a new forging-with-torsion process and to the analysis of its results. It was shown that the theoretical calculations are confirmed both in practice and by numerical simulation. The most commonly used structural materials were studied, and the torque angles were determined. The simulated results were verified experimentally.
Statistical Reform in School Psychology Research: A Synthesis
ERIC Educational Resources Information Center
Swaminathan, Hariharan; Rogers, H. Jane
2007-01-01
Statistical reform in school psychology research is discussed in terms of research designs, measurement issues, statistical modeling and analysis procedures, interpretation and reporting of statistical results, and finally statistics education.
NASA Technical Reports Server (NTRS)
Haralick, R. H. (Principal Investigator); Bosley, R. J.
1974-01-01
The author has identified the following significant results. A procedure was developed to extract cross-band textural features from ERTS MSS imagery. Evolving from a single image texture extraction procedure which uses spatial dependence matrices to measure relative co-occurrence of nearest neighbor grey tones, the cross-band texture procedure uses the distribution of neighboring grey tone N-tuple differences to measure the spatial interrelationships, or co-occurrences, of the grey tone N-tuples present in a texture pattern. In both procedures, texture is characterized in such a way as to be invariant under linear grey tone transformations. However, the cross-band procedure complements the single image procedure by extracting texture information and spectral information contained in ERTS multi-images. Classification experiments show that when used alone, without spectral processing, the cross-band texture procedure extracts more information than the single image texture analysis. Results show an improvement in average correct classification from 86.2% to 88.8% for ERTS image no. 1021-16333 with the cross-band texture procedure. However, when used together with spectral features, the single image texture plus spectral features perform better than the cross-band texture plus spectral features, with an average correct classification of 93.8% and 91.6%, respectively.
Application and Evaluation of an Expert Judgment Elicitation Procedure for Correlations.
Zondervan-Zwijnenburg, Mariëlle; van de Schoot-Hubeek, Wenneke; Lek, Kimberley; Hoijtink, Herbert; van de Schoot, Rens
2017-01-01
The purpose of the current study was to apply and evaluate a procedure to elicit expert judgments about correlations, and to update this information with empirical data. The result is a face-to-face group elicitation procedure with, as its central element, a trial roulette question that elicits experts' judgments expressed as distributions. During the elicitation procedure, a concordance probability question was used to provide feedback to the experts on their judgments. We evaluated the elicitation procedure in terms of validity and reliability by means of an application with a small sample of experts. Validity means that the elicited distributions accurately represent the experts' judgments. Reliability concerns the consistency of the elicited judgments over time. Four behavioral scientists provided their judgments with respect to the correlation between cognitive potential and academic performance for two separate populations enrolled at a specific school in the Netherlands that provides special education to youth with severe behavioral problems: youth with autism spectrum disorder (ASD), and youth with diagnoses other than ASD. Measures of face validity, feasibility, convergent validity, coherence, and intra-rater reliability showed promising results. Furthermore, the current study illustrates the use of the elicitation procedure and elicited distributions in a social science application. The elicited distributions were used as a prior for the correlation, and updated with data for both populations collected at the school of interest. The current study shows that the newly developed elicitation procedure combining the trial roulette method with the elicitation of correlations is a promising tool, and that the results of the procedure are useful as prior information in a Bayesian analysis.
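A minimal sketch of the final step described above: taking an elicited prior on a correlation and updating it with data on a grid. The Beta-shaped prior, the simulated scores, and the simplified bivariate-normal likelihood (which treats the standardized scores' variances as known) are all hypothetical stand-ins for the elicited distributions and school data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired scores (cognitive potential, academic performance).
data = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=40)
zx, zy = stats.zscore(data[:, 0]), stats.zscore(data[:, 1])

# Elicited prior on rho, encoded here as a stretched Beta on (-1, 1); the
# shape parameters stand in for the experts' trial-roulette allocations.
rho = np.linspace(-0.99, 0.99, 397)
prior = stats.beta.pdf((rho + 1) / 2, a=6, b=3)   # assumed expert prior

# Bivariate-normal log-likelihood of the standardized pairs vs. rho
# (additive constants dropped).
n = len(zx)
loglik = (-n / 2) * np.log(1 - rho**2) - (
    np.sum(zx**2) - 2 * rho * np.sum(zx * zy) + np.sum(zy**2)
) / (2 * (1 - rho**2))

post = prior * np.exp(loglik - loglik.max())      # unnormalized posterior
drho = rho[1] - rho[0]
post /= post.sum() * drho                         # normalize on the grid
print("posterior mean of rho:", (rho * post).sum() * drho)
```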
Direct and conceptual replications of the taxometric analysis of type a behavior.
Wilmot, Michael P; Haslam, Nick; Tian, Jingyuan; Ones, Deniz S
2018-05-17
We present direct and conceptual replications of the influential taxometric analysis of Type A Behavior (TAB; Strube, 1989), which reported evidence for the latent typology of the construct. Study 1, the direct replication (N = 2,373), duplicated sampling and methodological procedures of the original study, but results showed that the item indicators used in the original study lacked sufficient validity to unambiguously determine latent structure. Using improved factorial subscale indicators to further test the question, multiple taxometric procedures, in combination with parallel analyses of simulated data, failed to replicate the original typological finding. Study 2, the conceptual replication, tested the latent structure of the wider construct of TAB using the sample from the Caerphilly Prospective Study (N = 2,254), which contains responses to the three most widely used self-report measures of TAB: the Jenkins Activity Survey, Bortner scale, and Framingham scale. Factorial subscale indicators were derived from the measures and submitted to multiple taxometric procedures. Results of Study 2 converged with those of Study 1, providing clear evidence of latent dimensional structure. Overall, results suggest there is no evidence for the type in TAB. Findings imply that theoretical models of TAB, assessment practices, and data analytic procedures that assume a typology should be replaced by dimensional models, factorial subscale measures, and corresponding statistical approaches. Specific subscale measures that tap multiple Big Five trait domains, and show evidence of predictive utility, are also recommended. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Pressure Mapping and Efficiency Analysis of an EPPLER 857 Hydrokinetic Turbine
NASA Astrophysics Data System (ADS)
Clark, Tristan
A conceptual energy ship is presented to provide renewable energy. The ship, driven by the wind, drags a hydrokinetic turbine through the water. The power generated is used to run electrolysis on board, and the resulting hydrogen is taken back to shore to be used as an energy source. The basin efficiency (power divided by the product of thrust and velocity) of the hydrokinetic turbine (HKT) plays a vital role in this process. In order to extract the maximum allowable power from the flow, the blades need to be optimized. The structural analysis of the blade is important, as the blade will undergo high pressure loads from the water. A procedure for the analysis of a preliminary hydrokinetic turbine blade design is developed. The blade was designed by a non-optimized Blade Element Momentum Theory (BEMT) code. Six simulations were run, with varying mesh resolution, turbulence models, and flow region size. The procedure provides a detailed explanation of the entire process, from geometry and mesh generation to post-processing analysis tools. The efficiency results from the simulations are used to study the mesh resolution, flow region size, and turbulence models. The results are compared to the BEMT model design targets. Static pressure maps are created that can be used for structural analysis of the blades.
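For reference, the basin efficiency quoted above, read as power divided by (thrust × velocity), as a small Python helper; the example numbers are hypothetical.

```python
def basin_efficiency(power_w, thrust_n, velocity_ms):
    """Basin efficiency of a hydrokinetic turbine: extracted electrical
    power divided by the towing power (thrust times through-water speed).
    Dimensionless; consistent SI units assumed."""
    return power_w / (thrust_n * velocity_ms)

# e.g. 120 kW extracted at 80 kN thrust and 5 m/s gives eta = 0.3
print(basin_efficiency(120e3, 80e3, 5.0))
```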
Magnetic navigation system for percutaneous coronary intervention
Qi, Zhiyong; Wu, Bangwei; Luo, Xinping; Zhu, Jun; Shi, Haiming; Jin, Bo
2016-01-01
Abstract Background: A magnetic navigation system (MNS) allows calculation of the vessel coordinates in real space within the patient's chest for percutaneous coronary intervention (PCI). However, its impact on procedural parameters and clinical outcomes is still a matter of debate. To derive a more precise estimation of the relationship, a meta-analysis was performed. Methods and Results: Studies exploring the advantages of MNS were identified in English-language articles by search of Medline, Web of Science, and Cochrane Library databases (inception to October 2015). A standardized protocol was used to extract details on study design, region of origin, demographic data, lesion type, and clinical outcomes. The main outcome measures were contrast consumption, procedural success rate, contrast used for wire crossing, procedure time to cross the lesions, and the fluoroscopy time. A total of 12 clinical trials involving 2174 patients were included for analysis (902 patients in the magnetic PCI group and 1272 in the conventional PCI group). Overall, contrast consumption was decreased by 40.45 mL (95% confidence interval [CI] −70.98 to −9.92, P = 0.009) in the magnetic PCI group compared with the control group. In addition, magnetic PCI was associated with a significant decrease in procedural time of 2.17 minutes (95% CI −3.91 to −0.44, P = 0.01), and the total fluoroscopy time was significantly decreased by 1.43 minutes (95% CI −2.29 to −0.57, P = 0.001) in the magnetic PCI group. However, procedural success rate, contrast used for wire crossing, procedure time to cross the lesions, and the fluoroscopy time to cross the lesions showed no statistically significant difference between the two groups. Conclusion: The present meta-analysis indicated an improvement in overall contrast consumption, total procedural time, and fluoroscopy time in the magnetic PCI group. However, no significant advantage was observed in procedural success rate. PMID:27442645
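A minimal sketch of the fixed-effect inverse-variance pooling that underlies mean-difference summaries like those above, recovering per-study standard errors from reported 95% CIs; the three example studies are hypothetical, not the trials analyzed here.

```python
import numpy as np

def pool_mean_difference(md, ci_low, ci_high):
    """Fixed-effect inverse-variance pooling of per-study mean differences.

    Study standard errors are recovered from the reported 95% CIs
    (SE = CI width / (2 * 1.96)); study weights are 1 / SE^2."""
    md = np.asarray(md, float)
    se = (np.asarray(ci_high, float) - np.asarray(ci_low, float)) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * md) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical three-study example (contrast-volume differences, mL):
print(pool_mean_difference([-55, -30, -42], [-90, -61, -70], [-20, 1, -14]))
```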
Evaluation of microarray data normalization procedures using spike-in experiments
Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders
2006-01-01
Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulations were observed. Moreover, the bias depended on the underlying mRNA concentration; low concentrations resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679
NASA Technical Reports Server (NTRS)
Holms, A. G.
1977-01-01
A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment without replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2^4 experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.
Determination of Lead in Blood by Atomic Absorption Spectrophotometry
Selander, Stig; Cramér, Kim
1968-01-01
Lead in blood was determined by atomic absorption spectrophotometry, using a wet ashing procedure and a procedure in which the proteins were precipitated with trichloroacetic acid. In both methods the lead was extracted into isobutyl methyl ketone before measurement, using ammonium pyrrolidine dithiocarbamate as the chelator. The simpler precipitation procedure was shown to give results identical with those obtained with the ashing technique. In addition, blood specimens were examined by the precipitation method and by spectral analysis, a method which includes wet ashing of the samples, with good agreement. All analyses were done on blood samples from 'normal' persons or from lead-exposed workers, and no additions of inorganic lead were made. The relatively simple protein precipitation technique gave accurate results and is suitable for the large-scale control of lead-exposed workers. PMID:5663425
Optimization of reinforced concrete slabs
NASA Technical Reports Server (NTRS)
Ferritto, J. M.
1979-01-01
Reinforced concrete cells composed of concrete slabs and used to limit the effects of accidental explosions during hazardous explosives operations are analyzed. An automated design procedure is discussed that considers the dynamic nonlinear behavior of reinforced concrete slabs of arbitrary geometrical and structural configuration subjected to dynamic pressure loading. The optimum design of the slab is examined using an interior penalty function. The optimization procedure is presented, and the results are discussed and compared with a finite element analysis.
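A minimal sketch of an interior (barrier) penalty optimization of the kind named above, on a toy two-variable sizing problem; the objective, constraint, and values are illustrative assumptions, not the slab model from the study.

```python
import numpy as np
from scipy.optimize import minimize

def interior_penalty_minimize(f, constraints, x0, r=1.0, shrink=0.1, iters=6):
    """Interior penalty method: solve a sequence of unconstrained problems
    min f(x) - r * sum(log(-g_i(x))) with r -> 0, staying strictly inside
    the feasible region g_i(x) < 0 throughout."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        def phi(x, r=r):
            g = np.array([gi(x) for gi in constraints])
            if np.any(g >= 0):             # outside the barrier: reject
                return np.inf
            return f(x) - r * np.sum(np.log(-g))
        x = minimize(phi, x, method="Nelder-Mead").x
        r *= shrink                        # tighten the barrier each pass
    return x

# Toy sizing problem: minimize "cost" x0 + x1 subject to a strength
# requirement x0 * x1 >= 4 (a hypothetical stand-in for the slab analysis).
f = lambda x: x[0] + x[1]
g = [lambda x: 4.0 - x[0] * x[1]]          # g(x) <= 0 means feasible
print(interior_penalty_minimize(f, g, x0=[3.0, 3.0]))   # approaches [2, 2]
```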
Liu, Chao; Cox, Ronald B; Washburn, Isaac J; Croff, Julie M; Crethar, Hugh C
2017-07-01
Requiring parental consent may result in sampling biases that confound scientific conclusions and stifle the representation of children most at risk for adverse outcomes. This study aims to investigate whether active parental consent, compared with passive parental consent, creates a bias in response rate, demographic makeup, and adverse outcomes in adolescent samples. A meta-analysis was performed on peer-reviewed articles and unpublished dissertations from 1975 to 2016 in five computerized databases: ERIC, PsycINFO, MEDLINE, PubMed, and ProQuest. Quantitative studies were retained if they included the following keywords: active consent (or informed consent or parental consent), passive consent (or waiver of consent), risk behavior, adolescen*. Fifteen studies were identified with a total number of 104,074 children. Results showed (1) response rates were significantly lower for studies using an active consent procedure than for those using a passive consent procedure (Z = 3.05, p = .002); (2) more females, younger participants, and fewer African-Americans were included in studies using active consent procedures than in studies using passive procedures (Z = -2.73, p = .006; Z = -12.06, p < .00001; Z = 2.19, p = .03, respectively); (3) studies with passive consent procedures showed higher rates of self-reported substance use than studies using active consent procedures (Z = 3.07, p = .002). Requiring active parental consent can lead to a systematic bias in the sample whereby the population under study is misrepresented. Institutional review board committees should collaborate with researchers to find solutions that protect minors without silencing the voice of high-risk youth in the literature. Copyright © 2017 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments
NASA Astrophysics Data System (ADS)
Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel
2017-03-01
This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail, and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for the data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the model-suitability crosscheck option of applying the procedure in ascending and descending directions through the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
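A minimal sketch of the sequential fitting idea described above: each fit is seeded with the previous best-fit parameters, and running the series in both directions gives the crosscheck. The exponential model below is a stand-in for the actual NFS lineshape, and all data are simulated.

```python
import numpy as np
from scipy.optimize import curve_fit

def sequential_fit(model, spectra, p0, ascending=True):
    """Fit a series of spectra sequentially: the best-fit parameters of one
    data set seed the fit of the next, in ascending or descending order.
    Comparing the two directions' parameter tracks is the model-suitability
    crosscheck described above."""
    order = spectra if ascending else spectra[::-1]
    params, p = [], np.asarray(p0, float)
    for t, counts in order:
        p, _ = curve_fit(model, t, counts, p0=p)   # seed with previous result
        params.append(p)
    return params if ascending else params[::-1]

# Hypothetical decay-like model for illustration (not the NFS lineshape):
model = lambda t, amp, rate: amp * np.exp(-rate * t)
t = np.linspace(0.1, 10, 50)
spectra = [(t, model(t, a, r) + 0.01 * np.random.randn(t.size))
           for a, r in [(1.0, 0.5), (0.9, 0.6), (0.8, 0.7)]]
up = sequential_fit(model, spectra, p0=[1.0, 0.5], ascending=True)
down = sequential_fit(model, spectra, p0=[0.8, 0.7], ascending=False)
```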
Development of a thermal and structural analysis procedure for cooled radial turbines
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Deanna, Russell G.
1988-01-01
A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. An inviscid, quasi-three-dimensional code computes the external free-stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous one-dimensional internal flow code solving the momentum and energy equations. These boundary conditions are input to a three-dimensional heat conduction code for calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure, and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results from this case are included.
NASA Technical Reports Server (NTRS)
Ghista, D. N.; Hamid, M. S.
1977-01-01
A three-dimensional left ventricular chamber geometric model is developed from a single-plane cineangiocardiogram. This left ventricular model is loaded by an internal pressure monitored by cardiac catheterization. The resulting stresses in the wall of the left ventricular model chamber are determined by a computerized finite element procedure. For the discretization of this left ventricular model structure, a 20-node isoparametric finite element is employed. The analysis and formulation of the computerized procedure are presented in the paper, along with the detailed algorithms and computer programs. The procedure is applied to determine the stresses in a left ventricle at an instant during systole. Next, a portion (represented by a finite element) of this left ventricular chamber is simulated as being infarcted by making its active-state modulus value equal to its passive-state value; the neighboring elements are shown to relieve the 'infarcted' element of stress by themselves taking on more stress.
Kuo, Benjamin J; Vissoci, Joao Ricardo N; Egger, Joseph R; Smith, Emily R; Grant, Gerald A; Haglund, Michael M; Rice, Henry E
2017-03-01
OBJECTIVE Existing studies have shown a high overall rate of adverse events (AEs) following pediatric neurosurgical procedures. However, little is known regarding the morbidity of specific procedures or the association with risk factors to help guide quality improvement (QI) initiatives. The goal of this study was to describe the 30-day mortality and AE rates for pediatric neurosurgical procedures by using the American College of Surgeons (ACS) National Surgical Quality Improvement Program-Pediatrics (NSQIP-Peds) database platform. METHODS Data on 9996 pediatric neurosurgical patients were acquired from the 2012-2014 NSQIP-Peds participant user file. Neurosurgical cases were analyzed by the NSQIP-Peds targeted procedure categories, including craniotomy/craniectomy, defect repair, laminectomy, shunts, and implants. The primary outcome measure was 30-day mortality, with secondary outcomes including individual AEs, composite morbidity (all AEs excluding mortality and unplanned reoperation), surgical-site infection, and unplanned reoperation. Univariate analysis was performed between individual AEs and patient characteristics using Fisher's exact test. Associations between individual AEs and continuous variables (duration from admission to operation, work relative value unit, and operation time) were examined using the Student t-test. Patient characteristics and continuous variables associated with any AE by univariate analysis were used to develop category-specific multivariable models through backward stepwise logistic regression. RESULTS The authors analyzed 3383 craniotomy/craniectomy, 242 defect repair, 1811 laminectomy, and 4560 shunt and implant cases and found a composite overall morbidity of 30.2%, 38.8%, 10.2%, and 10.7%, respectively. Unplanned reoperation rates were highest for defect repair (29.8%). The mortality rate ranged from 0.1% to 1.2%. Preoperative ventilator dependence was a significant predictor of any AE for all procedure groups, whereas admission from outside hospital transfer was a significant predictor of any AE for all procedure groups except craniotomy/craniectomy. CONCLUSIONS This analysis of NSQIP-Peds, a large risk-adjusted national data set, confirms low perioperative mortality but high morbidity for pediatric neurosurgical procedures. These data provide a baseline understanding of current expected clinical outcomes for pediatric neurosurgical procedures, identify the need for collecting neurosurgery-specific risk factors and complications, and should support targeted QI programs and clinical management interventions to improve care of children.
Evaluation of management measures of software development. Volume 1: Analysis summary
NASA Technical Reports Server (NTRS)
Page, J.; Card, D.; Mcgarry, F.
1982-01-01
The conceptual model, the data classification scheme, and the analytic procedures are explained. The analytic results are summarized and specific software measures for collection and monitoring are recommended.
Nutrition Studies with Earthworms.
ERIC Educational Resources Information Center
Tobaga, Leandro
1980-01-01
Describes experiments which demonstrate how different diets affect the growth rate of earthworms. Procedures for feeding baby worms are outlined, the analysis of results is discussed, and various modifications of the exercise are provided. (CS)
50 CFR 648.21 - Procedures for determining initial annual amounts.
Code of Federal Regulations, 2010 CFR
2010-10-01
...; virtual population analysis results; levels of noncompliance by harvesters or individual states; impact of...: (A) Total world export potential of mackerel producing countries. (B) Total world import demand of...
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2001-01-01
A laboratory for analysis of low-ionic-strength water has been developed at the U.S. Geological Survey (USGS) office in Troy, N.Y., to analyze samples collected by USGS projects in the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures developed to ensure proper sample collection, processing, and analysis. The quality-assurance/quality-control data are stored in the laboratory's SAS data-management system, which provides efficient review, compilation, and plotting of quality-assurance/quality-control data. This report presents and discusses samples analyzed from July 1993 through June 1995. Quality-control results for 18 analytical procedures were evaluated for bias and precision. Control charts show that data from seven of the analytical procedures were biased throughout the analysis period for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, dissolved inorganic carbon, dissolved organic carbon (soil expulsions), chloride, magnesium, nitrate (colorimetric method), and pH. Three of the analytical procedures were occasionally biased but were within control limits; they were: calcium (high for high-concentration samples for May 1995), dissolved organic carbon (high for high-concentration samples from January through September 1994), and fluoride (high in samples for April and June 1994). No quality-control sample has been developed for the organic monomeric aluminum procedure. Results from the filter-blank and analytical-blank analyses indicate that all analytical procedures in which blanks were run were within control limits, although values for a few blanks were outside the control limits. Blanks were not analyzed for acid-neutralizing capacity, dissolved inorganic carbon, fluoride, nitrate (colorimetric method), or pH. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in 14 of the 18 procedures. Data-quality objectives were met by more than 90 percent of the samples analyzed in all procedures except total monomeric aluminum (85 percent of samples met objectives), total aluminum (70 percent of samples met objectives), and dissolved organic carbon (85 percent of samples met objectives). Triplicate samples were not analyzed for ammonium, fluoride, dissolved inorganic carbon, or nitrate (colorimetric method). Results of the USGS interlaboratory Standard Reference Sample Program indicated high data quality with a median result of 3.6 of a possible 4.0. Environment Canada's LRTAP interlaboratory study results indicated that more than 85 percent of the samples met data-quality objectives in 6 of the 12 analyses; exceptions were calcium, dissolved organic carbon, chloride, pH, potassium, and sodium. Data-quality objectives were not met for calcium samples in one LRTAP study, but 94 percent of samples analyzed were within control limits for the remaining studies. Data-quality objectives were not met by 35 percent of samples analyzed for dissolved organic carbon, but 94 percent of sample values were within 20 percent of the most probable value. Data-quality objectives were not met for 30 percent of samples analyzed for chloride, but 90 percent of sample values were within 20 percent of the most probable value.
Measurements of samples with a pH above 6.0 were biased high in 54 percent of the samples, although 85 percent of the samples met data-quality objectives for pH measurements below 6.0. Data-quality objectives for potassium and sodium were not met in one study (only 33 percent of the samples analyzed met the objectives), although 85 percent of the sample values were within control limits for the other studies. Measured sodium values were above the upper control limit in all studies. Results from blind reference-sample analyses indicated that data
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)
2007-01-01
Structures often comprise smaller substructures that are connected to each other or attached to the ground by a set of finite connections. Under static loading one or more of these connections may exceed allowable limits and be deemed to fail. Of particular interest is the structural response when a connection is severed (failed) while the structure is under static load. A transient failure analysis procedure was developed by which it is possible to examine the dynamic effects that result from introducing a discrete failure while a structure is under static load. The failure is introduced by replacing a connection load history by a time-dependent load set that removes the connection load at the time of failure. The subsequent transient response is examined to determine the importance of the dynamic effects by comparing the structural response with the appropriate allowables. Additionally, this procedure utilizes a standard finite element transient analysis that is readily available in most commercial software, permitting the study of dynamic failures without the need to purchase software specifically for this purpose. The procedure is developed and explained, demonstrated on a simple cantilever box example, and finally demonstrated on a real-world example, the American Airlines Flight 587 (AA587) vertical tail plane (VTP).
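A minimal sketch of the load-history replacement described above: the connection load is held at its static value until the failure time, then ramped to zero over a short interval so the released load excites the transient response. The function and all numbers are illustrative, not the study's finite element input.

```python
import numpy as np

def failure_load_history(static_load, t_fail, ramp, t):
    """Time-dependent load set that replaces a severed connection: hold the
    static connection load until t_fail, then ramp it linearly to zero over
    `ramp` seconds; the subsequent transient response can then be examined."""
    t = np.asarray(t, float)
    frac = np.clip((t - t_fail) / ramp, 0.0, 1.0)   # 0 before failure, 1 after ramp
    return static_load * (1.0 - frac)

# e.g. a 5 kN connection load removed at t = 0.1 s over 1 ms:
t = np.linspace(0.0, 0.2, 2001)
load = failure_load_history(5e3, t_fail=0.1, ramp=1e-3, t=t)
```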
Sto Domingo, N D; Refsgaard, A; Mark, O; Paludan, B
2010-01-01
The potential devastating effects of urban flooding have given high importance to thorough understanding and management of water movement within catchments, and computer modelling tools have found widespread use for this purpose. The state-of-the-art in urban flood modelling is the use of a coupled 1D pipe and 2D overland flow model to simultaneously represent pipe and surface flows. This method has been found to be accurate for highly paved areas, but inappropriate when land hydrology is important. The objectives of this study are to introduce a new urban flood modelling procedure that is able to reflect system interactions with hydrology, verify that the new procedure operates well, and underline the importance of considering the complete water cycle in urban flood analysis. A physically-based and distributed hydrological model was linked to a drainage network model for urban flood analysis, and the essential components and concepts used were described in this study. The procedure was then applied to a catchment previously modelled with the traditional 1D-2D procedure to determine if the new method performs similarly well. Then, results from applying the new method in a mixed-urban area were analyzed to determine how important hydrologic contributions are to flooding in the area.
Mendiratta-Lala, Mishal; Williams, Todd R; Mendiratta, Vivek; Ahmed, Hafeez; Bonnett, John W
2015-04-01
The purpose of this study was to evaluate the effectiveness of a multifaceted simulation-based resident training for CT-guided fluoroscopic procedures by measuring procedural and technical skills, radiation dose, and procedure times before and after simulation training. A prospective analysis included 40 radiology residents and eight staff radiologists. Residents took an online pretest to assess baseline procedural knowledge. Second- through fourth-year residents' baseline technical skills with a procedural phantom were evaluated. First- through third-year residents then underwent formal didactic and simulation-based procedural and technical training with one of two interventional radiologists and followed the training with 1 month of supervised phantom-based practice. Thereafter, residents underwent final written and practical examinations. The practical examination included essential items from a 20-point checklist, including site and side marking, consent, time-out, and sterile technique, along with a technical skills portion assessing pedal steps, radiation dose, needle redirects, and procedure time. The results indicated statistically significant improvement in procedural and technical skills after simulation training. For residents, the median number of pedal steps decreased by three (p=0.001), median dose decreased by 15.4 mGy (p<0.001), median procedure time decreased by 4.0 minutes (p<0.001), median number of needle redirects decreased by 1.0 (p=0.005), and median number of 20-point checklist items successfully completed increased by three (p<0.001). The results suggest that procedural skills can be acquired and improved by simulation-based training of residents, regardless of experience. CT simulation training decreases procedural time, decreases radiation dose, and improves resident efficiency and confidence, which may transfer to clinical practice with improved patient care and safety.
Transient Evoked and Distortion Product Otoacoustic Emissions in a Group of Neonates
Silva, Giovanna Cesar; Delecrode, Camila Ribas; Kemp, Adriana Tahara; Martins, Fabiana; Cardoso, Ana Claudia Vieira
2015-01-01
Introduction The most commonly used method in neonatal hearing screening programs is transient evoked otoacoustic emissions in the first stage of the process. There are few studies comparing transient evoked otoacoustic emissions with distortion product, but some authors have investigated the issue. Objective To correlate the results of transient evoked and distortion product otoacoustic emissions in a Brazilian maternity hospital. Methods This is a cross-sectional, comparative, and prospective study. The study included 579 newborns, ranging from 6 to 54 days of age, born in a low-risk maternity hospital and assessed for hearing loss. All neonates underwent hearing screening by transient evoked and distortion product otoacoustic emissions. The results were analyzed using the Spearman correlation test to relate the two procedures. Results The pass index on transient evoked otoacoustic emissions was 95% and on distortion product otoacoustic emissions was 91%. The comparison of the two procedures showed that 91% of neonates passed on both procedures, 4.5% passed only on transient evoked otoacoustic emissions, 0.5% passed only on distortion product otoacoustic emissions, and 4% failed on both procedures. The inferential analysis showed a significant strong positive relationship between the two procedures. Conclusion The failure rate was higher in distortion product otoacoustic emissions when compared with transient evoked; however, there was correlation between the results of the procedures. PMID:26157501
Quantitative 13C NMR characterization of fast pyrolysis oils
Happs, Renee M.; Lisa, Kristina; Ferrell, III, Jack R.
2016-10-20
Quantitative 13C NMR analysis of model catalytic fast pyrolysis (CFP) oils following literature procedures showed poor agreement for aromatic hydrocarbons between NMR measured concentrations and actual composition. Furthermore, modifying integration regions based on DEPT analysis for aromatic carbons resulted in better agreement. Solvent effects were also investigated for hydrotreated CFP oil.
Descriptive Analysis of Teachers' Responses to Problem Behavior Following Training
ERIC Educational Resources Information Center
Addison, Laura; Lerman, Dorothea C.
2009-01-01
The procedures described by Sloman et al. (2005) were extended to an analysis of teachers' responses to problem behavior after they had been taught to withhold potential sources of positive and negative reinforcement following instances of problem behavior. Results were consistent with those reported previously, suggesting that escape from child…
ERIC Educational Resources Information Center
Zhang, Zhidong
2016-01-01
This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It applied rule-based analytical and cognitive task analysis methods to break down the operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…
Trillsch, F; Mahner, S; Vettorazzi, E; Woelber, L; Reuss, A; Baumann, K; Keyver-Paik, M-D; Canzler, U; Wollschlaeger, K; Forner, D; Pfisterer, J; Schroeder, W; Muenstedt, K; Richter, B; Fotopoulou, C; Schmalfeldt, B; Burges, A; Ewald-Riegler, N; de Gregorio, N; Hilpert, F; Fehm, T; Meier, W; Hillemanns, P; Hanker, L; Hasenburg, A; Strauss, H-G; Hellriegel, M; Wimberger, P; Kommoss, S; Kommoss, F; Hauptmann, S; du Bois, A
2015-01-01
Background: Incomplete surgical staging is a negative prognostic factor for patients with borderline ovarian tumours (BOT). However, little is known about the prognostic impact of each individual staging procedure. Methods: Clinical parameters of 950 patients with BOT (confirmed by central reference pathology) treated between 1998 and 2008 at 24 German AGO centres were analysed. In 559 patients with serous BOT and adequate ovarian surgery, further recommended staging procedures (omentectomy, peritoneal biopsies, cytology) were evaluated applying Cox regression models with respect to progression-free survival (PFS). Results: For patients with one missing staging procedure, the hazard ratio (HR) for recurrence was 1.25 (95%-CI 0.66–2.39; P=0.497). This risk increased with each additional procedure skipped, reaching statistical significance in the case of two (HR 1.95; 95%-CI 1.06–3.58; P=0.031) and three missing steps (HR 2.37; 95%-CI 1.22–4.64; P=0.011). The most crucial procedure was omentectomy, which retained a statistically significant impact on PFS in multivariable analysis (HR 1.91; 95%-CI 1.15–3.19; P=0.013) adjusting for previously established prognostic factors such as FIGO stage, tumour residuals, and fertility preservation. Conclusion: Individual surgical staging procedures contribute to the prognosis for patients with serous BOT. In this analysis, recurrence risk increased with each skipped surgical step. This should be considered when re-staging procedures following incomplete primary surgery are discussed. PMID:25562434
Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P
2003-01-01
Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short-range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to general analysis problems.
Ropkins, K; Beck, A J
2002-08-01
Hazard analysis by critical control points (HACCP) is a systematic approach to the identification, assessment, and control of hazards. Effective HACCP requires the consideration of all hazards, i.e., chemical, microbiological, and physical. However, to date most 'in-place' HACCP procedures have tended to focus on the control of microbiological and physical food hazards. In general, the chemical component of HACCP procedures is either ignored or limited to applied chemicals, e.g., food additives and pesticides. In this paper we discuss the application of HACCP to a broader range of chemical hazards, using organic chemical contaminants as examples, and the problems that are likely to arise in the food manufacturing sector. Chemical HACCP procedures are likely to yield many of the advantages previously identified for microbiological HACCP procedures: they are more effective, efficient, and economical than conventional end-point-testing methods. However, the high costs of analytical monitoring of chemical contaminants and a limited understanding of formulation and process optimisation as means of controlling chemical contamination of foods are likely to prevent chemical HACCP from becoming as effective as microbiological HACCP.
ERIC Educational Resources Information Center
Parry-Jones, R.
1980-01-01
Described are some new uses and developments of titration procedures. Topics included are titration in non-aqueous solvents, thermometric titration and catalytic methods for end-point detection, the titration finish in organic elemental analysis, and sub-micro analysis and automatic titration procedures. (CS)
Fiorella, David; Derdeyn, Colin P; Lynn, Michael J; Barnwell, Stanley L; Hoh, Brian L.; Levy, Elad I.; Harrigan, Mark R.; Klucznik, Richard P.; McDougall, Cameron G.; Pride, G. Lee; Zaidat, Osama O.; Lutsep, Helmi L.; Waters, Michael F.; Hourihane, J. Maurice; Alexandrov, Andrei V.; Chiu, David; Clark, Joni M.; Johnson, Mark D.; Torbey, Michel T.; Rumboldt, Zoran; Cloft, Harry J.; Turan, Tanya N.; Lane, Bethany F.; Janis, L. Scott; Chimowitz, Marc I.
2012-01-01
Background and Purpose Enrollment in the SAMMPRIS trial was halted due to the high risk of stroke or death within 30 days of enrollment in the percutaneous transluminal angioplasty and stenting (PTAS) arm relative to the medical arm. This analysis focuses on the patient and procedural factors that may have been associated with peri-procedural cerebrovascular events in the trial. Methods Bivariate and multivariate analyses were performed to evaluate whether patient and procedural variables were associated with cerebral ischemic or hemorrhagic events occurring within 30 days of enrollment (termed peri-procedural) in the PTAS arm. Results Of 224 patients randomized to PTAS, 213 underwent angioplasty alone (n=5) or with stenting (n=208). Of these, 13 had hemorrhagic strokes (7 parenchymal, 6 subarachnoid), 19 had ischemic stroke, and 2 had cerebral infarcts with temporary signs (CITS) within the peri-procedural period. Ischemic events were categorized as perforator occlusions (13), embolic (4), mixed perforator and embolic (2), and delayed stent occlusion (2). Multivariate analyses showed that higher percent stenosis, lower modified Rankin score, and clopidogrel load associated with an activated clotting time above the target range were associated (p ≤ 0.05) with hemorrhagic stroke. Non-smoking, basilar artery stenosis, diabetes, and older age were associated (p ≤ 0.05) with ischemic events. Conclusions Peri-procedural strokes in SAMMPRIS had multiple causes with the most common being perforator occlusion. Although risk factors for peri-procedural strokes could be identified, excluding patients with these features from undergoing PTAS to lower the procedural risk would limit PTAS to a small subset of patients. Moreover, given the small number of events, the present data should be used for hypothesis generation rather than to guide patient selection in clinical practice. PMID:22984008
Belgiu, Mariana; Drǎguţ, Lucian; Strobl, Josef
2014-01-01
The increasing availability of high resolution imagery has triggered the need for automated image analysis techniques, with reduced human intervention and reproducible analysis procedures. The knowledge gained in the past might be of use to achieving this goal, if systematically organized into libraries which would guide the image analysis procedure. In this study we aimed at evaluating the variability of digital classifications carried out by three experts who were all assigned the same interpretation task. Besides the three classifications performed by independent operators, we developed an additional rule-based classification that relied on the image classifications best practices found in the literature, and used it as a surrogate for libraries of object characteristics. The results showed statistically significant differences among all operators who classified the same reference imagery. The classifications carried out by the experts achieved satisfactory results when transferred to another area for extracting the same classes of interest, without modification of the developed rules. PMID:24623959
Visual accumulation tube for size analysis of sands
Colby, B.C.; Christensen, R.P.
1956-01-01
The visual-accumulation-tube method was developed primarily for making size analyses of the sand fractions of suspended-sediment and bed-material samples. Because the fundamental property governing the motion of a sediment particle in a fluid is believed to be its fall velocity, the analysis is designed to determine the fall-velocity-frequency distribution of the individual particles of the sample. The analysis is based on a stratified sedimentation system in which the sample is introduced at the top of a transparent settling tube containing distilled water. The procedure involves the direct visual tracing of the height of sediment accumulation in a contracted section at the bottom of the tube. A pen records the height on a moving chart. The method is simple and fast, provides a continuous and permanent record, gives highly reproducible results, and accurately determines the fall-velocity characteristics of the sample. The apparatus, procedure, results, and accuracy of the visual-accumulation-tube method for determining the sedimentation-size distribution of sands are presented in this paper.
NASA Astrophysics Data System (ADS)
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical method for the sensitivity analysis of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities was examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front-net cables, thick tension ties, relatively slender boundary cables and a high tension level in particular can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
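The finite-difference check mentioned above is generic enough to sketch. Assuming a scalar response f(p) of the uncertain parameters p (a stand-in, not the paper's antenna model), central differences give reference sensitivities against which analytical ones can be verified:

    import numpy as np

    def fd_sensitivity(f, p, h=1e-6):
        """Central-difference sensitivities of a scalar response f(p)."""
        p = np.asarray(p, dtype=float)
        grad = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = h
            grad[i] = (f(p + dp) - f(p - dp)) / (2.0 * h)
        return grad

    # Stand-in response, e.g. an RMS surface-error measure of two parameters.
    f = lambda p: p[0]**2 + 3.0 * p[0] * p[1]
    p0 = np.array([1.0, 2.0])
    print(fd_sensitivity(f, p0))             # ~[8.0, 3.0]
    print([2*p0[0] + 3*p0[1], 3*p0[0]])      # analytical values, for comparison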
NASA Technical Reports Server (NTRS)
Kirk, R. G.; Gunter, E. J.
1972-01-01
A steady state analysis of the shaft and the bearing housing motion was made by assuming synchronous precession of the system. The conditions under which the support system would act as a dynamic vibration absorber at the rotor critical speed were studied; plots of the rotor and support amplitudes, phase angles, and forces transmitted were evaluated by the computer, and the performance curves were automatically plotted by a CalComp plotter unit. Curves are presented on the optimization of the support housing characteristics to attenuate the rotor unbalance response over the entire rotor speed range. The complete transient motion including rotor unbalance was examined by integrating the equations of motion numerically using a modified fourth order Runge-Kutta procedure, and the resulting whirl orbits were plotted by the CalComp plotter unit. The results of the transient analysis are discussed with regards to the design optimization procedure derived from the steady-state analysis.
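For reference, a classical (unmodified) fourth-order Runge-Kutta step for a first-order system dy/dt = f(t, y), the family of integrator named in the abstract above, is sketched below; a damped oscillator stands in for the rotor equations, which are not reproduced here:

    import numpy as np

    def rk4_step(f, t, y, h):
        """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h,   y + h * k3)
        return y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

    # Illustrative 2-state system (a damped oscillator), not the rotor model.
    f = lambda t, y: np.array([y[1], -0.1*y[1] - 4.0*y[0]])
    t, y, h = 0.0, np.array([1.0, 0.0]), 0.01
    for _ in range(100):
        y = rk4_step(f, t, y, h)
        t += h
    print(t, y)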
Optimization of multi-element airfoils for maximum lift
NASA Technical Reports Server (NTRS)
Olsen, L. E.
1979-01-01
Two theoretical methods are presented for optimizing multi-element airfoils to obtain maximum lift. The analyses assume that the shapes of the various high lift elements are fixed. The objective of the design procedures is then to determine the optimum location and/or deflection of the leading and trailing edge devices. The first analysis determines the optimum horizontal and vertical location and the deflection of a leading edge slat. The structure of the flow field is calculated by iteratively coupling potential flow and boundary layer analysis. This design procedure does not require that flow separation effects be modeled. The second analysis determines the slat and flap deflection required to maximize the lift of a three element airfoil. This approach requires that the effects of flow separation from one or more of the airfoil elements be taken into account. The theoretical results are in good agreement with results of a wind tunnel test used to corroborate the predicted optimum slat and flap positions.
Surgical task analysis of simulated laparoscopic cholecystectomy with a navigation system.
Sugino, T; Kawahira, H; Nakamura, R
2014-09-01
Advanced surgical procedures, which have become complex and difficult, increase the burden on surgeons. Quantitative analysis of surgical procedures can improve training, reduce variability, and enable optimization of surgical procedures. To this end, a surgical task analysis system was developed that uses only surgical navigation information. The analysis comprised division of the surgical procedure into tasks, task progress analysis, and task efficiency analysis. First, the procedure was divided into five stages. Second, the operating time and progress rate were recorded to document task progress during specific stages, including the dissecting task. Third, the speed of the surgical instrument motion (mean velocity and acceleration), as well as the size and overlap ratio of the approximate ellipse of the location log data distribution, was computed to estimate the task efficiency during each stage. These analysis methods were evaluated in an experimental validation with two groups of surgeons, i.e., skilled and "other" surgeons. The performance metrics and analytical parameters reflected incidents during the operation, the surgical environment, and the surgeon's skills or habits. Comparison of the groups revealed that skilled surgeons tended to perform the procedure in less time and within smaller regions; they also manipulated the surgical instruments more gently. Surgical task analysis developed for quantitative assessment of surgical procedures and surgical performance may provide practical methods and metrics for objective evaluation of surgical expertise.
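The instrument-motion metrics named above (mean velocity and acceleration derived from navigation location logs) reduce to simple finite differences; a generic sketch, with synthetic positions standing in for real navigation data:

    import numpy as np

    def motion_metrics(positions, dt):
        """Mean speed and mean acceleration magnitude of an (N, 3) track."""
        v = np.diff(positions, axis=0) / dt       # velocity vectors
        a = np.diff(v, axis=0) / dt               # acceleration vectors
        return np.linalg.norm(v, axis=1).mean(), np.linalg.norm(a, axis=1).mean()

    rng = np.random.default_rng(0)
    track = np.cumsum(rng.normal(0.0, 1e-3, size=(500, 3)), axis=0)  # synthetic
    print(motion_metrics(track, dt=0.05))         # (mean speed, mean |accel|)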
[The structural functional analysis of functioning of day-hospitals of the Russian Federation].
2012-01-01
The article deals with the results of a structural functional analysis of the functioning of day-hospitals in the Russian Federation. A dynamic analysis is presented of the day-hospital network, capacity, financial support, bed-stock structure, structure of treated patients, and volumes of diagnostic tests and curative procedures. The need to develop medical care for the population in day-hospital settings is demonstrated.
Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.
ERIC Educational Resources Information Center
Rowell, R. Kevin
In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
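A minimal sketch of the double cross-validation idea, under standard assumptions (randomly split the sample into halves, fit a least-squares equation in each half, and score each equation on the opposite half; shrinkage of the cross-validity coefficients relative to the in-sample fit indicates instability):

    import numpy as np

    def double_cross_validation(X, y, seed=0):
        """Fit OLS in each random half-sample; report each equation's
        cross-validity correlation on the opposite half."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        half = len(y) // 2
        a, b = idx[:half], idx[half:]
        rs = []
        for fit, test in [(a, b), (b, a)]:
            A = np.c_[np.ones(len(fit)), X[fit]]
            beta, *_ = np.linalg.lstsq(A, y[fit], rcond=None)
            pred = np.c_[np.ones(len(test)), X[test]] @ beta
            rs.append(np.corrcoef(pred, y[test])[0, 1])
        return rs

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=100)
    print(double_cross_validation(X, y))   # two cross-validity coefficients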
Monitoring Air Quality with Leaf Yeasts.
ERIC Educational Resources Information Center
Richardson, D. H. S.; And Others
1985-01-01
Proposes that leaf yeasts serve as a quick, inexpensive, and effective technique for monitoring air quality. Outlines procedures and provides suggestions for data analysis. Includes results from sample school groups that employed this technique. (ML)
14 CFR 25.331 - Symmetric maneuvering conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Conditions § 25.331 Symmetric maneuvering conditions. (a) Procedure. For the analysis of the maneuvering... factor (at point A2 in § 25.333(b)), or the resulting tailplane normal load reaches its maximum...
Determination of Sulfur in Fuel Oils: An Instrumental Analysis Experiment.
ERIC Educational Resources Information Center
Graham, Richard C.; And Others
1982-01-01
Chromatographic techniques are used in conjunction with a Parr oxygen combustion bomb to determine sulfur in fuel oils. Experimental procedures and results are discussed including an emphasis on safety considerations. (SK)
An ocean data assimilation system and reanalysis of the World Ocean hydrophysical fields
NASA Astrophysics Data System (ADS)
Zelenko, A. A.; Vil'fand, R. M.; Resnyanskii, Yu. D.; Strukov, B. S.; Tsyrulnikov, M. D.; Svirenko, P. I.
2016-07-01
A new version of the ocean data assimilation system (ODAS) developed at the Hydrometcentre of Russia is presented. The assimilation is performed following the sequential scheme analysis-forecast-analysis. The main components of the ODAS are procedures for operational observation data processing, a variational analysis scheme, and an ocean general circulation model used to estimate the first guess fields involved in the analysis. In situ observations of temperature and salinity in the upper 1400-m ocean layer obtained from various observational platforms are used as input data. In the new ODAS version, the horizontal resolution of the assimilating model and of the output products is increased, the previous 2D-Var analysis scheme is replaced by a more general 3D-Var scheme, and a more flexible incremental analysis updating procedure is introduced to correct the model calculations. A reanalysis of the main World Ocean hydrophysical fields over the 2005-2015 period has been performed using the updated ODAS. The reanalysis results are compared with data from independent sources.
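For orientation, the 3D-Var analysis step referred to above minimizes, in its standard textbook form (not necessarily the Hydrometcentre's exact formulation), the cost function

    J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}(\mathbf{y}-H(\mathbf{x}))^{\mathsf{T}}\mathbf{R}^{-1}(\mathbf{y}-H(\mathbf{x})),

where x_b is the first-guess field from the circulation model, y the vector of observations, H the observation operator, and B and R the background- and observation-error covariance matrices.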
Total hydrocarbon content (THC) testing in liquid oxygen (LOX) systems
NASA Astrophysics Data System (ADS)
Meneghelli, B. J.; Obregon, R. E.; Ross, H. R.; Hebert, B. J.; Sass, J. P.; Dirschka, G. E.
2015-12-01
The measured Total Hydrocarbon Content (THC) levels in liquid oxygen (LOX) systems at Stennis Space Center (SSC) have shown wide variations. Examples of these variations include the following: 1) differences between vendor-supplied THC values and those obtained using standard SSC analysis procedures; and 2) increasing THC values over time at an active SSC test stand in both storage and run vessels. A detailed analysis of LOX sampling techniques, analytical instrumentation, and sampling procedures will be presented. Additional data obtained on LOX system operations and LOX delivery trailer THC values during the past 12-24 months will also be discussed. Field test results showing THC levels and the distribution of the THCs in the test stand run tank, modified for THC analysis via dip tubes, will be presented.
Total Hydrocarbon Content (THC) Testing in Liquid Oxygen (LOX)
NASA Technical Reports Server (NTRS)
Meneghelli, B. J.; Obregon, R. E.; Ross, H. R.; Hebert, B. J.; Sass, J. P.; Dirschka, G. E.
2016-01-01
The measured Total Hydrocarbon Content (THC) levels in liquid oxygen (LOX) systems at Stennis Space Center (SSC) have shown wide variations. Examples of these variations include the following: 1) differences between vendor-supplied THC values and those obtained using standard SSC analysis procedures; and 2) increasing THC values over time at an active SSC test stand in both storage and run vessels. A detailed analysis of LOX sampling techniques, analytical instrumentation, and sampling procedures will be presented. Additional data obtained on LOX system operations and LOX delivery trailer THC values during the past 12-24 months will also be discussed. Field test results showing THC levels and the distribution of the THCs in the test stand run tank, modified for THC analysis via dip tubes, will be presented.
Understanding logistic regression analysis.
Sperandei, Sandro
2014-01-01
Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is the avoidance of confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After a definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.
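A minimal worked example of the procedure described above (binary response, two explanatory variables, adjusted odds ratios from the exponentiated coefficients), using statsmodels as one possible implementation; the data and coefficients are synthetic and purely illustrative:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    x1 = rng.normal(size=n)                    # continuous explanatory variable
    x2 = rng.integers(0, 2, size=n)            # binary explanatory variable
    logit_p = -1.0 + 0.8 * x1 + 0.5 * x2       # true model (illustrative)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    X = sm.add_constant(np.column_stack([x1, x2]))
    fit = sm.Logit(y, X).fit(disp=False)
    print(np.exp(fit.params[1:]))              # adjusted odds ratios for x1, x2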
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-06
... Establishing Test Procedures for the Analysis of Pollutants Under the Clean Water Act; Analysis and Sampling... for use as an alternative oil and grease method. Some comments were specific to the sampling...-side comparison using the specific procedures (e.g. sampling frequency, number of samples, QA/QC, and...
Procedures for numerical analysis of circadian rhythms
Refinetti, Roberto; Cornélissen, Germaine; Halberg, Franz
2010-01-01
This article reviews various procedures used in the analysis of circadian rhythms at the populational, organismal, cellular and molecular levels. The procedures range from visual inspection of time plots and actograms to several mathematical methods of time series analysis. Computational steps are described in some detail, and additional bibliographic resources and computer programs are listed. PMID:23710111
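One classical procedure in this literature, single-component cosinor analysis (fitting y(t) = M + A cos(2πt/τ + φ) by least squares at a fixed trial period τ), can be sketched as follows; this is a generic illustration, not code from the article:

    import numpy as np

    def cosinor(t, y, period=24.0):
        """Least-squares single-component cosinor at a fixed trial period."""
        w = 2.0 * np.pi / period
        X = np.c_[np.ones_like(t), np.cos(w * t), np.sin(w * t)]
        (mesor, bc, bs), *_ = np.linalg.lstsq(X, y, rcond=None)
        amplitude = np.hypot(bc, bs)
        acrophase = np.arctan2(-bs, bc)   # phase of the fitted cosine
        return mesor, amplitude, acrophase

    t = np.arange(0.0, 72.0, 0.5)         # three days, half-hourly samples
    rng = np.random.default_rng(0)
    y = 10 + 3*np.cos(2*np.pi*t/24 - 1.0) + rng.normal(0, 0.5, t.size)
    print(cosinor(t, y))                  # MESOR ~10, amplitude ~3, phase ~ -1.0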
The purpose of this SOP is to describe the methodology used for the analysis of the 3M OVM 3500 Organic Vapor Monitors for volatile organic compounds (VOCs), using solvent extraction and standard gas chromatography/mass spectrometry (GC/MS) analysis procedures. This procedure was...
The Synthesis and Chemiluminescence of a Stable 1,2-Dioxetane.
ERIC Educational Resources Information Center
Meijer, E. W.; Wynberg, Hans
1982-01-01
Background information, laboratory procedures, and discussion of results are provided for the synthesis and chemiluminescence of adamantylideneadamantane-1,2-dioxetane (I). Results provided were obtained during a normal junior level organic laboratory course. All intermediates and products were identified using routine spectroscopic analysis.…
The Pollution Detectives, Part III: Roadside Lead Pollution.
ERIC Educational Resources Information Center
Sanderson, Phil
1989-01-01
Described is a simple test-tube method developed for the lead analysis of samples of roadside soil. The relationship between the results and the traffic flow indicates that car exhausts are the major source of lead pollution. Materials and procedures are detailed. An example of results is provided. (Author/CW)
Funding analysis of bilateral autologous free-flap breast reconstructions in Australia.
Sinha, Shiba; Ruskin, Olivia; McCombe, David; Morrison, Wayne; Webb, Angela
2015-08-01
Bilateral breast reconstructions are being increasingly performed. Autologous free-flap reconstructions represent the gold standard for post-mastectomy breast reconstruction but are resource intensive. This study aims to investigate the difference between hospital reimbursement and true cost of bilateral autologous free-flap reconstructions. Retrospective analysis of patients who underwent bilateral autologous free-flap reconstructions at a single Australian tertiary referral centre was performed. Hospital reimbursement was determined from coding analysis. A true cost analysis was also performed. Comparisons were made considering the effect of timing, indication and complications of the procedure. Forty-six bilateral autologous free-flap procedures were performed (87 deep inferior epigastric perforators (DIEPs), four superficial inferior epigastric artery perforator flaps (SIEAs) and one muscle-sparing free transverse rectus abdominis myocutaneous flap (MS-TRAM)). The mean funding discrepancy between hospital reimbursement and actual cost was $12,137 ± $8539 (mean ± standard deviation (SD)) (n = 46). Twenty-four per cent (n = 11) of the cases had been coded inaccurately. If these cases were excluded from analysis, the mean funding discrepancy per case was $9168 ± $7453 (n = 35). Minor and major complications significantly increased the true cost and funding discrepancy (p = 0.02). Bilateral free-flap breast reconstructions performed in Australian public hospitals result in a funding discrepancy. Failure to be economically viable threatens the provision of this procedure in the public system. Plastic surgeons and hospital managers need to adopt measures in order to make these gold-standard procedures cost neutral. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Suggestions for presenting the results of data analyses
Anderson, David R.; Link, William A.; Johnson, Douglas H.; Burnham, Kenneth P.
2001-01-01
We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management.
NASA Technical Reports Server (NTRS)
Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.
1976-01-01
Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.
Cumulative uncertainty in measured streamflow and water quality data for small watersheds
Harmel, R.D.; Cooper, R.J.; Slade, R.M.; Haney, R.L.; Arnold, J.G.
2006-01-01
The scientific community has not established an adequate understanding of the uncertainty inherent in measured water quality data, which is introduced by four procedural categories: streamflow measurement, sample collection, sample preservation/storage, and laboratory analysis. Although previous research has produced valuable information on relative differences in procedures within these categories, little information is available that compares the procedural categories or presents the cumulative uncertainty in resulting water quality data. As a result, quality control emphasis is often misdirected, and data uncertainty is typically either ignored or accounted for with an arbitrary margin of safety. Faced with the need for scientifically defensible estimates of data uncertainty to support water resource management, the objectives of this research were to: (1) compile selected published information on uncertainty related to measured streamflow and water quality data for small watersheds, (2) use a root mean square error propagation method to compare the uncertainty introduced by each procedural category, and (3) use the error propagation method to determine the cumulative probable uncertainty in measured streamflow, sediment, and nutrient data. Best case, typical, and worst case "data quality" scenarios were examined. Averaged across all constituents, the calculated cumulative probable uncertainty (±%) contributed under typical scenarios ranged from 6% to 19% for streamflow measurement, from 4% to 48% for sample collection, from 2% to 16% for sample preservation/storage, and from 5% to 21% for laboratory analysis. Under typical conditions, errors in storm loads ranged from 8% to 104% for dissolved nutrients, from 8% to 110% for total N and P, and from 7% to 53% for TSS. Results indicated that uncertainty can increase substantially under poor measurement conditions and limited quality control effort. This research provides introductory scientific estimates of uncertainty in measured water quality data. The results and procedures presented should also assist modelers in quantifying the "quality" of calibration and evaluation data sets, determining model accuracy goals, and evaluating model performance.
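The root mean square propagation used above combines independent procedural uncertainties as the square root of the sum of their squares; a small sketch (the input percentages are illustrative of the method, not a re-derivation of the paper's numbers):

    import numpy as np

    def cumulative_uncertainty(*components):
        """Root mean square propagation of independent percent uncertainties."""
        return float(np.sqrt(np.sum(np.square(components))))

    # Illustrative percentages for the four procedural categories:
    # streamflow 10, sample collection 20, preservation/storage 5, laboratory 10
    print(cumulative_uncertainty(10, 20, 5, 10))   # 25.0 (%)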
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.
2015-12-01
The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) in regimes ranging from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the entire summarized interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome the challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. Then, an existing JTOP in the Persian Gulf is presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is very informative and practical for current engineering purposes. It is able to predict seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.
Removal of uranium from soil sample digests for ICP-OES analysis of trace metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foust, R.D. Jr.; Bidabad, M.
1996-10-01
An analytical procedure has been developed to quantitatively remove uranium from soil sample digests, permitting ICP-OES analysis of trace metals. The procedure involves digesting a soil sample with standard procedures (EPA SW-846, Method 3050), and passing the sample digestate through a commercially available resin (U/TEVA·Spec, Eichrom Industries, Inc.) containing diamyl amylphosphonate as the stationary phase. Quantitative removal of uranium was achieved with soil samples containing up to 60% uranium, and percent recoveries averaged better than 85% for 9 of the 10 metals evaluated (Ag, As, Cd, Cr, Cu, Ni, Pb, Se and Tl). The U/TEVA·Spec column was regenerated by washing with 200 mL of a 0.01 M oxalic acid/0.02 M nitric acid solution, permitting re-use of the column. GFAAS analysis of a sample spiked with 56.5% uranium, after treatment of the digestate with a U/TEVA·Spec resin column, resulted in percent recoveries of 97% or better for all target metals.
Trimpin, Sarah; Deinzer, Max L
2007-01-01
A mini ball mill (MBM) solvent-free matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS) method allows for the analysis of bacteriorhodopsin (BR), an integral membrane protein that previously presented special analytical problems. For well-defined signals in the molecular ion region of the analytes, a desalting procedure of the MBM sample directly on the MALDI target plate was used to reduce adduction by sodium and other cations that are normally attendant with hydrophobic peptides and proteins as a result of the sample preparation procedure. Mass analysis of the intact hydrophobic protein and the few hydrophobic and hydrophilic tryptic peptides available in the digest is demonstrated with this robust new approach. MS and MS/MS spectra of BR tryptic peptides and intact protein were generally superior to the traditional solvent-based method using the desalted "dry" MALDI preparation procedure. The solvent-free method expands the range of peptides that can be effectively analyzed by MALDI-MS to those that are hydrophobic and solubility-limited.
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior regarding its comprehensive performance, and the computational cost is significantly lower than that for other methods. The proposed method can be regarded as a new valid alternative general-purpose feature extraction method for various tasks in spectral data analysis.
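The abstract does not spell out the analytical layer-wise algorithm, so the sketch below substitutes one well-known scheme of that kind (an extreme-learning-machine-style autoencoder, in which each layer is solved in closed form by ridge regression rather than by iterative optimization); it illustrates the flavor of the approach, not the authors' exact method:

    import numpy as np

    def analytic_layer(X, n_hidden, reg=1e-3, seed=0):
        """One analytically trained layer: random hidden projection, then a
        closed-form ridge solve mapping hidden activations back to the input
        (autoencoder-style); the transposed solution serves as the encoder."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))
        H = np.tanh(X @ W)                                   # random features
        beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
        return np.tanh(X @ beta.T)                           # encoded features

    X = np.random.default_rng(1).normal(size=(200, 64))      # 200 mock spectra
    features = analytic_layer(analytic_layer(X, 32), 16)     # two stacked layers
    print(features.shape)                                    # (200, 16)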
Effects of laser-aided circumferential supracrestal fiberotomy on root surfaces.
Lee, Ji-Won; Park, Ki-Ho; Chung, Jong-Hyuk; Kim, Su-Jung
2011-11-01
To evaluate and compare the effects of circumferential supracrestal fiberotomy in vivo (using diode, CO2, and Er:YAG lasers) on the morphology and chemical composition of the root surface. Forty healthy premolar teeth, intended for extraction for orthodontic reasons, were used in this study. Root surfaces were treated using different laser methods, as follows: (1) control; (2) Er:YAG laser (2.94 µm, 100 mJ, 10 Hz); (3) diode laser (808 nm, 1.2 W, continuous wave); and (4) CO2 laser (10.6 µm, 3 W, continuous wave). Subsequently, the teeth were removed and subjected to scanning electron microscopic (SEM) examination and energy-dispersive X-ray (EDX) spectrometric analysis. SEM analysis indicated that no thermal changes, including melting or carbonization, were observed following the lasing procedures. EDX analysis showed that the laser procedures resulted in similar mineral contents (weight % of calcium and phosphate) as compared to those in the control group. Based on these findings, we concluded that laser-aided procedures, when used at appropriate laser settings, preserve the original morphology and chemical composition of cementum.
Selective reduction of pregnancy: a legal analysis.
Hall, A
1996-01-01
This article examines the technique and legality of induced abortion of one or more fetuses in a multiple pregnancy, where the aim is the destruction of some but not all of the fetuses present (selective reduction of pregnancy). It concludes that since the legal status of the procedure in English law is unclear, it may be a criminal offence to perform selective reduction even where there is an ostensible clinical need. Moreover if the procedure is carried out negligently, and any infant damaged as a result is subsequently born alive, he or she may have a civil claim against the practitioner who carried out the procedure. PMID:8910784
NASA Technical Reports Server (NTRS)
Anderson, O. L.
1974-01-01
A finite-difference procedure for computing the turbulent, swirling, compressible flow in axisymmetric ducts is described. Arbitrary distributions of heat and mass transfer at the boundaries can be treated, and the effects of struts, inlet guide vanes, and flow straightening vanes can be calculated. The calculation procedure is programmed in FORTRAN 4 and has operated successfully on the UNIVAC 1108, IBM 360, and CDC 6600 computers. The analysis which forms the basis of the procedure, a detailed description of the computer program, and the input/output formats are presented. The results of sample calculations performed with the computer program are compared with experimental data.
The augmentation algorithm and molecular phylogenetic trees
NASA Technical Reports Server (NTRS)
Holmquist, R.
1978-01-01
Moore's (1977) augmentation procedure is discussed, and it is concluded that the procedure is valid for obtaining estimates of the total number of fixed nucleotide substitutions both theoretically and in practice, for both simulated and real data, and in agreement, for experimentally dense data sets, with stochastic estimates of the divergence, provided the restrictions on codon mutability resulting from natural selection are explicitly allowed for. Tateno and Nei's (1978) critique that the augmentation procedure has a systematic bias toward overestimation of the total number of nucleotide replacements is disputed, and a data analysis suggests that ancestral sequences inferred by the method of parsimony contain a large number of incorrectly assigned nucleotides.
NASA Technical Reports Server (NTRS)
Hilburger, Mark W.; Starnes, James H., Jr.
2004-01-01
The results of a parametric study of the effects of initial imperfections on the buckling and postbuckling response of three unstiffened thinwalled compression-loaded graphite-epoxy cylindrical shells with different orthotropic and quasi-isotropic shell-wall laminates are presented. The imperfections considered include initial geometric shell-wall midsurface imperfections, shell-wall thickness variations, local shell-wall ply-gaps associated with the fabrication process, shell-end geometric imperfections, nonuniform applied end loads, and variations in the boundary conditions including the effects of elastic boundary conditions. A high-fidelity nonlinear shell analysis procedure that accurately accounts for the effects of these imperfections on the nonlinear responses and buckling loads of the shells is described. The analysis procedure includes a nonlinear static analysis that predicts stable response characteristics of the shells and a nonlinear transient analysis that predicts unstable response characteristics.
NASA Astrophysics Data System (ADS)
Zeng, Z.; Birnbaum, S.
2006-12-01
An English lesson plan exploring stress analysis of En Echelon veins and vortex structures used in the bilingual course in Structural Geology at the National Science Training Base of China is described. Two mechanical models are introduced in class and both mathematical and mechanical analyses are conducted. Samples, pictures and case studies are selected from Britain, Switzerland, and China. These case studies are augmented from the previous research results of the first author. Students are guided through the entire thought process, including methods and procedures used in the stress analysis of geologic structures. The teaching procedures are also illustrated. The method showed is effective to help students to get the initial knowledge of quantitative analysis for the formation of geological structures. This work is supported by the Ministry of Education of China, the Education Bureau of Hubei Province of China and China University of Geosciences (Wuhan).
Pan, Rui; Wang, Hansheng; Li, Runze
2016-01-01
This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis under ultrahigh dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real life example on handwritten Chinese character recognition. PMID:28127109
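A generic sketch of pairwise screening for multi-class discriminant analysis (rank each feature by its largest standardized pairwise class-mean difference and keep the top-ranked features); this illustrates the idea, not the authors' exact statistic:

    import numpy as np

    def pairwise_screen(X, labels, keep):
        """Rank features by their largest standardized pairwise class-mean
        difference and return the indices of the top `keep` features."""
        classes = np.unique(labels)
        means = np.array([X[labels == c].mean(axis=0) for c in classes])
        sd = X.std(axis=0) + 1e-12
        score = np.zeros(X.shape[1])
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                score = np.maximum(score, np.abs(means[i] - means[j]) / sd)
        return np.argsort(score)[::-1][:keep]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 1000))
    y = rng.integers(0, 5, size=300)          # five classes
    X[:, 7] += y                              # feature 7 separates the classes
    print(pairwise_screen(X, y, keep=10))     # feature 7 should rank first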
How family carers engage with technical health procedures in the home: a grounded theory study
McDonald, Janet; McKinlay, Eileen; Keeling, Sally; Levack, William
2015-01-01
Objectives To explore the experiences of family carers who manage technical health procedures at home and describe their learning process. Design A qualitative study using grounded theory. Participants New Zealand family carers (21 women, 5 men) who managed technical health procedures such as enteral feeding, peritoneal dialysis, tracheostomy care, a central venous line or urinary catheter. In addition, 15 health professionals involved in teaching carers were interviewed. Methods Semistructured interviews were coded soon after completion and preliminary analysis influenced subsequent interviews. Additional data were compared with existing material and as analysis proceeded, initial codes were grouped into higher order concepts until a core concept was described. Interviewing continued until no new ideas emerged and concepts were well defined. Results The response of carers to the role of managing technical health procedures in the home is presented in terms of five dispositions: (1) Embracing care, (2) Resisting, (3) Reluctant acceptance, (4) Relinquishing and (5) Being overwhelmed. These dispositions were not static and carers commonly changed between them. Embracing care included cognitive understanding of the purpose and benefits of a procedure; accepting a ‘technical’ solution; practical management; and an emotional response. Accepting embrace is primarily motivated by perceived benefits for the recipient. It may also be driven by a lack of alternatives. Resisting or reluctant acceptance results from a lack of understanding about the procedure or willingness to manage it. Carers need adequate support to avoid becoming overwhelmed, and there are times when it is appropriate to encourage them to relinquish care for the sake of their own needs. Conclusions The concept of embracing care encourages health professionals to extend their attention beyond simply the practical aspects of technical procedures to assessing and addressing carers’ emotional and behavioural responses to health technology during the training process. PMID:26150143
Jahn, I; Foraita, R
2008-01-01
In Germany, gender-sensitive approaches are part of the guidelines for good epidemiological practice as well as of health reporting, and they are increasingly required in order to implement the gender mainstreaming strategy in research funded by the federal government and the federal states. This paper focuses on methodological aspects of data analysis, using the Bremen health report, a population-based cross-sectional study, as an empirical example. Health reporting requires analysis and reporting methods that can uncover the sex/gender aspects of a research question on the one hand, and communicate the results adequately on the other. The core question is: what consequences does the way the category sex is included in different statistical analyses have on the results when identifying potential target groups? Logistic regressions and a two-stage procedure were conducted exploratively as evaluation methods; the two-stage procedure combines graphical models with CHAID decision trees and allows complex results to be visualised. Both methods were run stratified by sex/gender as well as adjusted for sex/gender, and the results were compared. Only the stratified analyses were able to detect differences between the sexes and within the sex/gender groups when no prior knowledge was available; adjusted analyses can detect sex/gender differences only if interaction terms are included in the model. The results are discussed from a statistical-epidemiological perspective as well as in the context of health reporting. In conclusion, whether a statistical method is gender-sensitive can only be judged for a concrete research question under known conditions. Often, an appropriate statistical procedure can be chosen after conducting separate analyses for women and men. Future gender studies require innovative study designs as well as conceptual clarity regarding the biological and sociocultural elements of the category sex/gender.
Allocation of healthcare dollars: analysis of nonneonatal circumcisions in Florida.
Gutwein, Luke G; Alvarez, Juan F; Gutwein, Jenny L; Kays, David W; Islam, Saleem
2013-09-01
Circumcision remains a controversial operation. Most procedures are performed in the neonatal period and avoid general anesthesia. Legislation driven by policy statements from the American Academy of Pediatrics led to significant changes in circumcisions in Florida, with a shift to nonneonatal procedures as a result of costs. We sought to study the prevalence and financial implications of nonneonatal circumcisions in Florida. A retrospective population study was performed using the Florida Agency for Health Care Administration outpatient procedure database. We queried for patients 0 to 17 years of age undergoing circumcision between 2003 and 2008. Demographics, charges, and insurance status were analyzed. From 2003 to 2008, 31,741 outpatient circumcisions were performed. Publicly funded circumcisions accounted for 17,537, charging the state $6,263 on average for each circumcision at an expense of $111.8 million for the 5-year period analyzed. Publicly funded circumcision procedures increased more than sixfold (P < 0.0001) compared with those covered by private insurance. Circumcision procedures in black patients increased 77.3 per cent, whereas those in white patients increased 28.7 per cent. There has been a significant increase in the number of nonneonatal circumcisions performed, resulting in an increased economic burden on health care. Public funding of neonatal circumcision could result in significant cost savings and avoid potential complications of general anesthesia.
Qiu, Xing; Hu, Rui; Wu, Zhixin
2014-01-01
Normalization procedures are widely used in high-throughput genomic data analyses to remove various technological noise and variations. They are known to have a profound impact on the subsequent gene differential expression analysis. Although there has been some research in evaluating different normalization procedures, few attempts have been made to systematically evaluate the gene detection performances of normalization procedures from the bias-variance trade-off point of view, especially with strong gene differentiation effects and large sample size. In this paper, we conduct a thorough study to evaluate the effects of normalization procedures combined with several commonly used statistical tests and multiple testing procedures (MTPs) under different configurations of effect size and sample size. We conduct theoretical evaluation based on a random effect model, as well as simulation and biological data analyses to verify the results. Based on our findings, we provide some practical guidance for selecting a suitable normalization procedure under different scenarios. PMID:24941114
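As a concrete instance of the class of procedures being evaluated, quantile normalization (one commonly used option; the paper itself evaluates several) can be sketched as follows:

    import numpy as np

    def quantile_normalize(X):
        """Quantile normalization: force every column (array/sample) to share
        one reference distribution, the row-wise mean of the sorted columns."""
        ranks = np.argsort(np.argsort(X, axis=0), axis=0)
        target = np.sort(X, axis=0).mean(axis=1)
        return target[ranks]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6)) * np.array([1.0, 2.0, 1.0, 3.0, 1.0, 2.0])
    print(quantile_normalize(X).std(axis=0))   # identical spread in all columns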
Cappel, J; Lüders, K
1980-01-24
The indications, standard operative procedure and technique of outpatient diagnostic breast surgery are described in detail on the basis of 15 years of experience. With respect to cost reduction, intraoperative frozen sections are important, as is radical mastectomy immediately after biopsy under continuous general anesthesia in case of malignancy. A retrospective analysis showed that the operative results in the treatment of benign breast tumors do not differ between outpatient and inpatient surgery. According to a survey, 60% of patients preferred the outpatient procedure.
Multianalyte imaging in one-shot format sensors for natural waters.
Lapresta-Fernández, A; Huertas, Rafael; Melgosa, Manuel; Capitán-Vallvey, L F
2009-03-23
A one-shot multisensor based on ionophore-chromoionophore chemistry for the optical monitoring of potassium, magnesium and hardness in water is presented. The analytical procedure uses a black-and-white non-cooled CCD camera for image acquisition of the one-shot multisensor after reaction, followed by data treatment for quantitation using the grey-value pixel average from a defined region of interest in each sensing area to build the analytical parameter 1-alpha. Under optimised experimental conditions, the procedure shows a large linear range, spanning up to 6 orders of magnitude with the linearised model, and good detection limits: 9.92 x 10^-5 mM, 1.86 x 10^-3 mM and 1.30 x 10^-2 mg L^-1 of CaCO3 for potassium, magnesium and hardness, respectively. This analysis system exhibits good precision in terms of relative standard deviation (RSD%), from 2.3 to 3.8 for potassium, from 5.0 to 6.8 for magnesium and from 5.4 to 5.9 for hardness. The trueness of this multisensor procedure was demonstrated by comparing it with results obtained by a DAD spectrophotometer used as a reference. Finally, it was satisfactorily applied to the analysis of these analytes in miscellaneous samples, such as water and beverage samples from different origins, validating the results against atomic absorption spectrometry (AAS) as the reference procedure.
Error monitoring issues for common channel signaling
NASA Astrophysics Data System (ADS)
Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.
1994-04-01
Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
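The SS7 link error monitor analyzed above is commonly described as a leaky-bucket counter (the Q.703 signal-unit error-rate monitor, usually quoted with a changeover threshold of 64 and a leak of one count per 256 signal units; treat the parameters here as illustrative). A minimal sketch of that mechanism:

    import random

    def suerm(errors, threshold=64, interval=256):
        """Leaky-bucket signal-unit error-rate monitor. `errors` yields one
        boolean per received signal unit; returns the index at which a
        changeover would be triggered, or None if the link survives."""
        counter, seen = 0, 0
        for i, errored in enumerate(errors):
            if errored:
                counter += 1
                if counter >= threshold:
                    return i                  # link taken out of service
            seen += 1
            if seen == interval:              # leak one count per interval SUs
                seen = 0
                counter = max(0, counter - 1)
        return None

    random.seed(0)
    print(suerm(random.random() < 0.01 for _ in range(200000)))   # trips
    print(suerm(random.random() < 0.001 for _ in range(200000)))  # survives

The sketch also shows why a narrow band of error rates causes oscillation: just above the leak rate (1/256 per signal unit here), the counter drifts slowly upward, so the link repeatedly changes over and, after proving-in, changes back.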
Sabatini, Linda M; Mathews, Charles; Ptak, Devon; Doshi, Shivang; Tynan, Katherine; Hegde, Madhuri R; Burke, Tara L; Bossler, Aaron D
2016-05-01
The increasing use of advanced nucleic acid sequencing technologies for clinical diagnostics and therapeutics has made it vital to understand the costs of performing these procedures and their value to patients, providers, and payers. The Association for Molecular Pathology invested in a cost and value analysis of specific genomic sequencing procedures (GSPs) newly coded by the American Medical Association Current Procedural Terminology Editorial Panel. Cost data and work effort, including the development and use of data analysis pipelines, were gathered from representative laboratories currently performing these GSPs. Results were aggregated to generate representative cost ranges given the complexity and variability of performing the tests. Cost-impact models for three clinical scenarios were generated with assistance from key opinion leaders: the impact of using a targeted gene panel in optimizing care for patients with advanced non-small-cell lung cancer, the use of a targeted gene panel in the diagnosis and management of patients with sensorineural hearing loss, and exome sequencing in the diagnosis and management of children with neurodevelopmental disorders of unknown genetic etiology. Each model demonstrated value by either reducing health care costs or identifying appropriate care pathways. The templates generated will aid laboratories in assessing their individual costs, considering the value structure in their own patient populations, and contributing their data to the ongoing dialogue regarding the impact of GSPs on improving patient care. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Reliability of sensor-based real-time workflow recognition in laparoscopic cholecystectomy.
Kranzfelder, Michael; Schneider, Armin; Fiolka, Adam; Koller, Sebastian; Reiser, Silvano; Vogel, Thomas; Wilhelm, Dirk; Feussner, Hubertus
2014-11-01
Laparoscopic cholecystectomy is a very common minimally invasive surgical procedure that may be improved by autonomous or cooperative assistance support systems. Model-based surgery with a precise definition of distinct procedural tasks (PT) of the operation was implemented and tested to depict and analyze the process of this procedure. The reliability of real-time workflow recognition in laparoscopic cholecystectomy was evaluated by continuous sensor-based data acquisition. Ten PTs were defined, including begin/end preparation of Calot's triangle, clipping/cutting cystic artery and duct, begin/end gallbladder dissection, begin/end hemostasis, gallbladder removal, and end of operation. Data acquisition was achieved with continuous instrument detection, room/table light status, intra-abdominal pressure, table tilt, irrigation/aspiration volume and coagulation/cutting current application. Two independent observers recorded the start and endpoint of each step by analysis of the sensor data. The data were cross-checked with laparoscopic video recordings serving as the gold standard for PT identification. Bland-Altman analysis revealed for 95% of cases a difference of annotation results within the limits of agreement, ranging from -309 s (PT 7) to +368 s (PT 5). Laparoscopic video and sensor data matched to a greater or lesser extent within the different procedural tasks. In the majority of cases, the observer results exceeded those obtained from the laparoscopic video. Empirical knowledge was required to detect phase transitions. A set of sensors used to monitor laparoscopic cholecystectomy procedures was sufficient to enable expert observers to reliably identify each PT. In the future, computer systems may automate the task identification process provided a more robust data inflow is available.
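The Bland-Altman limits of agreement reported above are simply the mean difference plus or minus 1.96 standard deviations of the differences between the two annotation methods; a generic sketch with made-up annotation times:

    import numpy as np

    def bland_altman(a, b):
        """Mean difference (bias) and 95% limits of agreement of two raters."""
        d = np.asarray(a, float) - np.asarray(b, float)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    observer = np.array([120.0, 95.0, 300.0, 210.0, 45.0])   # annotation times (s)
    video    = np.array([110.0, 100.0, 290.0, 230.0, 40.0])  # gold standard (s)
    print(bland_altman(observer, video))   # bias, lower and upper limits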
Zoppetti, Nicola; Bogi, Andrea; Pinto, Iole; Andreuccetti, Daniele
2015-02-01
In this paper, a procedure is described for the assessment of human exposure to magnetic fields with complex waveforms generated by arc-welding equipment. The work starts from an analysis of the relevant guidelines and technical standards, underlining their strengths and their limits. The procedure is then described, with particular attention to the techniques used to treat complex-waveform fields. Finally, the procedure is applied to concrete cases encountered in the workplace. The discussion of the results highlights the critical points of the procedure, as well as those related to the evolution of the technical and exposure standards. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Energy Navigation: Simulation Evaluation and Benefit Analysis
NASA Technical Reports Server (NTRS)
Williams, David H.; Oseguera-Lohr, Rosa M.; Lewis, Elliot T.
2011-01-01
This paper presents results from two simulation studies investigating the use of advanced flight-deck-based energy navigation (ENAV) and conventional transport-category vertical navigation (VNAV) for conducting a descent through a busy terminal area, using Continuous Descent Arrival (CDA) procedures. This research was part of the Low Noise Flight Procedures (LNFP) element within the Quiet Aircraft Technology (QAT) Project, and the subsequent Airspace Super Density Operations (ASDO) research focus area of the Airspace Project. A piloted simulation study addressed development of flight guidance, and supporting pilot and Air Traffic Control (ATC) procedures for high density terminal operations. The procedures and charts were designed to be easy to understand, and to make it easy for the crew to make changes via the Flight Management Computer Control-Display Unit (FMC-CDU) to accommodate changes from ATC.
False Discovery Control in Large-Scale Spatial Multiple Testing
Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin
2014-01-01
This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods for analyzing the time trends in tropospheric ozone in eastern US. PMID:25642138
NASA Astrophysics Data System (ADS)
Pope, Crystal L.; Crenshaw, D. Michael; Fischer, Travis C.
2016-01-01
We present a preliminary analysis of the inflows and outflows in the narrow-line regions of nearby (z<0.1) AGN using observations from the Gemini-North telescope's Near-Infrared Integral Field Spectrograph (NIFS). In addition to the standard reduction procedure for NIFS data cubes, these observations were treated for multiple sources of noise and artifacts from the adaptive optics observations and the NIFS instrument. This procedure included the following steps: correction of the differential atmospheric refraction, spatial resampling, low-pass Butterworth spatial filtering, removal of the "instrumental fingerprint", and Richardson-Lucy deconvolution. We compare measurements from NIFS data cubes with and without the additional correction procedures to determine the effect of this data treatment on our scientific results.
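Two of the treatment steps listed above, low-pass Butterworth spatial filtering and Richardson-Lucy deconvolution, have compact textbook forms. The sketch below is a generic implementation applied to a single 2D image slice, not the authors' pipeline; the cutoff, filter order, and iteration count are arbitrary placeholders:

```python
import numpy as np
from scipy.signal import fftconvolve

def butterworth_lowpass(img, cutoff=0.2, order=2):
    """Frequency-domain Butterworth low-pass filter for a 2D image.

    cutoff is expressed as a fraction of the Nyquist frequency.
    """
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    r = np.sqrt(fx**2 + fy**2) / 0.5          # radius as fraction of Nyquist
    h = 1.0 / (1.0 + (r / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

def richardson_lucy(img, psf, n_iter=30):
    """Basic Richardson-Lucy deconvolution (illustrative, unregularized)."""
    img = np.asarray(img, dtype=float)
    est = np.full_like(img, img.mean())        # flat initial estimate
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        est *= fftconvolve(img / np.maximum(conv, 1e-12), psf_mirror,
                           mode="same")
    return est
```

In practice a library routine such as skimage.restoration.richardson_lucy would normally be preferred over a hand-rolled loop; the explicit version is shown only to make the update rule visible.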
Solution of elliptic PDEs by fast Poisson solvers using a local relaxation factor
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1986-01-01
A large class of two- and three-dimensional, nonseparable elliptic partial differential equations (PDEs) is solved by means of novel one-step (D'Yakanov-Gunn) and two-step (accelerated one-step) iterative procedures, using a local, discrete Fourier analysis. In addition to being easily implemented and applicable to a variety of boundary conditions, these procedures are found to be computationally efficient on the basis of numerical comparisons with other established methods, which lack the present procedures' advantages: (1) insensitivity to grid cell size and aspect ratio, and (2) ease of convergence-rate estimation from the coefficients of the PDE being solved. The two-step procedure is numerically demonstrated to outperform the one-step procedure in the case of PDEs with variable coefficients.
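The structure of such a scheme, a pointwise iteration whose relaxation factor is chosen locally from the PDE coefficients, can be sketched compactly. The heuristic local factor below is invented for illustration and is not the local-Fourier-analysis factor of the D'Yakanov-Gunn scheme:

```python
import numpy as np

def solve_variable_poisson(a, f, h, n_iter=5000, tol=1e-8):
    """Pointwise iteration for  div(a grad u) = f  on a square grid with
    homogeneous Dirichlet boundaries, using a locally chosen relaxation
    factor (a simple damping heuristic, purely illustrative).
    """
    u = np.zeros_like(f)
    # Face coefficients by arithmetic averaging of the cell coefficient a.
    aE = 0.5 * (a[1:-1, 2:] + a[1:-1, 1:-1])
    aW = 0.5 * (a[1:-1, :-2] + a[1:-1, 1:-1])
    aN = 0.5 * (a[2:, 1:-1] + a[1:-1, 1:-1])
    aS = 0.5 * (a[:-2, 1:-1] + a[1:-1, 1:-1])
    diag = aE + aW + aN + aS
    # Local relaxation factor: damp harder where the coefficient contrast
    # between neighboring faces is largest.
    contrast = (np.maximum(np.maximum(aE, aW), np.maximum(aN, aS))
                / np.minimum(np.minimum(aE, aW), np.minimum(aN, aS)))
    omega = 1.0 / (0.5 + 0.5 * contrast)
    for _ in range(n_iter):
        res = (aE * u[1:-1, 2:] + aW * u[1:-1, :-2]
               + aN * u[2:, 1:-1] + aS * u[:-2, 1:-1]
               - diag * u[1:-1, 1:-1] - h * h * f[1:-1, 1:-1])
        u[1:-1, 1:-1] += omega * res / diag
        if np.abs(res).max() < tol:
            break
    return u
```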
Bon-EV: an improved multiple testing procedure for controlling false discovery rates.
Li, Dongmei; Xie, Zidian; Zand, Martin; Fogg, Thomas; Dye, Timothy
2017-01-03
Stability of multiple testing procedures, defined as the standard deviation of the total number of discoveries, can be used as an indicator of the variability of multiple testing procedures. Improving the stability of multiple testing procedures can help to increase the consistency of findings from replicated experiments. The Benjamini-Hochberg and Storey's q-value procedures are two commonly used multiple testing procedures for controlling false discoveries in genomic studies. Storey's q-value procedure has higher power and lower stability than the Benjamini-Hochberg procedure. To improve upon the stability of Storey's q-value procedure and maintain its high power in genomic data analysis, we propose a new multiple testing procedure, named Bon-EV, to control the false discovery rate (FDR) based on Bonferroni's approach. Simulation studies show that our proposed Bon-EV procedure can maintain the high power of Storey's q-value procedure and also result in better FDR control and higher stability than Storey's q-value procedure for samples of large size (30 in each group) and medium size (15 in each group), for independent, somewhat correlated, or highly correlated test statistics. When the sample size is small (5 in each group), our proposed Bon-EV procedure performs between the Benjamini-Hochberg procedure and Storey's q-value procedure. Examples using RNA-Seq data show that the Bon-EV procedure has higher stability than Storey's q-value procedure while maintaining equivalent power, and higher power than the Benjamini-Hochberg procedure. For medium or large sample sizes, the Bon-EV procedure has improved FDR control and stability compared with Storey's q-value procedure and improved power compared with the Benjamini-Hochberg procedure. The Bon-EV multiple testing procedure is available as the BonEV package in R for download at https://CRAN.R-project.org/package=BonEV .
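Stability in the sense defined above can be estimated directly by simulation: repeat the experiment many times and take the standard deviation of the discovery counts. The sketch below does this for the Benjamini-Hochberg procedure; the group sizes, effect size, and gene counts are hypothetical choices, not the paper's simulation design:

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def discovery_counts(n_rep=200, n_genes=2000, n_signal=100, n=15, seed=1):
    """Simulate replicated two-group experiments; count BH discoveries."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_rep):
        g1 = rng.normal(0.0, 1.0, size=(n_genes, n))
        g2 = rng.normal(0.0, 1.0, size=(n_genes, n))
        g2[:n_signal] += 1.0                   # genuinely shifted genes
        _, pvals = ttest_ind(g1, g2, axis=1)
        reject = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]
        counts.append(reject.sum())
    return np.array(counts)

counts = discovery_counts()
print(f"mean discoveries = {counts.mean():.1f}, "
      f"stability (SD) = {counts.std(ddof=1):.1f}")
```

Swapping in a different procedure (for example a q-value method) at the multipletests call and comparing the two standard deviations reproduces the kind of stability comparison described in the abstract.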
A Field-Effect Transistor (FET) model for ASAP
NASA Technical Reports Server (NTRS)
Ming, L.
1965-01-01
The derivation of the circuitry of a field effect transistor (FET) model, the procedure for adapting the model to automated statistical analysis program (ASAP), and the results of applying ASAP on this model are described.
Preliminary analysis techniques for ring and stringer stiffened cylindrical shells
NASA Technical Reports Server (NTRS)
Graham, J.
1993-01-01
This report outlines methods of analysis for the buckling of thin-walled circumferentially and longitudinally stiffened cylindrical shells. Methods of analysis for the various failure modes are presented in one cohesive package. Where applicable, more than one method of analysis for a failure mode is presented along with standard practices. The results of this report are primarily intended for use in launch vehicle design in the elastic range. A Microsoft Excel worksheet with accompanying macros has been developed to automate the analysis procedures.
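As context for the failure-mode methods collected in such a package, the classical small-deflection estimate for axial buckling of a thin unstiffened cylinder is the usual starting point. The sketch below is a generic textbook formula, not taken from the report; the material values and knockdown factor are placeholders:

```python
import math

def classical_axial_buckling_stress(E, t, R, nu=0.3, knockdown=1.0):
    """Classical critical axial buckling stress of a thin cylinder:

        sigma_cr = E * t / (R * sqrt(3 * (1 - nu**2)))

    A knockdown factor < 1 is normally applied in design practice to
    account for imperfection sensitivity (the value here is a placeholder).
    """
    return knockdown * E * t / (R * math.sqrt(3.0 * (1.0 - nu**2)))

# Example: aluminum shell, E = 71 GPa, t = 2 mm, R = 1.5 m.
sigma = classical_axial_buckling_stress(71e9, 0.002, 1.5, knockdown=0.65)
print(f"critical stress ~ {sigma / 1e6:.0f} MPa")
```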
Systematic procedure for designing processes with multiple environmental objectives.
Kim, Ki-Joo; Smith, Raymond L
2005-04-01
Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
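The core operation behind any such multiobjective genetic algorithm is the Pareto dominance test that decides which candidate designs survive. A minimal sketch, with invented design points (the real problem has eight environmental objectives plus profit):

```python
import numpy as np

def pareto_front(points):
    """Return the indices of non-dominated points, assuming every
    objective is to be minimized (negate any profit column first).
    """
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some q is <= p everywhere and < p somewhere.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical designs: columns = (environmental impact score, -profit).
designs = np.array([[3.0, -5.0], [2.0, -4.0], [4.0, -6.0], [2.5, -5.5]])
print("Pareto-optimal designs:", pareto_front(designs))
```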
Influence of Freezing and Storage Procedure on Human Urine Samples in NMR-Based Metabolomics
Rist, Manuela J.; Muhle-Goll, Claudia; Görling, Benjamin; Bub, Achim; Heissler, Stefan; Watzl, Bernhard; Luy, Burkhard
2013-01-01
It is consensus in the metabolomics community that standardized protocols should be followed for sample handling, storage and analysis, as it is of utmost importance to maintain constant measurement conditions to identify subtle biological differences. The aim of this work, therefore, was to systematically investigate the influence of freezing procedures and storage temperatures and their effect on NMR spectra as a potentially disturbing aspect for NMR-based metabolomics studies. Urine samples were collected from two healthy volunteers, centrifuged and divided into aliquots. Urine aliquots were frozen either at −20 °C, on dry ice, at −80 °C or in liquid nitrogen and then stored at −20 °C, −80 °C or in liquid nitrogen vapor phase for 1–5 weeks before NMR analysis. Results show spectral changes depending on the freezing procedure, with samples frozen on dry ice showing the largest deviations. The effect was found to be based on pH differences, which were caused by variations in CO2 concentrations introduced by the freezing procedure. Thus, we recommend that urine samples should be frozen at −20 °C and transferred to lower storage temperatures within one week and that freezing procedures should be part of the publication protocol. PMID:24957990
Analysis of dynamic multiplicity fluctuations at PHOBOS
NASA Astrophysics Data System (ADS)
Chai, Zhengwei; PHOBOS Collaboration; Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Chai, Z.; Decowski, M. P.; García, E.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Holynski, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steinberg, P.; Stephans, G. S. F.; Sukhanov, A.; Tang, J. L.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Verdier, R.; Wolfs, F. L. H.; Wosiek, B.; Wozniak, K.; Wuosmaa, A. H.; Wyslouch, B.
2005-01-01
This paper presents the analysis of the dynamic fluctuations in the inclusive charged-particle multiplicity measured by PHOBOS for Au+Au collisions at √s_NN = 200 GeV within the pseudorapidity range -3 < η < 3. First, the definition of the fluctuation observables used in this analysis is presented, together with a discussion of their physical meaning. Then the procedure for the extraction of dynamic fluctuations is described. Some preliminary results are included to illustrate the correlation features of the fluctuation observable. New dynamic fluctuation results will be available in a later publication.
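The abstract does not define its observable, so as a generic example, one standard multiplicity-fluctuation measure is the scaled variance ω = Var(N)/⟨N⟩, whose excess above the Poisson expectation ω = 1 signals dynamic fluctuations. A toy sketch with invented event samples:

```python
import numpy as np

def scaled_variance(multiplicities):
    """omega = Var(N) / <N>; equals 1 for a pure Poisson source, so the
    excess above 1 is a simple signature of dynamic fluctuations.
    """
    n = np.asarray(multiplicities, dtype=float)
    return n.var(ddof=1) / n.mean()

# Toy events: Poisson baseline plus event-by-event variation of the mean.
rng = np.random.default_rng(2)
means = rng.normal(300.0, 20.0, size=10000)    # dynamic component
n_ch = rng.poisson(np.clip(means, 1.0, None))  # statistical component
print(f"scaled variance = {scaled_variance(n_ch):.2f}")
```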
The Retrospective Iterated Analysis Scheme for Nonlinear Chaotic Dynamics
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2002-01-01
Atmospheric data assimilation is the name scientists give to the techniques of blending atmospheric observations with atmospheric model results to obtain an accurate picture of what the atmosphere looks like at any given time. Because two pieces of information are used, observations and model results, the outcome of a data assimilation procedure should be better than what one would get by using either piece of information alone. There are a number of different mathematical techniques that fall under the data assimilation jargon. In theory, most of these techniques accomplish about the same thing. In practice, however, slight differences in the approaches amount to faster algorithms in some cases, more economical algorithms in other cases, and in yet other cases even better overall results, because of practical uncertainties not accounted for by theory. Therefore, the key is to find the most adequate data assimilation procedure for the problem at hand. In our Data Assimilation group we have been doing extensive research to find just such a procedure. One promising possibility is what we call the retrospective iterated analysis (RIA) scheme. This procedure has recently been implemented and studied in the context of a very large data assimilation system built to help predict and study weather and climate. Although the results from that study suggest that the RIA scheme produces quite reasonable results, a complete evaluation of the scheme is very difficult because of the complexity of that problem. The present work steps back a little and studies the behavior of the RIA scheme in the context of a small problem. The problem is small enough to allow a full assessment of the quality of the RIA scheme, but it still has some of the complexity found in nature, namely, its chaotic-type behavior. We find that the RIA scheme performs very well for this small but still complex problem, a result that supports the findings of our earlier studies.
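The small chaotic test problems used for such studies are typically of the Lorenz-63 type. The toy sketch below illustrates only the iteration structure of a retrospective scheme (re-running a fixed window and feeding early-window misfit back into the initial condition); it is not Todling's RIA algorithm, and the nudging and feedback gains are arbitrary:

```python
import numpy as np

def lorenz63_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (toy integrator)."""
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

rng = np.random.default_rng(3)
n_steps, obs_sigma, gain = 200, 0.5, 0.3

# Truth trajectory and noisy observations of the full state.
truth = [np.array([1.0, 1.0, 1.0])]
for _ in range(n_steps):
    truth.append(lorenz63_step(truth[-1]))
obs = [x + rng.normal(0.0, obs_sigma, 3) for x in truth]

def forward_pass(x0):
    """Forward nudging pass: integrate, pulling the state toward each
    observation with a fixed (arbitrary) gain."""
    xs = [x0.copy()]
    for k in range(n_steps):
        x = lorenz63_step(xs[-1])
        xs.append(x + gain * (obs[k + 1] - x))
    return xs

# Iterated analysis: re-run the window, feeding the mean misfit over an
# early window back into the initial condition each iteration.
x0 = obs[0].copy()
for it in range(5):
    xs = forward_pass(x0)
    innovation = np.mean([obs[k] - xs[k] for k in range(10)], axis=0)
    x0 = x0 + 0.5 * innovation
    err = np.linalg.norm(x0 - truth[0])
    print(f"iteration {it}: initial-state error = {err:.3f}")
```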
The contribution of molecular relaxation in nitrogen to the absorption of sound in the atmosphere
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.; Meredith, R. W.
1980-01-01
Results and a statistical analysis are presented for sound absorption in N2-H2O binary mixtures at room temperature. The experimental procedure, temperature effects, and preliminary results are also presented for sound absorption in these mixtures at elevated temperatures.
10 CFR 436.13 - Presuming cost-effectiveness results.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.13 Presuming cost-effectiveness results. (a) If... life cycle cost-effective without further analysis. (b) A Federal agency may presume that an investment in an energy or water conservation measure retrofit to an existing Federal building is not life cycle...
ERIC Educational Resources Information Center
Ruscio, John; Ruscio, Ayelet Meron; Meron, Mati
2007-01-01
Meehl's taxometric method was developed to distinguish categorical and continuous constructs. However, taxometric output can be difficult to interpret because expected results for realistic data conditions and differing procedural implementations have not been derived analytically or studied through rigorous simulations. By applying bootstrap…