NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
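As an illustration of the kind of model fitting the standard describes (linear, quadratic, and exponential trends fitted to time-series data), the following is a minimal sketch using invented quarterly anomaly counts; it is not taken from the standard itself and assumes only numpy.

    import numpy as np

    # Hypothetical time-series data: anomaly counts per quarter (illustrative only).
    t = np.arange(12, dtype=float)
    y = np.array([3.0, 4.1, 4.8, 6.2, 7.9, 9.5, 12.0, 14.3, 17.9, 21.4, 26.2, 31.8])

    # Linear and quadratic trends via polynomial least squares.
    lin = np.polyfit(t, y, 1)      # y ~ b0 + b1*t
    quad = np.polyfit(t, y, 2)     # y ~ b0 + b1*t + b2*t^2

    # Exponential trend y ~ a*exp(b*t), fitted as a straight line in log space.
    loga, b = np.polyfit(t, np.log(y), 1)[::-1]
    a = np.exp(loga)

    # Compare residual sums of squares to pick the better-fitting descriptive model.
    rss = {
        "linear": np.sum((y - np.polyval(lin, t)) ** 2),
        "quadratic": np.sum((y - np.polyval(quad, t)) ** 2),
        "exponential": np.sum((y - a * np.exp(b * t)) ** 2),
    }
    print(rss)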
Development of Gold Standard Ion-Selective Electrode-Based Methods for Fluoride Analysis
Martínez-Mier, E.A.; Cury, J.A.; Heilman, J.R.; Katz, B.P.; Levy, S.M.; Li, Y.; Maguire, A.; Margineda, J.; O’Mullane, D.; Phantumvanit, P.; Soto-Rojas, A.E.; Stookey, G.K.; Villa, A.; Wefel, J.S.; Whelton, H.; Whitford, G.M.; Zero, D.T.; Zhang, W.; Zohouri, V.
2011-01-01
Background/Aims: Currently available techniques for fluoride analysis are not standardized. Therefore, this study was designed to develop standardized methods for analyzing fluoride in biological and nonbiological samples used for dental research. Methods: A group of nine laboratories analyzed a set of standardized samples for fluoride concentration using their own methods. The group then reviewed existing analytical techniques for fluoride analysis, identified inconsistencies in the use of these techniques and conducted testing to resolve differences. Based on the results of the testing undertaken to define the best approaches for the analysis, the group developed recommendations for direct and microdiffusion methods using the fluoride ion-selective electrode. Results: Initial results demonstrated that there was no consensus regarding the choice of analytical techniques for different types of samples. Although for several types of samples, the results of the fluoride analyses were similar among some laboratories, greater differences were observed for saliva, food and beverage samples. In spite of these initial differences, precise and true values of fluoride concentration, as well as smaller differences between laboratories, were obtained once the standardized methodologies were used. Intraclass correlation coefficients ranged from 0.90 to 0.93, for the analysis of a certified reference material, using the standardized methodologies. Conclusion: The results of this study demonstrate that the development and use of standardized protocols for F analysis significantly decreased differences among laboratories and resulted in more precise and true values. PMID:21160184
Practical semen analysis: from A to Z
Brazil, Charlene
2010-01-01
Accurate semen analysis is critical for decisions about patient care, as well as for studies addressing overall changes in semen quality, contraceptive efficacy and effects of toxicant exposure. The standardization of semen analysis is very difficult for many reasons, including the use of subjective techniques with no standards for comparison, poor technician training, problems with proficiency testing and a reluctance to change techniques. The World Health Organization (WHO) Semen handbook (2010) offers a vastly improved set of standardized procedures, all at a level of detail that will preclude most misinterpretations. However, there is a limit to what can be learned from words and pictures alone. A WHO-produced DVD that offers complete demonstrations of each technique along with quality assurance standards for motility, morphology and concentration assessments would enhance the effectiveness of the manual. However, neither the manual nor a DVD will help unless there is general acknowledgement of the critical need to standardize techniques and rigorously pursue quality control to ensure that laboratories actually perform techniques 'according to WHO' instead of merely reporting that they have done so. Unless improvements are made, patient results will continue to be compromised and comparison between studies and laboratories will have limited merit. PMID:20111076
Advanced Undergraduate Experiments in Thermoanalytical Chemistry.
ERIC Educational Resources Information Center
Hill, J. O.; Magee, R. J.
1988-01-01
Describes several experiments using the techniques of thermal analysis and thermometric titrimetry. Defines thermal analysis and several recent branches of the technique. Notes most of the experiments use simple equipment and standard laboratory techniques. (MVL)
NASA Technical Reports Server (NTRS)
Carden, J. L.; Browner, R.
1982-01-01
The preparation and analysis of standardized waste samples for controlled ecological life support systems (CELSS) are considered. Analysis of samples from wet oxidation experiments, the development of ion chromatographic techniques utilizing conventional high pressure liquid chromatography (HPLC) equipment, and an investigation of techniques for interfacing an ion chromatograph (IC) with an inductively coupled plasma optical emission spectrometer (ICPOES) are discussed.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
A simple 2D composite image analysis technique for the crystal growth study of L-ascorbic acid.
Kumar, Krishan; Kumar, Virender; Lal, Jatin; Kaur, Harmeet; Singh, Jasbir
2017-06-01
This work was aimed at 2D crystal growth studies of L-ascorbic acid using the composite image analysis technique. Growth experiments on the L-ascorbic acid crystals were carried out by standard (optical) microscopy, laser diffraction analysis, and composite image analysis. For image analysis, the growth of L-ascorbic acid crystals was captured as digital 2D RGB images, which were then processed to composite images. After processing, the crystal boundaries emerged as white lines against the black (cancelled) background. The crystal boundaries were well differentiated by peaks in the intensity graphs generated for the composite images. The lengths of crystal boundaries measured from the intensity graphs of composite images were in good agreement (correlation coefficient "r" = 0.99) with the lengths measured by standard microscopy. In contrast, the lengths measured by laser diffraction were poorly correlated with both techniques. Therefore, the composite image analysis can replace the standard microscopy technique for the crystal growth studies of L-ascorbic acid. © 2017 Wiley Periodicals, Inc.
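A rough sketch of the intensity-profile step described above, assuming the composite image (white boundary lines on a black background) has already been produced; the file name and pixel calibration are hypothetical, and numpy, scipy and imageio are assumed available.

    import numpy as np
    import imageio.v3 as iio
    from scipy.signal import find_peaks

    # Hypothetical composite image: crystal boundaries appear as bright lines on a dark background.
    img = iio.imread("composite_crystals.png").astype(float)
    if img.ndim == 3:                           # collapse RGB to a single intensity channel
        img = img.mean(axis=2)

    # Intensity profile along one row crossing the crystal of interest.
    row = img.shape[0] // 2
    profile = img[row, :]

    # Boundary positions show up as peaks in the intensity graph.
    peaks, _ = find_peaks(profile, height=0.5 * profile.max(), distance=5)

    # Length between the two outermost boundary peaks, converted with an assumed scale factor.
    um_per_pixel = 1.7
    if len(peaks) >= 2:
        length_um = (peaks[-1] - peaks[0]) * um_per_pixel
        print(f"crystal length ~ {length_um:.1f} micrometres")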
1993-06-18
...rule rather than the exception. In the Standardized Aquatic Microcosm and the Mixed Flask Culture (MFC) microcosms, multivariate analysis and clustering methods...experiments using two microcosm protocols. We use nonmetric clustering, a multivariate pattern recognition technique developed by Matthews and Hearne (1991).
Amberg, Alexander; Barrett, Dave; Beale, Michael H.; Beger, Richard; Daykin, Clare A.; Fan, Teresa W.-M.; Fiehn, Oliver; Goodacre, Royston; Griffin, Julian L.; Hankemeier, Thomas; Hardy, Nigel; Harnly, James; Higashi, Richard; Kopka, Joachim; Lane, Andrew N.; Lindon, John C.; Marriott, Philip; Nicholls, Andrew W.; Reily, Michael D.; Thaden, John J.; Viant, Mark R.
2013-01-01
There is a general consensus that supports the need for standardized reporting of metadata or information describing large-scale metabolomics and other functional genomics data sets. Reporting of standard metadata provides a biological and empirical context for the data, facilitates experimental replication, and enables the re-interrogation and comparison of data by others. Accordingly, the Metabolomics Standards Initiative is building a general consensus concerning the minimum reporting standards for metabolomics experiments; the Chemical Analysis Working Group (CAWG) is one part of this community effort. This article proposes the minimum reporting standards related to the chemical analysis aspects of metabolomics experiments, including sample preparation, experimental analysis, quality control, metabolite identification, and data pre-processing. These minimum standards currently focus mostly upon mass spectrometry and nuclear magnetic resonance spectroscopy due to the popularity of these techniques in metabolomics. However, additional input concerning other techniques is welcomed and can be provided via the CAWG on-line discussion forum at http://msi-workgroups.sourceforge.net/ or http://Msi-workgroups-feedback@lists.sourceforge.net. Further, community input related to this document can also be provided via this electronic forum. PMID:24039616
Determination of fiber volume in graphite/epoxy materials using computer image analysis
NASA Technical Reports Server (NTRS)
Viens, Michael J.
1990-01-01
The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
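A minimal, hedged sketch of the pixel-counting idea behind such an image-based fiber volume measurement (not the authors' actual software): threshold a grayscale micrograph so that fibers are bright, then take the bright-pixel fraction as the fiber area fraction. The file name and threshold choice are assumptions.

    import numpy as np
    import imageio.v3 as iio

    # Hypothetical cross-section micrograph at 1000x; fibers assumed brighter than matrix.
    img = iio.imread("graphite_epoxy_xsec.png").astype(float)
    if img.ndim == 3:
        img = img.mean(axis=2)

    # An Otsu-style split is one option; here a simple global mean threshold is assumed.
    threshold = img.mean()
    fiber_mask = img > threshold

    fiber_area_fraction = fiber_mask.mean()   # fraction of pixels classified as fiber
    print(f"estimated fiber volume fraction: {100 * fiber_area_fraction:.1f}%")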
Wood, Jessica L; Steiner, Robert R
2011-06-01
Forensic analysis of pharmaceutical preparations requires a comparative analysis with a standard of the suspected drug in order to identify the active ingredient. Purchasing analytical standards can be expensive or unattainable from the drug manufacturers. Direct Analysis in Real Time (DART™) is a novel, ambient ionization technique, typically coupled with a JEOL AccuTOF™ (accurate mass) mass spectrometer. While a fast and easy technique to perform, a drawback of using DART™ is the lack of component separation of mixtures prior to ionization. Various in-house pharmaceutical preparations were purified using thin-layer chromatography (TLC) and mass spectra were subsequently obtained using the AccuTOF™-DART™ technique. Utilizing TLC prior to sample introduction provides a simple, low-cost solution to acquiring mass spectra of the purified preparation. Each spectrum was compared against an in-house molecular formula list to confirm the accurate mass elemental compositions. Spectra of purified ingredients of known pharmaceuticals were added to an in-house library for use as comparators for casework samples. Resolving isomers from one another can be accomplished using collision-induced dissociation after ionization. Challenges arose when the pharmaceutical preparation required an optimized TLC solvent to achieve proper separation and purity of the standard. Purified spectra were obtained for 91 preparations and included in an in-house drug standard library. Primary standards would only need to be purchased when pharmaceutical preparations not previously encountered are submitted for comparative analysis. TLC prior to DART™ analysis demonstrates a time efficient and cost saving technique for the forensic drug analysis community. Copyright © 2011 John Wiley & Sons, Ltd.
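A simple sketch of the kind of accurate-mass comparison described (matching a measured m/z against an in-house formula list within a ppm tolerance); the entries and the tolerance below are hypothetical placeholders, not the laboratory's actual library.

    # Hypothetical in-house list of (name, exact monoisotopic mass of [M+H]+ in Da).
    formula_list = {
        "pseudoephedrine [M+H]+": 166.1226,
        "acetaminophen [M+H]+": 152.0706,
        "diphenhydramine [M+H]+": 256.1696,
    }

    def match_mass(measured_mz, tolerance_ppm=5.0):
        """Return all library entries whose exact mass lies within tolerance_ppm of measured_mz."""
        hits = []
        for name, exact in formula_list.items():
            ppm_error = (measured_mz - exact) / exact * 1e6
            if abs(ppm_error) <= tolerance_ppm:
                hits.append((name, ppm_error))
        return hits

    print(match_mass(152.0710))   # -> acetaminophen within ~2.6 ppm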
Image-Based 3d Reconstruction and Analysis for Orthodontia
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2012-08-01
Among the main tasks of orthodontia are the analysis of teeth arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurements of teeth parameters and on designing the ideal arch curve that the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets placed on the teeth and a wire of given shape clamped by these brackets to produce the necessary force on each tooth to move it in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying the standard approach to a wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation, aimed at overcoming these disadvantages, is proposed. The proposed approach supports accurate measurement of the teeth parameters needed for adequate planning, design of correct teeth positions and monitoring of the treatment process. The developed technique applies photogrammetric methods for teeth arch 3D model generation, bracket position determination and teeth shifting analysis.
Standardization of Analysis Sets for Reporting Results from ADNI MRI Data
Wyman, Bradley T.; Harvey, Danielle J.; Crawford, Karen; Bernstein, Matt A.; Carmichael, Owen; Cole, Patricia E.; Crane, Paul; DeCarli, Charles; Fox, Nick C.; Gunter, Jeffrey L.; Hill, Derek; Killiany, Ronald J.; Pachai, Chahin; Schwarz, Adam J.; Schuff, Norbert; Senjem, Matthew L.; Suhy, Joyce; Thompson, Paul M.; Weiner, Michael; Jack, Clifford R.
2013-01-01
The ADNI 3D T1-weighted MRI acquisitions provide a rich dataset for developing and testing analysis techniques for extracting structural endpoints. To promote greater rigor in analysis and meaningful comparison of different algorithms, the ADNI MRI Core has created standardized analysis sets of data comprising scans that met minimum quality control requirements. We encourage researchers to test and report their techniques against these data. Standard analysis sets of volumetric scans from ADNI-1 have been created, comprising: screening visits, 1 year completers (subjects who all have screening, 6 and 12 month scans), two year annual completers (screening, 1, and 2 year scans), two year completers (screening, 6 months, 1 year, 18 months (MCI only) and 2 years) and complete visits (screening, 6 months, 1 year, 18 months (MCI only), 2, and 3 year (normal and MCI only) scans). As the ADNI-GO/ADNI-2 data becomes available, updated standard analysis sets will be posted regularly. PMID:23110865
Breast density quantification with cone-beam CT: A post-mortem study
Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee
2014-01-01
Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
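A compact sketch of a two-class fuzzy c-means (FCM) segmentation of CT voxel intensities, the generic form of the algorithm named above for %FGV estimation; this is textbook FCM, not the authors' implementation, and the voxel values are simulated placeholders.

    import numpy as np

    def fcm_two_class(values, m=2.0, n_iter=100, tol=1e-5, seed=0):
        """Two-cluster fuzzy c-means on a 1-D array of voxel intensities."""
        rng = np.random.default_rng(seed)
        x = values.reshape(-1, 1).astype(float)
        u = rng.random((x.shape[0], 2))
        u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1
        for _ in range(n_iter):
            um = u ** m
            centers = (um * x).sum(axis=0) / um.sum(axis=0)        # membership-weighted centers
            d = np.abs(x - centers[None, :]) + 1e-12               # distances to both centers
            new_u = 1.0 / (d ** (2.0 / (m - 1.0)) *
                           np.sum(d ** (-2.0 / (m - 1.0)), axis=1, keepdims=True))
            if np.max(np.abs(new_u - u)) < tol:
                u = new_u
                break
            u = new_u
        return centers, u

    # Hypothetical breast CT voxels (arbitrary units): adipose-like and fibroglandular-like values.
    voxels = np.concatenate([np.random.normal(-100, 20, 5000),
                             np.random.normal(40, 20, 1500)])
    centers, u = fcm_two_class(voxels)
    fg_cluster = int(np.argmax(centers))              # higher-attenuation cluster = fibroglandular
    percent_fgv = 100 * np.mean(u[:, fg_cluster] > 0.5)
    print(f"%FGV estimate: {percent_fgv:.1f}%")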
Statistical Evaluation of Time Series Analysis Techniques
NASA Technical Reports Server (NTRS)
Benignus, V. A.
1973-01-01
The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.
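The abstract gives no detail of the modified program itself; purely as an illustration of documenting estimator performance by Monte Carlo, the sketch below compares the relative spread of a raw periodogram with a segment-averaged periodogram on simulated AR(1) data. Everything here is an assumption for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    def ar1(n, phi=0.8):
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + rng.standard_normal()
        return x

    def raw_periodogram(x):
        X = np.fft.rfft(x)
        return (np.abs(X) ** 2) / len(x)

    def averaged_periodogram(x, nseg=8):
        segs = np.array_split(x, nseg)
        m = min(len(s) for s in segs)
        return np.mean([raw_periodogram(s[:m]) for s in segs], axis=0)

    # Monte Carlo: spread of each estimator at a comparable low-frequency bin.
    raw_vals, avg_vals = [], []
    for _ in range(500):
        x = ar1(1024)
        raw_vals.append(raw_periodogram(x)[20])
        avg_vals.append(averaged_periodogram(x)[3])
    print("raw periodogram relative spread:", np.std(raw_vals) / np.mean(raw_vals))
    print("avg periodogram relative spread:", np.std(avg_vals) / np.mean(avg_vals))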
Molecular diagnosis of bloodstream infections: planning to (physically) reach the bedside.
Leggieri, N; Rida, A; François, P; Schrenzel, Jacques
2010-08-01
Faster identification of infecting microorganisms and treatment options is a first-ranking priority in the infectious disease area, in order to prevent inappropriate treatment and overuse of broad-spectrum antibiotics. Standard bacterial identification is intrinsically time-consuming, and very recently there has been a burst in the number of commercially available nonphenotype-based techniques and in the documentation of a possible clinical impact of these techniques. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) is now a standard diagnostic procedure on cultures and holds promise for spiked blood. Meanwhile, commercial PCR-based techniques have improved through the use of bacterial DNA enrichment methods and a growing diversity of amplicon analysis techniques (melting curve analysis, microarrays, gel electrophoresis, sequencing and analysis by mass spectrometry), enabling them to challenge bacterial culture as the gold standard by providing earlier diagnosis, better 'clinical' sensitivity and additional prognostic information. Laboratory practice has already changed with MALDI-TOF MS, but a change in clinical practice, driven by emergent nucleic acid-based techniques, will need the demonstration of real-life applicability as well as robust clinical-impact-oriented studies.
Simulations of motor unit number estimation techniques
NASA Astrophysics Data System (ADS)
Major, Lora A.; Jones, Kelvin E.
2005-06-01
Motor unit number estimation (MUNE) is an electrodiagnostic procedure used to evaluate the number of motor axons connected to a muscle. All MUNE techniques rely on assumptions that must be fulfilled to produce a valid estimate. As there is no gold standard to compare the MUNE techniques against, we have developed a model of the relevant neuromuscular physiology and have used this model to simulate various MUNE techniques. The model allows for a quantitative analysis of candidate MUNE techniques that will hopefully contribute to consensus regarding a standard procedure for performing MUNE.
Peek, Mirjam Cl; Charalampoudis, Petros; Anninga, Bauke; Baker, Rose; Douek, Michael
2017-02-01
The combined technique (radioisotope and blue dye) is the gold standard for sentinel lymph node biopsy (SLNB) and there is wide variation in techniques and blue dyes used. We performed a systematic review and meta-analysis to assess the need for radioisotope and the optimal blue dye for SLNB. A total of 21 studies were included. The SLNB identification rates are high with all the commonly used blue dyes. Furthermore, methylene blue is superior to iso-sulfan blue and Patent Blue V with respect to false-negative rates. The combined technique remains the most accurate and effective technique for SLNB. In order to standardize the SLNB technique, comparative trials to determine the most effective blue dye and national guidelines are required.
Preparation and application of in-fibre internal standardization solid-phase microextraction.
Zhao, Wennan; Ouyang, Gangfeng; Pawliszyn, Janusz
2007-03-01
The in-fibre standardization method is a novel approach that has been developed for field sampling/sample preparation, in which an internal standard is pre-loaded onto a solid-phase microextraction (SPME) fibre for calibration of the extraction of target analytes in field samples. The same method can also be used for in-vial sample analysis. In this study, different techniques to load the standard to a non-porous SPME fibre were investigated. It was found that the appropriateness of the technique depends on the physical properties of the standards that are used for the analysis. Headspace extraction of the standard dissolved in pumping oil works well for volatile compounds. Conversely, headspace extraction of the pure standard is an effective approach for semi-volatile compounds. For compounds with low volatility, a syringe-fibre transfer method and direct extraction of the standard dissolved in a solvent exhibited a good reproducibility (<5% RSD). The main advantage of the approaches investigated in this study is that the standard generation vials can be reused for hundreds of analyses without exhibiting significant loss. Moreover, most of the standard loading processes studied can be performed automatically, which is efficient and precise. Finally, the standard loading technique and in-fibre standardization method were applied to a complex matrix (milk) and the results illustrated that the matrix effect can be effectively compensated for with this approach.
Design techniques for low-voltage analog integrated circuits
NASA Astrophysics Data System (ADS)
Rakús, Matej; Stopjaková, Viera; Arbet, Daniel
2017-08-01
In this paper, a review and analysis of different design techniques for (ultra) low-voltage integrated circuits (IC) are performed. This analysis shows that the most suitable design methods for low-voltage analog IC design in a standard CMOS process include techniques using bulk-driven MOS transistors, dynamic threshold MOS transistors and MOS transistors operating in weak or moderate inversion regions. The main advantage of such techniques is that there is no need for any modification of standard CMOS structure or process. Basic circuit building blocks like differential amplifiers or current mirrors designed using these approaches are able to operate with the power supply voltage of 600 mV (or even lower), which is the key feature towards integrated systems for modern portable applications.
Fallon, Nevada FORGE Fluid Geochemistry
Blankenship, Doug; Ayling, Bridget
2018-03-13
Fluid geochemistry analysis for wells supporting the Fallon FORGE project. Samples were collected from geothermal wells using standard geothermal water sampling techniques, including filtration and acidification of the cation sample to pH < 2 prior to geochemical analysis. Analyses after 2005 were done in reputable commercial laboratories that follow standard protocols for aqueous chemistry analysis.
Standard surface-reflectance model and illuminant estimation
NASA Technical Reports Server (NTRS)
Tominaga, Shoji; Wandell, Brian A.
1989-01-01
A vector analysis technique was adopted to test the standard reflectance model. A computational model was developed to determine the components of the observed spectra and an estimate of the illuminant was obtained without using a reference white standard. The accuracy of the standard model is evaluated.
ASTM standards for fire debris analysis: a review.
Stauffer, Eric; Lentini, John J
2003-03-12
The American Society for Testing and Materials (ASTM) recently updated its standards E 1387 and E 1618 for the analysis of fire debris. The changes in the classification of ignitable liquids are presented in this review. Furthermore, a new standard on extraction of fire debris with solid phase microextraction (SPME) was released. Advantages and drawbacks of this technique are presented and discussed. Also, the standard on cleanup by acid stripping has not been reapproved. Fire debris analysts that use the standards should be aware of these changes.
NASA Astrophysics Data System (ADS)
Łazarek, Łukasz; Antończak, Arkadiusz J.; Wójcik, Michał R.; Drzymała, Jan; Abramski, Krzysztof M.
2014-07-01
Laser-induced breakdown spectroscopy (LIBS), like many other spectroscopic techniques, is a comparative method. Typically, in qualitative analysis, a synthetic certified standard with a well-known elemental composition is used to calibrate the system. Nevertheless, in all laser-induced techniques, such calibration can affect the accuracy through differences in the overall composition of the chosen standard. There are also some intermediate factors, which can cause imprecision in measurements, such as optical absorption, surface structure and thermal conductivity. In this work the calibration performed for the LIBS technique utilizes pellets made directly from the tested materials (old well-characterized samples). This choice produces a considerable improvement in the accuracy of the method. This technique was adopted for the determination of trace elements in industrial copper concentrates, standardized by conventional atomic absorption spectroscopy with a flame atomizer. A series of copper flotation concentrate samples was analyzed for three elements: silver, cobalt and vanadium. We also proposed a method of post-processing the measurement data to minimize matrix effects and permit reliable analysis. It has been shown that the described technique can be used in qualitative and quantitative analyses of complex inorganic materials, such as copper flotation concentrates. It was noted that the final validation of such methodology is limited mainly by the accuracy of the characterization of the standards.
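A hedged sketch of the calibration idea described above: normalize each emission-line intensity to a reference signal to damp matrix and shot-to-shot effects, then fit a line against concentrations known from atomic absorption spectroscopy. All numbers below are invented for illustration.

    import numpy as np

    # Hypothetical standards: Ag concentration (ppm) from flame AAS, Ag line intensity,
    # and a reference signal (e.g. a matrix line or total emission) per spectrum.
    conc_ppm   = np.array([ 20.,  60.,  120.,  250.,  400.])
    ag_line    = np.array([310., 920., 1750., 3900., 6100.])
    ref_signal = np.array([1.00, 1.05, 0.96, 1.02, 0.98]) * 1e4

    norm = ag_line / ref_signal                 # normalized line intensity
    slope, intercept = np.polyfit(conc_ppm, norm, 1)

    def predict_ppm(sample_line, sample_ref):
        """Invert the calibration line for an unknown copper-concentrate sample."""
        return (sample_line / sample_ref - intercept) / slope

    print(predict_ppm(2500., 1.01e4))           # hypothetical unknown sample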
Code of Federal Regulations, 2011 CFR
2011-07-01
...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for... materials are available for purchase from the Environmental Protection Agency, Research Triangle Park, NC...
ERIC Educational Resources Information Center
Fouladi, Rachel T.
2000-01-01
Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…
Bairi, Venu Gopal; Lim, Jin-Hee; Quevedo, Ivan R; Mudalige, Thilak K; Linder, Sean W
2016-02-01
This investigation reports a rapid and simple screening technique for the quantification of titanium and zinc in commercial sunscreens using portable X-ray fluorescence spectroscopy (pXRF). A highly evolved technique, inductively coupled plasma-mass spectroscopy (ICP-MS) was chosen as a comparative technique to pXRF, and a good correlation (r2 > 0.995) with acceptable variations (≤25%) in results between both techniques was observed. Analytical figures of merit such as detection limit, quantitation limit, and linear range of the method are reported for the pXRF technique. This method has a good linearity (r2 > 0.995) for the analysis of titanium (Ti) in the range of 0.4-14.23 wt%, and zinc (Zn) in the range of 1.0-23.90 wt%. However, most commercial sunscreens contain organic ingredients, and these ingredients are known to cause matrix effects. The development of appropriate matrix matched working standards to obtain the calibration curve was found to be a major challenge for the pXRF measurements. In this study, we have overcome the matrix effect by using metal-free commercial sunscreens as a dispersing media for the preparation of working standards. An easy extension of this unique methodology for preparing working standards in different matrices was also reported. This method is simple, rapid, and cost-effective and, in comparison to conventional techniques (e.g., ICP-MS), did not generate toxic wastes during sample analysis.
NASA Astrophysics Data System (ADS)
Bairi, Venu Gopal; Lim, Jin-Hee; Quevedo, Ivan R.; Mudalige, Thilak K.; Linder, Sean W.
2016-02-01
This investigation reports a rapid and simple screening technique for the quantification of titanium and zinc in commercial sunscreens using portable X-ray fluorescence spectroscopy (pXRF). A highly evolved technique, inductively coupled plasma-mass spectroscopy (ICP-MS) was chosen as a comparative technique to pXRF, and a good correlation (r2 > 0.995) with acceptable variations (≤ 25%) in results between both techniques was observed. Analytical figures of merit such as detection limit, quantitation limit, and linear range of the method are reported for the pXRF technique. This method has a good linearity (r2 > 0.995) for the analysis of titanium (Ti) in the range of 0.4-14.23 wt%, and zinc (Zn) in the range of 1.0-23.90 wt%. However, most commercial sunscreens contain organic ingredients, and these ingredients are known to cause matrix effects. The development of appropriate matrix matched working standards to obtain the calibration curve was found to be a major challenge for the pXRF measurements. In this study, we have overcome the matrix effect by using metal-free commercial sunscreens as a dispersing media for the preparation of working standards. An easy extension of this unique methodology for preparing working standards in different matrices was also reported. This method is simple, rapid, and cost-effective and, in comparison to conventional techniques (e.g., ICP-MS), did not generate toxic wastes during sample analysis.
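A minimal sketch, with invented numbers, of the matrix-matched calibration strategy described above: spike a metal-free sunscreen base with known Ti loadings, fit a straight line to pXRF response versus wt%, and check linearity before predicting unknowns.

    import numpy as np

    # Hypothetical working standards prepared in a metal-free sunscreen matrix.
    ti_wt_pct   = np.array([0.5, 2.0, 5.0, 8.0, 12.0, 14.0])
    pxrf_counts = np.array([1.1e3, 4.3e3, 1.05e4, 1.69e4, 2.55e4, 2.96e4])

    slope, intercept = np.polyfit(ti_wt_pct, pxrf_counts, 1)
    r = np.corrcoef(ti_wt_pct, pxrf_counts)[0, 1]
    print(f"linearity r^2 = {r**2:.4f}")        # expect > 0.995 for an acceptable curve

    def counts_to_wt_pct(counts):
        return (counts - intercept) / slope

    print(f"unknown sunscreen: {counts_to_wt_pct(1.3e4):.2f} wt% Ti")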
Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin
2017-12-01
Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
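A small sketch of the frame-to-frame segment-length-to-strain conversion that SLICE rests on (Lagrangian strain relative to the first cine frame); the length values are invented and this is not the authors' code.

    import numpy as np

    # Hypothetical septal segment lengths (mm) measured on successive short-axis cine frames.
    segment_length_mm = np.array([42.0, 41.2, 39.8, 38.5, 37.9, 38.6, 40.1, 41.5])

    L0 = segment_length_mm[0]                            # reference length (first frame)
    strain_pct = 100.0 * (segment_length_mm - L0) / L0   # negative values = shortening

    peak_strain = strain_pct.min()
    print(f"peak segmental circumferential strain: {peak_strain:.1f}%")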
Toward a Standardized ODH Analysis Technique
Degraff, Brian D.
2016-12-01
Standardization of ODH (oxygen deficiency hazard) analysis and mitigation policy thus represents an opportunity for the cryogenic community. Industry and government facilities would benefit in several ways from developing an applicable unified standard for ODH. The number of reviewers would increase, and reviewing projects across different facilities would be simpler. A unified standard would also present the opportunity for the community to broaden the development of expertise in modeling complicated flow geometries.
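The abstract does not spell out the methodology; as general background only, a formulation of ODH risk commonly used in laboratory safety manuals sums, over postulated release events i, an event rate times a fatality factor, and assigns the ODH class from the total:

\Phi \;=\; \sum_i P_i \, F_i \quad \text{[fatalities per hour]},

where $P_i$ is the expected rate of event $i$ (per hour) and $F_i \in [0,1]$ is a fatality factor determined from the lowest oxygen concentration that event could produce in the occupied space. This is offered as orientation, not as the specific method discussed in the record above.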
Code of Federal Regulations, 2012 CFR
2012-07-01
...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for..., Research Triangle Park, NC. (1) “Screening Procedures for Estimating the Air Quality Impact of Stationary...
Code of Federal Regulations, 2014 CFR
2014-07-01
...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for..., Research Triangle Park, NC. (1) “Screening Procedures for Estimating the Air Quality Impact of Stationary...
Code of Federal Regulations, 2013 CFR
2013-07-01
...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for..., Research Triangle Park, NC. (1) “Screening Procedures for Estimating the Air Quality Impact of Stationary...
Standardization of proton-induced x-ray emission technique for analysis of thick samples
NASA Astrophysics Data System (ADS)
Ali, Shad; Zeb, Johar; Ahad, Abdul; Ahmad, Ishfaq; Haneef, M.; Akbar, Jehan
2015-09-01
This paper describes the standardization of the proton-induced x-ray emission (PIXE) technique for determining the elemental composition of thick samples. For the standardization, three different samples of standard reference materials (SRMs) were analyzed using this technique and the data were compared with the already known data of these certified SRMs. These samples were selected in order to cover the maximum range of elements in the periodic table. Each sample was irradiated for three different values of collected beam charges at three different times. A 2.57 MeV proton beam obtained using a 5UDH-II Pelletron accelerator was used to excite x-rays from the sample. The acquired experimental data were analyzed using the GUPIXWIN software. The results show that the SRM data and the data obtained using the PIXE technique are in good agreement.
An Application of Indian Health Service Standards for Alcoholism Programs.
ERIC Educational Resources Information Center
Burns, Thomas R.
1984-01-01
Discusses Phoenix-area applications of 1981 Indian Health Service standards for alcoholism programs. Results of standard statistical techniques note areas of deficiency through application of a one-tailed z test at .05 level of significance. Factor analysis sheds further light on design of standards. Implications for revisions are suggested.…
Molenaar, Peter C M
2008-01-01
It is argued that general mathematical-statistical theorems imply that standard statistical analysis techniques of inter-individual variation are invalid to investigate developmental processes. Developmental processes have to be analyzed at the level of individual subjects, using time series data characterizing the patterns of intra-individual variation. It is shown that standard statistical techniques based on the analysis of inter-individual variation appear to be insensitive to the presence of arbitrarily large degrees of inter-individual heterogeneity in the population. An important class of nonlinear epigenetic models of neural growth is described which can explain the occurrence of such heterogeneity in brain structures and behavior. Links with models of developmental instability are discussed. A simulation study based on a chaotic growth model illustrates the invalidity of standard analysis of inter-individual variation, whereas time series analysis of intra-individual variation is able to recover the true state of affairs. (c) 2007 Wiley Periodicals, Inc.
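A toy simulation, in the spirit of (but far simpler than) the chaotic-growth study described, showing why inter-individual statistics need not describe any individual: each simulated subject follows an AR(1) process around a subject-specific mean, so the between-subject spread at one occasion differs sharply from the within-subject spread over time.

    import numpy as np

    rng = np.random.default_rng(42)
    n_subjects, n_times = 200, 300

    subject_means = rng.normal(0.0, 5.0, n_subjects)      # large heterogeneity between subjects
    data = np.zeros((n_subjects, n_times))
    for s in range(n_subjects):
        x = subject_means[s]
        for t in range(n_times):
            x = subject_means[s] + 0.7 * (x - subject_means[s]) + rng.normal(0, 1.0)
            data[s, t] = x

    between_sd = data[:, -1].std()          # inter-individual SD at a single occasion
    within_sd = data.std(axis=1).mean()     # average intra-individual SD across time
    print(f"inter-individual SD: {between_sd:.2f}")
    print(f"mean intra-individual SD: {within_sd:.2f}")   # much smaller: the process is non-ergodic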
Approaches to answering critical CER questions.
Kinnier, Christine V; Chung, Jeanette W; Bilimoria, Karl Y
2015-01-01
While randomized controlled trials (RCTs) are the gold standard for research, many research questions cannot be ethically and practically answered using an RCT. Comparative effectiveness research (CER) techniques are often better suited than RCTs to address the effects of an intervention under routine care conditions, an outcome otherwise known as effectiveness. CER research techniques covered in this section include: effectiveness-oriented experimental studies such as pragmatic trials and cluster randomized trials, treatment response heterogeneity, observational and database studies including adjustment techniques such as sensitivity analysis and propensity score analysis, systematic reviews and meta-analysis, decision analysis, and cost effectiveness analysis. Each section describes the technique and covers the strengths and weaknesses of the approach.
NASA Technical Reports Server (NTRS)
Wingard, Charles D.; Whitaker, Ann F. (Technical Monitor)
2000-01-01
White Hypalon paint is brush-applied as a moisture barrier coating over cork surfaces on each of the two Space Shuttle SRBs. Fine cracks have been observed in the Hypalon coating three times historically on laboratory witness panels, but never on flight hardware. Samples of the cracked and standard ("good") Hypalon were removed from witness panel cork surfaces, and were tested in 1998 by Thermogravimetric Analysis (TGA), TMA and Differential Scanning Calorimetry (DSC) thermal analysis techniques. The TGA data showed that at 700C, where only paint pigment solids remain, the cracked material had about 9 weight percent more material remaining than the standard material, probably indicating incomplete mixing of the paint before it was brush-applied to produce the cracked material. Use of the TMA film/fiber technique showed that the average modulus (stiffness) vs. temperature was about 3 to 6 times higher for the cracked material than for the standard material. The TMA data also showed that an increase in coating thickness for the cracked Hypalon was not a factor in the anomaly.
Freeman, Karoline; Tsertsvadze, Alexander; Taylor-Phillips, Sian; McCarthy, Noel; Mistry, Hema; Manuel, Rohini; Mason, James
2017-01-01
Multiplex gastrointestinal pathogen panel (GPP) tests simultaneously identify bacterial, viral and parasitic pathogens from the stool samples of patients with suspected infectious gastroenteritis presenting in hospital or the community. We undertook a systematic review to compare the accuracy of GPP tests with standard microbiology techniques. Searches in Medline, Embase, Web of Science and the Cochrane library were undertaken from inception to January 2016. Eligible studies compared GPP tests with standard microbiology techniques in patients with suspected gastroenteritis. Quality assessment of included studies used tailored QUADAS-2. In the absence of a reference standard we analysed test performance taking GPP tests and standard microbiology techniques in turn as the benchmark test, using random effects meta-analysis of proportions. No study provided an adequate reference standard with which to compare the test accuracy of GPP and conventional tests. Ten studies informed a meta-analysis of positive and negative agreement. Positive agreement across all pathogens was 0.93 (95% CI 0.90 to 0.96) when conventional methods were the benchmark and 0.68 (95% CI: 0.58 to 0.77) when GPP provided the benchmark. Negative agreement was high in both instances due to the high proportion of negative cases. GPP testing produced a greater number of pathogen-positive findings than conventional testing. It is unclear whether these additional 'positives' are clinically important. GPP testing has the potential to simplify testing and accelerate reporting when compared to conventional microbiology methods. However the impact of GPP testing upon the management, treatment and outcome of patients is poorly understood and further studies are needed to evaluate the health economic impact of GPP testing compared with standard methods. The review protocol is registered with PROSPERO as CRD42016033320.
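A hedged sketch of a DerSimonian-Laird random-effects pooling of proportions on the logit scale, the generic form of the "random effects meta-analysis of proportions" mentioned above; the per-study counts are invented.

    import numpy as np

    # Hypothetical per-study data: (number agreeing, total) for positive agreement.
    events = np.array([45, 88, 30, 120, 61])
    totals = np.array([50, 95, 33, 130, 70])

    p = events / totals
    logit = np.log(p / (1 - p))
    var = 1.0 / events + 1.0 / (totals - events)    # variance of a logit-transformed proportion
    w = 1.0 / var

    # DerSimonian-Laird between-study variance tau^2.
    fixed = np.sum(w * logit) / np.sum(w)
    Q = np.sum(w * (logit - fixed) ** 2)
    df = len(p) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)

    # Random-effects pooled estimate, back-transformed to a proportion with 95% CI.
    w_re = 1.0 / (var + tau2)
    mu = np.sum(w_re * logit) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = mu - 1.96 * se, mu + 1.96 * se
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    print(f"pooled agreement: {expit(mu):.3f} (95% CI {expit(lo):.3f} to {expit(hi):.3f})")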
Pinkerton, Steven D.; Pearson, Cynthia R.; Eachus, Susan R.; Berg, Karina M.; Grimes, Richard M.
2008-01-01
Maximizing our economic investment in HIV prevention requires balancing the costs of candidate interventions against their effects and selecting the most cost-effective interventions for implementation. However, many HIV prevention intervention trials do not collect cost information, and those that do use a variety of cost data collection methods and analysis techniques. Standardized cost data collection procedures, instrumentation, and analysis techniques are needed to facilitate the task of assessing intervention costs and to ensure comparability across intervention trials. This article describes the basic elements of a standardized cost data collection and analysis protocol and outlines a computer-based approach to implementing this protocol. Ultimately, the development of such a protocol would require contributions and “buy-in” from a diverse range of stakeholders, including HIV prevention researchers, cost-effectiveness analysts, community collaborators, public health decision makers, and funding agencies. PMID:18301128
ERIC Educational Resources Information Center
Van Atta, Robert E.; Van Atta, R. Lewis
1980-01-01
Provides a gas chromatography experiment that exercises the quantitative technique of standard addition in the analysis of a minor component, methyl salicylate, in a commercial product, "wintergreen rubbing alcohol." (CS)
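A short sketch of the standard-addition arithmetic that underlies such an experiment: fit peak area against added analyte concentration and read the original concentration from the magnitude of the x-intercept. The numbers are invented, not from the ERIC record, and negligible dilution by the spikes is assumed.

    import numpy as np

    # Added methyl salicylate concentration (mg/mL) in each spiked aliquot, and GC peak areas.
    added = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    area  = np.array([120., 195., 268., 343., 415.])

    slope, intercept = np.polyfit(added, area, 1)
    c_original = intercept / slope        # |x-intercept| = concentration in the unspiked sample
    print(f"estimated concentration: {c_original:.2f} mg/mL")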
Neutron activation analysis of certified samples by the absolute method
NASA Astrophysics Data System (ADS)
Kadem, F.; Belouadah, N.; Idiri, Z.
2015-07-01
Nuclear reaction analysis techniques are mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for cross sections evaluated from systematic studies, we used the neutron activation analysis (NAA) technique to determine the concentrations of the various constituents of certified samples of animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, that are essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. Called the absolute method, it allows a measurement as accurate as the relative method. The results obtained by the absolute method showed that the values are as precise as those of the relative method, which requires a standard sample for each element to be quantified.
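As general background (not taken from the paper), one common form of the activation equation used in the absolute method relates the net photopeak counts N_p to the element mass m without any standard sample:

N_p \;=\; \frac{N_A\, m\, \theta}{M}\; \sigma\, \phi\; \varepsilon\, I_\gamma\; \frac{\bigl(1-e^{-\lambda t_i}\bigr)\, e^{-\lambda t_d}\, \bigl(1-e^{-\lambda t_c}\bigr)}{\lambda},

where $N_A$ is Avogadro's number, $\theta$ the isotopic abundance, $M$ the atomic mass, $\sigma$ the activation cross section, $\phi$ the neutron flux, $\varepsilon$ the detector efficiency, $I_\gamma$ the gamma emission probability, $\lambda$ the decay constant, and $t_i$, $t_d$, $t_c$ the irradiation, decay and counting times; solving for $m$ gives the concentration directly from nuclear data.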
1990-01-01
Various techniques were used to decipher the sedimentation history of Site 765, including Markov chain analysis of facies transitions, XRD analysis of clay and other minerals, and multivariate analysis of smear-slide data, in addition to the standard descriptive procedures employed by the shipboard sedimentologist. This chapter presents brief summaries of methodology and major findings of these three techniques, a summary of the sedimentation history, and a discussion of trends in sedimentation through time.
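A small generic sketch of the Markov-chain facies-transition step mentioned above: count upward transitions between coded facies in a stratigraphic column and normalize the rows to probabilities. The facies sequence is invented.

    import numpy as np

    # Hypothetical facies codes logged upward through a core section.
    facies = ["mud", "silt", "sand", "mud", "silt", "sand", "sand", "mud", "silt", "mud"]
    states = sorted(set(facies))
    idx = {f: i for i, f in enumerate(states)}

    counts = np.zeros((len(states), len(states)))
    for a, b in zip(facies[:-1], facies[1:]):
        counts[idx[a], idx[b]] += 1

    with np.errstate(invalid="ignore"):
        probs = counts / counts.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    print(states)
    print(np.round(probs, 2))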
Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA
NASA Astrophysics Data System (ADS)
Gallina, B.; Haider, Z.; Carlsson, A.
2018-05-01
Attitude Control Systems (ACSs) maintain the orientation of the satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and need to ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and by using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS-methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus FTA-results generation. ConcertoFLA, however, similarly to other techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it on an ACS and discuss about its multi-faceted potentialities in the context of ECSS-compliant engineering.
Variance analysis refines overhead cost control.
Cooper, J C; Suver, J D
1992-02-01
Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.
Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph
2018-06-01
There are numerous configurations of double row fixation for rotator cuff tears; however, there is no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question is whether the new anchor and technique compares in biomechanical strength to standard double row techniques. Eighteen prepared fresh frozen bovine infraspinatus tendons were randomized to one of three groups: the New Double Row Equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on humeral sawbones, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The new double row equivalent method demonstrated increased survival as well as greater ultimate strength, at 415 N, compared to the other test groups, and contact area and pressure equivalent to standard double row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double row techniques. These data provide a new method of rotator cuff fixation that should be further evaluated in the clinical setting. Basic science biomechanical study.
Iterative categorization (IC): a systematic technique for analysing qualitative data
2016-01-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155
Campbell, Darren; Carnell, Sarah Maria; Eden, Russell John
2013-05-01
Contact angle, as a representative measure of surface wettability, is often employed to interpret contact lens surface properties. The literature is often contradictory and can lead to confusion. This literature review is part of a series regarding the analysis of hydrogel contact lenses using contact angle techniques. Here we present an overview of contact angle terminology, methodology, and analysis. Having discussed this background material, subsequent parts of the series will discuss the analysis of contact lens contact angles and evaluate differences in published laboratory results. The concepts of contact angle, wettability and wetting are presented as an introduction. Contact angle hysteresis is outlined and highlights the advantages in using dynamic analytical techniques over static methods. The surface free energy of a material illustrates how contact angle analysis is capable of providing supplementary surface characterization. Although single values are able to distinguish individual material differences, surface free energy and dynamic methods provide an improved understanding of material behavior. The frequently used sessile drop, captive bubble, and Wilhelmy plate techniques are discussed. Their use as both dynamic and static methods, along with the advantages and disadvantages of each technique, is explained. No single contact angle technique fully characterizes the wettability of a material surface, and the application of complementary methods allows increased characterization. At present, there is not an ISO standard method designed for soft materials. It is important that each contact angle technique has a standard protocol, as small protocol differences between laboratories often contribute to a variety of published data that are not easily comparable.
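For background, the static contact angle $\theta$ on an ideal solid surface is set by Young's equation, and hysteresis is reported as the difference between advancing and receding angles:

\gamma_{SV} \;=\; \gamma_{SL} + \gamma_{LV}\cos\theta, \qquad H \;=\; \theta_{\mathrm{adv}} - \theta_{\mathrm{rec}},

where $\gamma_{SV}$, $\gamma_{SL}$ and $\gamma_{LV}$ are the solid-vapour, solid-liquid and liquid-vapour interfacial tensions. These are standard definitions rather than material from the review itself.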
PIXE and XRF Analysis of Roman Denarii
NASA Astrophysics Data System (ADS)
Fasano, Cecilia; Raddell, Mark; Manukyan, Khachatur; Stech, Edward; Wiescher, Michael
2017-09-01
A set of Roman denarii from the Republican to the Imperial period (140 BC-240 AD) has been studied using X-ray fluorescence (XRF) scanning and proton induced x-ray emission (PIXE) techniques. XRF and PIXE are commonly used in the study of cultural heritage objects because they are nondestructive. The combination of these two methods is also unique because of the ability to probe the sample over a broader range of depths and energies than either could achieve on its own. The coins span a large part of Roman history, and their analysis serves to follow the economic and political change of the era using the relative silver and copper contents in each sample. In addition to analyzing the samples, the study sought to compare these two common analysis techniques and to explore the use of a standard to examine any shortcomings in either of the methods. Data sets were compared and then adjusted to a calibration curve which was created from the analysis of a number of standard solutions. The concentrations of the standard solutions were confirmed using inductively coupled plasma spectroscopy. Through this we were able to assemble results that advance the understanding of PIXE and XRF techniques as well as add to the knowledge of ancient Roman currency.
McDonald, Gene D; Storrie-Lombardi, Michael C
2006-02-01
The relative abundance of the protein amino acids has been previously investigated as a potential marker for biogenicity in meteoritic samples. However, these investigations were executed without a quantitative metric to evaluate distribution variations, and they did not account for the possibility of interdisciplinary systematic error arising from inter-laboratory differences in extraction and detection techniques. Principal component analysis (PCA), hierarchical cluster analysis (HCA), and stochastic probabilistic artificial neural networks (ANNs) were used to compare the distributions for nine protein amino acids previously reported for the Murchison carbonaceous chondrite, Mars meteorites (ALH84001, Nakhla, and EETA79001), prebiotic synthesis experiments, and terrestrial biota and sediments. These techniques allowed us (1) to identify a shift in terrestrial amino acid distributions secondary to diagenesis; (2) to detect differences in terrestrial distributions that may be systematic differences between extraction and analysis techniques in biological and geological laboratories; and (3) to determine that distributions in meteoritic samples appear more similar to prebiotic chemistry samples than they do to the terrestrial unaltered or diagenetic samples. Both diagenesis and putative interdisciplinary differences in analysis complicate interpretation of meteoritic amino acid distributions. We propose that the analysis of future samples from such diverse sources as meteoritic influx, sample return missions, and in situ exploration of Mars would be less ambiguous with adoption of standardized assay techniques, systematic inclusion of assay standards, and the use of a quantitative, probabilistic metric. We present here one such metric determined by sequential feature extraction and normalization (PCA), information-driven automated exploration of classification possibilities (HCA), and prediction of classification accuracy (ANNs).
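A generic sketch of the PCA-then-clustering workflow described (not the authors' pipeline): normalize nine-amino-acid relative abundances, extract principal components, and cluster the sample scores hierarchically. The abundance matrix is invented; scikit-learn and scipy are assumed available.

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical relative abundances of 9 protein amino acids in 6 samples (rows sum to 1).
    rng = np.random.default_rng(0)
    X = rng.dirichlet(alpha=np.ones(9), size=6)

    # Feature extraction: first two principal components of the composition vectors.
    scores = PCA(n_components=2).fit_transform(X)

    # Hierarchical cluster analysis on the PCA scores (Ward linkage, 2 clusters).
    Z = linkage(scores, method="ward")
    labels = fcluster(Z, t=2, criterion="maxclust")
    print(labels)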
NO TIME FOR DEAD TIME: TIMING ANALYSIS OF BRIGHT BLACK HOLE BINARIES WITH NuSTAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachetti, Matteo; Barret, Didier; Harrison, Fiona A.
Timing of high-count-rate sources with the NuSTAR Small Explorer Mission requires specialized analysis techniques. NuSTAR was primarily designed for spectroscopic observations of sources with relatively low count rates rather than for timing analysis of bright objects. The instrumental dead time per event is relatively long (∼2.5 msec) and varies event-to-event by a few percent. The most obvious effect is a distortion of the white noise level in the power density spectrum (PDS) that cannot be easily modeled with standard techniques due to the variable nature of the dead time. In this paper, we show that it is possible to exploit the presence of two completely independent focal planes and use the cospectrum, the real part of the cross PDS, to obtain a good proxy of the white-noise-subtracted PDS. Thereafter, one can use a Monte Carlo approach to estimate the remaining effects of dead time, namely, a frequency-dependent modulation of the variance and a frequency-independent drop of the sensitivity to variability. In this way, most of the standard timing analysis can be performed, albeit with a sacrifice in signal-to-noise ratio relative to what would be achieved using more standard techniques. We apply this technique to NuSTAR observations of the black hole binaries GX 339–4, Cyg X-1, and GRS 1915+105.
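A bare-bones sketch of the cospectrum idea described (real part of the cross power density spectrum of two independent detectors), with simulated light curves standing in for the two NuSTAR focal planes; the bin size and normalization are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    n, dt = 4096, 0.01                       # number of bins and bin size (s), hypothetical
    t = np.arange(n) * dt

    # Common signal seen by both detectors plus independent counting noise.
    signal = 50 + 5 * np.sin(2 * np.pi * 2.0 * t)
    lc_a = rng.poisson(signal)
    lc_b = rng.poisson(signal)

    fa = np.fft.rfft(lc_a - lc_a.mean())
    fb = np.fft.rfft(lc_b - lc_b.mean())
    freq = np.fft.rfftfreq(n, dt)

    cospectrum = np.real(fa * np.conj(fb))   # detector-independent noise averages toward zero
    print(freq[np.argmax(cospectrum[1:]) + 1])   # recovers the ~2 Hz common signal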
Kernel analysis in TeV gamma-ray selection
NASA Astrophysics Data System (ADS)
Moriarty, P.; Samuelson, F. W.
2000-06-01
We discuss the use of kernel analysis as a technique for selecting gamma-ray candidates in Atmospheric Cherenkov astronomy. The method is applied to observations of the Crab Nebula and Markarian 501 recorded with the Whipple 10 m Atmospheric Cherenkov imaging system, and the results are compared with the standard Supercuts analysis. Since kernel analysis is computationally intensive, we examine approaches to reducing the computational load. Extension of the technique to estimate the energy of the gamma-ray primary is considered.
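A generic sketch of kernel-density-based event selection of the sort discussed: build Gaussian kernel density estimates of image parameters for gamma-ray-like and background training events, then keep events whose density ratio favours the gamma-ray class. The training data and parameter values here are simulated placeholders.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)

    # Hypothetical 2-D image parameters (e.g. width, length) for training events.
    gammas  = rng.normal([0.10, 0.20], [0.02, 0.04], size=(2000, 2)).T
    hadrons = rng.normal([0.16, 0.30], [0.05, 0.08], size=(2000, 2)).T

    kde_g = gaussian_kde(gammas)             # density estimate for gamma-like events
    kde_h = gaussian_kde(hadrons)            # density estimate for background events

    def is_gamma_candidate(event, threshold=1.0):
        """Select events whose gamma/background density ratio exceeds a threshold."""
        event = np.asarray(event).reshape(2, 1)
        return (kde_g(event)[0] / kde_h(event)[0]) > threshold

    print(is_gamma_candidate([0.11, 0.22]))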
Towards Effective Clustering Techniques for the Analysis of Electric Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh
2013-11-30
Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data with implications for several domains including power systems.
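A minimal generic example of spectral clustering on a small graph (not the authors' power-grid-specific method): form the graph Laplacian from an adjacency matrix, take the leading non-trivial eigenvectors, and run k-means on them. The 6-bus network below is invented.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical adjacency matrix of a 6-bus network: two weakly connected triangles.
    A = np.array([
        [0, 1, 1, 0, 0, 0],
        [1, 0, 1, 0, 0, 0],
        [1, 1, 0, 1, 0, 0],
        [0, 0, 1, 0, 1, 1],
        [0, 0, 0, 1, 0, 1],
        [0, 0, 0, 1, 1, 0],
    ], dtype=float)

    D = np.diag(A.sum(axis=1))
    L = D - A                                   # unnormalized graph Laplacian

    # Eigenvectors for the smallest non-zero eigenvalues embed the buses for clustering.
    eigvals, eigvecs = np.linalg.eigh(L)
    embedding = eigvecs[:, 1:3]

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
    print(labels)                               # expect the two triangles to separate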
Determination of total sulfur in lichens and plants by combustion-infrared analysis
Jackson, L.L.; Engleman, E.E.; Peard, J.L.
1985-01-01
Sulfur was determined in plants and lichens by combustion of the sample and infrared detection of evolved sulfur dioxide using an automated sulfur analyzer. Vanadium pentoxide was used as a combustion accelerator. Pelletization of the sample prior to combustion was not found to be advantageous. Washing studies showed that leaching of sulfur was not a major factor in the sample preparation. The combustion-IR analysis usually gave higher sulfur contents than the turbidimetric analysis, as well as shorter analysis times. Relative standard deviations of less than 7% were obtained by the combustion-IR technique when sulfur levels in plant material ranged from 0.05 to 0.70%. Determination of sulfur in National Bureau of Standards botanical reference materials showed good agreement between the combustion-IR technique and other instrumental procedures. Seven NBS botanical reference materials were analyzed.
Standards guide for space and earth sciences computer software
NASA Technical Reports Server (NTRS)
Mason, G.; Chapman, R.; Klinglesmith, D.; Linnekin, J.; Putney, W.; Shaffer, F.; Dapice, R.
1972-01-01
Guidelines for the preparation of systems analysis and programming work statements are presented. The data is geared toward the efficient administration of available monetary and equipment resources. Language standards and the application of good management techniques to software development are emphasized.
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1980-01-01
A technique for fitting a straight line to a collection of data points is presented. The relationships between the slope and the correlation coefficient, and between the corresponding standard deviations and the correlation coefficient, are given.
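The key relationship (the least-squares slope equals the correlation coefficient times the ratio of the sample standard deviations) can be checked numerically; the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, size=x.size)

slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]

# Least-squares slope expressed through the correlation coefficient and
# the sample standard deviations: b = r * s_y / s_x
b_from_r = r * y.std(ddof=1) / x.std(ddof=1)
print(slope, b_from_r)   # the two values agree
```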
Park, Jin Ha; Lee, Jong Seok; Nam, Sang Beom; Ju, Jin Wu
2016-01-01
Purpose Supraglottic airway devices have been widely utilized as an alternative to tracheal intubation in various clinical situations. The rotation technique has been proposed to improve the insertion success rate of supraglottic airways. However, the clinical efficacy of this technique remains uncertain as previous results have been inconsistent, depending on the variable evaluated. Materials and Methods We systematically searched PubMed, Embase, and the Cochrane Central Register of Controlled Trials in April 2015 for randomized controlled trials that compared the rotation and standard techniques for inserting supraglottic airways. Results Thirteen randomized controlled trials (1505 patients, 753 with the rotation technique) were included. The success rate at the first attempt was significantly higher with the rotation technique than with the standard technique [relative risk (RR): 1.13; 95% confidence interval (CI): 1.05 to 1.23; p=0.002]. The rotation technique provided significantly higher overall success rates (RR: 1.06; 95% CI: 1.04 to 1.09; p<0.001). Device insertion was completed faster with the rotation technique (mean difference: -4.6 seconds; 95% CI: -7.37 to -1.74; p=0.002). The incidence of blood staining on the removed device (RR: 0.36; 95% CI: 0.27 to 0.47; p<0.001) was significantly lower with the rotation technique. Conclusion The rotation technique provided higher first-attempt and overall success rates, faster insertion, and a lower incidence of blood on the removed device, reflecting less mucosal trauma. Thus, it may be considered as an alternative to the standard technique when predicting or encountering difficulty in inserting supraglottic airways. PMID:27189296
Lawless, Craig; Hubbard, Simon J.; Fan, Jun; Bessant, Conrad; Hermjakob, Henning; Jones, Andrew R.
2012-01-01
Abstract New methods for performing quantitative proteome analyses based on differential labeling protocols or label-free techniques are reported in the literature on an almost monthly basis. In parallel, a correspondingly vast number of software tools for the analysis of quantitative proteomics data has also been described in the literature and produced by private companies. In this article we focus on the review of some of the most popular techniques in the field and present a critical appraisal of several software packages available to process and analyze the data produced. We also describe the importance of community standards to support the wide range of software, which may assist researchers in the analysis of data using different platforms and protocols. It is intended that this review will serve bench scientists both as a useful reference and a guide to the selection and use of different pipelines to perform quantitative proteomics data analysis. We have produced a web-based tool (http://www.proteosuite.org/?q=other_resources) to help researchers find appropriate software for their local instrumentation, available file formats, and quantitative methodology. PMID:22804616
Chemical measurement of urine volume
NASA Technical Reports Server (NTRS)
Sauer, R. L.
1978-01-01
A chemical method of measuring the volume of urine samples, using a lithium chloride dilution technique, does not interfere with analysis and is faster and more accurate than standard volumetric or specific gravity/weight techniques. Adaptation of the procedure to urinalysis could prove generally practical for hospital mineral balance and catecholamine determinations.
The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis
NASA Astrophysics Data System (ADS)
Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.
2017-12-01
The vast majority of data analyzed by climate researchers are repeated observations of physical processes, or time series data. This data lends itself to a common set of statistical techniques and models designed to determine trends and variability (e.g., seasonality) of these repeated observations. Often, these same techniques and models can be applied to a wide variety of different time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U. S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g. interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
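A minimal sketch of the kind of pre-processing, trend fitting, and seasonality summary such a toolbox standardizes is shown below; the daily record, template, and method choices are assumptions, not the Timeseries Toolbox itself.

```python
import numpy as np
import pandas as pd

# Hypothetical daily hydrologic record (e.g., stage or flow)
idx = pd.date_range("2000-01-01", "2009-12-31", freq="D")
rng = np.random.default_rng(4)
values = (10 + 0.0005 * np.arange(idx.size)                     # slow trend
          + 2 * np.sin(2 * np.pi * idx.dayofyear / 365.25)      # seasonality
          + rng.normal(0, 0.5, idx.size))
series = pd.Series(values, index=idx)

# Pre-processing: aggregate to monthly means
monthly = series.resample("MS").mean()

# Trend: ordinary least squares against time in years
t = (monthly.index - monthly.index[0]).days / 365.25
slope, intercept = np.polyfit(t, monthly.values, 1)

# Seasonality: mean by calendar month after removing the trend
detrended = monthly - (slope * t + intercept)
seasonal = detrended.groupby(detrended.index.month).mean()
print(f"trend = {slope:.4f} units/year")
print(seasonal.round(2))
```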
2009-05-01
time transfer techniques has largely been due to the improvement in frequency standards. In this document, an effort was made to provide substantial...of RCC Document 214-94, contains definitions of frequency and timing terms, time transfer techniques and analysis, and behavior of crystal and atomic...
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's Quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results compared to standard FORM results. It is found that BFORM typically requires fewer functional evaluations than FORM to converge to the same answer.
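The core of the approach, Broyden's rank-one (secant) update applied to the gradient of the limit state function, can be sketched as follows; the limit state is an arbitrary analytic example and the sketch shows only the gradient-updating step, not the full BFORM iteration.

```python
import numpy as np

def g(x):
    # Example limit state function (illustrative only)
    return 0.2 * x[0] ** 2 + x[1] - 4.0

def fd_grad(fun, x, h=1e-6):
    """Finite-difference gradient: n extra function calls per evaluation point."""
    base = fun(x)
    return np.array([(fun(x + h * e) - base) / h for e in np.eye(len(x))])

def broyden_update(grad_old, dx, dg):
    """Rank-one (secant) update of the gradient: no new model evaluations."""
    return grad_old + (dg - grad_old @ dx) / (dx @ dx) * dx

x_old = np.array([1.0, 2.0])
x_new = np.array([1.3, 1.8])

grad_old = fd_grad(g, x_old)          # expensive finite-difference step, done once
dg = g(x_new) - g(x_old)
grad_new = broyden_update(grad_old, x_new - x_old, dg)

print("updated gradient   :", grad_new)
print("finite differences :", fd_grad(g, x_new))
print("secant condition   :", grad_new @ (x_new - x_old), "=", dg)
```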
An improved technique for the 2H/1H analysis of urines from diabetic volunteers
Coplen, T.B.; Harper, I.T.
1994-01-01
The H2-H2O ambient-temperature equilibration technique for the determination of 2H/1H ratios in urinary waters from diabetic subjects provides improved accuracy over the conventional Zn reduction technique. The standard deviation, approximately 1-2‰, is at least a factor of three better than that of the Zn reduction technique on urinary waters from diabetic volunteers. Experiments with pure water and solutions containing glucose, urea and albumen indicate that there is no measurable bias in the hydrogen equilibration technique.
Postmortem validation of breast density using dual-energy mammography
Molloi, Sabee; Ducote, Justin L.; Ding, Huanjun; Feig, Stephen A.
2014-01-01
Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer. PMID:25086548
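The basis-decomposition step reduces, per pixel, to solving a small linear system calibrated against the glandular and adipose phantoms; the attenuation coefficients below are hypothetical placeholders, not the study's calibration.

```python
import numpy as np

# Hypothetical effective attenuation coefficients (cm^-1) for the glandular
# and adipose basis materials at the low- and high-energy beams; in practice
# these come from the uniform-thickness calibration phantoms.
MU = np.array([[0.80, 0.55],    # low energy:  [glandular, adipose]
               [0.45, 0.35]])   # high energy: [glandular, adipose]

def breast_density(log_att_low, log_att_high):
    """Solve the 2x2 basis-decomposition system for glandular/adipose
    thicknesses and return the volumetric glandular fraction."""
    t_gland, t_adipose = np.linalg.solve(MU, [log_att_low, log_att_high])
    return t_gland / (t_gland + t_adipose)

# A pixel whose measured log attenuations correspond to ~2 cm glandular
# and ~3 cm adipose tissue:
low, high = MU @ [2.0, 3.0]
print(f"glandular fraction = {breast_density(low, high):.2f}")   # -> 0.40
```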
Digital Dental X-ray Database for Caries Screening
NASA Astrophysics Data System (ADS)
Rad, Abdolvahab Ehsani; Rahim, Mohd Shafry Mohd; Rehman, Amjad; Saba, Tanzila
2016-06-01
A standard database is an essential requirement for comparing the performance of image analysis techniques. Hence, a main issue in dental image analysis is the lack of an available image database, which this paper provides. Periapical dental X-ray images that are suitable for analysis and approved by many dental experts were collected. This type of dental radiograph is common and inexpensive, and is normally used for dental disease diagnosis and abnormality detection. The database contains 120 periapical X-ray images covering the upper and lower jaws. The digital dental database is constructed to provide a resource for researchers to use and compare image analysis techniques and to improve the performance of each technique.
NASA Technical Reports Server (NTRS)
Rocha, Camilo; Meseguer, Jose; Munoz, Cesar A.
2013-01-01
Combining symbolic techniques such as: (i) SMT solving, (ii) rewriting modulo theories, and (iii) model checking can enable the analysis of infinite-state systems outside the scope of each such technique. This paper proposes rewriting modulo SMT as a new technique combining the powers of (i)-(iii) and ideally suited to model and analyze infinite-state open systems; that is, systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism due to the system, and external non-determinism due to the environment. They are not amenable to finite-state model checking analysis because they typically are infinite-state. By being reducible to standard rewriting using reflective techniques, rewriting modulo SMT can both naturally model and analyze open systems without requiring any changes to rewriting-based reachability analysis techniques for closed systems. This is illustrated by the analysis of a real-time system beyond the scope of timed automata methods.
Modeling Payload Stowage Impacts on Fire Risks On-Board the International Space Station
NASA Technical Reports Server (NTRS)
Anton, Kellie e.; Brown, Patrick F.
2010-01-01
The purpose of this presentation is to determine the risks of fire on-board the ISS due to non-standard stowage. ISS stowage is constantly being reexamined for optimality. Non-standard stowage involves stowing items outside of rack drawers, and fire risk is a key concern that is heavily mitigated. A methodology is needed to account for and capture the fire risk introduced by non-standard stowage. The contents include: 1) Fire Risk Background; 2) General Assumptions; 3) Modeling Techniques; 4) Event Sequence Diagram (ESD); 5) Qualitative Fire Analysis; 6) Sample Qualitative Results for Fire Risk; 7) Qualitative Stowage Analysis; 8) Sample Qualitative Results for Non-Standard Stowage; and 9) Quantitative Analysis Basic Event Data.
Evans, Luke; Manley, Kate
2016-06-01
Single-incision laparoscopic surgery represents an evolution of minimally invasive techniques, but has been a controversial development. A cosmetic advantage is stated by many authors, but has not been found to be universally present or even of considerable importance by patients. This systematic review and meta-analysis demonstrates that there is a cosmetic advantage of the technique regardless of the operation type. The treatment effect in terms of cosmetic improvement is of the order of 0.63.
NASA Astrophysics Data System (ADS)
Ctvrtnickova, T.; Mateo, M. P.; Yañez, A.; Nicolas, G.
2011-04-01
This work presents results of Laser-Induced Breakdown Spectroscopy (LIBS) and Thermo-Mechanical Analysis (TMA) of coals and coal blends used in coal-fired power plants across Spain. Several coal specimens, their blends and the corresponding laboratory ash were analyzed by these techniques, and the results were compared to standard laboratory methods. The indices of slagging, which predict the tendency of coal ash to deposit on the boiler walls, were determined by means of standard chemical analysis, LIBS and TMA. The optimal coal for blending with the problematic national lignite was suggested in order to diminish slagging problems. The techniques were evaluated on the precision, acquisition time, and the extent and quality of information they could provide. Finally, the applicability of LIBS and TMA to the calculation of slagging indices is discussed, and their potential to replace time-consuming and instrumentally demanding standard methods is considered.
Potvin, Christopher M; Zhou, Hongde
2011-11-01
The objective of this study was to demonstrate the effects of complex matrix effects caused by chemical materials on the analysis of key soluble microbial products (SMP) including proteins, humics, carbohydrates, and polysaccharides in activated sludge samples. Emphasis was placed on comparison of the commonly used standard curve technique with standard addition (SA), a technique that differs in that the analytical responses are measured for sample solutions spiked with known quantities of analytes. The results showed that using SA provided a great improvement in compensating for SMP recovery and thus improving measurement accuracy by correcting for matrix effects. Analyte recovery was found to be highly dependent on sample dilution, and changed due to extraction techniques, storage conditions and sample composition. Storage of sample extracts by freezing changed SMP concentrations dramatically, as did storage at 4°C for as little as 1d. Copyright © 2011 Elsevier Ltd. All rights reserved.
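The standard addition calculation itself is a short linear extrapolation; the spike levels and responses below are hypothetical.

```python
import numpy as np

# Hypothetical standard-addition data for one diluted sludge extract:
# known analyte spikes (mg/L) and the corresponding assay responses.
spike    = np.array([0.0, 5.0, 10.0, 20.0])          # added concentration
response = np.array([0.210, 0.305, 0.398, 0.590])    # e.g., absorbance

slope, intercept = np.polyfit(spike, response, 1)

# Extrapolating the regression line to zero response gives the analyte
# concentration in the (diluted) sample as the magnitude of the x-intercept.
c_sample = intercept / slope
print(f"sample concentration = {c_sample:.1f} mg/L (before dilution correction)")
```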
Characterization of Metal Powders Used for Additive Manufacturing.
Slotwinski, J A; Garboczi, E J; Stutzman, P E; Ferraris, C F; Watson, S S; Peltz, M A
2014-01-01
Additive manufacturing (AM) techniques can produce complex, high-value metal parts, with potential applications as critical parts, such as those found in aerospace components. The production of AM parts with consistent and predictable properties requires input materials (e.g., metal powders) with known and repeatable characteristics, which in turn requires standardized measurement methods for powder properties. First, based on our previous work, we assess the applicability of current standardized methods for powder characterization for metal AM powders. Then we present the results of systematic studies carried out on two different powder materials used for additive manufacturing: stainless steel and cobalt-chrome. The characterization of these powders is important in NIST efforts to develop appropriate measurements and standards for additive materials and to document the property of powders used in a NIST-led additive manufacturing material round robin. An extensive array of characterization techniques was applied to these two powders, in both virgin and recycled states. The physical techniques included laser diffraction particle size analysis, X-ray computed tomography for size and shape analysis, and optical and scanning electron microscopy. Techniques sensitive to structure and chemistry, including X-ray diffraction, energy dispersive analytical X-ray analysis using the X-rays generated during scanning electron microscopy, and X-Ray photoelectron spectroscopy were also employed. The results of these analyses show how virgin powder changes after being exposed to and recycled from one or more Direct Metal Laser Sintering (DMLS) additive manufacturing build cycles. In addition, these findings can give insight into the actual additive manufacturing process.
Iterative categorization (IC): a systematic technique for analysing qualitative data.
Neale, Joanne
2016-06-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.
Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois
2018-03-01
Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.
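A Monte Carlo treatment of dose uncertainty can be sketched for a dicentric-type calibration curve; the coefficients, their uncertainties, and the scoring data below are illustrative values only, not a recommended calibration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical dicentric dose-response calibration Y = C + a*D + b*D^2
# (coefficients and their standard errors are illustrative values only).
C, a, b = 0.001, 0.03, 0.06
sC, sa, sb = 0.0005, 0.005, 0.01

dicentrics, cells = 25, 500            # observed aberrations in scored cells

def invert(yield_, C, a, b):
    """Solve b*D^2 + a*D + (C - yield_) = 0 for the positive root."""
    return (-a + np.sqrt(a**2 - 4 * b * (C - yield_))) / (2 * b)

n = 100_000
Ys = rng.poisson(dicentrics, n) / cells                 # counting uncertainty
Cs = rng.normal(C, sC, n)                               # calibration uncertainty
As = rng.normal(a, sa, n)
Bs = rng.normal(b, sb, n)
doses = invert(Ys, Cs, As, Bs)

print(f"dose = {doses.mean():.2f} Gy, 95% interval "
      f"[{np.percentile(doses, 2.5):.2f}, {np.percentile(doses, 97.5):.2f}] Gy")
```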
TOWARDS A STANDARD METHOD FOR THE MEASUREMENT OF ORGANIC CARBON IN SEDIMENTS
The precisions achieved by two different methods for analysis of organic carbon in soils and sediments were determined and compared. The first method is a rapid dichromate oxidation technique (Walkley-Black) that has long been a standard in soil chemistry. The second is an automa...
3D thermography imaging standardization technique for inflammation diagnosis
NASA Astrophysics Data System (ADS)
Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul
2005-01-01
We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical Digital Infrared Thermal Imaging is very sensitive and reliable mean of graphically mapping and display skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in skin surface. The spectrum of colours indicates both hot and cold responses which may co-exist if the pain associate with an inflammatory focus excites an increase in sympathetic activity. However, due to thermograph provides only qualitative diagnosis information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Here, our technique is based on the combination of visual 3D imaging technique and thermal imaging technique, which maps the 2D thermography images on to 3D anatomical model. Then we rectify the 3D thermogram into a view independent thermogram and conform it a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Schmauch, Preston
2012-01-01
Turbine blades in rocket and jet engine turbomachinery experience enormous harmonic loading conditions. These loads result from the integer number of upstream and downstream stator vanes as well as the other turbine stages. The standard technique for forced response analysis to assess structural integrity is to decompose a CFD generated flow field into its harmonic components, and to then perform a frequency response analysis at the problematic natural frequencies. Recent CFD analysis and water-flow testing at NASA/MSFC, though, indicates that this technique may miss substantial harmonic and non-harmonic excitation sources that become present in complex flows. These complications suggest the question of whether frequency domain analysis is capable of capturing the excitation content sufficiently. Two studies comparing frequency response analysis with transient response analysis, therefore, have been performed. The first is of a bladed disk with each blade modeled by simple beam elements. It was hypothesized that the randomness and other variation from the standard harmonic excitation would reduce the blade structural response, but the results showed little reduction. The second study was of a realistic model of a bladed-disk excited by the same CFD used in the J2X engine program. The results showed that the transient analysis results were up to 10% higher for "clean" nodal diameter excitations and six times larger for "messy" excitations, where substantial Fourier content around the main harmonic exists.
38 CFR 1.921 - Analysis of costs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... effectiveness of alternative collection techniques, establish guidelines with respect to points at which costs... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Analysis of costs. 1.921... Standards for Collection of Claims § 1.921 Analysis of costs. VA collection procedures should provide for...
10 CFR 503.34 - Inability to comply with applicable environmental requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...
10 CFR 503.34 - Inability to comply with applicable environmental requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...
Error modelling of quantum Hall array resistance standards
NASA Astrophysics Data System (ADS)
Marzano, Martina; Oe, Takehiko; Ortolano, Massimo; Callegaro, Luca; Kaneko, Nobu-Hisa
2018-04-01
Quantum Hall array resistance standards (QHARSs) are integrated circuits composed of interconnected quantum Hall effect elements that allow the realization of virtually arbitrary resistance values. In recent years, techniques were presented to efficiently design QHARS networks. An open problem is that of the evaluation of the accuracy of a QHARS, which is affected by contact and wire resistances. In this work, we present a general and systematic procedure for the error modelling of QHARSs, which is based on modern circuit analysis techniques and Monte Carlo evaluation of the uncertainty. As a practical example, this method of analysis is applied to the characterization of a 1 MΩ QHARS developed by the National Metrology Institute of Japan. Software tools are provided to apply the procedure to other arrays.
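The Monte Carlo part of such an evaluation can be sketched with a deliberately simplified two-terminal series network; the element count, wire-resistance range, and topology below are assumptions and do not reproduce the multiple-connection QHARS designs actually analyzed.

```python
import numpy as np

rng = np.random.default_rng(6)

RK = 25812.8074593045                  # von Klitzing constant, ohms
RH = RK / 2                            # i = 2 plateau resistance of one element
n_elements = 78                        # gives a nominal value close to 1 MOhm

nominal = n_elements * RH
n_trials = 50_000
# One random wire/contact resistance in series with each element, drawn
# uniformly between 0 and 10 mOhm (illustrative magnitude only).
wire = rng.uniform(0.0, 10e-3, size=(n_trials, n_elements))
values = nominal + wire.sum(axis=1)
rel_dev = (values - nominal) / nominal

print(f"mean relative deviation = {rel_dev.mean():.2e}")
print(f"standard deviation      = {rel_dev.std(ddof=1):.2e}")
```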
Cabot, Jennifer C; Lee, Cho Rok; Brunaud, Laurent; Kleiman, David A; Chung, Woong Youn; Fahey, Thomas J; Zarnegar, Rasa
2012-12-01
This study presents a cost analysis of the standard cervical, gasless transaxillary endoscopic, and gasless transaxillary robotic thyroidectomy approaches based on medical costs in the United States. A retrospective review of 140 patients who underwent standard cervical, transaxillary endoscopic, or transaxillary robotic thyroidectomy at 2 tertiary centers was conducted. The cost model included operating room charges, anesthesia fee, consumables cost, equipment depreciation, and maintenance cost. Sensitivity analyses assessed individual cost variables. The mean operative times for the standard cervical, transaxillary endoscopic, and transaxillary robotic approaches were 121 ± 18.9, 185 ± 26.0, and 166 ± 29.4 minutes, respectively. The total cost for the standard cervical, transaxillary endoscopic, and transaxillary robotic approaches were $9,028 ± $891, $12,505 ± $1,222, and $13,670 ± $1,384, respectively. Transaxillary approaches were significantly more expensive than the standard cervical technique (standard cervical/transaxillary endoscopic, P < .0001; standard cervical/transaxillary robotic, P < .0001; and transaxillary endoscopic/transaxillary robotic, P = .001). The transaxillary and standard cervical techniques became equivalent in cost when transaxillary endoscopic operative time decreased to 111 minutes and transaxillary robotic operative time decreased to 68 minutes. Increasing the case load did not resolve the cost difference. Transaxillary endoscopic and transaxillary robotic thyroidectomies are significantly more expensive than the standard cervical approach. Decreasing operative times reduces this cost difference. The greater expense may be prohibitive in countries with a flat reimbursement schedule. Copyright © 2012 Mosby, Inc. All rights reserved.
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
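The interpolation comparison can be sketched on a synthetic, coarsely sampled first-dimension peak; the example below covers linear, cubic spline, piecewise cubic Hermite, and Gaussian-fitting interpolation (omitting the cross-correlation alignment and Fourier zero-filling steps) and uses made-up data.

```python
import numpy as np
from scipy.interpolate import interp1d, CubicSpline, PchipInterpolator

# A Gaussian first-dimension peak sampled coarsely (as in LCxLC, where the
# first dimension is sampled only once per modulation period). Values are synthetic.
t_coarse = np.arange(0.0, 10.0, 1.0)
peak = np.exp(-0.5 * ((t_coarse - 4.3) / 1.2) ** 2)

t_fine = np.linspace(0.0, 9.0, 181)
linear = interp1d(t_coarse, peak, kind="linear")(t_fine)
spline = CubicSpline(t_coarse, peak)(t_fine)
pchip = PchipInterpolator(t_coarse, peak)(t_fine)     # piecewise cubic Hermite

# Gaussian fitting: estimate the parameters by a log-quadratic fit near the apex.
top = peak > 0.1 * peak.max()
coef = np.polyfit(t_coarse[top], np.log(peak[top]), 2)
mu = -coef[1] / (2 * coef[0])
sigma = np.sqrt(-1.0 / (2 * coef[0]))
amp = np.exp(np.polyval(coef, mu))
gauss = amp * np.exp(-0.5 * ((t_fine - mu) / sigma) ** 2)

truth = np.exp(-0.5 * ((t_fine - 4.3) / 1.2) ** 2)
for name, y in [("linear", linear), ("cubic spline", spline),
                ("pchip", pchip), ("gaussian fit", gauss)]:
    print(f"{name:12s} rmse = {np.sqrt(np.mean((y - truth) ** 2)):.4f}")
```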
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Abstract. Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
Applications of Automation Methods for Nonlinear Fracture Test Analysis
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.
Macready, Anna L; Fallaize, Rosalind; Butler, Laurie T; Ellis, Judi A; Kuznesof, Sharron; Frewer, Lynn J; Celis-Morales, Carlos; Livingstone, Katherine M; Araújo-Soares, Vera; Fischer, Arnout RH; Stewart-Knox, Barbara J; Mathers, John C
2018-01-01
Background To determine the efficacy of behavior change techniques applied in dietary and physical activity intervention studies, it is first necessary to record and describe techniques that have been used during such interventions. Published frameworks used in dietary and smoking cessation interventions undergo continuous development, and most are not adapted for Web-based delivery. The Food4Me study (N=1607) provided the opportunity to use existing frameworks to describe standardized Web-based techniques employed in a large-scale, internet-based intervention to change dietary behavior and physical activity. Objective The aims of this study were (1) to describe techniques embedded in the Food4Me study design and explain the selection rationale and (2) to demonstrate the use of behavior change technique taxonomies, develop standard operating procedures for training, and identify strengths and limitations of the Food4Me framework that will inform its use in future studies. Methods The 6-month randomized controlled trial took place simultaneously in seven European countries, with participants receiving one of four levels of personalized advice (generalized, intake-based, intake+phenotype–based, and intake+phenotype+gene–based). A three-phase approach was taken: (1) existing taxonomies were reviewed and techniques were identified a priori for possible inclusion in the Food4Me study, (2) a standard operating procedure was developed to maintain consistency in the use of methods and techniques across research centers, and (3) the Food4Me behavior change technique framework was reviewed and updated post intervention. An analysis of excluded techniques was also conducted. Results Of 46 techniques identified a priori as being applicable to Food4Me, 17 were embedded in the intervention design; 11 were from a dietary taxonomy, and 6 from a smoking cessation taxonomy. In addition, the four-category smoking cessation framework structure was adopted for clarity of communication. Smoking cessation texts were adapted for dietary use where necessary. A posteriori, a further 9 techniques were included. Examination of excluded items highlighted the distinction between techniques considered appropriate for face-to-face versus internet-based delivery. Conclusions The use of existing taxonomies facilitated the description and standardization of techniques used in Food4Me. We recommend that for complex studies of this nature, technique analysis should be conducted a priori to develop standardized procedures and training and reviewed a posteriori to audit the techniques actually adopted. The present framework description makes a valuable contribution to future systematic reviews and meta-analyses that explore technique efficacy and underlying psychological constructs. This was a novel application of the behavior change taxonomies and was the first internet-based personalized nutrition intervention to use such a framework remotely. Trial Registration ClinicalTrials.gov NCT01530139; https://clinicaltrials.gov/ct2/show/NCT01530139 (Archived by WebCite at http://www.webcitation.org/6y8XYUft1) PMID:29631993
Residual anastomoses in twin-twin transfusion syndrome after laser: the Solomon randomized trial.
Slaghekke, Femke; Lewi, Liesbeth; Middeldorp, Johanna M; Weingertner, Anne Sophie; Klumper, Frans J; Dekoninck, Philip; Devlieger, Roland; Lanna, Mariano M; Deprest, Jan; Favre, Romain; Oepkes, Dick; Lopriore, Enrico
2014-09-01
Residual anastomoses after fetoscopic laser surgery for twin-to-twin transfusion syndrome (TTTS) may lead to severe postoperative complications, including recurrent TTTS and twin anemia-polycythemia sequence (TAPS). A novel technique (Solomon technique) using laser coagulation of the entire vascular equator was recently investigated in a randomized controlled trial (Solomon trial) and compared with the Standard selective laser technique. The aim of this secondary analysis was to evaluate the occurrence and characteristics of residual anastomoses in placentas included in the Solomon trial. International multicenter randomized controlled trial in TTTS, randomized 1:1 ratio to either the Solomon laser technique or Standard laser technique. At time of laser, surgeons recorded whether they considered the procedure to be complete. Placental dye injection was performed after birth in the participating centers to evaluate the presence of residual anastomoses. A total of 151 placentas were included in the study. The percentage of placentas with residual anastomoses in the Solomon group and Standard group was 19% (14/74) and 34% (26/77), respectively (P = .04). The percentage of placentas with residual anastomoses in the subgroup of cases where the procedure was recorded as complete was 8/65 (12%) and 22/69 (32%) in the Solomon group and Standard group, respectively (P < .01). The Solomon laser technique reduces the risk of residual anastomoses. However, careful follow-up remains essential also after the Solomon technique, as complete dichorionization is not always achieved. Copyright © 2014 Mosby, Inc. All rights reserved.
Methods for trend analysis: Examples with problem/failure data
NASA Technical Reports Server (NTRS)
Church, Curtis K.
1989-01-01
Statistics play an important role in quality control and reliability. Consequently, the NASA standard Trend Analysis Techniques recommends a variety of statistical methodologies that can be applied to time-series data. The major goal of this working handbook, which uses data from the MSFC Problem Assessment System, is to illustrate some of the techniques in the NASA standard and some additional techniques, and to identify patterns in the data. The techniques used for trend estimation are regression (exponential, power, reciprocal, straight line) and Kendall's rank correlation coefficient. The important details of a statistical strategy for estimating a trend component are covered in the examples. However, careful analysis and interpretation are necessary because of small samples and frequent zero problem reports in a given time period. Further investigations to deal with these issues are being conducted.
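A minimal sketch of the two kinds of trend estimate named above, applied to synthetic monthly problem-report counts, is given below; the counts and the crude handling of zero-report months are placeholders.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical monthly problem/failure report counts
rng = np.random.default_rng(7)
months = np.arange(36)
counts = rng.poisson(5 + 0.15 * months)

# Nonparametric trend check: Kendall's rank correlation against time
tau, p_value = kendalltau(months, counts)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")

# Parametric alternatives from the standard: straight-line and exponential fits
slope, intercept = np.polyfit(months, counts, 1)
log_fit = np.polyfit(months, np.log(np.clip(counts, 1, None)), 1)  # crude zero handling
print(f"linear trend: {slope:.2f} reports/month")
print(f"exponential trend: {100 * (np.exp(log_fit[0]) - 1):.1f}% per month")
```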
NASA Technical Reports Server (NTRS)
Stretchberry, D. M.; Hein, G. F.
1972-01-01
The general concepts of costing, budgeting, and benefit-cost ratio and cost-effectiveness analysis are discussed. The three common methods of costing are presented. Budgeting distributions are discussed. The use of discounting procedures is outlined. Benefit-cost ratio and cost-effectiveness analysis are defined, and their current application to NASA planning is pointed out. Specific practices and techniques are discussed, and actual costing and budgeting procedures are outlined. The recommended method of calculating benefit-cost ratios is described. A standardized method of cost-effectiveness analysis and long-range planning are also discussed.
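The benefit-cost ratio calculation with discounting reduces to a ratio of present values; the cash flows and discount rate below are illustrative only.

```python
# Discounted benefit-cost ratio for a hypothetical multi-year program.
# Cash flows and the discount rate are illustrative numbers only.
benefits = [0, 0, 40, 60, 80, 80]      # benefits per year (e.g., $M)
costs    = [50, 40, 20, 10, 10, 10]    # costs per year
rate = 0.07                            # annual discount rate

def present_value(flows, rate):
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

bcr = present_value(benefits, rate) / present_value(costs, rate)
print(f"benefit-cost ratio = {bcr:.2f}")   # > 1 favors the program
```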
76 FR 16728 - Announcement of the American Petroleum Institute's Standards Activities
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-25
... voluntary standards for equipment, materials, operations, and processes for the petroleum and natural gas... Techniques for Designing and/or Optimizing Gas-lift Wells and Systems, 1st Ed. RP 13K, Chemical Analysis of... Q2, Quality Management Systems for Service Supply Organizations for the Petroleum and Natural Gas...
Greenfeld, Max; van de Meent, Jan-Willem; Pavlichin, Dmitri S; Mabuchi, Hideo; Wiggins, Chris H; Gonzalez, Ruben L; Herschlag, Daniel
2015-01-16
Single-molecule techniques have emerged as incisive approaches for addressing a wide range of questions arising in contemporary biological research [Trends Biochem Sci 38:30-37, 2013; Nat Rev Genet 14:9-22, 2013; Curr Opin Struct Biol 2014, 28C:112-121; Annu Rev Biophys 43:19-39, 2014]. The analysis and interpretation of raw single-molecule data benefits greatly from the ongoing development of sophisticated statistical analysis tools that enable accurate inference at the low signal-to-noise ratios frequently associated with these measurements. While a number of groups have released analysis toolkits as open source software [J Phys Chem B 114:5386-5403, 2010; Biophys J 79:1915-1927, 2000; Biophys J 91:1941-1951, 2006; Biophys J 79:1928-1944, 2000; Biophys J 86:4015-4029, 2004; Biophys J 97:3196-3205, 2009; PLoS One 7:e30024, 2012; BMC Bioinformatics 288 11(8):S2, 2010; Biophys J 106:1327-1337, 2014; Proc Int Conf Mach Learn 28:361-369, 2013], it remains difficult to compare analysis for experiments performed in different labs due to a lack of standardization. Here we propose a standardized single-molecule dataset (SMD) file format. SMD is designed to accommodate a wide variety of computer programming languages, single-molecule techniques, and analysis strategies. To facilitate adoption of this format we have made two existing data analysis packages that are used for single-molecule analysis compatible with this format. Adoption of a common, standard data file format for sharing raw single-molecule data and analysis outcomes is a critical step for the emerging and powerful single-molecule field, which will benefit both sophisticated users and non-specialists by allowing standardized, transparent, and reproducible analysis practices.
A study of the feasibility of statistical analysis of airport performance simulation
NASA Technical Reports Server (NTRS)
Myers, R. H.
1982-01-01
The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis of variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if they were designed to detect capacity changes from condition to condition. Many of the conclusions drawn are results of Monte Carlo techniques.
CMOS array design automation techniques. [metal oxide semiconductors
NASA Technical Reports Server (NTRS)
Ramondetta, P.; Feller, A.; Noto, R.; Lombardi, T.
1975-01-01
A low cost, quick turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process, array topology and required performance considerations particularly high circuit speed.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-21
Approved Standard Methods (22nd Edition) for total coliforms include the Lactose Fermentation Technique (Standard Total Coliform Fermentation Technique, SM 9221 A, B, C and 9221 B.1, B.2) and the Enzyme Substrate Technique (Colilert®, SM 9223 B).
Solar Cell Calibration and Measurement Techniques
NASA Technical Reports Server (NTRS)
Bailey, Sheila; Brinker, Dave; Curtis, Henry; Jenkins, Phillip; Scheiman, Dave
1997-01-01
The increasing complexity of space solar cells and the increasing international markets for both cells and arrays have resulted in workshops jointly sponsored by NASDA, ESA and NASA. These workshops are designed to obtain international agreement on standardized values for the AM0 spectrum and constant, recommend laboratory measurement practices and establish a set of protocols for international comparison of laboratory measurements. A working draft of an ISO standard, WD 15387, 'Requirements for Measurement and Calibration Procedures for Space Solar Cells' was discussed with a focus on the scope of the document, a definition of primary standard cell, and required error analysis for all measurement techniques. Working groups addressed the issues of Air Mass Zero (AM0) solar constant and spectrum, laboratory measurement techniques, and the international round robin methodology. A summary is presented of the current state of each area and the formulation of the ISO document.
Density-cluster NMA: A new protein decomposition technique for coarse-grained normal mode analysis.
Demerdash, Omar N A; Mitchell, Julie C
2012-07-01
Normal mode analysis has emerged as a useful technique for investigating protein motions on long time scales. This is largely due to the advent of coarse-graining techniques, particularly Hooke's Law-based potentials and the rotational-translational blocking (RTB) method for reducing the size of the force-constant matrix, the Hessian. Here we present a new method for domain decomposition for use in RTB that is based on hierarchical clustering of atomic density gradients, which we call Density-Cluster RTB (DCRTB). The method reduces the number of degrees of freedom by 85-90% compared with the standard blocking approaches. We compared the normal modes from DCRTB against standard RTB using 1-4 residues in sequence in a single block, with good agreement between the two methods. We also show that Density-Cluster RTB and standard RTB perform well in capturing the experimentally determined direction of conformational change. Significantly, we report superior correlation of DCRTB with B-factors compared with 1-4 residue per block RTB. Finally, we show significant reduction in computational cost for Density-Cluster RTB that is nearly 100-fold for many examples. Copyright © 2012 Wiley Periodicals, Inc.
Exhaled human breath analysis has become a standard technique for assessing exposure to exogenous volatile organic compounds (VOCs) such as trihalomethanes from water chlorination; aromatics, hydrocarbons, and oxygenates from fuels usage; and various chlorinated solvents from i...
40 CFR 61.32 - Emission standard.
Code of Federal Regulations, 2010 CFR
2010-07-01
... frequency of calibration. (b) Method of sample analysis. (c) Averaging technique for determining 30-day...) Plant and sampling area plots showing emission points and sampling sites. Topographic features...
PyMVPA: A python toolbox for multivariate pattern analysis of fMRI data.
Hanke, Michael; Halchenko, Yaroslav O; Sederberg, Per B; Hanson, Stephen José; Haxby, James V; Pollmann, Stefan
2009-01-01
Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.
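The classifier-based approach can be sketched with generic tools; the example below uses scikit-learn rather than PyMVPA itself, synthetic "trial by voxel" data, and cross-validated decoding accuracy as the figure of merit.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)

# Synthetic data: 120 "trials" x 500 "voxels", two cognitive states whose
# difference is carried by a small subset of voxels.
n_trials, n_voxels = 120, 500
X = rng.normal(size=(n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :20] += 0.5                  # weak multivariate signal

# Cross-validated decoding accuracy: above chance if the pattern carries
# information about the cognitive state.
acc = cross_val_score(LinearSVC(max_iter=10_000), X, y, cv=6)
print(f"decoding accuracy = {acc.mean():.2f} (chance = 0.50)")
```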
[Bus drivers' biomechanical risk assessment in two different contexts].
Baracco, A; Coggiola, M; Perrelli, F; Banchio, M; Martignone, S; Gullino, A; Romano, C
2012-01-01
The application of standardized methods for biomechanical risk assessment in non-industrial cyclical activities is not always possible. A typical case is the public transport sector, where workers report shoulder pain more often than elbow and wrist pain. The Authors present the results of two studies involving two public transport companies and the risk of biomechanical overload of the upper limbs for bus and tram drivers. The analysis was made using three different approaches: focus groups; static analysis using anthropometric manikins; and a work sampling technique monitoring the worker's activity and posture every minute, for two hours and for each vehicle-route combination, considering P5F and P95M drivers and assessing perceived effort through the Borg CR10 scale. The results show that an ergonomic analysis carried out with multiple non-standardized techniques can reach consistent and repeatable results in line with the epidemiological evidence.
Manna, Carmelinda; Silva, Mario; Cobelli, Rocco; Poggesi, Sara; Rossi, Cristina; Sverzellati, Nicola
2017-01-01
PURPOSE We aimed to perform intraindividual comparison of computed tomography (CT) parameters, image quality, and radiation exposure between standard CT angiography (CTA) and high-pitch dual source (DS)-CTA, in subjects undergoing serial CTA of thoracoabdominal aorta. METHODS Eighteen subjects with thoracoabdominal CTA by standard technique and high-pitch DS-CTA technique within 6 months of each other were retrieved for intraindividual comparison of image quality in thoracic and abdominal aorta. Quantitative analysis was performed by comparison of mean aortic attenuation, noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Qualitative analysis was performed by visual assessment of motion artifacts and diagnostic confidence. Radiation exposure was quantified by effective dose. Image quality was apportioned to radiation exposure by means of figure of merit. RESULTS Mean aortic attenuation and noise were higher in high-pitch DS-CTA of thoracoabdominal aorta, whereas SNR and CNR were similar in thoracic aorta and significantly lower in high-pitch DS-CTA of abdominal aorta (P = 0.024 and P = 0.016). High-pitch DS-CTA was significantly better in the first segment of thoracic aorta. Effective dose was reduced by 72% in high-pitch DS-CTA. CONCLUSION High-pitch DS-CTA without electrocardiography-gating is an effective technique for imaging aorta with very low radiation exposure and with significant reduction of motion artifacts in ascending aorta; however, the overall quality of high-pitch DS-CTA in abdominal aorta is lower than standard CTA. PMID:28703104
Efficient Analysis of Mass Spectrometry Data Using the Isotope Wavelet
NASA Astrophysics Data System (ADS)
Hussong, Rene; Tholey, Andreas; Hildebrandt, Andreas
2007-09-01
Mass spectrometry (MS) has become today's de-facto standard for high-throughput analysis in proteomics research. Its applications range from toxicity analysis to MS-based diagnostics. Often, the time spent on the MS experiment itself is significantly less than the time necessary to interpret the measured signals, since the amount of data can easily exceed several gigabytes. In addition, automated analysis is hampered by baseline artifacts, chemical as well as electrical noise, and an irregular spacing of data points. Thus, filtering techniques originating from signal and image analysis are commonly employed to address these problems. Unfortunately, smoothing, base-line reduction, and in particular a resampling of data points can affect important characteristics of the experimental signal. To overcome these problems, we propose a new family of wavelet functions based on the isotope wavelet, which is hand-tailored for the analysis of mass spectrometry data. The resulting technique is theoretically well-founded and compares very well with standard peak picking tools, since it is highly robust against noise spoiling the data, but at the same time sufficiently sensitive to detect even low-abundant peptides.
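For readers unfamiliar with wavelet-based peak picking, the sketch below applies a generic continuous-wavelet-transform detector from SciPy to a synthetic spectrum; it illustrates the general idea only and does not implement the isotope wavelet itself. The m/z axis, peak positions, and noise level are invented for the example.

```python
# Sketch of wavelet-based peak picking on a synthetic spectrum using a
# generic continuous wavelet transform (SciPy); this illustrates the idea
# only and does not implement the isotope wavelet itself.
import numpy as np
from scipy.signal import find_peaks_cwt

mz = np.linspace(400.0, 410.0, 2000)              # hypothetical m/z axis
signal = np.zeros_like(mz)
for center in (402.0, 402.5, 403.0):              # a mock isotope pattern
    signal += np.exp(-0.5 * ((mz - center) / 0.02) ** 2)
signal += 0.05 * np.random.default_rng(1).standard_normal(mz.size)

# Widths are given in samples and chosen to roughly match the peak width
peak_idx = find_peaks_cwt(signal, widths=np.arange(2, 10))
print("detected m/z peaks:", mz[peak_idx])
```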
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
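A minimal sketch of the contrast the article draws follows, using a small hand-rolled interval type rather than INTLAB: the same formula is evaluated once with interval arithmetic and once with first-order error propagation. The formula z = x·y + x and the uncertainties are illustrative assumptions.

```python
# Minimal sketch contrasting interval arithmetic with first-order error
# propagation; the formula z = x*y + x and the uncertainties are
# illustrative assumptions, and this is not INTLAB.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

# z = x*y + x with x = 2.0 +/- 0.1 and y = 3.0 +/- 0.2
x, y = Interval(1.9, 2.1), Interval(2.8, 3.2)
print("interval result:", x * y + x)

# First-order propagation for comparison: dz = |y + 1|*dx + |x|*dy
dz = abs(3.0 + 1.0) * 0.1 + abs(2.0) * 0.2
print("first-order estimate: 8.0 +/-", dz)
```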
Code Analysis and Refactoring with Clang Tools, Version 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelley, Timothy M.
2016-12-23
Code Analysis and Refactoring with Clang Tools is a small set of example code that demonstrates techniques for applying tools distributed with the open source Clang compiler. Examples include analyzing where variables are used and replacing old data structures with standard structures.
Conceptual designs for in situ analysis of Mars soil
NASA Technical Reports Server (NTRS)
Mckay, C. P.; Zent, A. P.; Hartman, H.
1991-01-01
A goal of this research is to develop conceptual designs for instrumentation to perform in situ measurements of the Martian soil in order to determine the existence and nature of any reactive chemicals. Our approach involves assessment and critical review of the Viking biology results which indicated the presence of a soil oxidant, an investigation of the possible application of standard soil science techniques to the analysis of Martian soil, and a preliminary consideration of non-standard methods that may be necessary for use in the highly oxidizing Martian soil. Based on our preliminary analysis, we have developed strawman concepts for standard soil analysis on Mars, including pH, suitable for use on a Mars rover mission. In addition, we have devised a method for the determination of the possible strong oxidants on Mars.
Moyer, Jason T; Gnatkovsky, Vadym; Ono, Tomonori; Otáhal, Jakub; Wagenaar, Joost; Stacey, William C; Noebels, Jeffrey; Ikeda, Akio; Staley, Kevin; de Curtis, Marco; Litt, Brian; Galanopoulou, Aristea S
2017-11-01
Electroencephalography (EEG)-the direct recording of the electrical activity of populations of neurons-is a tremendously important tool for diagnosing, treating, and researching epilepsy. Although standard procedures for recording and analyzing human EEG exist and are broadly accepted, there are no such standards for research in animal models of seizures and epilepsy-recording montages, acquisition systems, and processing algorithms may differ substantially among investigators and laboratories. The lack of standard procedures for acquiring and analyzing EEG from animal models of epilepsy hinders the interpretation of experimental results and reduces the ability of the scientific community to efficiently translate new experimental findings into clinical practice. Accordingly, the intention of this report is twofold: (1) to review current techniques for the collection and software-based analysis of neural field recordings in animal models of epilepsy, and (2) to offer pertinent standards and reporting guidelines for this research. Specifically, we review current techniques for signal acquisition, signal conditioning, signal processing, data storage, and data sharing, and include applicable recommendations to standardize collection and reporting. We close with a discussion of challenges and future opportunities, and include a supplemental report of currently available acquisition systems and analysis tools. This work represents a collaboration on behalf of the American Epilepsy Society/International League Against Epilepsy (AES/ILAE) Translational Task Force (TASK1-Workgroup 5), and is part of a larger effort to harmonize video-EEG interpretation and analysis methods across studies using in vivo and in vitro seizure and epilepsy models. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eppich, G.; Kips, R.; Lindvall, R.
The CUP-2 uranium ore concentrate (UOC) standard reference material, a powder, was produced at the Blind River uranium refinery of Eldorado Resources Ltd. in Canada in 1986. This material was produced as part of a joint effort by the Canadian Certified Reference Materials Project and the Canadian Uranium Producers Metallurgical Committee to develop a certified reference material for uranium concentration and the concentration of several impurity constituents. This standard was developed to satisfy the requirements of the UOC mining and milling industry, and was characterized with this purpose in mind. To produce CUP-2, approximately 25 kg of UOC derived from the Blind River uranium refinery was blended, homogenized, and assessed for homogeneity by X-ray fluorescence (XRF) analysis. The homogenized material was then packaged into bottles, containing 50 g of material each, and distributed for analysis to laboratories in 1986. The CUP-2 UOC standard was characterized by an interlaboratory analysis program involving eight member laboratories, six commercial laboratories, and three additional volunteer laboratories. Each laboratory provided five replicate results on up to 17 analytes, including total uranium concentration and moisture content. The selection of analytical technique was left to each participating laboratory. Uranium was reported on an "as-received" basis; all other analytes (besides moisture content) were reported on a "dry-weight" basis. A bottle of 25 g of the CUP-2 UOC standard as described above was purchased by LLNL and characterized by the LLNL Nuclear Forensics Group. Non-destructive and destructive analytical techniques were applied to the UOC sample. Information obtained from short-term techniques such as photography, gamma spectrometry, and scanning electron microscopy was used to guide the performance of longer-term techniques such as ICP-MS. Some techniques, such as XRF and ICP-MS, provided complementary types of data. The results indicate that the CUP-2 standard has a natural isotopic ratio, does not appear to have been isotopically enriched or depleted in any way, and was not contaminated by a source of uranium with a non-natural isotopic composition. Furthermore, the lack of 233U and 236U above the instrumental detection limit indicates that this sample was not exposed to a neutron flux, which would have generated one or both of these isotopes in measurable concentrations.
Battery Test Manual For 48 Volt Mild Hybrid Electric Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Lee Kenneth
2017-03-01
This manual details the U.S. Advanced Battery Consortium and U.S. Department of Energy Vehicle Technologies Program goals, test methods, and analysis techniques for a 48 Volt Mild Hybrid Electric Vehicle system. The test methods are outlined starting with characterization tests, followed by life tests. The final section details standardized analysis techniques for 48 V systems that allow for the comparison of different programs that use this manual. An example test plan is included, along with guidance on filling in gap table numbers.
Neufeld, E; Chavannes, N; Samaras, T; Kuster, N
2007-08-07
The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it is possible to obtain with it more accurate solutions by increasing the grid resolution.
ERIC Educational Resources Information Center
Vivo, Juana-Maria; Franco, Manuel
2008-01-01
This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
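A hedged sketch of the ROC-curve accuracy measure described above is given here, applied to synthetic predictor scores with scikit-learn; the outcome and score variables are invented stand-ins for the article's academic-success data.

```python
# Hedged sketch of the ROC-curve accuracy measure on synthetic predictor
# scores; the outcome and score variables are invented stand-ins.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
success = rng.integers(0, 2, 200)              # observed outcome (0/1)
score = success + rng.normal(0.0, 0.8, 200)    # predictor score

fpr, tpr, thresholds = roc_curve(success, score)
print("operating points:", len(thresholds))
print("AUC of the predictor:", roc_auc_score(success, score))
```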
ERIC Educational Resources Information Center
Storm, Lance; Tressoldi, Patrizio E.; Di Risio, Lorenzo
2010-01-01
We report the results of meta-analyses on 3 types of free-response study: (a) ganzfeld (a technique that enhances a communication anomaly referred to as "psi"); (b) nonganzfeld noise reduction using alleged psi-enhancing techniques such as dream psi, meditation, relaxation, or hypnosis; and (c) standard free response (nonganzfeld, no noise…
ERIC Educational Resources Information Center
Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D.
2005-01-01
A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to…
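One of the advanced data-analysis steps mentioned above, data smoothing, can be illustrated with a Savitzky-Golay filter applied to a noisy synthetic absorbance spectrum; the experiment may well use a different smoother, and the spectrum below is invented for the example.

```python
# Sketch of one data-analysis step mentioned above (smoothing of a noisy
# absorbance spectrum) using a Savitzky-Golay filter; the spectrum and
# filter settings are illustrative assumptions.
import numpy as np
from scipy.signal import savgol_filter

wavelength = np.linspace(400, 700, 301)        # nm, illustrative
absorbance = np.exp(-0.5 * ((wavelength - 550) / 30) ** 2)
noisy = absorbance + 0.02 * np.random.default_rng(2).standard_normal(301)

smoothed = savgol_filter(noisy, window_length=15, polyorder=3)
print("peak absorbance (smoothed):", smoothed.max())
```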
Allanite age-dating: Non-matrix-matched standardization in quadrupole LA-ICP-MS
NASA Astrophysics Data System (ADS)
Burn, M.; Lanari, P.; Pettke, T.; Engi, M.
2014-12-01
Allanite Th-U-Pb age-dating has recently been found to be powerful in unraveling the timing of geological processes such as the metamorphic dynamics in subduction zones and crystallization velocity of magmas. However, inconsistencies among analytical techniques have raised doubts about the accuracy of allanite age data. Spot analysis techniques such as LA-ICP-MS are claimed to be crucially dependent on matrix-matched standards, the quality of which is variable. We present a new approach in LA-ICP-MS data reduction that allows non-matrix-matched standardization via well constrained zircon reference materials as primary standards. Our data were obtained using a GeoLas Pro 193 nm ArF excimer laser ablation system coupled to an ELAN DRC-e quadrupole ICP-MS. We use 32 μm and 24 μm spot sizes; laser operating conditions of 9 Hz repetition rate and 2.5 J/cm2 fluence have proven advantageous. Matrix dependent downhole fractionation evolution is empirically determined by analyzing 208Pb/232Th and 206Pb/238U and applied prior to standardization. The new data reduction technique was tested on three magmatic allanite reference materials (SISSb, CAPb, TARA); within error these show the same downhole fractionation evolution for all allanite types and in different analytical sessions, provided measurement conditions remain the same. Although the downhole evolution of allanite and zircon differs significantly, a link between zircon and allanite matrix is established by assuming CAPb and TARA to be fixed at the corresponding reference ages. Our weighted mean 208Pb/232Th ages are 30.06 ± 0.22 (2σ) for SISSb, 275.4 ± 1.3 (2σ) for CAPb, and 409.9 ± 1.8 (2σ) for TARA. Precision of single spot age data varies between 1.5 and 8 % (2σ), dependent on spot size and common lead concentrations. Quadrupole LA-ICP-MS allanite age-dating has thus similar uncertainties as do other spot analysis techniques. The new data reduction technique is much less dependent on quality and homogeneity of allanite standard reference materials. This method of correcting for matrix-dependent downhole fractionation evolution opens new possibilities in the field of LA-ICP-MS data acquisition, e.g. the use of a NIST standard glass to date all material types given a set of well constrained reference materials.
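The weighted mean ages quoted above are typically inverse-variance weighted means of single-spot ages; the sketch below shows that standard calculation on placeholder values, not on the SISSb, CAPb, or TARA data.

```python
# Minimal sketch of an inverse-variance weighted mean age with a 2-sigma
# uncertainty, as commonly reported for spot ages; the values below are
# placeholders, not the SISSb, CAPb, or TARA data.
import numpy as np

ages = np.array([30.1, 29.9, 30.3, 30.0])      # single-spot ages (Ma)
sigma = np.array([0.4, 0.5, 0.6, 0.4])         # 1-sigma uncertainties (Ma)

weights = 1.0 / sigma**2
mean_age = np.sum(weights * ages) / np.sum(weights)
two_sigma = 2.0 / np.sqrt(np.sum(weights))
print(f"weighted mean age: {mean_age:.2f} +/- {two_sigma:.2f} Ma (2 sigma)")
```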
[The requirements of standard and conditions of interchangeability of medical articles].
Men'shikov, V V; Lukicheva, T I
2013-11-01
The article considers specific approaches to evaluating the interchangeability of medical articles used for laboratory analysis. In developing standardized analytical technologies for laboratory medicine and formulating the requirements of standards addressed to manufacturers of medical articles, clinically validated requirements must be followed. These requirements include the sensitivity and specificity of techniques, the accuracy and precision of results, and the stability of reagent quality under particular conditions of transportation and storage. The validity of the requirements formulated in standards and addressed to manufacturers of medical articles can be demonstrated using a reference system that includes master forms and standard samples, reference techniques, and reference laboratories. This approach is supported by data from the evaluation of testing systems for measuring thyrotropic hormone, thyroid hormones, and glycated hemoglobin HbA1c. Versions of testing systems can be considered interchangeable only when their results correspond to, and are comparable with, the results of the reference technique. In the absence of a functioning reference system, the resources of the Joint Committee for Traceability in Laboratory Medicine make it possible for manufacturers of reagent kits to apply certified reference materials when developing the manufacture of kits for a large list of analytes.
Computer-Assisted Digital Image Analysis of Plus Disease in Retinopathy of Prematurity.
Kemp, Pavlina S; VanderVeen, Deborah K
2016-01-01
The objective of this study is to review the current state and role of computer-assisted analysis in diagnosis of plus disease in retinopathy of prematurity. Diagnosis and documentation of retinopathy of prematurity are increasingly being supplemented by digital imaging. The incorporation of computer-aided techniques has the potential to add valuable information and standardization regarding the presence of plus disease, an important criterion in deciding the necessity of treatment of vision-threatening retinopathy of prematurity. A review of literature found that several techniques have been published examining the process and role of computer aided analysis of plus disease in retinopathy of prematurity. These techniques use semiautomated image analysis techniques to evaluate retinal vascular dilation and tortuosity, using calculated parameters to evaluate presence or absence of plus disease. These values are then compared with expert consensus. The study concludes that computer-aided image analysis has the potential to use quantitative and objective criteria to act as a supplemental tool in evaluating for plus disease in the setting of retinopathy of prematurity.
Gold-standard for computer-assisted morphological sperm analysis.
Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen
2017-04-01
Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, and thus no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments comparing common sperm head description and classification techniques. This classification baseline is intended to serve as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, which is achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate the inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in the morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is best suited for tackling the problem of sperm head classification. We discovered that the correct classification rate was highly variable when trying to discriminate among non-normal sperm heads. By using the Fourier descriptor and SVM, we achieved the best mean correct classification: only 49%. We conclude that the SCIAN-MorphoSpermGS will provide a standard tool for evaluation of characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a specific shape-based descriptor for human sperm heads and a specific classification approach to tackle the problem of high variability within subcategories of abnormal sperm cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
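A hedged sketch of one cell of such a classification baseline follows: shape descriptors fed to an SVM and scored by cross-validated accuracy. The descriptor matrix and labels are synthetic stand-ins for the Hu, Zernike, or Fourier features and the five expert-assigned classes, not the SCIAN-MorphoSpermGS data.

```python
# Hedged sketch of one cell of a descriptor + SVM classification baseline;
# the descriptor matrix and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_heads, n_descriptors = 200, 15
X = rng.standard_normal((n_heads, n_descriptors))   # shape descriptors
y = rng.integers(0, 5, n_heads)   # normal/tapered/pyriform/small/amorphous

accuracy = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("mean correct classification rate:", accuracy.mean())
```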
Rathi, Monika; Ahrenkiel, S P; Carapella, J J; Wanlass, M W
2013-02-01
Given an unknown multicomponent alloy, and a set of standard compounds or alloys of known composition, can one improve upon popular standards-based methods for energy dispersive X-ray (EDX) spectrometry to quantify the elemental composition of the unknown specimen? A method is presented here for determining elemental composition of alloys using transmission electron microscopy-based EDX with appropriate standards. The method begins with a discrete set of related reference standards of known composition, applies multivariate statistical analysis to those spectra, and evaluates the compositions with a linear matrix algebra method to relate the spectra to elemental composition. By using associated standards, only limited assumptions about the physical origins of the EDX spectra are needed. Spectral absorption corrections can be performed by providing an estimate of the foil thickness of one or more reference standards. The technique was applied to III-V multicomponent alloy thin films: composition and foil thickness were determined for various III-V alloys. The results were then validated by comparing with X-ray diffraction and photoluminescence analysis, demonstrating accuracy of approximately 1% in atomic fraction.
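A minimal sketch of the standards-based idea described above: reference spectra of known composition are reduced by multivariate analysis (PCA here) and related to composition by a linear least-squares map, which is then applied to an unknown spectrum. All arrays, the number of standards, and the use of PCA are illustrative assumptions, not the authors' implementation.

```python
# Sketch: relate reference EDX spectra to known compositions via PCA
# scores and a linear least-squares map, then estimate an unknown.
# All arrays and settings are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
ref_spectra = rng.random((6, 1024))        # 6 reference standards, 1024 channels
ref_comp = rng.dirichlet(np.ones(3), 6)    # known atomic fractions (3 elements)

pca = PCA(n_components=3).fit(ref_spectra)
scores = pca.transform(ref_spectra)

# Least-squares linear relation: composition ~ scores @ B
B, *_ = np.linalg.lstsq(scores, ref_comp, rcond=None)

unknown = rng.random(1024)                 # spectrum of the unknown specimen
est_comp = pca.transform(unknown[None, :]) @ B
print("estimated atomic fractions:", est_comp)
```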
Anneken, David; Striebich, Richard; DeWitt, Matthew J; Klingshirn, Christopher; Corporan, Edwin
2015-03-01
Aircraft turbine engines are a significant source of particulate matter (PM) and gaseous emissions in the vicinity of airports and military installations. Hazardous air pollutants (HAPs) (e.g., formaldehyde, benzene, naphthalene and other compounds) associated with aircraft emissions are an environmental concern both in flight and at ground level. Therefore, effective sampling, identification, and accurate measurement of these trace species are important to assess their environmental impact. This effort evaluates two established ambient air sampling and analysis methods, U.S. Environmental Protection Agency (EPA) Method TO-11A and National Institute for Occupational Safety and Health (NIOSH) Method 1501, for potential use to quantify HAPs from aircraft turbine engines. The techniques were used to perform analysis of the exhaust from a T63 turboshaft engine, and were examined using certified gas standards transferred through the heated sampling systems used for engine exhaust gaseous emissions measurements. Test results show that the EPA Method TO-11A (for aldehydes) and NIOSH Method 1501 (for semivolatile hydrocarbons) were effective techniques for the sampling and analysis of most HAPs of interest. Both methods showed reasonable extraction efficiencies of HAP species from the sorbent tubes, with the exception of acrolein, styrene, and phenol, which were not well quantified. Formaldehyde measurements using dinitrophenylhydrazine (DNPH) tubes (EPA method TO-11A) were accurate for gas-phase standards, and compared favorably to measurements using gas-phase Fourier-transform infrared (FTIR) spectroscopy. In general, these two standard methodologies proved to be suitable techniques for field measurement of turbine engine HAPs within a reasonable (5-10 minutes) sampling period. Details of the tests, the analysis methods, calibration procedures, and results from the gas standards and T63 engine tested using a conventional JP-8 jet fuel are provided. HAPs from aviation-related sources are important because of their adverse health and environmental impacts in and around airports and flight lines. Simpler, more convenient techniques to measure the important HAPs, especially aldehydes and volatile organic HAPs, are needed to provide information about their occurrence and assist in the development of engines that emit fewer harmful emissions.
Evaluation of digestion methods for analysis of trace metals in mammalian tissues and NIST 1577c.
Binder, Grace A; Metcalf, Rainer; Atlas, Zachary; Daniel, Kenyon G
2018-02-15
Digestion techniques for ICP analysis have been poorly studied for biological samples. This report describes an optimized method for analysis of trace metals that can be used across a variety of sample types. Digestion methods were tested and optimized with the analysis of trace metals in cancerous as compared to normal tissue as the end goal. Anthropological, forensic, oncological and environmental research groups can employ this method reasonably cheaply and safely whilst still being able to compare between laboratories. We examined combined HNO3 and H2O2 digestion at 170 °C for human, porcine and bovine samples whether they are frozen, fresh or lyophilized powder. Little discrepancy is found between microwave digestion and PFA Teflon pressure vessels. The elements of interest (Cu, Zn, Fe and Ni) yielded consistently higher and more accurate values on standard reference material than samples heated to 75 °C or samples that utilized HNO3 alone. Use of H2SO4 does not improve homogeneity of the sample and lowers precision during ICP analysis. High temperature digestions (>165 °C) using a combination of HNO3 and H2O2 as outlined are proposed as a standard technique for all mammalian tissues, specifically, human tissues and yield greater than 300% higher values than samples digested at 75 °C regardless of the acid or acid combinations used. The proposed standardized technique is designed to accurately quantify potential discrepancies in metal loads between cancerous and healthy tissues and applies to numerous tissue studies requiring quick, effective and safe digestions. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Sharp, William G.; Berry, Rashelle C.; McCracken, Courtney; Nuhu, Nadrat N.; Marvel, Elizabeth; Saulnier, Celine A.; Klin, Ami; Jones, Warren; Jaquess, David L.
2013-01-01
We conducted a comprehensive review and meta-analysis of research regarding feeding problems and nutrient status among children with autism spectrum disorders (ASD). The systematic search yielded 17 prospective studies involving a comparison group. Using rigorous meta-analysis techniques, we calculated the standardized mean difference (SMD) with…
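The standardized mean difference referred to above is commonly computed as Cohen's d with a small-sample (Hedges' g) correction; a minimal sketch on invented group summaries follows.

```python
# Minimal sketch of the standardized mean difference (Cohen's d with the
# Hedges' g small-sample correction) for one hypothetical study comparing
# an ASD group with a comparison group; the summary values are invented.
import math

n1, mean1, sd1 = 40, 12.5, 3.1      # ASD group
n2, mean2, sd2 = 45, 10.2, 2.8      # comparison group

pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
cohens_d = (mean1 - mean2) / pooled_sd
hedges_g = cohens_d * (1 - 3 / (4 * (n1 + n2) - 9))   # small-sample correction
print(f"SMD (Hedges' g): {hedges_g:.2f}")
```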
Percy, Andrew J; Yang, Juncong; Chambers, Andrew G; Mohammed, Yassene; Miliotis, Tasso; Borchers, Christoph H
2016-01-01
Quantitative mass spectrometry (MS)-based approaches are emerging as a core technology for addressing health-related queries in systems biology and in the biomedical and clinical fields. In several 'omics disciplines (proteomics included), an approach centered on selected or multiple reaction monitoring (SRM or MRM)-MS with stable isotope-labeled standards (SIS), at the protein or peptide level, has emerged as the most precise technique for quantifying and screening putative analytes in biological samples. To enable the widespread use of MRM-based protein quantitation for disease biomarker assessment studies and its ultimate acceptance for clinical analysis, the technique must be standardized to facilitate precise and accurate protein quantitation. To that end, we have developed a number of kits for assessing method/platform performance, as well as for screening proposed candidate protein biomarkers in various human biofluids. Collectively, these kits utilize a bottom-up LC-MS methodology with SIS peptides as internal standards and quantify proteins using regression analysis of standard curves. This chapter details the methodology used to quantify 192 plasma proteins of high-to-moderate abundance (covering a six-order-of-magnitude range, from 31 mg/mL for albumin to 18 ng/mL for peroxiredoxin-2), and a 21-protein subset thereof. We also describe the application of this method to patient samples for biomarker discovery and verification studies. Additionally, we introduce our recently developed Qualis-SIS software, which is used to expedite the analysis and assessment of protein quantitation data in control and patient samples.
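Quantitation by regression on a standard curve, as used by such kits, reduces to fitting a line to known concentration-response pairs and back-calculating the unknown; the sketch below uses invented response ratios, not kit data.

```python
# Sketch of quantitation by regression on a standard curve: known
# SIS-normalized responses at known amounts define a line, from which an
# unknown's amount is back-calculated.  Values are placeholders.
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])       # known amounts (fmol)
response = np.array([0.11, 0.52, 1.05, 4.9, 10.2])   # peak-area ratio (light/heavy)

slope, intercept = np.polyfit(conc, response, 1)     # linear standard curve

unknown_response = 3.4
unknown_conc = (unknown_response - intercept) / slope
print(f"back-calculated amount: {unknown_conc:.1f} fmol")
```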
Sampling methods for microbiological analysis of red meat and poultry carcasses.
Capita, Rosa; Prieto, Miguel; Alonso-Calleja, Carlos
2004-06-01
Microbiological analysis of carcasses at slaughterhouses is required in the European Union for evaluating the hygienic performance of carcass production processes as required for effective hazard analysis critical control point implementation. The European Union microbial performance standards refer exclusively to the excision method, even though swabbing using the wet/dry technique is also permitted when correlation between both destructive and nondestructive methods can be established. For practical and economic reasons, the swab technique is the most extensively used carcass surface-sampling method. The main characteristics, advantages, and limitations of the common excision and swabbing methods are described here.
Leischik, Roman; Littwitz, Henning; Dworrak, Birgit; Garg, Pankaj; Zhu, Meihua; Sahn, David J; Horlitz, Marc
2015-01-01
Left atrial (LA) functional analysis has an established role in assessing left ventricular diastolic function. The current standard echocardiographic parameters used to study left ventricular diastolic function include pulsed-wave Doppler mitral inflow analysis, tissue Doppler imaging measurements, and LA dimension estimation. However, the above-mentioned parameters do not directly quantify LA performance. Deformation studies using strain and strain-rate imaging to assess LA function were validated in previous research, but this technique is not currently used in routine clinical practice. This review discusses the history, importance, and pitfalls of strain technology for the analysis of LA mechanics.
NASA Technical Reports Server (NTRS)
Allen, Thomas R. (Editor); Emerson, Charles W. (Editor); Quattrochi, Dale A. (Editor); Arnold, James E. (Technical Monitor)
2001-01-01
This special issue continues the precedent of the Association of American Geographers (AAG) Remote Sensing Specialty Group (RSSG) of publishing selected articles in Geocarto International as a by-product of the AAG annual meeting. As editors, we issued a solicitation earlier this year for papers to be published in a special issue of Geocarto International that were presented in RSSG-sponsored sessions at the 2001 AAG annual meeting held in New York City on February 27-March 3. Although not an absolute requisite for publication, the vast majority of the papers in this special issue were presented at this year's AAG meeting in New York. Other articles in this issue that were not part of a paper or poster session at the 2001 AAG meeting are authored by RSSG members. Under the auspices of the RSSG, this special Geocarto International issue provides even more compelling evidence of the inextricable linkage between remote sensing and geography. The papers in this special issue fall into four general themes: 1) Urban Analysis and Techniques for Urban Analysis; 2) Land Use/Land Cover Analysis; 3) Fire Modeling Assessment; and 4) Techniques. The first four papers herein are concerned with the use of remote sensing for analysis of urban areas, and with the use or development of techniques to better characterize urban areas using remote sensing data. As the lead paper in this grouping, Rashed et al. examine the use of spectral mixture analysis (SMA) for analyzing satellite imagery of urban areas as opposed to more 'standard' methods of classification. Here SMA has been applied to IRS-1C satellite multispectral imagery to extract measures that better describe the 'anatomy' of the greater Cairo, Egypt region. Following this paper, Weng and Lo describe how Landsat TM data have been used to monitor land cover types and to estimate biomass parameters within an urban environment. The research reported in this paper applies an integrated GIS (Geographic Information System) approach for detecting urban growth and assessing its impact on biomass in the Zhujiang Delta, China. The remaining two papers in this first grouping deal with improved techniques for characterizing and analyzing urban areas using remote sensing data. Myint examines the use of texture analysis to better classify urban features. Here wavelet analysis has been employed to assist in deriving a more robust classification of the urban environment from high spatial resolution, multispectral aircraft data. Mesev provides insight on how population census data can be used to enhance the overall robustness of urban image classification through the modification of the standard maximum likelihood image analysis technique.
Setting technical standards for visual assessment procedures
Kenneth H. Craik; Nickolaus R. Feimer
1979-01-01
Under the impetus of recent legislative and administrative mandates concerning analysis and management of the landscape, governmental agencies are being called upon to adopt or develop visual resource and impact assessment (VRIA) systems. A variety of techniques that combine methods of psychological assessment and landscape analysis to serve these purposes is being...
Personal Constructions of Biological Concepts--The Repertory Grid Approach
ERIC Educational Resources Information Center
McCloughlin, Thomas J. J.; Matthews, Philip S. C.
2017-01-01
This work discusses repertory grid analysis as a tool for investigating the structures of students' representations of biological concepts. Repertory grid analysis provides the researcher with a variety of techniques that are not associated with standard methods of concept mapping for investigating conceptual structures. It can provide valuable…
Mathematical Creativity and Mathematical Aptitude: A Cross-Lagged Panel Analysis
ERIC Educational Resources Information Center
Tyagi, Tarun Kumar
2016-01-01
Cross-lagged panel correlation (CLPC) analysis has been used to identify causal relationships between mathematical creativity and mathematical aptitude. For this study, 480 8th standard students were selected through a random cluster technique from 9 intermediate and high schools of Varanasi, India. Mathematical creativity and mathematical…
Jung, Hae-Jin; Eom, Hyo-Jin; Kang, Hyun-Woo; Moreau, Myriam; Sobanska, Sophie; Ro, Chul-Un
2014-08-21
In this work, quantitative energy-dispersive electron probe X-ray microanalysis (ED-EPMA) (called low-Z particle EPMA), Raman microspectrometry (RMS), and attenuated total reflectance Fourier transform infrared spectroscopic (ATR-FTIR) imaging were applied in combination for the analysis of the same individual airborne particles for the first time. After examining individual particles of micrometer size by low-Z particle EPMA, consecutive examinations by RMS and ATR-FTIR imaging of the same individual particles were then performed. The relocation of the same particles on Al or Ag foils was successfully carried out among the three standalone instruments for several standard samples and an indoor airborne particle sample, resulting in the successful acquisition of quality spectral data from the three single-particle analytical techniques. The combined application of the three techniques to several different standard particles confirmed that those techniques provided consistent and complementary chemical composition information on the same individual particles. Further, it was clearly demonstrated that the three different types of spectral and imaging data from the same individual particles in an indoor aerosol sample provided richer information on physicochemical characteristics of the particle ensemble than that obtainable by the combined use of two single-particle analytical techniques.
Novel methods of imaging and analysis for the thermoregulatory sweat test.
Carroll, Michael Sean; Reed, David W; Kuntz, Nancy L; Weese-Mayer, Debra Ellyn
2018-06-07
The thermoregulatory sweat test (TST) can be central to the identification and management of disorders affecting sudomotor function and small sensory and autonomic nerve fibers, but the cumbersome nature of the standard testing protocol has prevented its widespread adoption. A high resolution, quantitative, clean and simple assay of sweating could significantly improve identification and management of these disorders. Images from 89 clinical TSTs were analyzed retrospectively using two novel techniques. First, using the standard indicator powder, skin surface sweat distributions were determined algorithmically for each patient. Second, a fundamentally novel method using thermal imaging of forced evaporative cooling was evaluated through comparison with the standard technique. Correlation and receiver operating characteristic analyses were used to determine the degree of match between these methods, and the potential limits of thermal imaging were examined through cumulative analysis of all studied patients. Algorithmic encoding of sweating and non-sweating regions produces a more objective analysis for clinical decision making. Additionally, results from the forced cooling method correspond well with those from indicator powder imaging, with a correlation across spatial regions of -0.78 (CI: -0.84 to -0.71). The method works similarly across body regions, and frame-by-frame analysis suggests the ability to identify sweating regions within about 1 second of imaging. While algorithmic encoding can enhance the standard sweat testing protocol, thermal imaging with forced evaporative cooling can dramatically improve the TST by making it less time-consuming and more patient-friendly than the current approach.
NASA Astrophysics Data System (ADS)
Jeong, C.; Om, J.; Hwang, J.; Joo, K.; Heo, J.
2013-12-01
In recent years, the frequency of extreme floods has been increasing due to climate change and global warming. Severe flood damage is mainly caused by the collapse of flood-control structures such as dams and dikes. To reduce these disasters, disaster management systems (DMS) built on flood forecasting, inundation mapping, and Emergency Action Plans (EAPs) have been studied. Estimating inundation damage and preparing a practical EAP are especially crucial to a DMS. However, it is difficult to predict inundation and take proper action through a DMS in a real emergency, because in Korea the techniques for estimating inundation damage are not integrated and the EAP is supplied only as a document. In this study, an integrated simulation system including rainfall frequency analysis, rainfall-runoff modeling, inundation prediction, surface runoff analysis, and inland flood analysis was developed. Using this system coupled with standard GIS data, inundation damage can be estimated comprehensively and automatically. A standard EAP based on BIM (Building Information Modeling) was also established in the system. It is therefore expected that inundation damage over an entire area, including buildings, can be predicted and managed with this system.
NASA Technical Reports Server (NTRS)
Wingard, Charles D.
1999-01-01
White Hypalon paint is brush-applied as a moisture barrier coating over cork surfaces on each of the two Space Shuttle SRBs. Fine cracks have been observed in the Hypalon coating three times historically on laboratory witness panels, but never on flight hardware. Recent samples of the cracked and standard ("good") Hypalon were removed from cork surfaces and were tested by Thermal Gravimetric Analysis (TGA), Thermomechanical Analysis (TMA), and Differential Scanning Calorimetry (DSC) thermal analysis techniques. The TGA data showed that at 700 °C, where only paint pigment solids remain, the cracked material had about 9 weight percent more material remaining than the standard material, probably indicating incomplete mixing of the paint before it was brush-applied to produce the cracked material. Use of the TMA film tension method showed that the average static modulus vs. temperature was about 3 to 6 times higher for the cracked material than for the standard material, indicating a much higher stiffness for the cracked Hypalon. The TMA data also showed that an increased coating thickness for the cracked Hypalon was not a factor in the anomaly.
Brain tumor classification using AFM in combination with data mining techniques.
Huml, Marlene; Silye, René; Zauner, Gerald; Hutterer, Stephan; Schilcher, Kurt
2013-01-01
Although classification of astrocytic tumors is standardized by the WHO grading system, which is mainly based on microscopy-derived, histomorphological features, there is great interobserver variability. The main causes are thought to be the complexity of morphological details varying from tumor to tumor and from patient to patient, variations in the technical histopathological procedures like staining protocols, and finally the individual experience of the diagnosing pathologist. Thus, to raise astrocytoma grading to a more objective standard, this paper proposes a methodology based on atomic force microscopy (AFM) derived images made from histopathological samples in combination with data mining techniques. By comparing AFM images with corresponding light microscopy images of the same area, the progressive formation of cavities due to cell necrosis was identified as a typical morphological marker for a computer-assisted analysis. Using genetic programming as a tool for feature analysis, a best model was created that achieved 94.74% classification accuracy in distinguishing grade II tumors from grade IV ones. Combined with modern image analysis techniques, AFM may thus become an important tool in astrocytic tumor diagnosis. In this way, patients suffering from grade II tumors, which carry a lower risk of malignant transformation, can be identified unambiguously and would benefit from early adjuvant therapies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, L; Ziemer, B; Sadeghi, B
Purpose: To evaluate the accuracy of dynamic CT myocardial perfusion measurement using first pass analysis (FPA) and maximum slope models. Methods: A swine animal model was prepared by percutaneous advancement of an angioplasty balloon into the proximal left anterior descending (LAD) coronary artery to induce varying degrees of stenosis. Maximal hyperaemia was achieved in the LAD with an intracoronary adenosine drip (240 µg/min). Serial microsphere and contrast (370 mg/mL iodine, 30 mL, 5 mL/s) injections were made over a range of induced stenoses, and dynamic imaging was performed using a 320-row CT scanner at 100 kVp and 200 mA. The FPA CT perfusion technique was used to make vessel-specific myocardial perfusion measurements. CT perfusion measurements using the FPA and maximum slope models were validated using colored microspheres as the reference gold standard. Results: Perfusion measurements using the FPA technique (P-FPA) showed good correlation with minimal offset when compared to perfusion measurements using microspheres (P-Micro) as the reference standard (P-FPA = 0.96 P-Micro + 0.05, R² = 0.97, RMSE = 0.19 mL/min/g). In contrast, the maximum slope model technique (P-MS) was shown to underestimate perfusion when compared to microsphere perfusion measurements (P-MS = 0.42 P-Micro − 0.48, R² = 0.94, RMSE = 3.3 mL/min/g). Conclusion: The results indicate the potential for significant improvements in accuracy of dynamic CT myocardial perfusion measurement using the first pass analysis technique as compared with the standard maximum slope model.
NASA Technical Reports Server (NTRS)
Leveson, Nancy
1987-01-01
Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude requires: looking at what you do NOT want software to do along with what you want it to do; and assuming things will go wrong. New procedures and changes to entire software development process are necessary: special software safety analysis techniques are needed; and design techniques, especially eliminating complexity, can be very helpful.
Role of capillary electrophoresis in the fight against doping in sports.
Harrison, Christopher R
2013-08-06
At present the role of capillary electrophoresis in the detection of doping agents in athletes is, for the most part, nonexistent. More traditional techniques, namely gas and liquid chromatography with mass spectrometric detection, remain the gold standard of antidoping tests. This Feature will investigate the in-roads that capillary electrophoresis has made, the limitations that the technique suffers from, and where the technique may grow into being a key tool for antidoping analysis.
[The evaluation of costs: standards of medical care and clinical statistic groups].
Semenov, V Iu; Samorodskaia, I V
2014-01-01
The article presents a comparative analysis of techniques for evaluating the costs of hospital treatment using medical-economic standards of medical care and clinical-statistical groups. The technique of cost evaluation based on clinical-statistical groups was developed almost fifty years ago and is widely applied in a number of countries. In Russia, payment for a completed case of treatment on the basis of medical-economic standards is currently the main mode of payment for hospital care; it is only loosely a Russian analogue of the internationally prevalent system of diagnosis-related groups. The tariffs for these cases of treatment, as opposed to clinical-statistical groups, are calculated on the basis of standards of care approved by the Minzdrav of Russia. Information derived from generalizing the treatment of real patients is not applied.
Benson, Sarah J; Lennard, Christopher J; Hill, David M; Maynard, Philip; Roux, Claude
2010-01-01
A significant amount of research has been conducted into the use of stable isotopes to assist in determining the origin of various materials. The research conducted in the forensic field shows the potential of isotope ratio mass spectrometry (IRMS) to provide a level of discrimination not achievable utilizing traditional forensic techniques. Despite the research there have been few, if any, publications addressing the validation and measurement uncertainty of the technique for forensic applications. This study, the first in a planned series, presents validation data for the measurement of bulk nitrogen isotope ratios in ammonium nitrate (AN) using the DELTA(plus)XP (Thermo Finnigan) IRMS instrument equipped with a ConFlo III interface and FlashEA 1112 elemental analyzer (EA). Appropriate laboratory standards, analytical methods and correction calculations were developed and evaluated. A validation protocol was developed in line with the guidelines provided by the National Association of Testing Authorities, Australia (NATA). Performance characteristics including: accuracy, precision/repeatability, reproducibility/ruggedness, robustness, linear range, and measurement uncertainty were evaluated for the measurement of nitrogen isotope ratios in AN. AN (99.5%) and ammonium thiocyanate (99.99+%) were determined to be the most suitable laboratory standards and were calibrated against international standards (certified reference materials). All performance characteristics were within an acceptable range when potential uncertainties, including the manufacturer's uncertainty of the technique and standards, were taken into account. The experiments described in this article could be used as a model for validation of other instruments for similar purposes. Later studies in this series will address the more general issue of demonstrating that the IRMS technique is scientifically sound and fit-for-purpose in the forensic explosives analysis field.
Performance evaluation of the atmospheric phase of aeromaneuvering orbital transfer vehicles
NASA Technical Reports Server (NTRS)
Powell, R. W.; Stone, H. W.; Naftel, J. C.
1984-01-01
Studies are underway to design reusable orbital transfer vehicles that would be used to transfer payloads from low-earth orbit to higher orbits and return. One promising concept is to use an atmospheric pass on the return leg to reduce the amount of fuel for the mission. This paper discusses a six-degree-of-freedom simulation analysis for two configurations, a low-lift-to-drag ratio configuration and a medium-lift-to-drag ratio configuration using both a predictive guidance technique and an adaptive guidance technique. Both guidance schemes were evaluated using the 1962 standard atmosphere and three atmospheres that had been derived from three entries of the Space Shuttle. The predictive technique requires less reaction control system activity for both configurations, but because of the limited number of updates and because each update used the 1962 standard atmosphere, the adaptive technique produces more accurate exit conditions.
Flat-plate solar array project process development area: Process research of non-CZ silicon material
NASA Technical Reports Server (NTRS)
Campbell, R. B.
1986-01-01
Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The use of an excimer laser and high temperature short time diffusion experiment were both more successful than the diffusion at standard temperature and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.
NASA Technical Reports Server (NTRS)
Green, R. N.
1981-01-01
The shape factor, parameter estimation, and deconvolution data analysis techniques were applied to the same set of Earth emitted radiation measurements to determine the effects of different techniques on the estimated radiation field. All three techniques are defined and their assumptions, advantages, and disadvantages are discussed. Their results are compared globally, zonally, regionally, and on a spatial spectrum basis. The standard deviations of the regional differences in the derived radiant exitance varied from 7.4 W·m⁻² to 13.5 W·m⁻².
ERIC Educational Resources Information Center
Lindle, Jane Clark; Stalion, Nancy; Young, Lu
2005-01-01
Kentucky's accountability system includes a school-processes audit known as Standards and Indicators for School Improvement (SISI), which is in a nascent stage of validation. Content validity methods include comparison to instruments measuring similar constructs as well as other techniques such as job analysis. This study used a two-phase process…
Field methods and data processing techniques associated with mapped inventory plots
William A. Bechtold; Stanley J. Zarnoch
1999-01-01
The U.S. Forest Inventory and Analysis (FIA) and Forest Health Monitoring (FHM) programs utilize a fixed-area mapped-plot design as the national standard for extensive forest inventories. The mapped-plot design is explained, as well as the rationale for its selection as the national standard. Ratio-of-means estimators are presented as a method to process data from...
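As a rough illustration of a ratio-of-means estimator of the kind referred to above, the sketch below expands per-plot volume by the ratio of summed volumes to summed measured plot areas; the plot values and stratum area are hypothetical, and this is not the FIA/FHM production method.

```python
# Rough illustration of a generic ratio-of-means estimator; plot values
# and the stratum area are invented placeholders.
import numpy as np

measured_area = np.array([0.95, 1.00, 0.80, 1.00])    # mapped plot areas (ha)
plot_volume = np.array([210.0, 250.0, 180.0, 240.0])  # wood volume per plot (m^3)

ratio = plot_volume.sum() / measured_area.sum()       # ratio of means (m^3/ha)
total_area_ha = 12000.0                               # hypothetical stratum area
print(f"estimated total volume: {ratio * total_area_ha:.0f} m^3")
```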
Cost-effectiveness of the streamflow-gaging program in Wyoming
Druse, S.A.; Wahl, K.L.
1988-01-01
This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error/station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)
Ford, Kristina L.; Zeng, Wei; Heazlewood, Joshua L.; ...
2015-08-28
The analysis of post-translational modifications (PTMs) by proteomics is regarded as a technically challenging undertaking. While in recent years approaches to examine and quantify protein phosphorylation have greatly improved, the analysis of many protein modifications, such as glycosylation, is still regarded as problematic. Limitations in the standard proteomics workflow, such as use of suboptimal peptide fragmentation methods, can significantly prevent the identification of glycopeptides. The current generation of tandem mass spectrometers has made available a variety of fragmentation options, many of which are becoming standard features on these instruments. Lastly, we have used three common fragmentation techniques, namely CID, HCD, and ETD, to analyze a glycopeptide and highlight how an integrated fragmentation approach can be used to identify the modified residue and characterize the N-glycan on a peptide.
Requirements analysis, domain knowledge, and design
NASA Technical Reports Server (NTRS)
Potts, Colin
1988-01-01
Two improvements to current requirements analysis practices are suggested: domain modeling, and the systematic application of analysis heuristics. Domain modeling is the representation of relevant application knowledge prior to requirements specification. Artificial intelligence techniques may eventually be applicable for domain modeling. In the short term, however, restricted domain modeling techniques, such as that in JSD, will still be of practical benefit. Analysis heuristics are standard patterns of reasoning about the requirements. They usually generate questions of clarification or issues relating to completeness. Analysis heuristics can be represented and therefore systematically applied in an issue-based framework. This is illustrated by an issue-based analysis of JSD's domain modeling and functional specification heuristics. They are discussed in the context of the preliminary design of simple embedded systems.
Advanced analysis techniques for uranium assay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.
2001-01-01
Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. The active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication (f4) and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and also on empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.
Toward standardized mapping for left atrial analysis and cardiac ablation guidance
NASA Astrophysics Data System (ADS)
Rettmann, M. E.; Holmes, D. R.; Linte, C. A.; Packer, D. L.; Robb, R. A.
2014-03-01
In catheter-based cardiac ablation, the pulmonary vein ostia are important landmarks for guiding the ablation procedure, and for this reason, have been the focus of many studies quantifying their size, structure, and variability. Analysis of pulmonary vein structure, however, has been limited by the lack of a standardized reference space for population-based studies. Standardized maps are important tools for characterizing anatomic variability across subjects with the goal of separating normal inter-subject variability from abnormal variability associated with disease. In this work, we describe a novel technique for computing flat maps of left atrial anatomy in a standardized space. A flat map of left atrial anatomy is created by casting a single ray through the volume and systematically rotating the camera viewpoint to obtain the entire field of view. The technique is validated by assessing preservation of relative surface areas and distances between the original 3D geometry and the flat map geometry. The proposed methodology is demonstrated on 10 subjects, which are subsequently combined to form a probabilistic map of anatomic location for each of the pulmonary vein ostia and the boundary of the left atrial appendage. The probabilistic map demonstrates that the locations of the inferior ostia have higher variability than those of the superior ostia, and the variability of the left atrial appendage is similar to that of the superior pulmonary veins. This technique could also have potential application in mapping electrophysiology data, radio-frequency ablation burns, or treatment planning in cardiac ablation therapy.
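The ray-casting construction can be sketched as follows; this is a schematic illustration under simplifying assumptions (a binary left-atrial segmentation on a regular voxel grid, a fixed interior viewpoint, nearest-voxel sampling), not the authors' implementation, and all names are hypothetical.

```python
import numpy as np

def flat_map(seg: np.ndarray, center: np.ndarray,
             n_theta: int = 180, n_phi: int = 360,
             step: float = 0.5, max_dist: float = 200.0) -> np.ndarray:
    """Cast one ray per (theta, phi) viewing direction from an interior point
    and record the distance at which the ray leaves the binary segmentation.
    The result is a 2-D 'flat map' of radial distance to the chamber wall."""
    thetas = np.linspace(0.0, np.pi, n_theta)                 # polar angle
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    fmap = np.full((n_theta, n_phi), np.nan)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            d = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            r = step
            while r < max_dist:
                p = np.round(center + r * d).astype(int)
                if (np.any(p < 0) or np.any(p >= np.array(seg.shape))
                        or seg[tuple(p)] == 0):
                    fmap[i, j] = r        # first exit from the chamber
                    break
                r += step
    return fmap

# Hypothetical usage: seg is a 3-D uint8 array with 1 inside the left atrium.
# fmap = flat_map(seg, center=np.array([128.0, 128.0, 80.0]))
```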
Tam Tam, Kiran Babu; Dozier, James; Martin, James Nello
2012-04-01
A systematic review of the literature was conducted to answer the following question: are there enhancements to standard peripartum hysterectomy technique that minimize unintentional urinary tract (UT) injury in pregnancies complicated by invasive placental attachment (INPLAT)? A PubMed search of English language articles on INPLAT published by June 2010 was conducted. Data regarding the following parameters was required for inclusion in the quantitative analysis of the review's objective: (1) type of INPLAT, (2) details pertaining to medical and surgical management of INPLAT, and (3) complications, if any, associated with management. An attempt was made to identify approaches that may lower the risk of unintentional UT injury. Most cases (285 of 292) were managed by hysterectomy. There were 83 (29%) cases of unintentional UT injury. Antenatal diagnosis of INPLAT lowered the rate of UT injury (39% vs. 63%; P = 0.04). Information regarding surgical technique or medical management was available for 90 cases; 14 of these underwent a standard hysterectomy technique. Methotrexate treatment and 11 modifications of the surgical technique were associated with 16% unintentional UT injury rate as opposed to 57% for standard hysterectomy (P = 0.002). The use of ureteral stents reduced risk of urologic injury (P = 0.01). Multiple logistic regression analysis identified antenatal diagnosis as the significant predictor of an intact UT. Antenatal diagnosis of INPLAT is paramount to minimize UT injury. Utilization of management modifications identified in this review may reduce urologic injury due to INPLAT.
Nagar, Y S; Singh, S; Kumar, S; Lal, P
2004-01-01
The advantage of 4-field radiation to the pelvis is that the use of lateral portals spares a portion of the small bowel anteriorly and rectum posteriorly. The standard lateral portals defined in textbooks are not always adequate, especially in advanced cancer cervix. An analysis was done to determine adequacy of margins of standard lateral pelvic portals with CECT-defined tumor volumes. The study included 40 patients of FIGO stage IIB and IIIB treated definitively for cancer cervix between 1998 and 2000. An inadequate margin was defined if the cervical growth and uterus were not encompassed by the 95% isodose. An inadequate posterior margin was common with bulky disease (P = 0.06) and with retroverted uterus (P = 0.08). Menopausal status, FIGO stage, associated myoma, and age were of no apparent prognostic significance. Bulk remained significant on multivariate analysis. An inadequate anterior margin was common in premenopausal patients (P = 0.01), with an anteverted uterus (P = 0.02), with associated myoma (P = 0.01), and in younger patients (P = 0.03). It was not influenced by bulk or stage. Menopausal status and associated myoma remained significant on multivariate analysis. Without the knowledge of precise tumor volume, the 4-field technique with standard portals is potentially risky as it may underdose the tumor through the lateral portals, and the standard AP/PA portals are a safer option.
NASA Technical Reports Server (NTRS)
Kurtenbach, F. J.
1979-01-01
The technique, which relies on afterburner duct pressure measurements and empirical corrections to an ideal one-dimensional flow analysis to determine thrust, is presented. A comparison of the calculated and facility-measured thrust values is reported. The simplified model is compared with the engine manufacturer's gas generator model. The evaluation was conducted over a range of Mach numbers from 0.80 to 2.00 and at altitudes from 4020 meters to 15,240 meters. The effects of variations in inlet total temperature from standard day conditions were explored. Engine conditions were varied from those normally scheduled for flight. The technique was found to be accurate to 2.89 percent at the two-standard-deviation level, with accuracy a strong function of afterburner duct pressure difference.
Determination of dynamic fracture toughness using a new experimental technique
NASA Astrophysics Data System (ADS)
Cady, Carl M.; Liu, Cheng; Lovato, Manuel L.
2015-09-01
In other studies, dynamic fracture toughness has been measured using Charpy impact and modified Hopkinson bar techniques. In this paper, results will be shown for the measurement of fracture toughness using a new test geometry. The crack propagation velocities range from ~0.15 mm/s to 2.5 m/s. Digital image correlation (DIC) will be the technique used to measure both the strain and the crack growth rates. The boundary of the crack is determined using the correlation coefficient generated during image analysis and, with interframe timing, the crack growth rate and crack opening can be determined. A comparison of static and dynamic loading experiments will be made for brittle polymeric materials. The analysis technique presented by Sammis et al. [1] is a semi-empirical solution; however, additional Linear Elastic Fracture Mechanics analysis of the strain fields generated as part of the DIC analysis allows for the more commonly used method resembling the crack tip opening displacement (CTOD) experiment. It should be noted that this technique was developed because limited amounts of material were available and crack growth rates were too fast for a standard CTOD method.
Biomechanical analysis of occipitocervical stability afforded by three fixation techniques.
Helgeson, Melvin D; Lehman, Ronald A; Sasso, Rick C; Dmitriev, Anton E; Mack, Andrew W; Riew, K Daniel
2011-03-01
Occipital condyle screws appear to be a novel technique that demands biomechanical consideration. It has the potential to achieve fixation anterior to the axis of rotation while offering a point of fixation in line with the C1/C2 screws. To compare the segmental stability and range of motion (ROM) of standard occipitocervical (OC) screw/rod and plate constructs versus a new technique that incorporates occipital condyle fixation. Human cadaveric biomechanical analysis. After intact analysis, 10 fresh-frozen human cadaveric OC spine specimens were instrumented bilaterally with C1 lateral mass screws and C2 pedicle screws. Additional occipital instrumentation was tested in random order under the following conditions: standard occipitocervical plate/rod system (Vertex Max; Medtronic, Inc., Minneapolis, MN, USA); occipital condyle screws alone; and occipital condyle screws with the addition of an eyelet screw placed into the occiput bilaterally. After nondestructive ROM testing, specimens were evaluated under computed tomography (CT) and underwent destructive forward flexion failure comparing Group 1 to Group 3. There was no significant difference in OC (Occiput-C1) axial rotation and flexion/extension ROM between the standard occipitocervical plate/rod system (Group 1) and the occipital condyle screws with one eyelet screw bilaterally (Group 3). Furthermore, the occipital condyle screws alone (Group 2) did allow significantly more flexion/extension compared with Group 1. Interestingly, the two groups with occipital condyle screws (Groups 2 and 3) had significantly less lateral bending compared with Group 1. During CT analysis, the mean occipital condyle width was 10.8 mm (range, 9.1-12.7 mm), and the mean condylar length was 24.3 mm (range, 20.2-28.5). On destructive testing, there was no significant difference in forward flexion failure between Groups 1 and 3. With instrumentation across the mobile OC junction, our results indicate that similar stability can be achieved with occipital condyle screws/eyelet screws compared with the standard occipitocervical plate/rod system. Published by Elsevier Inc.
ERIC Educational Resources Information Center
Hoffer, Bates
Dialect analysis should follow the procedure for analysis of a new language: collection of a corpus of words, stories, and sentences and identifying structural features of phonology, morphology, syntax, and lexicon. Contrastive analysis between standard English and the native language is used and the ethnic dialect of English is described and…
Application handbook for a Standardized Control Module (SCM) for DC-DC converters, volume 1
NASA Astrophysics Data System (ADS)
Lee, F. C.; Mahmoud, M. F.; Yu, Y.
1980-04-01
The standardized control module (SCM) was developed for application in the buck, boost and buck/boost DC-DC converters. The SCM used multiple feedback loops to provide improved input line and output load regulation, a stable feedback control system, good dynamic transient response and adaptive compensation of the control loop for changes in open-loop gain and output filter time constants. The necessary modeling and analysis tools to aid the design engineer in the application of the SCM to DC-DC converters were developed. The SCM functional block diagram and the different analysis techniques were examined. The average time domain analysis technique was chosen as the basic analytical tool. The power stage transfer functions were developed for the buck, boost and buck/boost converters. The analog signal and digital signal processor transfer functions were developed for the three DC-DC converter types using the constant on-time, constant off-time and constant frequency control laws.
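For context, the sketch below evaluates a textbook continuous-conduction-mode buck power stage control-to-output transfer function, Gvd(s) = Vin / (L·C·s² + (L/R)·s + 1); the component values are hypothetical, and this generic second-order model only illustrates what a power stage transfer function looks like. It is not the handbook's averaged SCM model.

```python
import numpy as np

Vin, L, C, R = 28.0, 220e-6, 100e-6, 10.0   # hypothetical buck converter values

def gvd(f_hz):
    """Control(duty)-to-output transfer function of an ideal CCM buck stage."""
    s = 2j * np.pi * np.asarray(f_hz, float)
    return Vin / (L * C * s**2 + (L / R) * s + 1.0)

f = np.logspace(1, 5, 200)                   # 10 Hz .. 100 kHz
mag_db = 20.0 * np.log10(np.abs(gvd(f)))
print(f"low-frequency gain {mag_db[0]:.1f} dB, "
      f"LC resonance near {1.0 / (2 * np.pi * np.sqrt(L * C)):.0f} Hz")
```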
High-coverage quantitative proteomics using amine-specific isotopic labeling.
Melanson, Jeremy E; Avery, Steven L; Pinto, Devanand M
2006-08-01
Peptide dimethylation with isotopically coded formaldehydes was evaluated as a potential alternative to techniques such as the iTRAQ method for comparative proteomics. The isotopic labeling strategy and custom-designed protein quantitation software were tested using protein standards and then applied to measure protein levels associated with Alzheimer's disease (AD). The method provided high accuracy (10% error), precision (14% RSD) and coverage (70%) when applied to the analysis of a standard solution of BSA by LC-MS/MS. The technique was then applied to measure protein abundance levels in brain tissue afflicted with AD relative to normal brain tissue. 2-D LC-MS analysis identified 548 unique proteins (p<0.05). Of these, 349 were quantified with two or more peptides that met the statistical criteria used in this study. Several classes of proteins exhibited significant changes in abundance. For example, elevated levels of antioxidant proteins and decreased levels of mitochondrial electron transport proteins were observed. The results demonstrate the utility of the labeling method for high-throughput quantitative analysis.
Structure of Nano-sized CeO2 Materials: Combined Scattering and Spectroscopic Investigations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchbank, Huw R.; Clark, Adam H.; Hyde, Timothy I.
Here, the nature of nano-sized ceria, CeO2, systems was investigated using neutron and X-ray diffraction and X-ray absorption spectroscopy. Whilst both diffraction and total pair distribution functions (PDFs) revealed that in all the samples the occupancy of both Ce4+ and O2- is very close to the ideal stoichiometry, the analysis using the reverse Monte Carlo technique revealed significant disorder around oxygen atoms in the nano-sized ceria samples in comparison to the highly crystalline NIST standard. In addition, the analysis reveals that the main differences observed in the pair correlations from various X-ray and neutron diffraction techniques were attributed to the particle size of the CeO2 prepared by the three reported methods. Furthermore, detailed analysis of the Ce L3- and K-edge EXAFS data support this finding; in particular, the decrease in higher-shell coordination numbers with respect to the NIST standard is attributed to differences in particle size.
Gene Identification Algorithms Using Exploratory Statistical Analysis of Periodicity
NASA Astrophysics Data System (ADS)
Mukherjee, Shashi Bajaj; Sen, Pradip Kumar
2010-10-01
Studying periodic patterns is a standard line of attack for recognizing DNA sequences in gene identification and similar problems, yet surprisingly little significant work has been done in this direction. This paper studies statistical properties of complete-genome DNA sequences using a new technique. A DNA sequence is converted to a numeric sequence using various types of mappings, and a standard Fourier technique is applied to study the periodicity. Distinct statistical behaviour of the periodicity parameters is found in coding and non-coding sequences, which can be used to distinguish between these parts. Here, DNA sequences of Drosophila melanogaster were analyzed with significant accuracy.
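A common concrete choice is a binary indicator mapping for each base followed by an FFT, with coding regions showing elevated spectral power at period three. The sketch below illustrates that idea; the mapping and the simple peak-to-average statistic are illustrative, not necessarily the mappings or statistics used by the authors.

```python
import numpy as np

def period3_signal(seq: str) -> float:
    """Map a DNA string to four binary indicator sequences, sum their power
    spectra, and return the power at period 3 relative to the mean spectral
    power (a crude peak-to-average measure of 3-base periodicity)."""
    seq = seq.upper()
    N = len(seq)
    power = np.zeros(N)
    for base in "ACGT":
        x = np.fromiter((1.0 if c == base else 0.0 for c in seq), float, N)
        power += np.abs(np.fft.fft(x)) ** 2
    k3 = N // 3                      # frequency bin corresponding to period 3
    return power[k3] / power[1:N // 2].mean()

# A perfectly period-3 string gives a much larger value than random DNA would.
print(period3_signal("ATG" * 80))
```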
New mechanistic insights in the NH 3-SCR reactions at low temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggeri, Maria Pia; Selleri, Tomasso; Nova, Isabella
2016-05-06
The present study is focused on the investigation of the low temperature Standard SCR reaction mechanism over Fe- and Cu-promoted zeolites. Different techniques are employed, including in situ DRIFTS, transient reaction analysis and chemical trapping techniques. The results present strong evidence of nitrite formation in the oxidative activation of NO and of their role in SCR reactions. These elements lead to a deeper understanding of the standard SCR chemistry at low temperature and can potentially improve the consistency of mechanistic mathematical models. Furthermore, comprehension of the mechanism on a fundamental level can contribute to the development of improved SCR catalysts.
Coplen, T.B.; Wildman, J.D.; Chen, J.
1991-01-01
Improved precision in the H2-H2O equilibration method for δD analysis has been achieved in an automated system. Reduction in the 1-σ standard deviation of a single mass-spectrometer analysis to 1.3‰ is achieved by (1) bonding catalyst to glass rods and assigning use to specific equilibration chambers to monitor performance of catalyst, (2) improving the apparatus design, and (3) reducing the H3+ contribution of the mass-spectrometer ion source. For replicate analysis of a water sample, the standard deviation improved to 0.8‰. H2S-bearing samples and samples as small as 0.1 mL can be analyzed routinely with this method.
Segmentation of Unstructured Datasets
NASA Technical Reports Server (NTRS)
Bhat, Smitha
1996-01-01
Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
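One simple way to extract coherent regions from a scalar field on an unstructured grid is to threshold the node values and take connected components over the mesh edge graph. The sketch below illustrates that generic approach; the thesis's actual algorithms are not detailed in the abstract, and the names here are hypothetical.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def extract_regions(scalar: np.ndarray, edges: np.ndarray, threshold: float):
    """Label connected regions of an unstructured grid where scalar > threshold.
    scalar: (n_nodes,) node values; edges: (n_edges, 2) node-index pairs taken
    from the mesh connectivity. Background nodes are labeled -1."""
    keep = scalar > threshold
    e = edges[keep[edges[:, 0]] & keep[edges[:, 1]]]   # edges fully above threshold
    n = scalar.size
    adj = coo_matrix((np.ones(len(e)), (e[:, 0], e[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    labels[~keep] = -1
    return labels

# Hypothetical usage: nodes and edges from a CFD mesh, scalar = vorticity magnitude.
```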
Kummalue, Tanawan; Chuphrom, Anchalee; Sukpanichanant, Sanya; Pongpruttipan, Tawatchai; Sukpanichanant, Sathien
2010-05-19
Malignant lymphoma, especially non-Hodgkin lymphoma, is one of the most common hematologic malignancies in Thailand. The diagnosis of malignant lymphoma is often problematic, especially in early stages of the disease. Detection of antigen receptor gene rearrangement, including T cell receptor (TCR) and immunoglobulin heavy chain (IgH), by polymerase chain reaction followed by heteroduplex analysis has currently become standard, whereas fluorescent fragment analysis (GeneScan) has been used as a confirmation test. In this study, three techniques were compared: thermocycler polymerase chain reaction (PCR) followed by heteroduplex analysis and polyacrylamide gel electrophoresis, GeneScan analysis, and real-time PCR with high-resolution melting curve analysis (HRM). The comparison was carried out with DNA extracted from paraffin-embedded tissues diagnosed as B-cell non-Hodgkin lymphoma. Specific PCR primer sequences for IgH gene variable region 3, including fluorescence-labeled IgH primers, were used and results were compared with HRM. In conclusion, the detection of IgH gene rearrangement by HRM in the LightCycler System showed potential for distinguishing monoclonality from polyclonality in B-cell non-Hodgkin lymphoma. Malignant lymphoma, especially non-Hodgkin lymphoma, is one of the most common hematologic malignancies in Thailand. The incidence rate as reported by the Ministry of Public Health is 3.1 per 100,000 population in females, whereas the rate in males is 4.5 per 100,000 population [1]. At Siriraj Hospital, new cases diagnosed as malignant lymphoma averaged 214.6 cases/year [2]. The diagnosis of malignant lymphoma is often problematic, especially in early stages of the disease. Therefore, detection of antigen receptor gene rearrangement, including T cell receptor (TCR) and immunoglobulin heavy chain (IgH), by polymerase chain reaction (PCR) assay has recently become a standard laboratory test for discrimination of reactive from malignant clonal lymphoproliferation [3,4]. Analyzing DNA extracted from formalin-fixed, paraffin-embedded tissues by multiplex PCR techniques is more rapid, accurate and highly sensitive. Measuring the size of the amplicon from PCR analysis can be used to diagnose malignant lymphoma, with a monoclonal pattern showing specific and distinct bands detected on acrylamide gel electrophoresis. However, this technique has some limitations, and some patients might require a further confirmation test such as GeneScan or fragment analysis [5,6]. The GeneScan technique, or fragment analysis, reflects the size and peak of DNA by using capillary gel electrophoresis. This technique is highly sensitive and can detect 0.5-1% of clonal lymphoid cells. It measures the amplicons by using various fluorescently labeled primers at the forward or reverse side and a specific size standard. Using a Genetic Analyzer machine and GeneMapper software (Applied Bioscience, USA), the monoclonal pattern revealed one single, sharp and high peak at the specific size corresponding to the acrylamide gel pattern, whereas the polyclonal pattern showed multiple small peaks condensed around the same size standard. This is the most sensitive and accurate technique; however, it usually requires extensive technical experience and is also of high cost [7]. Therefore, rapid and more cost-effective techniques are being sought. LightCycler PCR performs the diagnostic detection of the amplicon via melting curve analysis within 2 hours with the use of a specific dye [8,9].
This dye is of two types: one known as SYBR Green I, which is nonspecific, and the other, used for high-resolution melting (HRM) analysis, which is highly sensitive, more accurate and stable. Several reports demonstrated that this new instrument combined with DNA-intercalating dyes can be used to discriminate sequence changes in a PCR amplicon without manual handling of the PCR product [10,11]. Therefore, current investigations using melting curve analysis are being developed [12,13]. In this study, three different techniques were compared to evaluate the suitability of LightCycler PCR with HRM as the clonal diagnostic tool for IgH gene rearrangement in B-cell non-Hodgkin lymphoma, i.e. thermocycler PCR followed by heteroduplex analysis and PAGE, GeneScan analysis, and LightCycler PCR with HRM.
Differences in head impulse test results due to analysis techniques.
Cleworth, Taylor W; Carpenter, Mark G; Honegger, Flurin; Allum, John H J
2017-01-01
Different analysis techniques are used to define vestibulo-ocular reflex (VOR) gain between eye and head angular velocity during the video head impulse test (vHIT). Comparisons would aid selection of gain techniques best related to head impulse characteristics and promote standardisation. Compare and contrast known methods of calculating vHIT VOR gain. We examined lateral canal vHIT responses recorded from 20 patients twice within 13 weeks of acute unilateral peripheral vestibular deficit onset. Ten patients were tested with an ICS Impulse system (GN Otometrics) and 10 with an EyeSeeCam (ESC) system (Interacoustics). Mean gain and variance were computed with area, average sample gain, and regression techniques over specific head angular velocity (HV) and acceleration (HA) intervals. Results for the same gain technique were not different between measurement systems. Area and average sample gain yielded equally lower variances than regression techniques. Gains computed over the whole impulse duration were larger than those computed for increasing HV. Gain over decreasing HV was associated with larger variances. Gains computed around peak HV were smaller than those computed around peak HA. The median gain over 50-70 ms was not different from gain around peak HV. However, depending on technique used, the gain over increasing HV was different from gain around peak HA. Conversion equations between gains obtained with standard ICS and ESC methods were computed. For low gains, the conversion was dominated by a constant that needed to be added to ESC gains to equal ICS gains. We recommend manufacturers standardize vHIT gain calculations using 2 techniques: area gain around peak HA and peak HV.
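As a point of reference, the 'area' gain is the ratio of the areas under the eye and head angular-velocity curves over the impulse. The sketch below uses a fixed head-velocity threshold to define the impulse window; the ICS and ESC systems define their analysis windows differently, so this is only an illustration of the area technique.

```python
import numpy as np

def _trapz(y: np.ndarray, x: np.ndarray) -> float:
    """Trapezoidal integral of y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def area_vor_gain(t, head_vel, eye_vel, thresh_deg_s: float = 20.0) -> float:
    """'Area' VOR gain over a head impulse. The impulse window is taken, as a
    simplifying assumption, to be the samples where |head velocity| exceeds
    thresh_deg_s; eye velocity is assumed to be already desaccaded."""
    t, hv, ev = (np.asarray(a, float) for a in (t, head_vel, eye_vel))
    win = np.abs(hv) > thresh_deg_s
    return _trapz(np.abs(ev[win]), t[win]) / _trapz(np.abs(hv[win]), t[win])
```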
ERIC Educational Resources Information Center
Embrey, Karen K.
2012-01-01
Cognitive task analysis (CTA) is a knowledge elicitation technique employed for acquiring expertise from domain specialists to support the effective instruction of novices. CTA guided instruction has proven effective in improving surgical skills training for medical students and surgical residents. The standard, current method of teaching clinical…
A Molecular Iodine Spectral Data Set for Rovibronic Analysis
ERIC Educational Resources Information Center
Williamson, J. Charles; Kuntzleman, Thomas S.; Kafader, Rachael A.
2013-01-01
A data set of 7,381 molecular iodine vapor rovibronic transitions between the X and B electronic states has been prepared for an advanced undergraduate spectroscopic analysis project. Students apply standard theoretical techniques to these data and determine the values of three X-state constants (image omitted) and four B-state constants (image…
Analysis of He I 1083 nm Imaging Spectroscopy Using a Spectral Standard
NASA Technical Reports Server (NTRS)
Malanushenko, Elena V.; Jones, Harrison P.
2004-01-01
We develop a technique for the analysis of He I 1083 nanometer spectra which addresses several difficulties through determination of a continuum background by comparison with a well-calibrated standard and through removal of nearby solar and telluric blends by differential comparison to an average spectrum. The method is compared with earlier analysis of imaging spectroscopy obtained at the National Solar Observatory/Kitt Peak Vacuum Telescope (NSO/KPVT) with the NASA/NSO Spectromagnetograph (SPM). We examine distributions of Doppler velocity and line width as a function of central intensity for an active region, filament, quiet Sun, and coronal hole. For our example, we find that line widths and central intensity are oppositely correlated in a coronal hole and quiet Sun. Line widths are comparable to the quiet Sun in the active region, are systematically lower in the filament, and extend to higher values in the coronal hole. Outward velocities of approximately 2 to 4 kilometers per second are typically observed in the coronal hole. The sensitivity of these results to analysis technique is discussed.
Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis
Jamshidy, Ladan; Faraji, Payam; Sharifi, Roohollah
2016-01-01
Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by standard method for full crowns with processed preparation finish line of 1 mm depth and convergence angle of 3-4°. Impression was made 20 times with one-stage technique and 20 times with two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and preparation finish line of plaster dies was vertically determined in mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of independent test showed that the mean value of the marginal gap obtained by one-stage impression technique was higher than that of two-stage impression technique. Further, there was no significant difference between one- and two-stage impression techniques in mid buccal region, but a significant difference was reported between the two impression techniques in MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for two-stage impression technique than for the one-stage impression technique. PMID:28003824
Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.
Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah
2016-01-01
Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by standard method for full crowns with processed preparation finish line of 1 mm depth and convergence angle of 3-4°. Impression was made 20 times with one-stage technique and 20 times with two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and preparation finish line of plaster dies was vertically determined in mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of independent test showed that the mean value of the marginal gap obtained by one-stage impression technique was higher than that of two-stage impression technique. Further, there was no significant difference between one- and two-stage impression techniques in mid buccal region, but a significant difference was reported between the two impression techniques in MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for two-stage impression technique than for the one-stage impression technique.
Macready, Anna L; Fallaize, Rosalind; Butler, Laurie T; Ellis, Judi A; Kuznesof, Sharron; Frewer, Lynn J; Celis-Morales, Carlos; Livingstone, Katherine M; Araújo-Soares, Vera; Fischer, Arnout Rh; Stewart-Knox, Barbara J; Mathers, John C; Lovegrove, Julie A
2018-04-09
To determine the efficacy of behavior change techniques applied in dietary and physical activity intervention studies, it is first necessary to record and describe techniques that have been used during such interventions. Published frameworks used in dietary and smoking cessation interventions undergo continuous development, and most are not adapted for Web-based delivery. The Food4Me study (N=1607) provided the opportunity to use existing frameworks to describe standardized Web-based techniques employed in a large-scale, internet-based intervention to change dietary behavior and physical activity. The aims of this study were (1) to describe techniques embedded in the Food4Me study design and explain the selection rationale and (2) to demonstrate the use of behavior change technique taxonomies, develop standard operating procedures for training, and identify strengths and limitations of the Food4Me framework that will inform its use in future studies. The 6-month randomized controlled trial took place simultaneously in seven European countries, with participants receiving one of four levels of personalized advice (generalized, intake-based, intake+phenotype-based, and intake+phenotype+gene-based). A three-phase approach was taken: (1) existing taxonomies were reviewed and techniques were identified a priori for possible inclusion in the Food4Me study, (2) a standard operating procedure was developed to maintain consistency in the use of methods and techniques across research centers, and (3) the Food4Me behavior change technique framework was reviewed and updated post intervention. An analysis of excluded techniques was also conducted. Of 46 techniques identified a priori as being applicable to Food4Me, 17 were embedded in the intervention design; 11 were from a dietary taxonomy, and 6 from a smoking cessation taxonomy. In addition, the four-category smoking cessation framework structure was adopted for clarity of communication. Smoking cessation texts were adapted for dietary use where necessary. A posteriori, a further 9 techniques were included. Examination of excluded items highlighted the distinction between techniques considered appropriate for face-to-face versus internet-based delivery. The use of existing taxonomies facilitated the description and standardization of techniques used in Food4Me. We recommend that for complex studies of this nature, technique analysis should be conducted a priori to develop standardized procedures and training and reviewed a posteriori to audit the techniques actually adopted. The present framework description makes a valuable contribution to future systematic reviews and meta-analyses that explore technique efficacy and underlying psychological constructs. This was a novel application of the behavior change taxonomies and was the first internet-based personalized nutrition intervention to use such a framework remotely. ClinicalTrials.gov NCT01530139; https://clinicaltrials.gov/ct2/show/NCT01530139 (Archived by WebCite at http://www.webcitation.org/6y8XYUft1). ©Anna L Macready, Rosalind Fallaize, Laurie T Butler, Judi A Ellis, Sharron Kuznesof, Lynn J Frewer, Carlos Celis-Morales, Katherine M Livingstone, Vera Araújo-Soares, Arnout RH Fischer, Barbara J Stewart-Knox, John C Mathers, Julie A Lovegrove. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 09.04.2018.
The dream of a one-stop-shop: Meta-analysis on myocardial perfusion CT.
Pelgrim, Gert Jan; Dorrius, Monique; Xie, Xueqian; den Dekker, Martijn A M; Schoepf, U Joseph; Henzler, Thomas; Oudkerk, Matthijs; Vliegenthart, Rozemarijn
2015-12-01
To determine the diagnostic performance of computed tomography (CT) perfusion techniques for the detection of functionally relevant coronary artery disease (CAD) in comparison to reference standards, including invasive coronary angiography (ICA), single photon emission computed tomography (SPECT), and magnetic resonance imaging (MRI). PubMed, Web of Knowledge and Embase were searched from January 1, 1998 until July 1, 2014. The search yielded 9475 articles. After duplicate removal, 6041 were screened on title and abstract. The resulting 276 articles were independently analyzed in full-text by two reviewers, and included if the inclusion criteria were met. The articles reporting diagnostic parameters including true positive, true negative, false positive and false negative were subsequently evaluated for the meta-analysis. Results were pooled according to CT perfusion technique, namely snapshot techniques: single-phase rest, single-phase stress, single-phase dual-energy stress and combined coronary CT angiography [rest] and single-phase stress, as well the dynamic technique: dynamic stress CT perfusion. Twenty-two articles were included in the meta-analysis (1507 subjects). Pooled per-patient sensitivity and specificity of single-phase rest CT compared to rest SPECT were 89% (95% confidence interval [CI], 82-94%) and 88% (95% CI, 78-94%), respectively. Vessel-based sensitivity and specificity of single-phase stress CT compared to ICA-based >70% stenosis were 82% (95% CI, 64-92%) and 78% (95% CI, 61-89%). Segment-based sensitivity and specificity of single-phase dual-energy stress CT in comparison to stress MRI were 75% (95% CI, 60-85%) and 95% (95% CI, 80-99%). Segment-based sensitivity and specificity of dynamic stress CT perfusion compared to stress SPECT were 77% (95% CI, 67-85) and 89% (95% CI, 78-95%). For combined coronary CT angiography and single-phase stress CT, vessel-based sensitivity and specificity in comparison to ICA-based >50% stenosis were 84% (95% CI, 67-93%) and 93% (95% CI, 89-96%). This meta-analysis shows considerable variation in techniques and reference standards for CT of myocardial blood supply. While CT seems sensitive and specific for evaluation of hemodynamically relevant CAD, studies so far are limited in size. Standardization of myocardial perfusion CT technique is essential. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
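For orientation, pooled sensitivity and specificity can be computed from per-study 2×2 counts. The sketch below uses naive count pooling with normal-approximation intervals, which is simpler than the bivariate or random-effects models usually preferred in such meta-analyses; the counts are hypothetical and the paper's exact pooling model is not reproduced here.

```python
import numpy as np

def pooled_sens_spec(tp, fp, fn, tn):
    """Naive pooled sensitivity/specificity with approximate 95% CI half-widths
    from summed 2x2 counts across studies (illustration only)."""
    tp, fp, fn, tn = (float(np.sum(x)) for x in (tp, fp, fn, tn))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    half = lambda p, n: 1.96 * np.sqrt(p * (1.0 - p) / n)
    return (sens, half(sens, tp + fn)), (spec, half(spec, tn + fp))

# Hypothetical per-study counts for two studies:
(se, d_se), (sp, d_sp) = pooled_sens_spec([40, 55], [5, 8], [6, 9], [60, 70])
print(f"sensitivity {se:.2f} +/- {d_se:.2f}, specificity {sp:.2f} +/- {d_sp:.2f}")
```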
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
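The core propagation step, carrying individual measurement uncertainties through a defining functional expression, can be sketched numerically as below. This first-order (Taylor) propagation with an optional input covariance matrix is a simplified illustration; it omits the paper's treatment of bias terms, calibration-order effects, and modeling error.

```python
import numpy as np

def propagate(f, x, u=None, cov=None, eps=1e-6):
    """Standard uncertainty of f(x) from input standard uncertainties u (or a
    full covariance matrix cov), using numerical partial derivatives."""
    x = np.asarray(x, float)
    J = np.empty(x.size)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps * max(1.0, abs(x[i]))
        J[i] = (f(x + dx) - f(x - dx)) / (2.0 * dx[i])   # central difference
    C = np.diag(np.asarray(u, float) ** 2) if cov is None else np.asarray(cov, float)
    return float(np.sqrt(J @ C @ J))

# Example: dynamic pressure q = 0.5 * rho * V**2 with uncertain rho and V.
q = lambda p: 0.5 * p[0] * p[1] ** 2
print(propagate(q, x=[1.225, 50.0], u=[0.01, 0.5]))   # roughly 33 Pa
```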
Growth rate measurement in free jet experiments
NASA Astrophysics Data System (ADS)
Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent
2017-07-01
An experimental method was developed to measure the growth rate of the capillary instability for free liquid jets. The method uses a standard shadowgraph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, and a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles, obtained under the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free-falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian and a viscoelastic one. Growth rate measurements are found in good agreement with linear stability theory in the Rayleigh regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads-on-a-string structures on the jet. This last result serves to demonstrate the capability of the present technique to explore the dynamics of viscoelastic liquid jets in the future.
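The statistical step can be sketched as follows: for a linearly unstable capillary mode, the perturbation amplitude, and hence the ensemble standard deviation of repeated radius profiles, grows exponentially, so the growth rate follows from a log-linear fit. Converting axial position to time via the jet velocity and assuming purely exponential growth are simplifications of the authors' full procedure.

```python
import numpy as np

def growth_rate(profiles, z, jet_velocity):
    """Estimate the capillary growth rate from repeated jet radius profiles.
    profiles: (n_jets, n_points) radius vs. axial position z for nominally
    identical jets; time is approximated as t = z / jet_velocity. Fits
    ln(std) = ln(A0) + omega * t and returns (omega, A0)."""
    sigma = np.std(np.asarray(profiles, float), axis=0)   # ensemble std at each z
    t = np.asarray(z, float) / jet_velocity
    omega, log_a0 = np.polyfit(t, np.log(sigma), 1)
    return omega, np.exp(log_a0)

# Hypothetical usage: profiles extracted from shadowgraph images of many jets.
```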
Why bundled payments could drive innovation: an example from interventional oncology.
Steele, Joseph R; Jones, A Kyle; Ninan, Elizabeth P; Clarke, Ryan K; Odisio, Bruno C; Avritscher, Rony; Murthy, Ravi; Mahvash, Armeen
2015-03-01
Some have suggested that the current fee-for-service health care payment system in the United States stifles innovation. However, there are few published examples supporting this concept. We implemented an innovative temporary balloon occlusion technique for yttrium 90 radioembolization of nonresectable liver cancer. Although our balloon occlusion technique was associated with similar patient outcomes, lower cost, and faster procedure times compared with the standard-of-care coil embolization technique, our technique failed to gain widespread acceptance. Financial analysis revealed that because the balloon occlusion technique avoided a procedural step associated with a lucrative Current Procedural Terminology billing code, this new technique resulted in a significant decrease in hospital and physician revenue in the current fee-for-service payment system, even though the new technique would provide a revenue enhancement through cost savings in a bundled payment system. Our analysis illustrates how in a fee-for-service payment system, financial disincentives can stifle innovation and advancement of health care delivery. Copyright © 2015 by American Society of Clinical Oncology.
Multi-technique comparison of troposphere zenith delays and gradients during CONT08
NASA Astrophysics Data System (ADS)
Teke, Kamil; Böhm, Johannes; Nilsson, Tobias; Schuh, Harald; Steigenberger, Peter; Dach, Rolf; Heinkelmann, Robert; Willis, Pascal; Haas, Rüdiger; García-Espada, Susana; Hobiger, Thomas; Ichikawa, Ryuichi; Shimizu, Shingo
2011-07-01
CONT08 was a 15 days campaign of continuous Very Long Baseline Interferometry (VLBI) sessions during the second half of August 2008 carried out by the International VLBI Service for Geodesy and Astrometry (IVS). In this study, VLBI estimates of troposphere zenith total delays (ZTD) and gradients during CONT08 were compared with those derived from observations with the Global Positioning System (GPS), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), and water vapor radiometers (WVR) co-located with the VLBI radio telescopes. Similar geophysical models were used for the analysis of the space geodetic data, whereas the parameterization for the least-squares adjustment of the space geodetic techniques was optimized for each technique. In addition to space geodetic techniques and WVR, ZTD and gradients from numerical weather models (NWM) were used from the European Centre for Medium-Range Weather Forecasts (ECMWF) (all sites), the Japan Meteorological Agency (JMA) and Cloud Resolving Storm Simulator (CReSS) (Tsukuba), and the High Resolution Limited Area Model (HIRLAM) (European sites). Biases, standard deviations, and correlation coefficients were computed between the troposphere estimates of the various techniques for all eleven CONT08 co-located sites. ZTD from space geodetic techniques generally agree at the sub-centimetre level during CONT08, and—as expected—the best agreement is found for intra-technique comparisons: between the Vienna VLBI Software and the combined IVS solutions as well as between the Center for Orbit Determination (CODE) solution and an IGS PPP time series; both intra-technique comparisons are with standard deviations of about 3-6 mm. The best inter space geodetic technique agreement of ZTD during CONT08 is found between the combined IVS and the IGS solutions with a mean standard deviation of about 6 mm over all sites, whereas the agreement with numerical weather models is between 6 and 20 mm. The standard deviations are generally larger at low latitude sites because of higher humidity, and the latter is also the reason why the standard deviations are larger at northern hemisphere stations during CONT08 in comparison to CONT02 which was observed in October 2002. The assessment of the troposphere gradients from the different techniques is not as clear because of different time intervals, different estimation properties, or different observables. However, the best inter-technique agreement is found between the IVS combined gradients and the GPS solutions with standard deviations between 0.2 and 0.7 mm.
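The comparison statistics themselves are straightforward; a minimal sketch for two co-located ZTD series sampled at common epochs is given below (interpolation to common epochs and any outlier screening used in the study are omitted).

```python
import numpy as np

def compare_ztd(ztd_a, ztd_b):
    """Bias, standard deviation of differences, and Pearson correlation between
    two zenith total delay series given at common epochs (e.g. VLBI vs. GPS)."""
    a, b = np.asarray(ztd_a, float), np.asarray(ztd_b, float)
    d = a - b
    return d.mean(), d.std(ddof=1), np.corrcoef(a, b)[0, 1]

# Hypothetical values in millimetres at three common epochs:
bias, sd, r = compare_ztd([2312.0, 2318.5, 2309.1], [2310.2, 2320.0, 2307.5])
```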
NASA Astrophysics Data System (ADS)
Trappe, Neil; Murphy, J. Anthony; Withington, Stafford
2003-07-01
Gaussian beam mode analysis (GBMA) offers a more intuitive physical insight into how light beams evolve as they propagate than the conventional Fresnel diffraction integral approach. In this paper we illustrate that GBMA is a computationally efficient, alternative technique for tracing the evolution of a diffracting coherent beam. In previous papers we demonstrated the straightforward application of GBMA to the computation of the classical diffraction patterns associated with a range of standard apertures. In this paper we show how the GBMA technique can be expanded to investigate the effects of aberrations in the presence of diffraction by introducing the appropriate phase error term into the propagating quasi-optical beam. We compare our technique to the standard diffraction integral calculation for coma, astigmatism and spherical aberration, taking—for comparison—examples from the classic text 'Principles of Optics' by Born and Wolf. We show the advantages of GBMA for allowing the defocusing of an aberrated image to be evaluated quickly, which is particularly important and useful for probing the consequences of astigmatism and spherical aberration.
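A one-dimensional sketch of the approach: the aperture field, with an aberration inserted as a phase error, is decomposed into Hermite-Gauss modes, and the field at distance z is re-assembled using the analytic beam-radius, phase-front-curvature and per-mode Gouy-phase evolution. The sketch assumes the mode-set waist lies at the aperture plane and truncates at nmax modes; it illustrates GBMA in one dimension and is not the authors' implementation.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, atan

def hg_mode(n, x, w, R=np.inf, k=None):
    """Normalized 1-D Hermite-Gauss mode of order n with beam radius w and,
    optionally, phase-front curvature R (wavenumber k needed if R is finite)."""
    c = np.zeros(n + 1); c[n] = 1.0
    u = ((2.0 / (pi * w**2))**0.25 / np.sqrt(2.0**n * factorial(n))
         * hermval(np.sqrt(2.0) * x / w, c) * np.exp(-(x / w)**2))
    if np.isfinite(R) and k is not None:
        u = u * np.exp(-1j * k * x**2 / (2.0 * R))
    return u

def propagate_gbma(E0, x, w0, z, lam, nmax=40):
    """Decompose E0(x) into Hermite-Gauss modes with waist w0 at the aperture,
    then rebuild the field at distance z with the per-mode Gouy phase."""
    k = 2.0 * pi / lam
    zR = pi * w0**2 / lam
    wz = w0 * np.sqrt(1.0 + (z / zR)**2)
    Rz = z * (1.0 + (zR / z)**2) if z != 0 else np.inf
    phi = atan(z / zR)                                      # Gouy phase shift
    dx = x[1] - x[0]
    E = np.zeros_like(x, dtype=complex)
    for n in range(nmax):
        cn = np.sum(E0 * np.conj(hg_mode(n, x, w0))) * dx   # mode coefficient
        E += cn * hg_mode(n, x, wz, Rz, k) * np.exp(1j * (n + 0.5) * phi)
    return E

# Top-hat aperture with a (hypothetical) cubic phase error standing in for coma.
x = np.linspace(-0.05, 0.05, 2001)                 # metres
aperture = (np.abs(x) < 0.01).astype(complex)
aperture *= np.exp(1j * 2.0e6 * x**3)              # aberration as a phase term
field_z = propagate_gbma(aperture, x, w0=0.006, z=0.2, lam=3e-3)
```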
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clegg, Samuel M; Barefield, James E; Wiens, Roger C
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by the chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
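A generic sketch of the PLS and PCA steps using scikit-learn is shown below, with random placeholder matrices standing in for the spectra and reference compositions; SIMCA, which builds a separate PCA model per class, is omitted, and none of the preprocessing or validation used by the authors is reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

# X: (n_samples, n_channels) LIBS spectra; Y: (n_samples, n_oxides) reference
# compositions of the standards. Both are random placeholders here.
rng = np.random.default_rng(0)
X = rng.random((18, 2048))
Y = rng.random((18, 9))

pls = PLSRegression(n_components=8).fit(X, Y)   # multivariate calibration model
Y_pred = pls.predict(X)                         # predicted compositions

pca = PCA(n_components=3).fit(X)                # exploratory classification
scores = pca.transform(X)                       # score plot separates rock types
print(pls.score(X, Y), pca.explained_variance_ratio_)
```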
Fifty years of solid-phase extraction in water analysis--historical development and overview.
Liska, I
2000-07-14
The use of an appropriate sample handling technique is a must in the analysis of organic micropollutants in water. The efforts to use a solid phase for the recovery of analytes from a water matrix prior to their detection have a long history. Since the first experimental trials using activated carbon filters that were performed 50 years ago, solid-phase extraction (SPE) has become an established sample preparation technique. The initial experimental applications of SPE resulted in widespread use of this technique in current water analysis and also in the adoption of SPE into standardized analytical methods. During the decades of its evolution, chromatographers became aware of the advantages of SPE and, despite many innovations that appeared in the last decade, new SPE developments are still expected in the future. A brief overview of 50 years of the history of the use of SPE in organic trace analysis of water is given in the present paper.
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to look at signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the useful application of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS Analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966. [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics B, 57, pp. 131-139, April 1993.
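For reference, the non-overlapping Allan deviation of a uniformly sampled series can be computed as in the sketch below; white noise falls off as tau^-1/2 on a log-log Allan plot, while drift makes the curve turn upward at long averaging times. The example series is synthetic.

```python
import numpy as np

def allan_deviation(y, dt, taus):
    """Non-overlapping Allan deviation of a uniformly sampled series y with
    sample interval dt (s), evaluated at averaging times taus (s):
    sigma^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2> over consecutive bins."""
    y = np.asarray(y, float)
    out = []
    for tau in taus:
        m = int(round(tau / dt))            # samples per averaging bin
        if m < 1 or 2 * m > y.size:
            out.append(np.nan)
            continue
        n_bins = y.size // m
        means = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

# Synthetic white-noise series standing in for a 1 Hz CO2 record (ppm).
y = np.random.default_rng(1).normal(400.0, 0.05, 20000)
taus = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]
print(allan_deviation(y, dt=1.0, taus=taus))
```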
Certification of the Uranium Isotopic Ratios in Nbl Crm 112-A, Uranium Assay Standard (Invited)
NASA Astrophysics Data System (ADS)
Mathew, K. J.; Mason, P.; Narayanan, U.
2010-12-01
Isotopic reference materials are needed to validate measurement procedures and to calibrate multi-collector ion counting detector systems. New Brunswick Laboratory (NBL) provides a suite of certified isotopic and assay standards for the US and international nuclear safeguards community. NBL Certified Reference Material (CRM) 112-A Uranium Metal Assay Standard, with a consensus value of 137.88 for the 238U/235U ratio [National Bureau of Standards (NBS, now the National Institute of Standards and Technology) Standard Reference Material (SRM) 960 had been renamed CRM 112-A], is commonly used as a natural uranium isotopic reference material within the earth science community. We have completed the analytical work for characterizing the isotopic composition of NBL CRM 112-A Uranium Assay Standard and NBL CRM 145 (a uranyl nitrate solution prepared from CRM 112-A). The 235U/238U isotopic ratios were characterized using the total evaporation (TE) and the modified total evaporation (MTE) methods. The 234U/238U isotope ratios were characterized using a conventional analysis technique and verified using the ratios measured in the MTE analytical technique. The analysis plan for the characterization work was developed such that isotopic ratios that are traceable to NBL CRM U030-A are obtained. NBL is preparing and will issue a Certificate of Analysis for uranium assay and isotopics. The results of the CRM 112-A certification measurements will be discussed. These results will be compared with the average values from Richter et al. (2010). A comparison of the precision and accuracy of the measurement methods (TE, MTE and conventional) employed in the certification will be presented. The uncertainties in the 235U/238U and 234U/238U ratios, calculated according to the Guide to the Expression of Uncertainty in Measurement (GUM), and the dominant contributors to the combined standard uncertainty will be discussed.
An investigation of dynamic-analysis methods for variable-geometry structures
NASA Technical Reports Server (NTRS)
Austin, F.
1980-01-01
Selected space structure configurations were reviewed in order to define dynamic analysis problems associated with variable geometry. The dynamics of a beam being constructed from a flexible base and the relocation of the completed beam by rotating the remote manipulator system about the shoulder joint were selected. Equations of motion were formulated in physical coordinates for both of these problems, and FORTRAN programs were developed to generate solutions by numerically integrating the equations. These solutions served as a standard of comparison to gauge the accuracy of approximate solution techniques that were developed and studied. Good control was achieved in both problems. Unstable control system coupling with the system flexibility did not occur. An approximate method was developed for each problem to enable the analyst to investigate variable geometry effects during a short time span using standard fixed geometry programs such as NASTRAN. The average angle and average length techniques are discussed.
Synthesis of Hydroxyapatite through Ultrasound and Calcination Techniques
NASA Astrophysics Data System (ADS)
Akindoyo, John O.; Beg, M. D. H.; Ghazali, Suriati; Akindoyo, Edward O.; Jeyaratnam, Nitthiyah
2017-05-01
There is a growing demand for hydroxyapatite (HA), especially in medical applications; however, producing HA that is totally green remains a challenge. In this research, HA was produced from biowaste through ultrasound followed by calcination. Pre-treatment of the biowaste was effectively achieved with the help of ultrasound. After calcination at 950°C, the obtained HA was characterized by thermogravimetric analysis (TGA), X-ray diffraction analysis (XRD) and Fourier transform infrared spectroscopy (FTIR). The spectrum of the produced HA was compared with a standard HA index. The spectrum is in agreement with the standard HA, as confirmed by the FTIR, XRD and TGA results. Furthermore, morphological study of the HA by field emission scanning electron microscopy (FESEM) shows an almost uniform spherical shape for the HA, as expected. Based on the results obtained herein, combining ultrasound with calcination can help to produce pure HA with potential medical applications without the use of any organic solvent.
On the Applications of IBA Techniques to Biological Samples Analysis: PIXE and RBS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falcon-Gonzalez, J. M.; Bernal-Alvarado, J.; Sosa, M.
2008-08-11
The analytical techniques based on ion beams, or IBA techniques, give quantitative information on elemental concentration in samples of a wide variety of nature. In this work, we focus on the PIXE technique, analyzing thick-target biological specimens (TTPIXE) using 3 MeV protons produced by an electrostatic accelerator. A nuclear microprobe was used to perform PIXE and RBS simultaneously, in order to resolve the uncertainties produced in absolute PIXE quantification. The advantages of using both techniques and a nuclear microprobe are discussed. Quantitative results are shown to illustrate the multielemental resolution of the PIXE technique; for this, a blood standard was used.
NASA Astrophysics Data System (ADS)
Baker, Joel; Waight, Tod; Ulfbeck, David
2002-10-01
A method has been developed for the rapid chemical separation and highly reproducible analysis of the rare earth elements (REE) by isotope dilution analysis by means of a multiple collector inductively coupled plasma mass spectrometer (MC-ICP-MS). This technique is superior in terms of the analytical reproducibility or rapidity of analysis compared with quadrupole ICP-MS or with thermal ionization mass spectrometric isotope dilution techniques. Samples are digested by standard hydrofluoric-nitric acid-based techniques and spiked with two mixed spikes. The bulk REE are separated from the sample on a cation exchange column, collecting the middle-heavy and light REE as two groups, which provides a middle-heavy REE cut with sufficient separation of the light from the heavier REE to render oxide interferences trivial, and a Ba-free light REE cut. The heavy (Er-Lu), middle (Eu-Gd), and light REE (La-Eu) concentrations are determined by three short (1 to 2 min) analyses with a CETAC Aridus desolvating nebulizer introduction system. Replicate digestions of international rock standards demonstrate that concentrations can be reproduced to <1%, which reflects weighing errors during digestion and aliquotting as inter-REE ratios reproduce to ≤0.2% (2 SD). Eu and Ce anomalies reproduce to <0.15%. In addition to determining the concentrations of polyisotopic REE by isotope dilution analysis, the concentration of monoisotopic Pr can be measured during the light REE isotope dilution run, by reference to Pr/Ce and Pr/Nd ratios measured in a REE standard solution. Pr concentrations determined in this way reproduce to <1%, and Pr/REE ratios reproduce to <0.4%. Ce anomalies calculated with La and Pr also reproduce to <0.15% (2 SD). The precise Ce (and Eu) anomaly measurements should allow greater use of these features in studying the recycling of materials with these anomalies into the mantle, or redox-induced effects on the REE during recycling and dehydration of oceanic lithosphere, partial melting, metamorphism, alteration, or sedimentation processes. Moreover, this technique consumes very small amounts (subnanograms) of the REE and will allow precise REE determinations to be made on much smaller samples than hitherto possible.
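The core isotope dilution relation, solving the measured mixture ratio for the moles of element contributed by the sample, can be written as a one-line function; the isotope pair and abundances in the example are illustrative placeholders, not the spike compositions used in this work.

```python
def isotope_dilution_moles(R_mix, n_spike, x_i_spk, x_j_spk, x_i_smp, x_j_smp):
    """Moles of analyte element from the sample, given the measured i/j isotope
    ratio of the sample-spike mixture (R_mix), moles of element added with the
    spike (n_spike), and atom fractions of isotopes i and j in spike and sample:
    n_sample = n_spike * (R_mix*x_j_spk - x_i_spk) / (x_i_smp - R_mix*x_j_smp)."""
    return n_spike * (R_mix * x_j_spk - x_i_spk) / (x_i_smp - R_mix * x_j_smp)

# Illustrative Nd example with a 150Nd-enriched spike and the 146Nd/150Nd ratio
# (i = 146Nd, j = 150Nd); the abundances below are approximate, not certified.
n_Nd = isotope_dilution_moles(R_mix=2.8, n_spike=1.0e-9,
                              x_i_spk=0.02, x_j_spk=0.96,
                              x_i_smp=0.172, x_j_smp=0.056)
```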
Metabolomic analysis using porcine skin: a pilot study of analytical techniques.
Wu, Julie; Fiehn, Oliver; Armstrong, April W
2014-06-15
Metabolic byproducts serve as indicators of underlying chemical processes and can provide valuable information on pathogenesis by measuring the amplified output. Standardized techniques for metabolome extraction of skin samples serve as a critical foundation to this field but have not been developed. We sought to determine the optimal cell lysis techniques for skin sample preparation and to compare GC-TOF-MS and UHPLC-QTOF-MS for metabolomic analysis. Using porcine skin samples, we pulverized the skin via various combinations of mechanical techniques for cell lysis. After extraction, the samples were subjected to GC-TOF-MS and/or UHPLC-QTOF-MS. Signal intensities from GC-TOF-MS analysis showed that ultrasonication (2.7 × 10⁷) was most effective for cell lysis when compared to mortar-and-pestle (2.6 × 10⁷), ball mill followed by ultrasonication (1.6 × 10⁷), mortar-and-pestle followed by ultrasonication (1.4 × 10⁷), and homogenization (trial 1: 8.4 × 10⁶; trial 2: 1.6 × 10⁷). Due to the similar signal intensities, ultrasonication and mortar-and-pestle were applied to additional samples and subjected to GC-TOF-MS and UHPLC-QTOF-MS. Ultrasonication yielded greater signal intensities than mortar-and-pestle for 92% of detected metabolites following GC-TOF-MS and for 68% of detected metabolites following UHPLC-QTOF-MS. Overall, ultrasonication is the preferred method for efficient cell lysis of skin tissue for both metabolomic platforms. With standardized sample preparation, metabolomic analysis of skin can serve as a powerful tool in elucidating underlying biological processes in dermatological conditions.
Nakano, Andreza Rodrigues; Bonan, Claudia; Teixeira, Luiz Antônio
2016-01-01
This article discusses the development of techniques for cesarean sections by doctors in Brazil during the 20th century, by analyzing the chapter "Operação Cesárea" (Cesarean Section) of three editions of the textbook Obstetrícia, by Jorge de Rezende. His prominence as an author in obstetrics and his particular style of working created the groundwork for the normalization of the practice of cesarean sections. The networks of meaning practiced within this scientific community included a "provision for feeling and for action" (Fleck) which established the C-section as a "normal" delivery: showing standards that exclude unpredictability, chaos, and dangers associated with the physiology of childbirth, meeting the demand for control, discipline and safety, qualities associated with the practices, techniques and technologies of biomedicine.
A rapid method for estimation of Pu-isotopes in urine samples using high volume centrifuge.
Kumar, Ranjeet; Rao, D D; Dubla, Rupali; Yadav, J R
2017-07-01
The conventional radio-analytical technique used for estimation of Pu isotopes in urine samples involves anion exchange/TEVA column separation followed by alpha spectrometry. This sequence of analysis takes nearly 3-4 days to complete. Excreta analysis results are often required urgently, particularly under repeat and incidental/emergency situations. Therefore, there is a need to reduce the analysis time for the estimation of Pu isotopes in bioassay samples. This paper gives the details of the standardization of a rapid method for estimation of Pu isotopes in urine samples using a multi-purpose centrifuge and TEVA resin followed by alpha spectrometry. The rapid method involves oxidation of urine samples and co-precipitation of plutonium along with calcium phosphate, followed by sample preparation using a high volume centrifuge and separation of Pu using TEVA resin. The Pu fraction was electrodeposited and the activity estimated by alpha spectrometry using 236Pu tracer recovery. Ten routine urine samples of radiation workers were analyzed, and consistent radiochemical tracer recovery was obtained in the range 47-88%, with a mean and standard deviation of 64.4% and 11.3%, respectively. With this newly standardized technique, the whole analytical procedure is completed within 9 h (one working day). Copyright © 2017 Elsevier Ltd. All rights reserved.
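A short sketch of the generic tracer-based activity calculation used with alpha spectrometry is given below; the region-of-interest counts, counting efficiency and tracer activity are hypothetical, and the relations shown are the standard isotope-tracer equations rather than the exact procedure of this paper.

```python
# Generic 236Pu-tracer activity and recovery calculation for alpha spectrometry
# (illustrative numbers; not data from the study).

def analyte_activity(counts_analyte, counts_tracer, tracer_activity_added_bq):
    """Counting efficiency and chemical recovery cancel when both nuclides share the source."""
    return (counts_analyte / counts_tracer) * tracer_activity_added_bq

def chemical_recovery(counts_tracer, efficiency, live_time_s, tracer_activity_added_bq):
    """Fraction of the added tracer recovered on the electrodeposited source."""
    return counts_tracer / (efficiency * live_time_s * tracer_activity_added_bq)

counts_239_240 = 180          # net counts in the 239+240Pu region of interest
counts_236 = 420              # net counts in the 236Pu tracer region
a_tracer = 0.05               # Bq of 236Pu added to the sample

print("239+240Pu activity (Bq):", analyte_activity(counts_239_240, counts_236, a_tracer))
print("tracer recovery:", chemical_recovery(counts_236, 0.25, 80000, a_tracer))
```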
Limitation of the Cavitron technique by conifer pit aspiration.
Beikircher, B; Ameglio, T; Cochard, H; Mayr, S
2010-07-01
The Cavitron technique facilitates time and material saving for vulnerability analysis. The use of rotors with small diameters leads to high water pressure gradients (ΔP) across samples, which may cause pit aspiration in conifers. In this study, the effect of pit aspiration on Cavitron measurements was analysed and a modified 'conifer method' was tested which avoids critical (i.e. pit aspiration inducing) ΔP. Four conifer species were used (Juniperus communis, Picea abies, Pinus sylvestris, and Larix decidua) for vulnerability analysis based on the standard Cavitron technique and the conifer method. In addition, ΔP thresholds for pit aspiration were determined and water extraction curves were constructed. Vulnerability curves obtained with the standard method showed generally a less negative P for the induction of embolism than curves of the conifer method. Differences were species-specific with the smallest effects in Juniperus. Larix showed the most pronounced shifts in P50 (pressure at 50% loss of conductivity) between the standard (-1.5 MPa) and the conifer (-3.5 MPa) methods. Pit aspiration occurred at the lowest ΔP in Larix and at the highest in Juniperus. Accordingly, at a spinning velocity inducing P50, ΔP caused only a 4% loss of conductivity induced by pit aspiration in Juniperus, but about 60% in Larix. Water extraction curves were similar to vulnerability curves indicating that spinning itself did not affect pits. Conifer pit aspiration can have major influences on Cavitron measurements and lead to an overestimation of vulnerability thresholds when a small rotor is used. Thus, the conifer method presented here enables correct vulnerability analysis by avoiding artificial conductivity losses.
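For context, vulnerability curves of this kind are commonly summarized by fitting a sigmoid of percent loss of conductivity (PLC) against xylem pressure and reading off P50. The sketch below fits such a curve to mock Cavitron-style data with SciPy; the sigmoid form and the data are assumptions for illustration, not the authors' fitting procedure.

```python
# Fit a sigmoidal vulnerability curve PLC(P) and extract P50 (mock data).
import numpy as np
from scipy.optimize import curve_fit

def plc(P, a, P50):
    """Percent loss of conductivity vs. xylem pressure P (MPa, negative)."""
    return 100.0 / (1.0 + np.exp(a * (P - P50)))

P = np.array([-0.5, -1.0, -1.5, -2.0, -2.5, -3.0, -3.5, -4.0])      # MPa
loss = np.array([2.0, 5.0, 12.0, 30.0, 55.0, 78.0, 90.0, 97.0])     # % loss of conductivity

(a_fit, p50_fit), _ = curve_fit(plc, P, loss, p0=[2.0, -2.5])
print(f"fitted P50 = {p50_fit:.2f} MPa, slope parameter a = {a_fit:.2f}")
```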
Santiago-Moreno, Julian; Esteso, Milagros Cristina; Villaverde-Morcillo, Silvia; Toledano-Díaz, Adolfo; Castaño, Cristina; Velázquez, Rosario; López-Sebastián, Antonio; Goya, Agustín López; Martínez, Javier Gimeno
2016-01-01
Postcopulatory sexual selection through sperm competition may be an important evolutionary force affecting many reproductive traits, including sperm morphometrics. Environmental factors such as pollutants, pesticides, and climate change may affect different sperm traits, and thus reproduction, in sensitive bird species. Many sperm-handling processes used in assisted reproductive techniques may also affect the size of sperm cells. The accurately measured dimensions of sperm cell structures (especially the head) can thus be used as indicators of environmental influences, in improving our understanding of reproductive and evolutionary strategies, and for optimizing assisted reproductive techniques (e.g., sperm cryopreservation) for use with birds. Computer-assisted sperm morphometry analysis (CASA-Morph) provides an accurate and reliable method for assessing sperm morphometry, reducing the problem of subjectivity associated with human visual assessment. Computerized systems have been standardized for use with semen from different mammalian species. Avian spermatozoa, however, are filiform, limiting their analysis with such systems, which were developed to examine the approximately spherical heads of mammalian sperm cells. To help overcome this, the standardization of staining techniques to be used in computer-assessed light microscopical methods is a priority. The present review discusses these points and describes the sperm morphometric characteristics of several wild and domestic bird species. PMID:27678467
Cosmographic analysis with Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2018-05-01
The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To demonstrate this, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which the convergence of Chebyshev rational functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques, based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
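As a rough illustration of the general idea only (not the authors' actual construction), the sketch below fits a (1,1) rational function built from first-kind Chebyshev polynomials to mock luminosity-distance data; the mock data, the use of redshift directly as the Chebyshev argument, and the starting values are all assumptions.

```python
# Fit a (1,1) rational Chebyshev approximant  d_L(z) ~ (a0*T0 + a1*T1(z)) / (1 + b1*T1(z))
# to mock distance data (illustrative only; not the paper's parametrization).
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import curve_fit

def ratio_cheb(z, a0, a1, b1):
    num = C.chebval(z, [a0, a1])                  # a0*T0(z) + a1*T1(z)
    den = 1.0 + b1 * C.chebval(z, [0.0, 1.0])     # 1 + b1*T1(z)
    return num / den

rng = np.random.default_rng(1)
z = np.linspace(0.05, 1.5, 40)
d_l = 3000.0 * z * (1.0 + 0.3 * z) + rng.normal(0.0, 30.0, z.size)   # mock distances (Mpc)

popt, pcov = curve_fit(ratio_cheb, z, d_l, p0=[0.0, 3000.0, 0.0])
print("fitted coefficients (a0, a1, b1):", popt)
```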
A Test Reliability Analysis of an Abbreviated Version of the Pupil Control Ideology Form.
ERIC Educational Resources Information Center
Gaffney, Patrick V.
A reliability analysis was conducted of an abbreviated, 10-item version of the Pupil Control Ideology Form (PCI), using the Cronbach's alpha technique (L. J. Cronbach, 1951) and the computation of the standard error of measurement. The PCI measures a teacher's orientation toward pupil control. Subjects were 168 preservice teachers from one private…
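For readers unfamiliar with the statistics named here, the sketch below computes Cronbach's alpha and the standard error of measurement for a small mock item-response matrix; the data are invented, not the PCI responses.

```python
# Cronbach's alpha and standard error of measurement (SEM) for an item matrix
# (rows = respondents, columns = items); mock data, not the PCI study sample.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(168, 1))                           # one trait, 168 "respondents"
responses = latent + rng.normal(scale=0.8, size=(168, 10))   # 10 Likert-like items

alpha = cronbach_alpha(responses)
sd_total = responses.sum(axis=1).std(ddof=1)
sem = sd_total * np.sqrt(1.0 - alpha)                        # SEM of the total score
print(f"alpha = {alpha:.2f}, SEM = {sem:.2f}")
```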
Kuhn, M A; Burch, M; Chinnock, R E; Fenton, M J
2017-10-01
Intravascular ultrasound (IVUS) has been routinely used in some centers to investigate cardiac allograft vasculopathy in pediatric heart transplant recipients. We present an alternative method using more sophisticated imaging software. This study presents a comparison of this method with an established standard method. All patients who had IVUS performed in 2014 were retrospectively evaluated. The standard technique consisted of analysis of 10 operator-selected segments along the vessel. Each study was re-evaluated using a longitudinal technique, taken at every third cardiac cycle, along the entire vessel. Semiautomatic edge detection software was used to detect vessel imaging planes. Measurements included outer and inner diameter, total and luminal area, maximal intimal thickness (MIT), and intimal index. Each IVUS was graded for severity using the Stanford classification. All results were given as mean ± standard deviation (SD). Groups were compared using Student t test. A P value <.05 was considered significant. There were 59 IVUS studies performed on 58 patients. There was no statistically significant difference between outer diameter, inner diameter, or total area. In the longitudinal group, there was a significantly smaller luminal area, higher MIT, and higher intimal index. Using the longitudinal technique, there was an increase in Stanford classification in 20 patients. The longitudinal technique appeared more sensitive in assessing the degree of cardiac allograft vasculopathy and may play a role in the increase in the degree of thickening seen. It may offer an alternative way of grading severity of cardiac allograft vasculopathy in pediatric heart transplant recipients. Copyright © 2017 Elsevier Inc. All rights reserved.
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space where the string-induced CMB component has distinct statistical properties to the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10⁻⁷ for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
Percy, Andrew J; Mohammed, Yassene; Yang, Juncong; Borchers, Christoph H
2015-12-01
An increasingly popular mass spectrometry-based quantitative approach for health-related research in the biomedical field involves the use of stable isotope-labeled standards (SIS) and multiple/selected reaction monitoring (MRM/SRM). To improve inter-laboratory precision and enable more widespread use of this 'absolute' quantitative technique in disease-biomarker assessment studies, methods must be standardized. Results/methodology: Using this MRM-with-SIS-peptide approach, we developed an automated method (encompassing sample preparation, processing and analysis) for quantifying 76 candidate protein markers (spanning >4 orders of magnitude in concentration) in neat human plasma. The assembled biomarker assessment kit - the 'BAK-76' - contains the essential materials (SIS mixes), methods (for acquisition and analysis), and tools (Qualis-SIS software) for performing biomarker discovery or verification studies in a rapid and standardized manner.
Garbarino, John R.; Taylor, Howard E.
1987-01-01
Inductively coupled plasma mass spectrometry is employed in the determination of Ni, Cu, Sr, Cd, Ba, Ti, and Pb in nonsaline, natural water samples by stable isotope dilution analysis. Hydrologic samples were directly analyzed without any unusual pretreatment. Interference effects related to overlapping isobars, formation of metal oxide and multiply charged ions, and matrix composition were identified and suitable methods of correction evaluated. A comparability study showed that single-element isotope dilution analysis was only marginally better than sequential multielement isotope dilution analysis. Accuracy and precision of the single-element method were determined on the basis of results obtained for standard reference materials. The instrumental technique was shown to be ideally suited for programs associated with certification of standard reference materials.
Comparison of 2-D and 3-D estimates of placental volume in early pregnancy.
Aye, Christina Y L; Stevenson, Gordon N; Impey, Lawrence; Collins, Sally L
2015-03-01
Ultrasound estimation of placental volume (PlaV) between 11 and 13 wk has been proposed as part of a screening test for small-for-gestational-age babies. A semi-automated 3-D technique, validated against the gold standard of manual delineation, has been found at this stage of gestation to predict small-for-gestational-age at term. Recently, when used in the third trimester, an estimate obtained using a 2-D technique was found to correlate with placental weight at delivery. Given its greater simplicity, the 2-D technique might be more useful as part of an early screening test. We investigated if the two techniques produced similar results when used in the first trimester. The correlation between PlaV values calculated by the two different techniques was assessed in 139 first-trimester placentas. The agreement on PlaV and derived "standardized placental volume," a dimensionless index correcting for gestational age, was explored with the Mann-Whitney test and Bland-Altman plots. Placentas were categorized into five different shape subtypes, and a subgroup analysis was performed. Agreement was poor for both PlaV and standardized PlaV (p < 0.001 and p < 0.001), with the 2-D technique yielding larger estimates for both indices compared with the 3-D method. The mean difference in standardized PlaV values between the two methods was 0.007 (95% confidence interval: 0.006-0.009). The best agreement was found for regular rectangle-shaped placentas (p = 0.438 and p = 0.408). The poor correlation between the 2-D and 3-D techniques may result from the heterogeneity of placental morphology at this stage of gestation. In early gestation, the simpler 2-D estimates of PlaV do not correlate strongly with those obtained with the validated 3-D technique. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
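A compact sketch of the agreement statistics referred to here (mean difference with Bland-Altman limits of agreement, plus a Mann-Whitney test) is shown below on mock paired volume estimates; the numbers are invented, not the study data.

```python
# Bland-Altman style agreement between 2-D and 3-D placental volume estimates
# (mock paired data, not the study measurements).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
vol_3d = rng.normal(60.0, 15.0, 139)             # ml, "reference" 3-D estimates
vol_2d = vol_3d + rng.normal(8.0, 10.0, 139)     # 2-D estimates with bias and scatter

diff = vol_2d - vol_3d
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                    # 95% limits of agreement half-width
stat, p = mannwhitneyu(vol_2d, vol_3d)

print(f"bias = {bias:.1f} ml, limits of agreement = {bias - loa:.1f} to {bias + loa:.1f} ml")
print(f"Mann-Whitney p = {p:.3g}")
```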
Short-Arc Analysis of Intersatellite Tracking Data in a Gravity Mapping Mission
NASA Technical Reports Server (NTRS)
Rowlands, David D.; Ray, Richard D.; Chinn, Douglas S.; Lemoine, Frank G.; Smith, David E. (Technical Monitor)
2001-01-01
A technique for the analysis of low-low intersatellite range-rate data in a gravity mapping mission is explored. The technique is based on standard tracking data analysis for orbit determination but uses a spherical coordinate representation of the 12 epoch state parameters describing the baseline between the two satellites. This representation of the state parameters is exploited to allow the intersatellite range-rate analysis to benefit from information provided by other tracking data types without large simultaneous multiple data type solutions. The technique appears especially valuable for estimating gravity from short arcs (e.g., less than 15 minutes) of data. Gravity recovery simulations which use short arcs are compared with those using arcs a day in length. For a high-inclination orbit, the short-arc analysis recovers low-order gravity coefficients remarkably well, although higher order terms, especially sectorial terms, are less accurate. Simulations suggest that either long or short arcs of GRACE data are likely to improve parts of the geopotential spectrum by orders of magnitude.
NASA Technical Reports Server (NTRS)
Wilson, C.; Dye, R.; Reed, L.
1982-01-01
The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.
Elemental Analysis in Biological Matrices Using ICP-MS.
Hansen, Matthew N; Clogston, Jeffrey D
2018-01-01
The increasing exploration of metallic nanoparticles for use as cancer therapeutic agents necessitates a sensitive technique to track the clearance and distribution of the material once introduced into a living system. Inductively coupled plasma mass spectrometry (ICP-MS) provides a sensitive and selective tool for tracking the distribution of metal components from these nanotherapeutics. This chapter presents a standardized method for processing biological matrices, ensuring complete homogenization of tissues, and outlines the preparation of appropriate standards and controls. The method described herein utilized gold nanoparticle-treated samples; however, the method can easily be applied to the analysis of other metals.
Flowing-water optical power meter for primary-standard, multi-kilowatt laser power measurements
NASA Astrophysics Data System (ADS)
Williams, P. A.; Hadler, J. A.; Cromer, C.; West, J.; Li, X.; Lehman, J. H.
2018-06-01
A primary-standard flowing-water optical power meter for measuring multi-kilowatt laser emission has been built and operated. The design and operational details of this primary standard are described, and a full uncertainty analysis is provided covering the measurement range from 1–10 kW with an expanded uncertainty of 1.2%. Validating measurements at 5 kW and 10 kW show agreement with other measurement techniques to within the measurement uncertainty. This work of the U.S. Government is not subject to U.S. copyright.
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less desirable and less flexible analytical techniques, such as linear interpolation.
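The sketch below illustrates, on invented dose-response data, a simple "transform both sides" use of the Box-Cox transformation before a nonlinear (log-logistic) regression fit. It is a schematic of the general idea under assumed data and starting values, not the exact protocol evaluated in the paper; a Poisson GLM would be the alternative route for count endpoints.

```python
# Box-Cox "transform both sides" with a nonlinear log-logistic dose-response fit
# (invented data, illustrative only).
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([98.0, 95.0, 90.0, 70.0, 35.0, 12.0, 4.0])   # e.g. growth or frond counts

def loglogistic(x, b, e, d):
    """Decreasing log-logistic: upper limit d, slope b, EC50 e."""
    return d / (1.0 + (x / e) ** b)

# Estimate the Box-Cox lambda from the responses, then transform both the
# observations and the model predictions with the same lambda before fitting.
lam = stats.boxcox_normmax(resp)

def model_transformed(x, b, e, d):
    return stats.boxcox(loglogistic(x, b, e, d), lmbda=lam)

popt, _ = curve_fit(model_transformed, conc, stats.boxcox(resp, lmbda=lam),
                    p0=[1.0, 5.0, 100.0], maxfev=10000)
print("slope b, EC50, upper limit d =", popt)
```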
Energy resolution improvement of CdTe detectors by using the principal component analysis technique
NASA Astrophysics Data System (ADS)
Alharbi, T.
2018-02-01
In this paper, we report on the application of the Principal Component Analysis (PCA) technique for the improvement of the γ-ray energy resolution of CdTe detectors. The PCA technique is used to estimate the amount of charge-trapping effect which is reflected in the shape of each detector pulse, thereby correcting for the charge-trapping effect. The details of the method are described and the results obtained with a CdTe detector are shown. We have achieved an energy resolution of 1.8 % (FWHM) at 662 keV with full detection efficiency from a 1 mm thick CdTe detector which gives an energy resolution of 4.5 % (FWHM) by using the standard pulse processing method.
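As a schematic of how pulse-shape information can be used in this way, the sketch below runs PCA on a matrix of digitized detector pulses and regresses the measured amplitude deficit on the leading principal-component score to build a correction. The synthetic pulses and the linear correction are assumptions for illustration, not the paper's algorithm.

```python
# PCA-based charge-trapping correction sketch on synthetic CdTe-like pulses.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_pulses, n_samples = 2000, 256
t = np.arange(n_samples)

# Synthetic unit-energy step pulses whose rise time grows with a "trapping" depth
# parameter, which also reduces the amplitude measured at a fixed shaping time.
depth = rng.uniform(0.0, 1.0, n_pulses)
tau = 5.0 + 40.0 * depth[:, None]
pulses = 1.0 - np.exp(-t[None, :] / tau) + rng.normal(0, 0.01, (n_pulses, n_samples))
amplitude = pulses[:, 40:60].mean(axis=1)          # standard fixed-window amplitude

# The leading PCA score of the normalized shapes tracks the trapping-induced deficit.
shapes = pulses / amplitude[:, None]
score = PCA(n_components=1).fit_transform(shapes).ravel()

slope, intercept = np.polyfit(score, amplitude, 1)
corrected = amplitude - slope * score              # remove the shape-dependent deficit
print("amplitude spread before:", amplitude.std(), "after correction:", corrected.std())
```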
Nonlinear relaxation algorithms for circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, R.A.
Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer-run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.
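To make the relaxation idea concrete, the sketch below solves a toy two-node nonlinear circuit (resistors plus a diode) by Gauss-Seidel relaxation with a damped scalar Newton step per node. This is a generic illustration of nonlinear relaxation under invented component values, not the ITA or WRN algorithms of the thesis.

```python
# Gauss-Seidel nonlinear relaxation on a toy circuit:
# Vs --R1-- node1 --R2-- node2 --diode-- ground
import math

VS, R1, R2 = 5.0, 1e3, 1e3
IS, VT = 1e-12, 0.02585           # diode saturation current and thermal voltage

v1, v2 = 0.0, 0.0
for sweep in range(100):
    v1_old, v2_old = v1, v2
    # Node 1 KCL is linear, so its relaxation update has a closed form.
    v1 = (VS / R1 + v2 / R2) / (1.0 / R1 + 1.0 / R2)
    # Node 2 KCL is nonlinear (diode); solve it with damped scalar Newton steps.
    for _ in range(50):
        f = (v2 - v1) / R2 + IS * (math.exp(v2 / VT) - 1.0)
        df = 1.0 / R2 + (IS / VT) * math.exp(v2 / VT)
        step = -f / df
        v2 += max(-0.1, min(0.1, step))   # limit the step, SPICE-style, to avoid overflow
    if max(abs(v1 - v1_old), abs(v2 - v2_old)) < 1e-9:
        break

print(f"converged after {sweep + 1} sweeps: v1 = {v1:.4f} V, v2 = {v2:.4f} V")
```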
Fundamentals of quantitative dynamic contrast-enhanced MR imaging.
Paldino, Michael J; Barboriak, Daniel P
2009-05-01
Quantitative analysis of dynamic contrast-enhanced MR imaging (DCE-MR imaging) has the power to provide information regarding physiologic characteristics of the microvasculature and is, therefore, of great potential value to the practice of oncology. In particular, these techniques could have a significant impact on the development of novel anticancer therapies as a promising biomarker of drug activity. Standardization of DCE-MR imaging acquisition and analysis to provide more reproducible measures of tumor vessel physiology is of crucial importance to realize this potential. The purpose of this article is to review the pathophysiologic basis and technical aspects of DCE-MR imaging techniques.
Precise measurement of the half-life of the Fermi β decay of ²⁶Alᵐ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Rebecca J.; Thompson, Maxwell N.; Rassool, Roger P.
2011-08-15
State-of-the-art signal digitization and analysis techniques have been used to measure the half-life of the Fermi β decay of ²⁶Alᵐ. The half-life was determined to be 6347.8 ± 2.5 ms. This new datum contributes to the experimental testing of the conserved-vector-current hypothesis and the required unitarity of the Cabibbo-Kobayashi-Maskawa matrix: two essential components of the standard model. Detailed discussion of the experimental techniques and data analysis and a thorough investigation of the statistical and systematic uncertainties are presented.
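As a generic illustration of how a half-life is extracted from digitized decay data (not the actual analysis chain of this experiment), the sketch below fits counts per time bin with an exponential plus a constant background.

```python
# Fit a decay curve N(t) = N0 * exp(-ln2 * t / T_half) + B to mock binned counts.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, t_half, bkg):
    return n0 * np.exp(-np.log(2.0) * t / t_half) + bkg

rng = np.random.default_rng(42)
t = np.arange(0.0, 60000.0, 500.0)                 # ms, bin centres
counts = rng.poisson(decay(t, 5000.0, 6347.8, 20.0))

popt, pcov = curve_fit(decay, t, counts, p0=[4000.0, 6000.0, 10.0],
                       sigma=np.sqrt(np.maximum(counts, 1)), absolute_sigma=True)
t_half, t_half_err = popt[1], np.sqrt(pcov[1, 1])
print(f"T1/2 = {t_half:.1f} +/- {t_half_err:.1f} ms")
```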
Higher-Order Spectral Analysis of F-18 Flight Flutter Data
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Dunn, Shane
2005-01-01
Royal Australian Air Force (RAAF) F/A-18 flight flutter test data is presented and analyzed using various techniques. The data includes high-quality measurements of forced responses and limit cycle oscillation (LCO) phenomena. Standard correlation and power spectral density (PSD) techniques are applied to the data and presented. Novel applications of experimentally-identified impulse responses and higher-order spectral techniques are also applied to the data and presented. The goal of this research is to develop methods that can identify the onset of nonlinear aeroelastic phenomena, such as LCO, during flutter testing.
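For orientation, a standard power spectral density estimate of the kind mentioned here can be computed as in the brief sketch below (Welch's method on a synthetic response signal). The higher-order spectral analysis applied in the paper goes beyond this and is not reproduced; the sampling rate and signal content are assumptions.

```python
# Welch power spectral density of a synthetic "aeroelastic response" signal.
import numpy as np
from scipy.signal import welch

fs = 200.0                                    # Hz sampling rate (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
# A lightly damped 6 Hz mode plus a weak 12 Hz harmonic (a crude LCO-like signature).
x = np.sin(2 * np.pi * 6.0 * t) * np.exp(-0.02 * t) \
    + 0.2 * np.sin(2 * np.pi * 12.0 * t) + 0.3 * rng.normal(size=t.size)

f, pxx = welch(x, fs=fs, nperseg=1024)
print("dominant frequency (Hz):", f[np.argmax(pxx)])
```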
Analysis of Extracellular Vesicles in the Tumor Microenvironment.
Al-Nedawi, Khalid; Read, Jolene
2016-01-01
Extracellular vesicles (ECV) are membrane compartments shed from all types of cells in various physiological and pathological states. In recent years, ECV have gained an increasing interest from the scientific community for their role as an intercellular communicator that plays important roles in modifying the tumor microenvironment. Multiple techniques have been established to collect ECV from conditioned media of cell culture or physiological fluids. The gold standard methodology is differential centrifugation. Although alternative techniques exist to collect ECV, these techniques have not proven suitable as a substitution for the ultracentrifugation procedure.
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
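For contrast with the generalized approach described here, the sketch below builds a conventional nodal admittance matrix for a tiny single-phase network (with an explicit ground reference) and solves for the node voltages; the network values are invented.

```python
# Standard nodal admittance (Y-bus) solve for a toy single-phase network.
# Node 0 is the ground reference; nodes 1 and 2 are solved.
import numpy as np

y_source = 1.0 / 0.1           # admittance of the branch from the source to node 1 (S)
y_line = 1.0 / (0.5 + 0.2j)    # line between nodes 1 and 2
y_load = 1.0 / (10.0 + 2.0j)   # load from node 2 to ground

# Assemble Y for the non-reference nodes [1, 2].
Y = np.array([[y_source + y_line, -y_line],
              [-y_line, y_line + y_load]], dtype=complex)

# Current injections: the 120 V source behind its branch appears as a Norton source.
I = np.array([120.0 * y_source, 0.0], dtype=complex)

V = np.linalg.solve(Y, I)
print("node voltages (V):", np.round(V, 2))
```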
Medvedovici, Andrei; Albu, Florin; Naşcu-Briciu, Rodica Domnica; Sârbu, Costel
2014-02-01
The discrimination power of UV-Vis and (±) electrospray ionization mass spectrometric (ESI-MS) techniques, individually considered or coupled as detectors to reversed-phase liquid chromatography (RPLC), in the characterization of Ginkgo biloba standardized extracts used in herbal medicines and/or dietary supplements, was evaluated with the help of fuzzy hierarchical clustering (FHC). Seventeen batches of commercially available Ginkgo biloba standardized extracts from seven manufacturers were measured during the experiments. All extracts were within the criteria of the official monograph dedicated to dried, refined and quantified Ginkgo extracts in the European Pharmacopoeia. UV-Vis and (±) ESI-MS spectra of the bulk standardized extracts in methanol were acquired. Additionally, an RPLC separation based on a simple gradient elution profile was applied to the standardized extracts. Detection was carried out by monitoring UV absorption at the 220 nm wavelength or the total ion current (TIC) produced through (±) ESI-MS analysis. FHC was applied to raw, centered and scaled data sets to evaluate the discrimination power of the method with respect to the origins of the extracts and to batch-to-batch variability. The discrimination power increases with the increase of the intrinsic selectivity of the spectral technique being used: UV-Vis
Parallel gene analysis with allele-specific padlock probes and tag microarrays
Banér, Johan; Isaksson, Anders; Waldenström, Erik; Jarvius, Jonas; Landegren, Ulf; Nilsson, Mats
2003-01-01
Parallel, highly specific analysis methods are required to take advantage of the extensive information about DNA sequence variation and of expressed sequences. We present a scalable laboratory technique suitable to analyze numerous target sequences in multiplexed assays. Sets of padlock probes were applied to analyze single nucleotide variation directly in total genomic DNA or cDNA for parallel genotyping or gene expression analysis. All reacted probes were then co-amplified and identified by hybridization to a standard tag oligonucleotide array. The technique was illustrated by analyzing normal and pathogenic variation within the Wilson disease-related ATP7B gene, both at the level of DNA and RNA, using allele-specific padlock probes. PMID:12930977
New method for stock-tank oil compositional analysis.
McAndrews, Kristine; Nighswander, John; Kotzakoulakis, Konstantin; Ross, Paul; Schroeder, Helmut
2009-01-01
A new method for accurately determining stock-tank oil composition to normal pentatriacontane using gas chromatography is developed and validated. The new method addresses the potential errors associated with the traditional equipment and technique employed for extended hydrocarbon gas chromatography outside a controlled laboratory environment, such as on an offshore oil platform. In particular, the experimental measurement of stock-tank oil molecular weight with the freezing point depression technique and the use of an internal standard to find the unrecovered sample fraction are replaced with correlations for estimating these properties. The use of correlations reduces the number of necessary experimental steps in completing the required sample preparation and analysis, resulting in reduced uncertainty in the analysis.
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on the use of linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical form, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for the elimination of fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves the reduction of multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than the classical HPLC method.
Tian, Jun-Ping; Wang, Hong; Du, Feng-He; Wang, Tao
2016-09-01
The mortality rate of peritoneal dialysis (PD) patients is still high, and the predictive factors for PD patient mortality remain to be determined. This study aimed to explore the relationship between the standard deviation (SD) of the extracellular water/intracellular water ratio (E/I) and all-cause mortality and technique failure in continuous ambulatory PD (CAPD) patients. All 152 patients came from the PD Center between January 1st 2006 and December 31st 2007. Clinical data and the E/I ratio, defined by bioelectrical impedance analysis at a minimum of five visits, were collected. The patients were followed up till December 31st 2010. The primary outcomes were death from any cause and technique failure. Kaplan-Meier analysis and Cox proportional hazards models were used to identify risk factors for mortality and technique failure in CAPD patients. All patients were followed up for 59.6 ± 23.0 months. The patients were divided into two groups according to their SD of E/I values: a lower SD of E/I group (≤0.126) and a higher SD of E/I group (>0.126). The patients with higher SD of E/I showed higher all-cause mortality (log-rank χ² = 10.719, P = 0.001) and technique failure (log-rank χ² = 9.724, P = 0.002) than those with lower SD of E/I. Cox regression analysis found that the SD of E/I independently predicted all-cause mortality (HR 3.551, 95% CI 1.442-8.746, P = 0.006) and technique failure (HR 2.487, 95% CI 1.093-5.659, P = 0.030) in CAPD patients after adjustment for confounders, except when sensitive C-reactive protein was added into the model.
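A minimal sketch of a Kaplan-Meier fit and a Cox proportional hazards model of the type reported here is shown below using the lifelines package on a mock data frame; the variable names and data are placeholders, not the study dataset.

```python
# Kaplan-Meier and Cox proportional hazards sketch with lifelines (mock CAPD-like data).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(5)
n = 152
df = pd.DataFrame({
    "months": rng.exponential(60, n).clip(1, 120),   # follow-up time
    "death": rng.integers(0, 2, n),                  # event indicator (all-cause death)
    "sd_ei_high": rng.integers(0, 2, n),             # SD of E/I > 0.126 (1 = yes)
    "age": rng.normal(58, 12, n),
})

km = KaplanMeierFitter()
km.fit(df["months"], event_observed=df["death"], label="all patients")

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")   # covariates: sd_ei_high, age
cph.print_summary()
```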
Fire Safety of Passenger Trains : Phase II : Application of Fire Hazard Analysis Techniques
DOT National Transportation Integrated Search
2001-12-01
On May 12, 1999, the Federal Railroad Administration (FRA) issued regulations for passenger rail equipment safety standards that included small-scale fire tests and performance criteria to evaluate the flammability and smoke characteristics of indivi...
Incorporating principal component analysis into air quality model evaluation
The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...
Isotope Ratios Reveal Trickery in the Produce Aisle
ERIC Educational Resources Information Center
Journal of Chemical Education, 2007
2007-01-01
A new technique for verifying the labeling of organic food items is proposed. Analysis of the nitrogen isotope ratio present in the food is found to be a reliable standard for checking whether food products are genuinely organic.
SELECTION AND CALIBRATION OF SUBSURFACE REACTIVE TRANSPORT MODELS USING A SURROGATE-MODEL APPROACH
While standard techniques for uncertainty analysis have been successfully applied to groundwater flow models, extension to reactive transport is frustrated by numerous difficulties, including excessive computational burden and parameter non-uniqueness. This research introduces a...
Preliminary analysis techniques for ring and stringer stiffened cylindrical shells
NASA Technical Reports Server (NTRS)
Graham, J.
1993-01-01
This report outlines methods of analysis for the buckling of thin-walled circumferentially and longitudinally stiffened cylindrical shells. Methods of analysis for the various failure modes are presented in one cohesive package. Where applicable, more than one method of analysis for a failure mode is presented along with standard practices. The results of this report are primarily intended for use in launch vehicle design in the elastic range. A Microsoft Excel worksheet with accompanying macros has been developed to automate the analysis procedures.
Research and Development on Titanium Alloys
1949-10-31
[Only OCR fragments of the report survive: table-of-contents entries "Evaluation of Experimental Titanium-Base Alloys", "Binary Alloys of Titanium", "Titanium-Silver Alloys", and "Mechanical Properties"; a note on a melting technique designed to give more uniform distribution of the alloying additions; and the header of "Table 28. Oxygen Standards for Analysis" (Battelle Memorial Institute).]
Standardization of pitch-range settings in voice acoustic analysis.
Vogel, Adam P; Maruff, Paul; Snyder, Peter J; Mundt, James C
2009-05-01
Voice acoustic analysis is typically a labor-intensive, time-consuming process that requires the application of idiosyncratic parameters tailored to individual aspects of the speech signal. Such processes limit the efficiency and utility of voice analysis in clinical practice as well as in applied research and development. In the present study, we analyzed 1,120 voice files, using standard techniques (case-by-case hand analysis), taking roughly 10 work weeks of personnel time to complete. The results were compared with the analytic output of several automated analysis scripts that made use of preset pitch-range parameters. After pitch windows were selected to appropriately account for sex differences, the automated analysis scripts reduced processing time of the 1,120 speech samples to less than 2.5 h and produced results comparable to those obtained with hand analysis. However, caution should be exercised when applying the suggested preset values to pathological voice populations.
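A minimal sketch of the kind of scripted, preset-parameter pitch extraction described here is shown below using the parselmouth interface to Praat. The file name and the sex-specific pitch windows are illustrative assumptions, and the original work used its own analysis scripts rather than this code.

```python
# Batch pitch extraction with preset, sex-specific pitch ranges (illustrative sketch).
import numpy as np
import parselmouth   # Praat bindings; assumed available

PITCH_WINDOWS = {"male": (65.0, 250.0), "female": (100.0, 400.0)}   # Hz, assumed presets

def mean_f0(wav_path, sex):
    floor, ceiling = PITCH_WINDOWS[sex]
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch(pitch_floor=floor, pitch_ceiling=ceiling)
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                       # drop unvoiced frames
    return float(np.mean(f0)) if f0.size else float("nan")

print(mean_f0("sample_voice.wav", "female"))   # hypothetical file
```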
Rewriting Modulo SMT and Open System Analysis
NASA Technical Reports Server (NTRS)
Rocha, Camilo; Meseguer, Jose; Munoz, Cesar
2014-01-01
This paper proposes rewriting modulo SMT, a new technique that combines the power of SMT solving, rewriting modulo theories, and model checking. Rewriting modulo SMT is ideally suited to model and analyze infinite-state open systems, i.e., systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism, which is proper to the system, and external non-determinism, which is due to the environment. In a reflective formalism, such as rewriting logic, rewriting modulo SMT can be reduced to standard rewriting. Hence, rewriting modulo SMT naturally extends rewriting-based reachability analysis techniques, which are available for closed systems, to open systems. The proposed technique is illustrated with the formal analysis of: (i) a real-time system that is beyond the scope of timed-automata methods and (ii) automatic detection of reachability violations in a synchronous language developed to support autonomous spacecraft operations.
Analysis of peptides using an integrated microchip HPLC-MS/MS system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, Brian J.; Chirica, Gabriela S.; Reichmuth, David S.
Hyphenated LC-MS techniques are quickly becoming the standard tool for proteomic analyses. For large homogeneous samples, bulk processing methods and capillary injection and separation techniques are suitable. However, for analysis of small or heterogeneous samples, techniques that can manipulate picoliter samples without dilution are required or samples will be lost or corrupted; further, static nanospray-type flow rates are required to maximize SNR. Microchip-level integration of sample injection with separation and mass spectrometry allows small-volume analytes to be processed on chip and immediately injected without dilution for analysis. An on-chip HPLC was fabricated using in situ polymerization of both fixed and mobile polymer monoliths. Integration of the chip with a nanospray MS emitter enables identification of peptides by the use of tandem MS. The chip is capable of analyzing very small sample volumes (< 200 pl) in short times (< 3 min).
Measuring the uncertainties of discharge measurements: interlaboratory experiments in hydrometry
NASA Astrophysics Data System (ADS)
Le Coz, Jérôme; Blanquart, Bertrand; Pobanz, Karine; Dramais, Guillaume; Pierrefeu, Gilles; Hauet, Alexandre; Despax, Aurélien
2015-04-01
Quantifying the uncertainty of streamflow data is key for hydrological sciences. The conventional uncertainty analysis based on error propagation techniques is restricted by the absence of traceable discharge standards and by the weight of difficult-to-predict errors related to the operator, procedure and measurement environment. Field interlaboratory experiments recently emerged as an efficient, standardized method to 'measure' the uncertainties of a given streamgauging technique in given measurement conditions. Both uncertainty approaches are compatible and should be developed jointly in the field of hydrometry. In the recent years, several interlaboratory experiments have been reported by different hydrological services. They involved different streamgauging techniques, including acoustic profilers (ADCP), current-meters and handheld radars (SVR). Uncertainty analysis was not always their primary goal: most often, testing the proficiency and homogeneity of instruments, makes and models, procedures and operators was the original motivation. When interlaboratory experiments are processed for uncertainty analysis, once outliers have been discarded all participants are assumed to be equally skilled and to apply the same streamgauging technique in equivalent conditions. A universal requirement is that all participants simultaneously measure the same discharge, which shall be kept constant within negligible variations. To our best knowledge, we were the first to apply the interlaboratory method for computing the uncertainties of streamgauging techniques, according to the authoritative international documents (ISO standards). Several specific issues arise due to the measurements conditions in outdoor canals and rivers. The main limitation is that the best available river discharge references are usually too uncertain to quantify the bias of the streamgauging technique, i.e. the systematic errors that are common to all participants in the experiment. A reference or a sensitivity analysis to the fixed parameters of the streamgauging technique remain very useful for estimating the uncertainty related to the (non quantified) bias correction. In the absence of a reference, the uncertainty estimate is referenced to the average of all discharge measurements in the interlaboratory experiment, ignoring the technique bias. Simple equations can be used to assess the uncertainty of the uncertainty results, as a function of the number of participants and of repeated measurements. The interlaboratory method was applied to several interlaboratory experiments on ADCPs and currentmeters mounted on wading rods, in streams of different sizes and aspects, with 10 to 30 instruments, typically. The uncertainty results were consistent with the usual expert judgment and highly depended on the measurement environment. Approximately, the expanded uncertainties (within the 95% probability interval) were ±5% to ±10% for ADCPs in good or poor conditions, and ±10% to ±15% for currentmeters in shallow creeks. Due to the specific limitations related to a slow measurement process and to small, natural streams, uncertainty results for currentmeters were more uncertain than for ADCPs, for which the site-specific errors were significantly evidenced. The proposed method can be applied to a wide range of interlaboratory experiments conducted in contrasted environments for different streamgauging techniques, in a standardized way. 
Ideally, an international open database would enhance the investigation of hydrological data uncertainties, according to the characteristics of the measurement conditions and procedures. Such a dataset could be used for implementing and validating uncertainty propagation methods in hydrometry.
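A simplified numerical sketch of the interlaboratory computation described above is given below: repeatability and between-laboratory variances are estimated from a matrix of simultaneous gaugings of the same stable discharge and combined into an expanded uncertainty referenced to the grand mean. It loosely follows an ISO 5725-type variance decomposition and is not the exact procedure of the cited standards or of this study; the data are invented.

```python
# Interlaboratory uncertainty sketch: rows = participating teams, columns = repeated
# gaugings of the same (stable) discharge. Mock data in m3/s.
import numpy as np

q = np.array([
    [10.2, 10.4, 10.1],
    [ 9.8, 10.0,  9.9],
    [10.6, 10.5, 10.7],
    [10.1, 10.0, 10.2],
    [ 9.7,  9.9,  9.8],
])

lab_means = q.mean(axis=1)
n_rep = q.shape[1]
s_r2 = q.var(axis=1, ddof=1).mean()                    # repeatability variance
s_L2 = max(lab_means.var(ddof=1) - s_r2 / n_rep, 0.0)  # between-laboratory variance
s_R = np.sqrt(s_r2 + s_L2)                             # reproducibility standard deviation

grand_mean = q.mean()
U_rel = 2.0 * s_R / grand_mean * 100.0                 # expanded (k=2) uncertainty, %
print(f"grand mean = {grand_mean:.2f} m3/s, expanded uncertainty ~ {U_rel:.1f} %")
```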
Multiscale Analysis of Solar Image Data
NASA Astrophysics Data System (ADS)
Young, C. A.; Myers, D. C.
2001-12-01
It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with an increased amount of higher complexity data than previous missions. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales. So these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-d wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.
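As a small illustration of the multiscale decomposition discussed here, the sketch below applies a 2-D discrete wavelet transform to a synthetic image with PyWavelets; real EIT, LASCO or TRACE frames would simply replace the synthetic array.

```python
# 2-D wavelet decomposition of a synthetic "solar" image with PyWavelets.
import numpy as np
import pywt

# Synthetic image: a smooth disk plus small bright points (stand-ins for fine structure).
y, x = np.mgrid[0:256, 0:256]
image = np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 60.0 ** 2))
rng = np.random.default_rng(0)
image[rng.integers(0, 256, 50), rng.integers(0, 256, 50)] += 1.0

coeffs = pywt.wavedec2(image, wavelet="haar", level=3)
approx = coeffs[0]                        # coarsest-scale approximation
detail_energy = [sum(float((band ** 2).sum()) for band in level) for level in coeffs[1:]]
print("approximation shape:", approx.shape)
print("detail energy per scale (coarse to fine):", [round(e, 1) for e in detail_energy])
```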
Reiner, Bruce I
2017-10-01
Conventional peer review practice is compromised by a number of well-documented biases, which in turn limit standard of care analysis, which is fundamental to determination of medical malpractice. In addition to these intrinsic biases, other existing deficiencies exist in current peer review including the lack of standardization, objectivity, retrospective practice, and automation. An alternative model to address these deficiencies would be one which is completely blinded to the peer reviewer, requires independent reporting from both parties, utilizes automated data mining techniques for neutral and objective report analysis, and provides data reconciliation for resolution of finding-specific report differences. If properly implemented, this peer review model could result in creation of a standardized referenceable peer review database which could further assist in customizable education, technology refinement, and implementation of real-time context and user-specific decision support.
Johnson, R K; Wright, C K; Gandhi, A; Charny, M C; Barr, L
2013-03-01
We performed a cost analysis (using UK 2011/12 NHS tariffs as a proxy for cost) comparing immediate breast reconstruction using the new one-stage technique of acellular dermal matrix (Strattice™) with implant versus the standard alternative techniques of tissue expander (TE)/implant as a two-stage procedure and latissimus dorsi (LD) flap reconstruction. Clinical report data were collected for operative time, length of stay, outpatient procedures, and number of elective and emergency admissions in our first consecutive 24 patients undergoing one-stage Strattice reconstruction. Total cost to the NHS based on tariff, assuming top-up payments to cover Strattice acquisition costs, was assessed and compared to the two historical control groups matched on key variables. Eleven patients having unilateral Strattice reconstruction were compared to 10 having TE/implant reconstruction and 10 having LD flap and implant reconstruction. Thirteen patients having bilateral Strattice reconstruction were compared to 12 having bilateral TE/implant reconstruction. Total costs were: unilateral Strattice, £3685; unilateral TE, £4985; unilateral LD and implant, £6321; bilateral TE, £5478; and bilateral Strattice, £6771. The cost analysis shows a financial advantage of using acellular dermal matrix (Strattice) in unilateral breast reconstruction versus alternative procedures. The reimbursement system in England (Payment by Results) is based on disease-related groups similar to that of many countries across Europe and tariffs are based on reported hospital costs, making this analysis of relevance in other countries. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziemer, B; Hubbard, L; Groves, E
2015-06-15
Purpose: To evaluate a first pass analysis (FPA) technique for CT perfusion measurement in a swine animal and its validation using fractional flow reserve (FFR) as a reference standard. Methods: Swine were placed under anesthesia and relevant physiologic parameters were continuously recorded. Intra-coronary adenosine was administered to induce maximum hyperemia. A pressure wire was advanced distal to the first diagonal branch of the left anterior descending (LAD) artery for FFR measurements and a balloon dilation catheter was inserted over the pressure wire into the proximal LAD to create varying levels of stenosis. Images were acquired with a 320-row wide volume CT scanner. Three main coronary perfusion beds were delineated in the myocardium using arteries extracted from CT angiography images using a minimum energy hypothesis. The integrated density in the perfusion bed was used to calculate perfusion using the FPA technique. The perfusion in the LAD bed over a range of stenosis severity was measured. The measured fractional perfusion was compared to FFR and linear regression was performed. Results: The measured fractional perfusion using the FPA technique (P-FPA) and FFR were related as P-FPA = 1.06 FFR − 0.06 (r² = 0.86). The perfusion measurements were calculated with only three to five total CT volume scans, which drastically reduces the radiation dose as compared with the existing techniques requiring 15-20 volume scans. Conclusion: The measured perfusion using the first pass analysis technique showed good correlation with FFR measurements as a reference standard. The technique for perfusion measurement can potentially make a substantial reduction in radiation dose as compared with the existing techniques.
Emmanuel, Samson; Shantaram, Kulkarni; Sushil, Kumar C; Manoj, Likhitkar
2013-04-01
Success of non-surgical root canal treatment is predicted by meticulous cleaning and shaping of the root canal system, three-dimensional obturation and a well-fitting, "leakage-free" coronal restoration. The available obturation techniques each have their own relative position in the historical development of filling techniques. Over the years, pitfalls with one technique have often led to the development of newer methods of obturation, along with the recognition that no one method of obturation may satisfy all clinical cases. A total of 120 extracted human permanent anterior maxillary and mandibular single-rooted teeth were selected for the present study and divided into 3 groups based on the obturation technique. Following preparation, the patency of the apical foramen was confirmed by passing a #15 file. After obturation of all three groups, the teeth were immersed in 1% aqueous methylene blue dye for a period of two weeks and the samples were then subjected to spectrophotometric analysis. The present study was conducted to quantitatively evaluate, in vitro and by spectrophotometric analysis, the relative amount of dye penetration using lateral condensation (Group I), Obtura II (Group II) and the Thermafil obturating technique (Group III), with ZOE sealer used in all groups. Teeth obturated with lateral condensation (Group I) showed a mean value of 0.0243 and a standard deviation of 0.0056. Group II, thermoplasticized injectable moulded gutta-percha (Obtura II), showed a mean of 0.0239 and a standard deviation of 0.0045, and Group III, the Thermafil obturation technique, showed a mean value of 0.0189 and a standard deviation of 0.0035. The following conclusions were drawn from the present study: Group III, the Thermafil obturating technique, showed the minimum mean apical dye penetration compared to Group II (Obtura II) and Group I (lateral condensation); lateral condensation showed the maximum mean apical dye penetration of the three groups; and there was no significant difference between the apical dye penetration of lateral condensation and Obtura II. Keywords: obturation, lateral condensation, Obtura II, Thermafil, spectrophotometer, dye penetration. How to cite this article: Samson E, Kulkarni S, Sushil K C, Likhitkar M. An In-Vitro Evaluation and Comparison of Apical Sealing Ability of Three Different Obturation Technique - Lateral Condensation, Obtura II, and Thermafil. J Int Oral Health 2013; 5(2):35-43.
Design study for a high reliability five-year spacecraft tape transport
NASA Technical Reports Server (NTRS)
Benn, G. S. L.; Eshleman, R. L.
1971-01-01
Following the establishment of the overall transport concept, a study of all of the life limiting constraints associated with the transport were analyzed using modeling techniques. These design techniques included: (1) a response analysis from which the performance of the transport could be determined under operating conditions for a variety of conceptual variations both in a new and aged condition; (2) an analysis of a double cone guidance technique which yielded an optimum design for maximum guidance with minimum tape degradation; (3) an analysis of the tape pack design to eliminate spoking caused by negative tangential stress within the pack; (4) an evaluation of the stress levels experienced by the magnetic tape throughout the system; (5) a general review of the bearing and lubrication technology as applied to satellite recorders and hence the recommendation for using standard load carrying antifriction ball bearings; and (6) a kinetic analysis to determine the change in kinetic properties of the transport during operation.
HPLC fingerprint analysis combined with chemometrics for pattern recognition of ginger.
Feng, Xu; Kong, Weijun; Wei, Jianhe; Ou-Yang, Zhen; Yang, Meihua
2014-03-01
Ginger, the fresh rhizome of Zingiber officinale Rosc. (Zingiberaceae), has been used worldwide; however, for a long time there has been no internationally approved standard for its quality control. The aim was to establish an efficacious, combined analytical method and pattern recognition technique for the quality control of ginger. A simple, accurate and reliable method based on high-performance liquid chromatography with photodiode array (HPLC-PDA) detection was developed for establishing the chemical fingerprints of 10 batches of ginger from different markets in China. The method was validated in terms of precision, reproducibility and stability, and the relative standard deviations were all less than 1.57%. On the basis of this method, the fingerprints of the 10 batches of ginger samples were obtained, which showed 16 common peaks. Coupled with similarity evaluation software, the similarities between each sample fingerprint and the simulative mean chromatogram were in the range of 0.998-1.000. Then, chemometric techniques, including similarity analysis, hierarchical clustering analysis and principal component analysis, were applied to classify the ginger samples. Consistent results were obtained, showing that the ginger samples could be successfully classified into two groups. This study revealed that the HPLC-PDA method was simple, sensitive and reliable for fingerprint analysis and, moreover, for pattern recognition and quality control of ginger.
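To make the chemometric workflow concrete, the sketch below runs correlation-based similarity analysis, hierarchical clustering and PCA on a mock peak-area matrix (batches by 16 common peaks); the data are invented, not the ginger fingerprints.

```python
# Similarity analysis, hierarchical clustering and PCA on a mock fingerprint matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# 10 batches x 16 common-peak areas, with two underlying quality groups.
group_a = rng.normal(100, 5, (6, 16))
group_b = rng.normal(100, 5, (4, 16)) + rng.normal(20, 2, 16)
peaks = np.vstack([group_a, group_b])

# Similarity of each batch fingerprint to the mean (simulative mean) chromatogram.
mean_fp = peaks.mean(axis=0)
similarity = [np.corrcoef(row, mean_fp)[0, 1] for row in peaks]
print("correlation similarities:", np.round(similarity, 3))

# Hierarchical clustering (Ward linkage) with a two-cluster cut.
clusters = fcluster(linkage(peaks, method="ward"), t=2, criterion="maxclust")
print("cluster labels:", clusters)

# PCA scores for visual grouping of the batches.
scores = PCA(n_components=2).fit_transform(peaks)
print("first two PC scores:\n", np.round(scores, 1))
```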
Masala, Salvatore; Schillaci, Orazio; Bartolucci, Alberto D; Calabria, Ferdinando; Mammucari, Matteo; Simonetti, Giovanni
2011-02-01
Various therapy modalities have been proposed as standard treatments in the management of bone metastases. Radiation therapy remains the standard of care for patients with localized bone pain, but up to 30% of them do not experience notable pain relief. Percutaneous cryoablation is a minimally invasive technique that induces necrosis by alternately freezing and thawing a target tissue. This technique is successfully used to treat a variety of malignant and benign diseases in different sites. (18)F-FDG positron emission tomography/computed tomography ((18)F-FDG PET/CT) is a single imaging technique that provides, in one examination, both the morphological and metabolic features of neoplastic bone lesions. The aim of this study was to evaluate the efficacy of the cryosurgical technique on secondary musculoskeletal masses according to semi-quantitative PET analysis and clinical evaluation with the visual analogue scale (VAS). We enrolled 20 patients with painful bone lesions (pain score exceeding 4 on the VAS) that were non-responsive to treatment; one lesion per patient was treated. All patients underwent a PET-CT evaluation before and 8 weeks after cryotherapy; the maximum standardized uptake value (SUV(max)) was measured before and after treatment for metabolic assessment of response to therapy. After treatment, 18 patients (90%) showed considerable reduction in SUV(max) (>50%), suggestive of response to treatment; only 2 patients did not show a meaningful reduction in metabolic activity. Our preliminary study demonstrates that the quantitative analysis provided by PET correlates with response to cryoablation therapy as assessed by CT data and clinical VAS evaluation.
Highly precise Re-Os dating for molybdenite using alkaline fusion and NTIMS.
Markey, R; Stein, H; Morgan, J
1998-03-01
The technique described in this paper represents the modification and combination of two previously existing methods, alkaline fusion and negative thermal ion mass spectrometry (NTIMS). We have used this technique to analyze repeatedly a homogeneous molybdenite powder used as a reference standard in our laboratory. Analyses were made over a period of 18 months, using four different calibrations of two different spike solutions. The age of this standard reproduces at a level of +/-0.13%. Each individual age analysis carries an uncertainty of about 0.4% that includes the uncertainty in the decay constant for (187)Re. This new level of resolution has allowed us to recognize real differences in ages for two grain-size populations of molybdenite from some Archean samples.
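For context, the age quoted above follows from the standard Re-Os decay relation, t = ln(1 + 187Os*/187Re) / lambda. The Python sketch below only illustrates that arithmetic with the commonly used decay constant lambda(187Re) of about 1.666e-11 per year and a placeholder radiogenic ratio; it is not data from the paper.

import math

LAMBDA_RE187 = 1.666e-11      # 1/yr, commonly used 187Re decay constant

def reos_model_age_ma(os187_re187):
    """Model age in Ma from a radiogenic 187Os/187Re atomic ratio."""
    return math.log(1.0 + os187_re187) / LAMBDA_RE187 / 1e6

print(round(reos_model_age_ma(0.047), 1))   # placeholder ratio -> about 2757 Ma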
Characterisation of Ductile Prepregs
NASA Astrophysics Data System (ADS)
Pinto, F.; White, A.; Meo, M.
2013-04-01
This study is focused on the analysis of micro-perforated prepregs created from standard, off-the-shelf prepregs modified by a particular laser process to enhance the ductility of prepregs for better formability and drapability. Fibres are shortened by laser cutting in a predetermined pattern intended to maintain alignment, and therefore mechanical properties, yet increase ductility at the working temperature. The increase in ductility allows the product to be more effectively optimised for specific forming techniques. Tensile tests were conducted on several specimens in order to understand the ductility enhancement offered by this process with different micro-perforation patterns over standard prepregs. Furthermore, the effect of forming temperature was also analysed to assess the applicability of this material to hot draping techniques and other heated processes.
Warlick, W B; O'Rear, J H; Earley, L; Moeller, J H; Gaffney, D K; Leavitt, D D
1997-01-01
The dose to the contralateral breast has been associated with an increased risk of developing a second breast malignancy. Varying techniques have been devised and described in the literature to minimize this dose. Metal beam modifiers such as standard wedges are used to improve the dose distribution in the treated breast, but unfortunately introduce an increased scatter dose outside the treatment field, in particular to the contralateral breast. The enhanced dynamic wedge is a means of remote wedging created by independently moving one collimator jaw through the treatment field during dose delivery. This study is an analysis of differing doses to the contralateral breast using two common clinical set-up techniques with the enhanced dynamic wedge versus the standard metal wedge. A tissue equivalent block (solid water), modeled to represent a typical breast outline, was designed as an insert in a Rando phantom to simulate a standard patient being treated for breast conservation. Tissue equivalent material was then used to complete the natural contour of the breast and to reproduce appropriate build-up and internal scatter. Thermoluminescent dosimeter (TLD) rods were placed at predetermined distances from the geometric beam's edge to measure the dose to the contralateral breast. A total of 35 locations were used with five TLDs in each location to verify the accuracy of the measured dose. The radiation techniques used were an isocentric set-up with co-planar, non divergent posterior borders and an isocentric set-up with a half beam block technique utilizing the asymmetric collimator jaw. Each technique used compensating wedges to optimize the dose distribution. A comparison of the dose to the contralateral breast was then made with the enhanced dynamic wedge vs. the standard metal wedge. The measurements revealed a significant reduction in the contralateral breast dose with the enhanced dynamic wedge compared to the standard metal wedge in both set-up techniques. The dose was measured at varying distances from the geometric field edge, ranging from 2 to 8 cm. The average dose with the enhanced dynamic wedge was 2.7-2.8%. The average dose with the standard wedge was 4.0-4.7%. Thermoluminescent dosimeter measurements suggest an increase in both scattered electrons and photons with metal wedges. The enhanced dynamic wedge is a practical clinical advance which improves the dose distribution in patients undergoing breast conservation while at the same time minimizing dose to the contralateral breast, thereby reducing the potential carcinogenic effects.
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
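The TRISS methodology referenced above is a logistic regression model of the form Ps = 1 / (1 + e^(-b)), with b a linear combination of the Revised Trauma Score, Injury Severity Score and an age index. The Python sketch below shows that form only; the coefficients are placeholders for illustration, not the published TRISS coefficients (which differ by injury mechanism and revision).

import math

def prob_survival(rts, iss, age_index, coeffs=(-0.45, 0.81, -0.08, -1.74)):
    """TRISS-style survival probability; 'coeffs' here are illustrative placeholders."""
    b0, b_rts, b_iss, b_age = coeffs
    b = b0 + b_rts * rts + b_iss * iss + b_age * age_index
    return 1.0 / (1.0 + math.exp(-b))

print(round(prob_survival(rts=7.84, iss=16, age_index=0), 3))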
DOE Office of Scientific and Technical Information (OSTI.GOV)
Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.
1990-08-01
This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been baseline corrected. This ratio chromatogram provides a robust means for the identification and the quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and is used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated by using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
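As a rough illustration of the ratio-chromatogram idea, the Python sketch below takes the point-by-point ratio of two synthetic baseline-corrected chromatograms and forms a weighted average of the ratio over a pure-elution window. The signal-squared weighting is a simple stand-in for the variance (signal-to-noise) weighting described above; the data and the window are entirely hypothetical.

import numpy as np

t = np.linspace(0, 10, 1000)
peak = np.exp(-((t - 4.0) ** 2) / 0.05)
chrom1 = 1.0 * peak + 0.01 * np.random.default_rng(1).normal(size=t.size)
chrom2 = 1.3 * peak + 0.01 * np.random.default_rng(2).normal(size=t.size)

# Point-by-point ratio, guarding against division by near-zero baseline values
ratio = chrom2 / np.where(np.abs(chrom1) > 1e-6, chrom1, np.nan)

window = (t > 3.7) & (t < 4.3)                 # pure-elution region of the analyte
weights = chrom1[window] ** 2                  # larger signal -> smaller relative noise
ratio_value = np.nansum(weights * ratio[window]) / np.nansum(weights)
print(round(ratio_value, 3))                   # ~1.3, i.e. a +30% concentration change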
Jain, Mamta; Kumar, Anil; Choudhary, Rishabh Charan
2017-06-01
In this article, we have proposed an improved diagonal queue medical image steganography scheme for the transmission of patients' secret medical data, using a chaotic standard map, a linear feedback shift register, and the Rabin cryptosystem, as an improvement of a previous technique (Jain and Lenka in Springer Brain Inform 3:39-51, 2016). The proposed algorithm comprises four stages: generation of pseudo-random sequences (by the linear feedback shift register and the standard chaotic map), permutation and XORing using the pseudo-random sequences, encryption using the Rabin cryptosystem, and steganography using the improved diagonal queues. A security analysis has been carried out. Performance is evaluated using MSE, PSNR and maximum embedding capacity, as well as by histogram analysis between various brain-disease stego and cover images.
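One of the building blocks named above, the linear feedback shift register, can be sketched in a few lines. The 16-bit Fibonacci LFSR below, with taps at bits 16, 14, 13 and 11, is a common textbook configuration chosen here purely for illustration; the register length, taps and seed used in the paper are not reproduced here.

def lfsr_bits(seed=0xACE1, taps=(16, 14, 13, 11), nbits=16, count=32):
    """Yield pseudo-random bits from a Fibonacci linear feedback shift register."""
    state = seed
    for _ in range(count):
        feedback = 0
        for t in taps:                      # XOR the tapped bits to form the feedback bit
            feedback ^= (state >> (t - 1)) & 1
        state = ((state << 1) | feedback) & ((1 << nbits) - 1)
        yield state & 1

print("".join(str(bit) for bit in lfsr_bits()))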
New approaches to some methodological problems of meteor science
NASA Technical Reports Server (NTRS)
Meisel, David D.
1987-01-01
Several low cost approaches to continuous radioscatter monitoring of the incoming meteor flux are described. Preliminary experiments were attempted using standard time frequency stations WWVH and CHU (on frequencies near 15 MHz) during nighttime hours. Around-the-clock monitoring using the international standard aeronautical beacon frequency of 75 MHz was also attempted. The techniques are simple and can be managed routinely by amateur astronomers with relatively little technical expertise. Time series analysis can now be performed using relatively inexpensive microcomputers. Several algorithmic approaches to the analysis of meteor rates are discussed. Methods of obtaining optimal filter predictions of future meteor flux are also discussed.
Olson, M.L.; Cleckner, L.B.; Hurley, J.P.; Krabbenhoft, D.P.; Heelan, T.W.
1997-01-01
Aqueous samples from the Florida Everglades present several problems for the analysis of total mercury (HgT) and methyl mercury (MeHg). Constituents such as dissolved organic carbon (DOC) and sulfide at selected sites present particular challenges due to interferences with standard analytical techniques. This is manifested by 1) the inability to discern when bromine monochloride (BrCl) addition is sufficient for sample oxidation for HgT analysis; and 2) incomplete spike recoveries using the distillation/ethylation technique for MeHg analysis. Here, we suggest ultra-violet (UV) oxidation prior to addition of BrCl to ensure total oxidation of DOC prior to HgT analysis and copper sulfate (CuSO4) addition to aid in distillation in the presence of sulfide for MeHg analysis. Despite high chloride (Cl-) levels, we observed no effects on MeHg distillation/ethylation analyses. © Springer-Verlag 1997.
NASA Astrophysics Data System (ADS)
Asadpour-Zeynali, Karim; Bastami, Mohammad
2010-02-01
In this work a new modification of the standard addition method called "net analyte signal standard addition method (NASSAM)" is presented for the simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of standard addition method with those of net analyte signal concept. The method can be applied for the determination of analyte in the presence of known interferents. The accuracy of the predictions against H-point standard addition method is not dependent on the shape of the analyte and interferent spectra. The method was successfully applied to simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
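As background for the net analyte signal variant described above, the classical standard addition calculation fits the measured signal against the added analyte concentration and extrapolates to the original concentration. The Python sketch below uses hypothetical data and a plain least-squares fit; it does not reproduce the NASSAM or H-point procedures themselves.

import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0])        # added standard, arbitrary concentration units
signal = np.array([0.42, 0.63, 0.84, 1.05])   # hypothetical instrument response

slope, intercept = np.polyfit(added, signal, 1)
c_original = intercept / slope                 # extrapolated original analyte concentration
print(round(c_original, 2))                    # 2.0 in the same units as 'added'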
The World of Work--Industrial Clean Rooms.
ERIC Educational Resources Information Center
Potts, Frank E.
The purpose of this publication is to present information concerning the environmental conditions imposed upon workers in industries which require clean room facilities to eliminate particle-caused equipment failure. The information, which was collected through interviews, observation, and other standard job analysis techniques, discusses these…
Elnaggar, Yosra Shaaban R; El-Massik, Magda A; Abdallah, Ossama Y; Ebian, Abd Elazim R
2010-06-01
The recent challenge in orally disintegrating tablet (ODT) manufacturing encompasses the compromise between instantaneous disintegration, sufficient hardness, and standard processing equipment. The current investigation constitutes one attempt to fulfill this challenge. Maltodextrin was utilized in the present work as a novel excipient to prepare ODT of meclizine. Tablets were prepared by both direct compression and wet granulation techniques. The effect of maltodextrin concentration on ODT characteristics--manifested as hardness and disintegration time--was studied. The effect of conditioning (40 degrees C and 75% relative humidity) as a post-compression treatment on ODT characteristics was also assessed. Furthermore, the pronounced hardening effect of maltodextrin was investigated using differential scanning calorimetry (DSC) and X-ray analysis. Results revealed that in both techniques, rapid disintegration (30-40 s) was achieved at the cost of tablet hardness (about 1 kg). Post-compression conditioning of the tablets resulted in an increase in hardness (3 kg) while keeping rapid disintegration (30-40 s) according to the FDA guidance for ODT. However, the direct compression-conditioning technique exhibited the drawbacks of a long conditioning time and the appearance of the so-called patch effect. These problems were absent, however, in the wet granulation-conditioning technique. DSC and X-ray analysis suggested the involvement of glass-elastic deformation in the maltodextrin hardening effect. High-performance liquid chromatography analysis of meclizine ODT suggested no degradation of the drug under the applied conditions of temperature and humidity. The overall results propose that maltodextrin is a promising saccharide for the production of ODT with an accepted hardness-disintegration time compromise, utilizing standard processing equipment and the phenomenon of phase transition.
Connatser, Raynella M.; Lewis, Sr., Samuel Arthur; Keiser, James R.; ...
2014-10-03
Integrating biofuels with conventional petroleum products requires improvements in processing to increase blendability with existing fuels. This work demonstrates analysis techniques for more hydrophilic bio-oil liquids that give improved quantitative and qualitative description of the total acid content and organic acid profiles. To protect infrastructure from damage and reduce the cost associated with upgrading, accurate determination of acid content and representative chemical compound analysis are central imperatives to assessing both the corrosivity and the progress toward removing oxygen and acidity in processed biomass liquids. Established techniques form an ample basis for bio-liquids evaluation. However, early in the upgrading process, the unique physical phases and varied hydrophilicity of many pyrolysis liquids can render analytical methods originally designed for use in petroleum-derived oils inadequate. In this work, the water solubility of the organic acids present in bio-oils is exploited in a novel extraction and titration technique followed by analysis on the water-based capillary electrophoresis (CE) platform. The modification of ASTM D664, the standard for Total Acid Number (TAN), to include aqueous carrier solvents improves the utility of that approach for quantifying acid content in hydrophilic bio-oils. Termed AMTAN (modified Total Acid Number), this technique offers 1.2% relative standard deviation and dynamic range comparable to the conventional ASTM method. Furthermore, the results of corrosion product evaluations using several different sources of real bio-oil are discussed in the context of the unique AMTAN and CE analytical approaches developed to facilitate those measurements.
NASA Astrophysics Data System (ADS)
Jolivet, S.; Mezghani, S.; El Mansori, M.
2016-09-01
The replication of topography has been generally restricted to optimizing material processing technologies in terms of statistical and single-scale features such as roughness. By contrast, manufactured surface topography is highly complex, irregular, and multiscale. In this work, we have demonstrated the use of multiscale analysis on replicates of surface finish to assess the precise control of the finished replica. Five commercial resins used for surface replication were compared. The topography of five standard surfaces representative of common finishing processes were acquired both directly and by a replication technique. Then, they were characterized using the ISO 25178 standard and multiscale decomposition based on a continuous wavelet transform, to compare the roughness transfer quality at different scales. Additionally, atomic force microscope force modulation mode was used in order to compare the resins’ stiffness properties. The results showed that less stiff resins are able to replicate the surface finish along a larger wavelength band. The method was then tested for non-destructive quality control of automotive gear tooth surfaces.
Statistical methodology: II. Reliability and validity assessment in study design, Part B.
Karras, D J
1997-02-01
Validity measures the correspondence between a test and other purported measures of the same or similar qualities. When a reference standard exists, a criterion-based validity coefficient can be calculated. If no such standard is available, the concepts of content and construct validity may be used, but quantitative analysis may not be possible. The Pearson and Spearman tests of correlation are often used to assess the correspondence between tests, but do not account for measurement biases and may yield misleading results. Techniques that measure intertest differences may be more meaningful in validity assessment, and the kappa statistic is useful for analyzing categorical variables. Questionnaires often can be designed to allow quantitative assessment of reliability and validity, although this may be difficult. Inclusion of homogeneous questions is necessary to assess reliability. Analysis is enhanced by using Likert scales or similar techniques that yield ordinal data. Validity assessment of questionnaires requires careful definition of the scope of the test and comparison with previously validated tools.
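The kappa statistic mentioned above corrects the observed agreement between two raters or tests for the agreement expected by chance. A minimal Python sketch, with hypothetical categorical ratings, is shown below.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(count_a[c] * count_b[c] for c in categories) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes"]   # hypothetical ratings
b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))                         # 0.75 for these data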
Analysis of calibration materials to improve dual-energy CT scanning for petrophysical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayyalasomavaiula, K.; McIntyre, D.; Jain, J.
2011-01-01
Dual energy CT-scanning is a rapidly emerging imaging technique employed in non-destructive evaluation of various materials. Although CT (Computerized Tomography) has been used for characterizing rocks and visualizing and quantifying multiphase flow through rocks for over 25 years, most of the scanning is done at a voltage setting above 100 kV to take advantage of the Compton scattering (CS) effect, which responds to density changes. Below 100 kV the photoelectric effect (PE) is dominant, which responds to the effective atomic number (Zeff), directly related to the photoelectric factor. Using the combination of the two effects helps in better characterization of reservoir rocks. The most common technique for dual energy CT-scanning relies on homogeneous calibration standards to produce the most accurate decoupled data. However, the use of calibration standards with impurities increases the probability of error in the reconstructed data and results in poor rock characterization. This work combines ICP-OES (inductively coupled plasma optical emission spectroscopy) and LIBS (laser induced breakdown spectroscopy) analytical techniques to quantify the type and level of impurities in a set of commercially purchased calibration standards used in dual-energy scanning. Zeff data for the calibration standards, with and without the impurity data, were calculated using a weighted linear combination of the various elements present and used in calculating Zeff with the dual-energy technique. Results show a 2 to 5% difference in predicted Zeff values, which may affect the corresponding log calibrations. The effect that these techniques have on improving material identification data is discussed and analyzed. The workflow developed in this paper will translate to more accurate material identification estimates for unknown samples and improve calibration of well logging tools.
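A common way to compute an effective atomic number from a weighted combination of elements is the Mayneord-type power law, Zeff = (sum_i f_i * Z_i^m)^(1/m) with m of about 2.94, where f_i is the electron fraction of element i. The Python sketch below applies it to quartz (SiO2) purely as an illustration; it is not necessarily the specific weighting used in the paper.

def z_effective(elements, m=2.94):
    """elements: list of (Z, atoms_per_formula); electron fractions weight Z**m."""
    electrons = [(z, z * n) for z, n in elements]      # electrons contributed by each element
    total = sum(e for _, e in electrons)
    return sum((e / total) * z ** m for z, e in electrons) ** (1.0 / m)

sio2 = [(14, 1), (8, 2)]                 # Si x1, O x2
print(round(z_effective(sio2), 2))       # about 11.6 for quartz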
Witwer, Kenneth W.; Buzás, Edit I.; Bemis, Lynne T.; Bora, Adriana; Lässer, Cecilia; Lötvall, Jan; Nolte-‘t Hoen, Esther N.; Piper, Melissa G.; Sivaraman, Sarada; Skog, Johan; Théry, Clotilde; Wauben, Marca H.; Hochberg, Fred
2013-01-01
The emergence of publications on extracellular RNA (exRNA) and extracellular vesicles (EV) has highlighted the potential of these molecules and vehicles as biomarkers of disease and therapeutic targets. These findings have created a paradigm shift, most prominently in the field of oncology, prompting expanded interest in the field and dedication of funds for EV research. At the same time, understanding of EV subtypes, biogenesis, cargo and mechanisms of shuttling remains incomplete. The techniques that can be harnessed to address the many gaps in our current knowledge were the subject of a special workshop of the International Society for Extracellular Vesicles (ISEV) in New York City in October 2012. As part of the “ISEV Research Seminar: Analysis and Function of RNA in Extracellular Vesicles (evRNA)”, 6 round-table discussions were held to provide an evidence-based framework for isolation and analysis of EV, purification and analysis of associated RNA molecules, and molecular engineering of EV for therapeutic intervention. This article arises from the discussion of EV isolation and analysis at that meeting. The conclusions of the round table are supplemented with a review of published materials and our experience. Controversies and outstanding questions are identified that may inform future research and funding priorities. While we emphasize the need for standardization of specimen handling, appropriate normative controls, and isolation and analysis techniques to facilitate comparison of results, we also recognize that continual development and evaluation of techniques will be necessary as new knowledge is amassed. On many points, consensus has not yet been achieved and must be built through the reporting of well-controlled experiments. PMID:24009894
Using normalization 3D model for automatic clinical brain quantitative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders through clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of the quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then the standard 3D brain model, which shows well-defined brain regions, was used in place of manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in a practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score, with less than 3% error on average. In summary, the method automatically obtains precise VOI information from the well-defined standard 3D brain model, sparing the manual slice-by-slice drawing of ROIs on structural medical images required in the traditional procedure. The method therefore not only provides precise analysis results but also improves the processing rate for large numbers of medical images in clinical use.
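The registration step above maximizes mutual information between the functional and structural images. A minimal histogram-based estimate of mutual information, computed on synthetic arrays standing in for two images, can be sketched as follows; the bin count and test data are arbitrary choices.

import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images (in nats)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(0)
a = rng.normal(size=(64, 64))
b = a + 0.1 * rng.normal(size=(64, 64))   # a noisy copy of 'a' -> high mutual information
print(round(mutual_information(a, b), 3))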
Inglis, Jeremy D.; Maassen, Joel; Kara, Azim; ...
2017-04-28
This study presents a total evaporation method for the analysis of sub-picogram quantities of Pu, utilizing an array of multiple ion counters. Data from three standards are presented to assess the utility of the technique. An external precision of 1.5% RSD (2σ) was achieved on aliquots approaching 100 fg for the minor 240Pu isotope. Accurate analysis of <1 femtogram of 240Pu is achievable, with an external reproducibility of better than 10% RSD (2σ). Finally, this new technique represents a significant advance in the total evaporation method and will allow routine measurement of femtogram-sized Pu samples by thermal ionization mass spectrometry.
Structural reliability assessment of the Oman India Pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Sharif, A.M.; Preston, R.
1996-12-31
Reliability techniques are increasingly finding application in design. The special design conditions for the deep water sections of the Oman India Pipeline dictate their use since the experience basis for application of standard deterministic techniques is inadequate. The paper discusses the reliability analysis as applied to the Oman India Pipeline, including selection of a collapse model, characterization of the variability in the parameters that affect pipe resistance to collapse, and implementation of first and second order reliability analyses to assess the probability of pipe failure. The reliability analysis results are used as the basis for establishing the pipe wall thickness requirements for the pipeline.
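First-order reliability analysis of the kind mentioned above reduces, in the simplest case of independent normal resistance and load variables, to a reliability index beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) with failure probability Phi(-beta). The Python sketch below uses placeholder collapse-resistance and external-pressure statistics, not values from the pipeline study.

import math
from scipy.stats import norm

mu_R, sigma_R = 62.0, 4.0     # collapse resistance, MPa (placeholder)
mu_S, sigma_S = 45.0, 2.5     # external pressure load, MPa (placeholder)

beta = (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)
p_failure = norm.cdf(-beta)
print(round(beta, 2), f"{p_failure:.2e}")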
Dark matter constraints from a joint analysis of dwarf Spheroidal galaxy observations with VERITAS
Archambault, S.; Archer, A.; Benbow, W.; ...
2017-04-05
We present constraints on the annihilation cross section of weakly interacting massive particle (WIMP) dark matter based on the joint statistical analysis of four dwarf galaxies with VERITAS. These results are derived from an optimized photon weighting statistical technique that improves on standard imaging atmospheric Cherenkov telescope (IACT) analyses by utilizing the spectral and spatial properties of individual photon events.
The Importance of Practice in the Development of Statistics.
1983-01-01
NRC Technical Summary Report #2471: The Importance of Practice in the Development of Statistics. Topics discussed include component analysis, bioassay, limits for a ratio, quality control, sampling inspection, non-parametric tests, transformation theory, ARIMA time-series models, sequential tests, cumulative sum charts, data analysis plotting techniques, and a resolution of the Bayes-frequentist controversy.
Cartilage Restoration of the Knee: A Systematic Review and Meta-analysis of Level 1 Studies.
Mundi, Raman; Bedi, Asheesh; Chow, Linda; Crouch, Sarah; Simunovic, Nicole; Sibilsky Enselman, Elizabeth; Ayeni, Olufemi R
2016-07-01
Focal cartilage defects of the knee are a substantial cause of pain and disability in active patients. There has been an emergence of randomized controlled trials evaluating surgical techniques to manage such injuries, including marrow stimulation (MS), autologous chondrocyte implantation (ACI), and osteochondral autograft transfer (OAT). A meta-analysis was conducted to determine if any single technique provides superior clinical results at intermediate follow-up. Systematic review and meta-analysis of randomized controlled trials. The MEDLINE, EMBASE, and Cochrane Library databases were systematically searched and supplemented with manual searches of PubMed and reference lists. Eligible studies consisted exclusively of randomized controlled trials comparing MS, ACI, or OAT techniques in patients with focal cartilage defects of the knee. The primary outcome of interest was function (Lysholm score, International Knee Documentation Committee score, Knee Osteoarthritis Outcome Score) and pain at 24 months postoperatively. A meta-analysis using standardized mean differences was performed to provide a pooled estimate of effect comparing treatments. A total of 12 eligible randomized trials with a cumulative sample size of 765 patients (62% males) and a mean (±SD) lesion size of 3.9 ± 1.3 cm(2) were included in this review. There were 5 trials comparing ACI with MS, 3 comparing ACI with OAT, and 3 evaluating different generations of ACI. In a pooled analysis comparing ACI with MS, there was no difference in outcomes at 24-month follow-up for function (standardized mean difference, 0.47 [95% CI, -0.19 to 1.13]; P = .16) or pain (standardized mean difference, -0.13 [95% CI, -0.39 to 0.13]; P = .33). The comparisons of ACI to OAT or between different generations of ACI were not amenable to pooled analysis. Overall, 5 of the 6 trials concluded that there was no significant difference in functional outcomes between ACI and OAT or between generations of ACI. There is no significant difference between MS, ACI, and OAT in improving function and pain at intermediate-term follow-up. Further randomized trials with long-term outcomes are warranted. © 2015 The Author(s).
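The pooled analysis above uses standardized mean differences. A minimal Python sketch of that computation (Cohen's d with a pooled SD, plus the Hedges' g small-sample correction), run on hypothetical functional scores for two treatment arms, is shown below; the numbers are illustrative only.

import math

def standardized_mean_difference(x, y):
    """Return (Cohen's d, Hedges' g) for two independent groups using a pooled SD."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    d = (mx - my) / pooled_sd
    g = d * (1 - 3 / (4 * (nx + ny) - 9))      # Hedges' small-sample correction
    return d, g

aci = [78, 82, 75, 88, 90, 84]   # hypothetical functional scores, treatment A
ms = [74, 80, 73, 85, 82, 79]    # hypothetical functional scores, treatment B
print(tuple(round(v, 2) for v in standardized_mean_difference(aci, ms)))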
Integrating multiparametric prostate MRI into clinical practice
2011-01-01
Abstract Multifunctional magnetic resonance imaging (MRI) techniques are increasingly being used to address bottlenecks in prostate cancer patient management. These techniques yield qualitative, semi-quantitative and fully quantitative biomarkers that reflect on the underlying biological status of a tumour. If these techniques are to have a role in patient management, then standard methods of data acquisition, analysis and reporting have to be developed. Effective communication by the use of scoring systems, structured reporting and a graphical interface that matches prostate anatomy are key elements. Practical guidelines for integrating multiparametric MRI into clinical practice are presented. PMID:22187067
Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein
2018-09-15
A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear, 4 nonlinear, and 224 hybrid models). The uncertainty of the linear, nonlinear and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the Johnson normality transform followed by seasonal standardization (R² = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of an adaptive neuro-fuzzy inference system (ANFIS) with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid methods outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Impervious surfaces mapping using high resolution satellite imagery
NASA Astrophysics Data System (ADS)
Shirmeen, Tahmina
In recent years, impervious surfaces have emerged not only as an indicator of the degree of urbanization, but also as an indicator of environmental quality. As impervious surface area increases, storm water runoff increases in velocity, quantity, temperature and pollution load. Any of these attributes can contribute to the degradation of natural hydrology and water quality. Various image processing techniques have been used to identify impervious surfaces; however, most of the existing impervious surface mapping tools use moderate-resolution imagery. In this project, the potential of standard image processing techniques to generate impervious surface (IS) data for change detection analysis using high-resolution satellite imagery was evaluated. The city of Oxford, MS was selected as the study site. Standard image processing techniques, including the Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), a combination of NDVI and PCA, and image classification algorithms, were used to generate impervious surfaces from multispectral IKONOS and QuickBird imagery acquired in both leaf-on and leaf-off conditions. Accuracy assessments were performed, using truth data generated by manual classification, with Kappa statistics and zonal statistics to select the most appropriate image processing techniques for impervious surface mapping. The performance of the selected image processing techniques was enhanced by incorporating the Soil Brightness Index (SBI) and Greenness Index (GI) derived from Tasseled Cap Transformed (TCT) IKONOS and QuickBird imagery. A time series of impervious surfaces for the period between 2001 and 2007 was generated using the refined image processing techniques to analyze the changes in IS in Oxford. It was found that NDVI and the combined NDVI-PCA methods are the most suitable image processing techniques for mapping impervious surfaces in leaf-off and leaf-on conditions, respectively, using high-resolution multispectral imagery. It was also found that IS data generated by these techniques can be refined by removing conflicting dry soil patches using SBI and GI obtained from TCT of the same imagery used for IS data generation. The change detection analysis of the IS time series shows that Oxford experienced the major changes in IS from 2001 to 2004 and from 2006 to 2007.
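The NDVI step above is simply (NIR - Red)/(NIR + Red) computed per pixel, after which low-index pixels can be flagged as candidate impervious or bare surfaces. The Python sketch below uses random placeholder band arrays and an uncalibrated threshold, purely to illustrate the computation.

import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

rng = np.random.default_rng(0)
nir_band = rng.integers(50, 255, size=(4, 4))   # placeholder near-infrared band
red_band = rng.integers(30, 200, size=(4, 4))   # placeholder red band
index = ndvi(nir_band, red_band)
likely_impervious = index < 0.2                 # illustrative threshold, not calibrated
print(index.round(2), int(likely_impervious.sum()))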
Proust, Gwénaëlle; Trimby, Patrick; Piazolo, Sandra; Retraint, Delphine
2017-01-01
One of the challenges in microstructure analysis nowadays resides in the reliable and accurate characterization of ultra-fine grained (UFG) and nanocrystalline materials. The traditional techniques associated with scanning electron microscopy (SEM), such as electron backscatter diffraction (EBSD), do not possess the required spatial resolution due to the large interaction volume between the electrons from the beam and the atoms of the material. Transmission electron microscopy (TEM) has the required spatial resolution. However, due to a lack of automation in the analysis system, the rate of data acquisition is slow which limits the area of the specimen that can be characterized. This paper presents a new characterization technique, Transmission Kikuchi Diffraction (TKD), which enables the analysis of the microstructure of UFG and nanocrystalline materials using an SEM equipped with a standard EBSD system. The spatial resolution of this technique can reach 2 nm. This technique can be applied to a large range of materials that would be difficult to analyze using traditional EBSD. After presenting the experimental set up and describing the different steps necessary to realize a TKD analysis, examples of its use on metal alloys and minerals are shown to illustrate the resolution of the technique and its flexibility in term of material to be characterized. PMID:28447998
Multiscale Image Processing of Solar Image Data
NASA Astrophysics Data System (ADS)
Young, C.; Myers, D. C.
2001-12-01
It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for the analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing done by observers' eyes and brains. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
NASA Astrophysics Data System (ADS)
Bordovsky, Michal; Catrysse, Peter; Dods, Steven; Freitas, Marcio; Klein, Jackson; Kotacka, Libor; Tzolov, Velko; Uzunov, Ivan M.; Zhang, Jiazong
2004-05-01
We present the state of the art for commercial design and simulation software in the 'front end' of photonic circuit design. One recent advance is to extend the flexibility of the software by using more than one numerical technique on the same optical circuit. There are a number of popular and proven techniques for analysis of photonic devices. Examples of these techniques include the Beam Propagation Method (BPM), the Coupled Mode Theory (CMT), and the Finite Difference Time Domain (FDTD) method. For larger photonic circuits, it may not be practical to analyze the whole circuit by any one of these methods alone, but often some smaller part of the circuit lends itself to at least one of these standard techniques. Later the whole problem can be analyzed on a unified platform. This kind of approach can enable analysis for cases that would otherwise be cumbersome, or even impossible. We demonstrate solutions for more complex structures ranging from the sub-component layout, through the entire device characterization, to the mask layout and its editing. We also present recent advances in the above well established techniques. This includes the analysis of nano-particles, metals, and non-linear materials by FDTD, photonic crystal design and analysis, and improved models for high concentration Er/Yb co-doped glass waveguide amplifiers.
TEMORA 1: A new zircon standard for Phanerozoic U-Pb geochronology
Black, L.P.; Kamo, S.L.; Allen, C.M.; Aleinikoff, J.N.; Davis, D.W.; Korsch, R.J.; Foudoulis, C.
2003-01-01
The role of the standard is critical to the derivation of reliable U-Pb zircon ages by micro-beam analysis. For maximum reliability, it is critically important that the utilised standard be homogeneous at all scales of analysis. It is equally important that the standard has been precisely and accurately dated by an independent technique. This study reports the emergence of a new zircon standard that meets those criteria, as demonstrated by Sensitive High Resolution Ion MicroProbe (SHRIMP), isotope dilution thermal ionisation mass-spectrometry (IDTIMS) and excimer laser ablation-inductively coupled plasma-mass-spectrometry (ELA-ICP-MS) documentation. The TEMORA 1 zircon standard derives from the Middledale Gabbroic Diorite, a high-level mafic stock within the Palaeozoic Lachlan Orogen of eastern Australia. Its 206Pb/238U IDTIMS age has been determined to be 416.75 ± 0.24 Ma (95% confidence limits), based on measurement errors alone. Spike-calibration uncertainty limits the accuracy to 416.8 ± 1.1 Ma for U-Pb intercomparisons between different laboratories that do not use a common spike. © 2003 Published by Elsevier Science B.V. All rights reserved.
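The quoted 206Pb/238U age follows from the standard decay relation t = ln(1 + 206Pb*/238U) / lambda_238, with lambda_238 of about 1.55125e-10 per year. The Python sketch below only illustrates that arithmetic; the ratio shown is back-calculated from an age near 416.8 Ma rather than taken from the paper.

import math

LAMBDA_238U = 1.55125e-10     # 1/yr, 238U decay constant

def pb206_u238_age_ma(ratio):
    """Age in Ma from a radiogenic 206Pb/238U atomic ratio."""
    return math.log(1.0 + ratio) / LAMBDA_238U / 1e6

print(round(pb206_u238_age_ma(0.0668), 1))    # about 416.8 Ma for this placeholder ratio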
NASA Astrophysics Data System (ADS)
Johnson, Benjamin W.; Drage, Natashia; Spence, Jody; Hanson, Nova; El-Sabaawi, Rana; Goldblatt, Colin
2017-03-01
Although nitrogen has long been viewed as a mostly noble, atmospheric species, recent work demonstrates that it in fact cycles throughout the Earth system, including the atmosphere, biosphere, oceans, and solid Earth. Despite this new-found behaviour, more thorough investigation of N in geologic materials is limited due to its low concentration (one to tens of parts per million) and difficulty in analysis. In addition, N can exist in multiple species (NO3-, NH4+, N2, organic N), and determining which species is actually quantified can be difficult. In rocks and minerals, NH4+ is the most stable form of N over geologic timescales. As such, techniques designed to measure NH4+ can be particularly useful. We measured a number of geochemical rock standards using three different techniques: elemental analyzer (EA) mass spectrometry, colorimetry, and fluorometry. The fluorometry approach is a novel adaptation of a technique commonly used in biologic science, applied herein to geologic NH4+. Briefly, NH4+ can be quantified by HF dissolution, neutralization, addition of a fluorescing reagent, and analysis on a standard fluorometer. We reproduce published values for several rock standards (BCR-2, BHVO-2, and G-2), especially if an additional distillation step is performed. While it is difficult to assess the quality of each method, due to the lack of international geologic N standards, fluorometry appears better suited to analyzing mineral-bound NH4+ than EA mass spectrometry and is a simpler, quicker alternative to colorimetry. To demonstrate a potential application of fluorometry, we calculated a continental crust N budget based on new measurements. We used glacial tills as a proxy for upper crust and analyzed several poorly constrained rock types (volcanics, mid-crustal xenoliths) to determine that the continental crust contains ~2 × 10^18 kg N. This estimate is consistent with recent budget estimates and shows that fluorometry is appropriate for large-scale questions where high sample throughput is helpful. Lastly, we report the first δ15N values of six rock standards: BCR-2 (1.05 ± 0.4‰), BHVO-2 (-0.3 ± 0.2‰), G-2 (1.23 ± 1.32‰), LKSD-4 (3.59 ± 0.1‰), Till-4 (6.33 ± 0.1‰), and SY-4 (2.13 ± 0.5‰). The need for international geologic N standards is crucial for further investigation of the Earth system N cycle, and we suggest that existing rock standards may be suited to this need.
Standardized pivot shift test improves measurement accuracy.
Hoshino, Yuichi; Araujo, Paulo; Ahlden, Mattias; Moore, Charity G; Kuroda, Ryosuke; Zaffagnini, Stefano; Karlsson, Jon; Fu, Freddie H; Musahl, Volker
2012-04-01
The variability of the pivot shift test techniques greatly interferes with achieving a quantitative and generally comparable measurement. The purpose of this study was to compare the variation of the quantitative pivot shift measurements with different surgeons' preferred techniques to a standardized technique. The hypothesis was that standardizing the pivot shift test would improve consistency in the quantitative evaluation when compared with surgeon-specific techniques. A whole lower body cadaveric specimen was prepared to have a low-grade pivot shift on one side and high-grade pivot shift on the other side. Twelve expert surgeons performed the pivot shift test using (1) their preferred technique and (2) a standardized technique. Electromagnetic tracking was utilized to measure anterior tibial translation and acceleration of the reduction during the pivot shift test. The variation of the measurement was compared between the surgeons' preferred technique and the standardized technique. The anterior tibial translation during pivot shift test was similar between using surgeons' preferred technique (left 24.0 ± 4.3 mm; right 15.5 ± 3.8 mm) and using standardized technique (left 25.1 ± 3.2 mm; right 15.6 ± 4.0 mm; n.s.). However, the variation in acceleration was significantly smaller with the standardized technique (left 3.0 ± 1.3 mm/s(2); right 2.5 ± 0.7 mm/s(2)) compared with the surgeons' preferred technique (left 4.3 ± 3.3 mm/s(2); right 3.4 ± 2.3 mm/s(2); both P < 0.01). Standardizing the pivot shift test maneuver provides a more consistent quantitative evaluation and may be helpful in designing future multicenter clinical outcome trials. Diagnostic study, Level I.
Yu, Chen; Zhang, Qian; Xu, Peng-Yao; Bai, Yin; Shen, Wen-Bin; Di, Bin; Su, Meng-Xiang
2018-01-01
Quantitative nuclear magnetic resonance (qNMR) is a well-established technique in quantitative analysis. We present a validated 1H-qNMR method for the assay of octreotide acetate, a cyclic octapeptide. Deuterium oxide was used to remove the undesired exchangeable peaks (proton exchange) in order to isolate the quantitative signals in the crowded spectrum of the peptide and ensure precise quantitative analysis. Gemcitabine hydrochloride was chosen as a suitable internal standard. Experimental conditions, including relaxation delay time, number of scans, and pulse angle, were optimized first. Method validation was then carried out in terms of selectivity, stability, linearity, precision, and robustness. The assay result was compared with that obtained by high-performance liquid chromatography, the method provided by the Chinese Pharmacopoeia. The statistical F test, Student's t test, and a nonparametric test at the 95% confidence level indicate that there was no significant difference between the two methods. qNMR is a simple and accurate quantitative tool with no need for specific corresponding reference standards. It has potential for the quantitative analysis of other peptide drugs and the standardization of the corresponding reference standards. Copyright © 2017 John Wiley & Sons, Ltd.
Li, Yongtao; Whitaker, Joshua S; McCarty, Christina L
2012-07-06
A large volume direct aqueous injection method was developed for the analysis of iodinated haloacetic acids in drinking water by using reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry in the negative ion mode. Both the external and internal standard calibration methods were studied for the analysis of monoiodoacetic acid, chloroiodoacetic acid, bromoiodoacetic acid, and diiodoacetic acid in drinking water. The use of a divert valve technique for the mobile phase solvent delay, along with isotopically labeled analogs used as internal standards, effectively reduced and compensated for the ionization suppression typically caused by coexisting common inorganic anions. Under the optimized method conditions, the mean absolute and relative recoveries resulting from the replicate fortified deionized water and chlorinated drinking water analyses were 83-107% with a relative standard deviation of 0.7-11.7% and 84-111% with a relative standard deviation of 0.8-12.1%, respectively. The method detection limits resulting from the external and internal standard calibrations, based on seven fortified deionized water replicates, were 0.7-2.3 ng/L and 0.5-1.9 ng/L, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
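Method detection limits of the kind quoted above are typically computed from replicate fortified-blank analyses as MDL = t(n-1, 0.99) × s, the one-sided Student's t value at 99% confidence times the standard deviation of n replicate results. The Python sketch below uses seven hypothetical replicate values; it is shown only to illustrate the calculation, not to reproduce the study's data.

import statistics
from scipy.stats import t

replicates = [4.1, 3.8, 4.4, 4.0, 3.9, 4.3, 4.2]   # ng/L, hypothetical fortified replicates
s = statistics.stdev(replicates)
n = len(replicates)
mdl = t.ppf(0.99, df=n - 1) * s                     # t(6, 0.99) ~ 3.14
print(round(mdl, 2))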
Sil'veĭstrova, O Iu; Domonova, É A; Shipulina, O Iu
2014-04-01
A kit of reagents intended for the detection and quantitative evaluation of human cytomegalovirus (HCMV) DNA in biological material by real-time polymerase chain reaction was validated. The kit of reagents "AmpliSens CMV-screen/monitor-FL" and the in-house HCMV DNA standard sample (Central Research Institute of Epidemiology of Rospotrebnadzor) were compared against the international WHO standard, the first WHO international standard for human cytomegalovirus. Fivefold dilutions of the WHO international standard and the in-house standard sample were prepared at HCMV DNA concentrations from 10^6 to 10^2. The polymerase chain reaction and the analysis of results were carried out on a programmable amplifier with real-time fluorescence detection, "Rotor-Gene Q" (Qiagen, Germany). Across three series of experiments covering all stages of the polymerase chain reaction study, a conversion coefficient of 0.6 for converting the quantitative evaluation of HCMV DNA from copies/ml to IU/ml was established for this kit of reagents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Anders T., E-mail: andehans@rm.dk; Lukacova, Slavka; Lassen-Ramshad, Yasmin
2015-01-01
When the standard conformal x-ray technique for craniospinal irradiation is used, it is a challenge to achieve satisfactory dose coverage of the target, including the area of the cribriform plate, while sparing organs at risk. We present a new intensity-modulated radiation therapy (IMRT), noncoplanar technique for delivering irradiation to the cranial part and compare it with 3 other techniques and previously published results. A total of 13 patients who had previously received craniospinal irradiation with the standard conformal x-ray technique were reviewed. New treatment plans were generated for each patient using the noncoplanar IMRT-based technique, a coplanar IMRT-based technique, and a coplanar volumetric-modulated arc therapy (VMAT) technique. Dosimetry data for all patients were compared with the corresponding data from the conventional treatment plans. The new noncoplanar IMRT technique substantially reduced the mean dose to organs at risk compared with the standard radiation technique. The 2 other coplanar techniques also reduced the mean dose to some of the critical organs. However, this reduction was not as substantial as the reduction obtained by the noncoplanar technique. Furthermore, compared with the standard technique, the IMRT techniques reduced the total calculated radiation dose delivered to normal tissue, whereas the VMAT technique increased this dose. Additionally, the coverage of the target was significantly improved by the noncoplanar IMRT technique. Compared with the standard technique, the coplanar IMRT and the VMAT technique did not improve the coverage of the target significantly. All the new planning techniques increased the number of monitor units (MU) used (by 99% for the noncoplanar IMRT technique, 122% for the coplanar IMRT technique, and 26% for the VMAT technique), causing concern about leakage radiation. The noncoplanar IMRT technique covered the target better and decreased doses to organs at risk compared with the other techniques. All the new techniques increased the number of MU compared with the standard technique.
Larson, S.J.; Capel, P.D.; VanderLoop, A.G.
1996-01-01
Laboratory and quality assurance procedures for the analysis of ground-water samples for herbicides at the Management Systems Evaluation Area near Princeton, Minnesota are described. The target herbicides include atrazine, de-ethylatrazine, de-isopropylatrazine, metribuzin, alachlor, 2,6-diethylaniline, and metolachlor. The analytical techniques used are solid-phase extraction, and analysis by gas chromatography with mass-selective detection. Descriptions of cleaning procedures, preparation of standard solutions, isolation of analytes from water, sample transfer methods, instrumental analysis, and data analysis are included.
NASA Astrophysics Data System (ADS)
Oudry, Jennifer; Lynch, Ted; Vappou, Jonathan; Sandrin, Laurent; Miette, Véronique
2014-10-01
Elastographic techniques used in addition to imaging techniques (ultrasound, resonance magnetic or optical) provide new clinical information on the pathological state of soft tissues. However, system-dependent variation in elastographic measurements may limit the clinical utility of these measurements by introducing uncertainty into the measurement. This work is aimed at showing differences in the evaluation of the elastic properties of phantoms performed by four different techniques: quasi-static compression, dynamic mechanical analysis, vibration-controlled transient elastography and hyper-frequency viscoelastic spectroscopy. Four Zerdine® gel materials were tested and formulated to yield a Young’s modulus over the range of normal and cirrhotic liver stiffnesses. The Young’s modulus and the shear wave speed obtained with each technique were compared. Results suggest a bias in elastic property measurement which varies with systems and highlight the difficulty in finding a reference method to determine and assess the elastic properties of tissue-mimicking materials. Additional studies are needed to determine the source of this variation, and control for them so that accurate, reproducible reference standards can be made for the absolute measurement of soft tissue elasticity.
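To relate the two quantities compared above, elastography commonly assumes a nearly incompressible, purely elastic medium, for which Young's modulus E is approximately 3*rho*c_s^2, with rho the density and c_s the shear wave speed. The Python sketch below evaluates this relation for a few illustrative shear wave speeds; it is a simplification that ignores viscosity and dispersion.

def young_modulus_kpa(shear_speed_m_s, density_kg_m3=1000.0):
    """E ~= 3 * rho * c_s**2 for an incompressible elastic medium, returned in kPa."""
    return 3.0 * density_kg_m3 * shear_speed_m_s ** 2 / 1000.0

for c in (1.0, 2.0, 3.0):        # m/s, roughly spanning soft to stiff tissue-mimicking gels
    print(c, "m/s ->", round(young_modulus_kpa(c), 1), "kPa")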
Determination of double bond conversion in dental resins by near infrared spectroscopy.
Stansbury, J W; Dickens, S H
2001-01-01
This study determined the validity and practicality of near infrared (NIR) spectroscopic techniques for measurement of conversion in dental resins. Conversion measurements by NIR and mid-IR were compared using two techniques: (1) The conversion of 3 mm thick photopolymerized Bis-GMA/TEGDMA resin specimens was determined by transmission NIR. Specimens were then ground and reanalyzed in KBr pellet form by mid-IR. (2) As further verification, thin resin films were photocured and analyzed by mid-IR. Multiple thin films were then compressed into a thick pellet for examination by NIR. Conversion values obtained by the NIR and mid-IR techniques did not differ significantly. A correction for changing specimen thickness due to polymerization shrinkage was applied to the NIR conversion measurements since an internal standard reference peak was not employed. The sensitivity of the NIR technique was superior to that of the mid-IR techniques. The nondestructive analysis of conversion in dental resins by NIR offers advantages of convenience, practical specimen dimensions and precision compared with standard mid-IR analytical procedures. Because glass is virtually transparent in the NIR spectrum, this technique has excellent potential for use with filled dental resins as well.
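The conversion calculation with a thickness correction can be sketched as follows; the band areas and the shrinkage-based thickness ratio are illustrative values only, and the specific absorption bands used by the authors are not assumed here:

    # Degree of conversion from NIR absorption band areas, with a thickness
    # correction applied because no internal reference peak is used.
    # All input values are illustrative.
    def degree_of_conversion(area_uncured, area_cured, thickness_ratio=1.0):
        # thickness_ratio = cured thickness / uncured thickness (shrinkage correction)
        corrected_cured = area_cured / thickness_ratio
        return 100.0 * (1.0 - corrected_cured / area_uncured)

    print(degree_of_conversion(area_uncured=12.4, area_cured=4.1, thickness_ratio=0.97))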
Determination of the effective sample thickness via radiative capture
Hurst, A. M.; Summers, N. C.; Szentmiklosi, L.; ...
2015-09-14
Our procedure for determining the effective thickness of non-uniform irregular-shaped samples via radiative capture is described. In this technique, partial γ-ray production cross sections of a compound nucleus produced in a neutron-capture reaction are measured using Prompt Gamma Activation Analysis and compared to their corresponding standardized absolute values. For the low-energy transitions, the measured cross sections are lower than their standard values due to significant photoelectric absorption of the γ rays within the bulk-sample volume itself. Using standard theoretical techniques, the amount of γ-ray self-absorption and neutron self-shielding can then be calculated by iteratively varying the sample thickness until the observed cross sections converge with the known standards. The overall attenuation provides a measure of the effective sample thickness illuminated by the neutron beam. This procedure is illustrated through radiative neutron capture using powdered oxide samples comprising enriched 186W and 182W, from which their tungsten-equivalent effective thicknesses are deduced to be 0.077(3) mm and 0.042(8) mm, respectively.
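The convergence step described above can be sketched as a one-dimensional search over the assumed thickness. The exponential attenuation model below is a toy placeholder for the full γ-ray self-absorption and neutron self-shielding calculation, and all numbers are illustrative:

    # Illustrative bisection over sample thickness: find the thickness at which
    # the predicted (attenuated) cross-section ratio matches the measured one.
    import math

    def effective_thickness(measured_ratio, attenuation, lo=0.001, hi=5.0, tol=1e-4):
        # attenuation(thickness) returns the predicted-to-standard ratio and is
        # a placeholder for the self-absorption / self-shielding model.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if attenuation(mid) > measured_ratio:   # too little attenuation -> sample is thicker
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    toy_model = lambda t_mm: math.exp(-0.5 * t_mm)   # toy exponential attenuation
    print(effective_thickness(measured_ratio=0.96, attenuation=toy_model))   # ~0.08 mm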
The ILRS Reanalysis 1983 - 2009 Contributed To ITRF2008
NASA Astrophysics Data System (ADS)
Pavlis, E. C.; Luceri, V.; Sciarretta, C.; Kelm, R.
2009-12-01
For over two decades, Satellite Laser Ranging (SLR) data have contributed to the definition of the Terrestrial Reference Frame (TRF). Until the development of ITRF2000, the contributions were submitted in the form of a set of normal equations or a covariance matrix of station coordinates and their linear rates at a standard epoch. The development of ITRF2005 ushered in a new era with the use of weekly or session contributions, allowing greater flexibility in the relative weighting and the combination of information from various techniques. Moreover, the need for a unique, official, representative solution for each Technique Service, based on the rigorous combination of the various Analysis Centers' contributions, gave all techniques the opportunity to verify, as a first step, the intra-technique solution consistency and, immediately after, to engage in discussions and comparison of the internal procedures, leading to a harmonization and validation of these procedures and the adopted models in the inter-technique context. On many occasions, the time-series approach, combined with the intra- and inter-technique comparison steps, also highlighted differences that had previously gone unnoticed and corrected incompatibilities. During the past year we have been preparing the ILRS contribution to a second TRF developed in the same way, the ITRF2008. The ILRS approach is based strictly on the current IERS Conventions 2003 and our internal standards. The Unified Analysis Workshop in 2007 stressed a number of areas where each technique needed to focus more attention in future analyses. In the case of SLR, the primary areas of concern were tracking station biases, extending the data span used in the analysis, and target characteristics. The present re-analysis extends from 1983 to 2009, covering a 25-year period, the longest for any of the contributing techniques; although the network and data quality for the 1983-1993 period are significantly poorer than for later years, the overall SLR contribution will reinforce the stability of the datum definition, especially in terms of origin and scale. Engineers and analysts have also worked closely over the past two years to determine station biases, rationalize them through correlation with engineering events at the stations, and validate them through analysis. A separate effort focused on developing accurate satellite target signatures for the primary targets contributing to the ITRF product (primarily LAGEOS 1 & 2). A detailed discussion of this work will be presented along with a description of the individual series contributing to the combination, examining their relative quality and temporal coverage, and the statistics of the combined products.
The ILRS contribution to ITRF2008
NASA Astrophysics Data System (ADS)
Pavlis, E. C.; Luceri, V.; Sciarretta, C.; Kelm, R.
2009-04-01
For over two decades, Satellite Laser Ranging (SLR) data have contributed to the definition of the Terrestrial Reference Frame (TRF). Until the development of ITRF2000, the contributions were submitted in the form of a set of normal equations or a covariance matrix of station coordinates and their linear rates at a standard epoch. The development of ITRF2005 ushered in a new era with the use of weekly or session contributions, allowing greater flexibility in the relative weighting and the combination of information from various techniques. Moreover, the need for a unique, official, representative solution for each Technique Service, based on the rigorous combination of the various Analysis Centers' contributions, gave all techniques the opportunity to verify, as a first step, the intra-technique solution consistency and, immediately after, to engage in discussions and comparison of the internal procedures, leading to a harmonization and validation of these procedures and the adopted models in the inter-technique context. On many occasions, the time-series approach, combined with the intra- and inter-technique comparison steps, also highlighted differences that had previously gone unnoticed and corrected incompatibilities. During the past year we have been preparing the ILRS contribution to a second TRF developed in the same way, the ITRF2008. The ILRS approach is based strictly on the current IERS Conventions 2003 and our internal standards. The Unified Analysis Workshop in 2007 stressed a number of areas where each technique needed to focus more attention in future analyses. In the case of SLR, the primary areas of concern were tracking station biases, extending the data span used in the analysis, and target characteristics. The present re-analysis extends from 1983 to 2008, covering a 25-year period, the longest for any of the contributing techniques; although the network and data quality for the 1983-1993 period are significantly poorer than for later years, the overall SLR contribution will reinforce the stability of the datum definition, especially in terms of origin and scale. Engineers and analysts have also worked closely over the past two years to determine station biases, rationalize them through correlation with engineering events at the stations, and validate them through analysis. A separate effort focused on developing accurate satellite target signatures for the primary targets contributing to the ITRF product (primarily LAGEOS 1 & 2). A detailed discussion of these efforts will be given separately. Here, we restrict our presentation to a description of the individual series contributing to the combination, examining their relative quality and temporal coverage, and the statistics of the initial, preliminary combined products.
Baker, Jay B; Maskell, Kevin F; Matlock, Aaron G; Walsh, Ryan M; Skinner, Carl G
2015-07-01
We compared intubating with a preloaded bougie (PB) against the standard bougie technique in terms of success rates, time to successful intubation and provider preference on a cadaveric airway model. In this prospective, crossover study, healthcare providers intubated a cadaver using the PB technique and the standard bougie technique. Participants were randomly assigned to start with either technique. Following standardized training and practice, procedural success and time for each technique were recorded for each participant. Subsequently, participants were asked to rate their perceived ease of intubation on a visual analogue scale of 1 to 10 (1=difficult and 10=easy) and to select which technique they preferred. Forty-seven participants with variable intubation experience were enrolled at an emergency medicine intern airway course. The success rate of all groups for both techniques was equal (95.7%). The range of times to completion for the standard bougie technique was 16.0-70.2 seconds, with a mean time of 29.7 seconds. The range of times to completion for the PB technique was 15.7-110.9 seconds, with a mean time of 29.4 seconds. There was a non-significant difference of 0.3 seconds (95% confidence interval -2.8 to 3.4 seconds) between the two techniques. Participants rated the relative ease of intubation as 7.3/10 for the standard technique and 7.6/10 for the preloaded technique (p=0.53, 95% confidence interval of the difference -0.97 to 0.50). Thirty of 47 participants subjectively preferred the PB technique (p=0.039). There was no significant difference in success or time to intubation between the standard bougie and PB techniques. The majority of participants in this study preferred the PB technique. Until a clear and clinically significant difference is found between these techniques, emergency airway operators should feel confident in using the technique with which they are most comfortable.
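The reported mean difference and its confidence interval follow from a standard paired comparison of times in a crossover design; a minimal sketch with invented timings (not the study data):

    # Paired comparison of intubation times for the standard and preloaded
    # bougie techniques on the same participants; timings are invented.
    import numpy as np
    from scipy import stats

    standard_s = np.array([28.1, 31.5, 26.9, 33.2, 29.0, 40.4])
    preloaded_s = np.array([27.5, 30.8, 27.1, 32.0, 28.9, 41.0])

    diff = standard_s - preloaded_s
    t, p = stats.ttest_rel(standard_s, preloaded_s)
    ci = stats.t.interval(0.95, len(diff) - 1, loc=diff.mean(), scale=stats.sem(diff))
    print(f"mean difference = {diff.mean():.2f} s, 95% CI = {ci}, p = {p:.2f}")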
NASA Astrophysics Data System (ADS)
Tian, Lunfu; Wang, Lili; Gao, Wei; Weng, Xiaodong; Liu, Jianhui; Zou, Deshuang; Dai, Yichun; Huang, Shuke
2018-03-01
For the quantitative analysis of the principal elements in lead-antimony-tin alloys, the direct X-ray fluorescence (XRF) method using solid metal disks introduces considerable errors due to microstructure inhomogeneity. To solve this problem, an aqueous solution XRF method is proposed for determining major amounts of Sb, Sn and Pb in lead-based bearing alloys. The alloy samples were dissolved in a mixture of nitric acid and tartaric acid to eliminate the effects of the microstructure of these alloys on the XRF analysis. Rh Compton scattering was used as the internal standard for Sb and Sn, and Bi was added as the internal standard for Pb, to correct for matrix effects and instrumental and operational variations. High-purity lead, antimony and tin were used to prepare synthetic standards. Using these standards, calibration curves were constructed for the three elements after optimizing the spectrometer parameters. The method has been successfully applied to the analysis of lead-based bearing alloys and is more rapid than the classical titration methods normally used. The determination results are consistent with certified values or with those obtained by titration.
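The calibration step amounts to regressing the known concentrations of the synthetic standards against the analyte-to-internal-standard intensity ratio and then predicting an unknown; a hedged sketch with invented intensities and concentrations (not the paper's calibration data):

    # Internal-standard XRF calibration: fit concentration vs. intensity ratio
    # (analyte line / internal-standard line) for synthetic standards.
    import numpy as np

    conc_sb = np.array([5.0, 10.0, 15.0, 20.0])      # wt% Sb in synthetic standards (illustrative)
    ratio_sb = np.array([0.21, 0.40, 0.62, 0.81])    # I(Sb line) / I(Rh Compton), illustrative

    slope, intercept = np.polyfit(ratio_sb, conc_sb, 1)   # linear calibration curve
    unknown_ratio = 0.55
    print(f"Sb in sample ~ {slope * unknown_ratio + intercept:.1f} wt%")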
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
FINANCIAL ANALYSIS OF CURRENT OPERATIONS OF COLLEGES AND UNIVERSITIES.
ERIC Educational Resources Information Center
SWANSON, JOHN E.; AND OTHERS
Techniques for developing financial and related cost-effectiveness data for public and privately supported American colleges and universities were studied to formulate principles, procedures, and standards for the accumulation and analyses of current operating costs. After separate analyses of institutional procedures and reports, analytic units…
ERIC Educational Resources Information Center
School Science Review, 1972
1972-01-01
Short articles describe the production, photography, and analysis of diffraction patterns using a small laser, a technique for measuring electrical resistance without a standard resistor, a demonstration of a thermocouple effect in a galvanometer with a built-in light source, and a common error in deriving the expression for centripetal force. (AL)
Student Effort and Performance over the Semester
ERIC Educational Resources Information Center
Krohn, Gregory A.; O'Connor, Catherine M.
2005-01-01
The authors extend the standard education production function and student time allocation analysis to focus on the interactions between student effort and performance over the semester. The purged instrumental variable technique is used to obtain consistent estimators of the structural parameters of the model using data from intermediate…
Qualitative Examination of Children's Naming Skills through Test Adaptations.
ERIC Educational Resources Information Center
Fried-Oken, Melanie
1987-01-01
The Double Administration Naming Technique assists clinicians in obtaining qualitative information about a client's visual confrontation naming skills through administration of a standard naming test; readministration of the same test; identification of single and double errors; cuing for double naming errors; and qualitative analysis of naming…
Cost considerations in using simulations for medical training.
Fletcher, J D; Wind, Alexander P
2013-10-01
This article reviews simulation used for medical training, techniques for assessing simulation-based training, and cost analyses that can be included in such assessments. Simulation in medical training appears to take four general forms: human actors who are taught to simulate illnesses and ailments in standardized ways; virtual patients who are generally presented via computer-controlled, multimedia displays; full-body manikins that simulate patients using electronic sensors, responders, and controls; and part-task anatomical simulations of various body parts and systems. Techniques for assessing costs include benefit-cost analysis, return on investment, and cost-effectiveness analysis. Techniques for assessing the effectiveness of simulation-based medical training include the use of transfer effectiveness ratios and incremental transfer effectiveness ratios to measure transfer of knowledge and skill provided by simulation to the performance of medical procedures. Assessment of costs and simulation effectiveness can be combined with measures of transfer using techniques such as isoperformance analysis to identify ways of minimizing costs without reducing performance effectiveness or maximizing performance without increasing costs. In sum, economic analysis must be considered in training assessments if training budgets are to compete successfully with other requirements for funding. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
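The transfer effectiveness ratios mentioned above are conventionally computed as the real-task training time saved per unit of simulator time; a small sketch of that arithmetic under that conventional definition, which the article itself does not spell out, with invented hours:

    # Transfer effectiveness ratio (TER): time-to-criterion saved in the real task
    # per unit of simulation time invested. Numbers are illustrative only.
    def transfer_effectiveness_ratio(control_hours, transfer_hours, simulator_hours):
        return (control_hours - transfer_hours) / simulator_hours

    # e.g. trainees needing 10 h without simulation vs. 7 h after 5 h of simulation
    print(transfer_effectiveness_ratio(10.0, 7.0, 5.0))   # 0.6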
Magenes, G; Bellazzi, R; Malovini, A; Signorini, M G
2016-08-01
The onset of fetal pathologies can be screened during pregnancy by means of Fetal Heart Rate (FHR) monitoring and analysis. Noticeable advances in understanding FHR variations were obtained in the last twenty years, thanks to the introduction of quantitative indices extracted from the FHR signal. This study aims to discriminate between normal and Intra Uterine Growth Restricted (IUGR) fetuses by applying data mining techniques to FHR parameters, obtained from recordings in a population of 122 fetuses (61 healthy and 61 IUGRs) through standard CTG non-stress tests. We computed N=12 indices (N=4 related to time-domain FHR analysis, N=4 to the frequency domain and N=4 to non-linear analysis) and normalized them with respect to the gestational week. We compared, through a 10-fold cross-validation procedure, 15 data mining techniques in order to select the most reliable approach for identifying IUGR fetuses. The results of this comparison highlight that two techniques (Random Forest and Logistic Regression) show the best classification accuracy and that both outperform the best single parameter in terms of mean AUROC on the test sets.
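A hedged sketch of the kind of comparison described, using scikit-learn's 10-fold cross-validation with AUROC scoring; the feature matrix and labels are random stand-ins for the 12 normalized FHR indices, and only the two classifiers highlighted in the abstract are shown:

    # 10-fold cross-validated AUROC for two classifiers on FHR-style features.
    # X (122 fetuses x 12 normalized indices) and y (0 = healthy, 1 = IUGR) are stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(122, 12))
    y = np.array([0] * 61 + [1] * 61)

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                      ("logistic regression", LogisticRegression(max_iter=1000))]:
        auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
        print(f"{name}: mean AUROC = {auc.mean():.2f}")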
A modeling analysis of alternative primary and secondary US ozone standards in urban and rural areas
NASA Astrophysics Data System (ADS)
Nopmongcol, Uarporn; Emery, Chris; Sakulyanontvittaya, Tanarit; Jung, Jaegun; Knipping, Eladio; Yarwood, Greg
2014-12-01
This study employed the High-Order Decoupled Direct Method (HDDM) of sensitivity analysis in a photochemical grid model to determine US anthropogenic emissions reductions required from 2006 levels to meet alternative US primary (health-based) and secondary (welfare-based) ozone (O3) standards. Applying the modeling techniques developed by Yarwood et al. (2013), we specifically evaluated sector-wide emission reductions needed to meet primary standards in the range of 60-75 ppb, and secondary standards in the range of 7-15 ppm-h, in 22 cities and at 20 rural sites across the US for NOx-only, combined NOx and VOC, and VOC-only scenarios. Site-specific model biases were taken into account by applying adjustment factors separately for the primary and secondary standard metrics, analogous to the US Environmental Protection Agency's (EPA) relative response factor technique. Both bias-adjusted and unadjusted results are presented and analyzed. We found that the secondary metric does not necessarily respond to emission reductions the same way the primary metric does, indicating sensitivity to their different forms. Combined NOx and VOC reductions are most effective for cities, whereas NOx-only reductions are sufficient at rural sites. Most cities we examined require more than 50% US anthropogenic emission reductions from 2006 levels to meet the current primary 75 ppb US standard and secondary 15 ppm-h target. Most rural sites require less than 20% reductions to meet the primary 75 ppb standard and less than 40% reductions to meet the secondary 15 ppm-h target. Whether the primary standard is protective of the secondary standard depends on the combination of alternative standard levels. Our modeling suggests that the current 75 ppb standard achieves a 15 ppm-h secondary target in most (17 of 22) cities, but only half of the rural sites; the inability for several western cities and rural areas to achieve the seasonally-summed secondary 15 ppm-h target while meeting the 75 ppb primary target is likely driven by higher background O3 that is commonly reported in the western US. However, a 70 ppb primary standard is protective of a 15 ppm-h secondary standard in all cities and 18 of 20 rural sites we examined, and a 60 ppb primary standard is protective of a 7 ppm-h secondary standard in all cities and 19 of 20 rural sites. If EPA promulgates separate primary and secondary standards, exceedance areas will need to develop and demonstrate control strategies to achieve both. This HDDM analysis provides an illustrative screening assessment by which to estimate emissions reductions necessary to satisfy both standards.
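The bias adjustment described mirrors the relative response factor arithmetic: the modeled relative change is applied to an observed design value rather than using the absolute model prediction. A small sketch with invented values, not taken from the study:

    # Relative response factor (RRF) style bias adjustment (illustrative values).
    def adjusted_future_value(observed_design_value, model_base, model_future):
        rrf = model_future / model_base          # modeled relative change
        return observed_design_value * rrf

    # e.g. observed 82 ppb; the model projects 68 ppb against an 85 ppb base case
    print(adjusted_future_value(82.0, model_base=85.0, model_future=68.0))   # ~65.6 ppb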
NASA Astrophysics Data System (ADS)
Pandzic, K.; Likso, T.
2012-04-01
The conventional Palmer Drought Index (PDI) and the more recent Standardized Precipitation Index (SPI) for the Zagreb Gric Observatory are compared using spectral analysis techniques. Data for the period 1862-2010 are used. The results indicate that the SPI is simpler to interpret, but the PDI is the more comprehensive index. On the other hand, the absence of temperature in the SPI makes it unsuitable for interpreting climate change. Possible applications of both indices in irrigation scheduling and drought risk assessment are also considered.
Cristiano, Bárbara F G; Delgado, José Ubiratan; da Silva, José Wanderley S; de Barros, Pedro D; de Araújo, Radier M S; Dias, Fábio C; Lopes, Ricardo T
2012-09-01
The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.
Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi
2017-01-01
This in vitro study seeks to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using the conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made using cobalt-chromium alloy. A silicone impression of the master model was made and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten working models for the DMLS technique. A total of twenty metal copings were fabricated using the two production techniques, the conventional lost-wax method and DMLS, with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of the copings was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner with a heavy-duty lathe. Each sectioned sample was then analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by unpaired t-test and analysis of variance (ANOVA). The results of this study reveal that no significant difference was present in the marginal gap of conventional and DMLS copings (P > 0.05) by ANOVA. The mean internal gap of the DMLS copings was significantly greater than that of the conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of conventional copings was superior to that of the DMLS copings. Marginal fit of the copings fabricated by the two different techniques had no significant difference.
Considerations for monitoring raptor population trends based on counts of migrants
Titus, K.; Fuller, M.R.; Ruos, J.L.; Meyburg, B-U.; Chancellor, R.D.
1989-01-01
Various problems were identified with standardized hawk-count data collected annually at six sites. Some of the hawk lookouts increased their hours of observation from 1979 to 1985, thereby confounding the total counts. Problems with data recording and missing data hamper the coding of the data and their use with modern analytical techniques. Coefficients of variation in counts among years averaged about 40%. The advantages and disadvantages of various analytical techniques are discussed, including regression, non-parametric rank-correlation trend analysis, and moving averages.
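The descriptive statistics mentioned (coefficient of variation, a rank-correlation trend test, a moving average) are straightforward to compute; a sketch with an invented annual count series, not the actual lookout data:

    # Coefficient of variation, Spearman rank-correlation trend, and 3-year moving
    # average for an annual hawk-count series (counts are illustrative).
    import numpy as np
    from scipy import stats

    years = np.arange(1979, 1986)
    counts = np.array([410, 620, 380, 510, 720, 450, 660], dtype=float)

    cv = 100.0 * counts.std(ddof=1) / counts.mean()
    rho, p = stats.spearmanr(years, counts)
    moving_avg = np.convolve(counts, np.ones(3) / 3, mode="valid")

    print(f"CV = {cv:.0f}%, Spearman rho = {rho:.2f} (p = {p:.2f})")
    print("3-yr moving average:", np.round(moving_avg))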
Error analysis of Dobson spectrophotometer measurements of the total ozone content
NASA Technical Reports Server (NTRS)
Holland, A. C.; Thomas, R. W. L.
1975-01-01
A study of techniques for measuring atmospheric ozone is reported. This study represents the second phase of a program designed to improve techniques for the measurement of atmospheric ozone. This phase of the program studied the sensitivity of Dobson direct sun measurements and the ozone amounts inferred from those measurements to variation in the atmospheric temperature profile. The study used the plane-parallel Monte Carlo model developed and tested under the initial phase of this program, and a series of standard model atmospheres.
Goldfarb, Charles A; Strauss, Nicole L; Wall, Lindley B; Calfee, Ryan P
2011-02-01
The measurement technique for ulnar variance in the adolescent population has not been well established. The purpose of this study was to assess the reliability of a standard ulnar variance assessment in the adolescent population. Four orthopedic surgeons measured 138 adolescent wrist radiographs for ulnar variance using a standard technique. There were 62 male and 76 female radiographs obtained in a standardized fashion for subjects aged 12 to 18 years. Skeletal age was used for analysis. We determined mean variance and assessed for differences related to age and gender. We also determined the interrater reliability. The mean variance was -0.7 mm for boys and -0.4 mm for girls; there was no significant difference between the 2 groups overall. When subdivided by age and gender, ulnar variance in the younger group (≤15 years of age) was significantly less negative in girls than in boys (boys, -0.8 mm; girls, -0.3 mm; p < .05). There was no significant difference between boys and girls in the older group. The greatest difference between any 2 raters was 1 mm; exact agreement was obtained in 72 subjects. Correlations between raters were high (r_p = 0.87-0.97 for boys and 0.82-0.96 for girls). Interrater reliability was excellent (Cronbach's alpha, 0.97-0.98). Standard assessment techniques for ulnar variance are reliable in the adolescent population. Open growth plates did not interfere with this assessment. Young adolescent boys demonstrated a greater degree of negative ulnar variance compared with young adolescent girls. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Lu, Z. Q. J.; Lowhorn, N. D.; Wong-Ng, W.; Zhang, W.; Thomas, E. L.; Otani, M.; Green, M. L.; Tran, T. N.; Caylor, C.; Dilley, N. R.; Downey, A.; Edwards, B.; Elsner, N.; Ghamaty, S.; Hogan, T.; Jie, Q.; Li, Q.; Martin, J.; Nolas, G.; Obara, H.; Sharp, J.; Venkatasubramanian, R.; Willigan, R.; Yang, J.; Tritt, T.
2009-01-01
In an effort to develop a Standard Reference Material (SRM™) for Seebeck coefficient, we have conducted a round-robin measurement survey of two candidate materials—undoped Bi2Te3 and Constantan (55 % Cu and 45 % Ni alloy). Measurements were performed in two rounds by twelve laboratories involved in active thermoelectric research using a number of different commercial and custom-built measurement systems and techniques. In this paper we report the detailed statistical analyses on the interlaboratory measurement results and the statistical methodology for analysis of irregularly sampled measurement curves in the interlaboratory study setting. Based on these results, we have selected Bi2Te3 as the prototype standard material. Once available, this SRM will be useful for future interlaboratory data comparison and instrument calibrations. PMID:27504212
Cost Per Flying Hour Analysis of the C-141
1997-09-01
Government Printing Office, 1996. Horngren, Charles T. Cost Accounting: A Managerial Emphasis (Eighth Edition). New Jersey: Prentice Hall, 1994. Hough... standard accounting techniques. This analysis of AMC's current costs and their applicability to the price charged to the customer shall be the focus of... Horngren et al., 1994:864). There are three generally recognized methods of determining a transfer price (Arnstein and Gilabert, 1980:189). Cost based
Kim, Paul Jeong; Peace, Ruth; Mieras, Jamie; Thoms, Tanya; Freeman, Denise; Page, Jeffrey
2011-01-01
Goniometric measurement is currently being used as a diagnostic and outcomes assessment tool for ankle joint dorsiflexion. Despite its common use, its interrater and intrarater reliability has been questioned. This is a prospective study examining whether the experience of the examiner or the technique used affects the interrater and intrarater reliability of measuring ankle joint dorsiflexion. Fourteen asymptomatic individuals (8 male and 6 female) with a mean age of 28.2 years (range, 23-52) were enrolled in this study. The years of clinical experience of the five examiners averaged 10.4 years (range, 0-26). Four examiners used a modified Root, Weed and Orien method of measuring ankle joint dorsiflexion. The fifth examiner utilized a nonstandardized technique. A standard goniometer was used for bilateral measurements of ankle joint dorsiflexion with the knee extended and flexed. All five examiners repeated each measurement three times during each of the three sessions, with each session spaced at least 1 week apart. The intraclass correlation coefficient revealed moderate intrarater and poor interrater reliability in ankle joint dorsiflexion measurements using a standard goniometer. More importantly, further analysis indicates that the use of a standardized technique for measurement of ankle joint dorsiflexion or years of clinical experience does not increase the intrarater or interrater reliability. The utility of the goniometric measurement of ankle joint dorsiflexion may be limited.
NASA Technical Reports Server (NTRS)
Feng, Wanda; Evans, Cynthia; Gruener, John; Eppler, Dean
2014-01-01
Geologic mapping involves interpreting relationships between identifiable units and landforms to understand the formative history of a region. Traditional field techniques are used to accomplish this on Earth. Mapping proves more challenging for other planets, which are studied primarily by orbital remote sensing and, less frequently, by robotic and human surface exploration. Systematic comparative assessments of geologic maps created by traditional mapping versus photogeology together with data from planned traverses are limited. The objective of this project is to produce a geologic map from data collected on the Desert Research and Technology Studies (RATS) 2010 analog mission using Apollo-style traverses in conjunction with remote sensing data. This map is compared with a geologic map produced using standard field techniques.
Micro-quantity tissue digestion for metal measurements by use of a microwave acid-digestion bomb.
Nicholson, J R; Savory, M G; Savory, J; Wills, M R
1989-03-01
We describe a simple and convenient method for processing small amounts of tissue samples for trace-metal measurements by atomic absorption spectrometry, by use of a modified Parr microwave digestion bomb. Digestion proceeds rapidly (less than or equal to 90 s) in a sealed Teflon-lined vessel that eliminates contamination or loss from volatilization. Small quantities of tissue (5-100 mg dry weight) are digested in high-purity nitric acid, yielding concentrations of analyte that can be measured directly without further sample manipulation. We analyzed National Institute of Standards and Technology bovine liver Standard Reference Material to verify the accuracy of the technique. We assessed the applicability of the technique to analysis for aluminum in bone by comparison with a dry ashing procedure.
A new software tool for 3D motion analyses of the musculo-skeletal system.
Leardini, A; Belvedere, C; Astolfi, L; Fantozzi, S; Viceconti, M; Taddei, F; Ensini, A; Benedetti, M G; Catani, F
2006-10-01
Many clinical and biomechanical research studies, particularly in orthopaedics, nowadays involve some form of movement analysis. Gait analysis, video-fluoroscopy of joint replacement, pre-operative planning, surgical navigation, and standard radiostereometry all require tools for easy access to three-dimensional graphical representations of rigid segment motion. Relevant data from this variety of sources need to be organised in structured forms. Registration, integration, and synchronisation of segment position data are additional necessities. With this aim, the present work exploits the features of a software tool recently developed within an EU-funded project ('Multimod') in a series of different research studies. Standard and advanced gait analysis on a normal subject, in vivo fluoroscopy-based three-dimensional motion of a replaced knee joint, patellar and ligament tracking on a knee specimen by a surgical navigation system, and the stem-to-femur migration pattern in a patient who underwent total hip replacement were analysed with standard techniques and all represented by this innovative software tool. Segment pose data were eventually obtained from these different techniques, and were successfully imported and organised in a hierarchical tree within the tool. Skeletal bony segments, prosthesis component models and ligament links were registered successfully to corresponding marker position data for effective three-dimensional animations. These were shown in various combinations, in different views, from different perspectives, according to possible specific research interests. Bioengineering and medical professionals would be much facilitated in the interpretation of the motion analysis measurements necessary in their research fields, and would therefore benefit from this software tool.
40 CFR 60.583 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) as follows: (1) Method 24 for analysis of inks. If nonphotochemically reactive solvents are used in the inks, standard gas chromatographic techniques may be used to identify and quantify these solvents... of an affected facility shall determine the weighted average VOC content of the inks according to the...
40 CFR 60.583 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) as follows: (1) Method 24 for analysis of inks. If nonphotochemically reactive solvents are used in the inks, standard gas chromatographic techniques may be used to identify and quantify these solvents... of an affected facility shall determine the weighted average VOC content of the inks according to the...
40 CFR 60.583 - Test methods and procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) as follows: (1) Method 24 for analysis of inks. If nonphotochemically reactive solvents are used in the inks, standard gas chromatographic techniques may be used to identify and quantify these solvents... of an affected facility shall determine the weighted average VOC content of the inks according to the...
On the Bias-Amplifying Effect of Near Instruments in Observational Studies
ERIC Educational Resources Information Center
Steiner, Peter M.; Kim, Yongnam
2014-01-01
In contrast to randomized experiments, the estimation of unbiased treatment effects from observational data requires an analysis that conditions on all confounding covariates. Conditioning on covariates can be done via standard parametric regression techniques or nonparametric matching like propensity score (PS) matching. The regression or…
An incremental economic analysis of establishing early successional habitat for biodiversity
Slayton W. Hazard-Daniel; Patrick Hiesl; Susan C. Loeb; Thomas J. Straka
2017-01-01
Early successional habitat (ESH) is an important component of natural landscapes and is crucial to maintaining biodiversity. ESH also impacts endangered species. The extent of forest disturbances resulting in ESH has been diminishing, and foresters have developed timber management regimes using standard silvicultural techniques that...
Evaluating the Skill of Students to Determine Soil Morphology Characteristics
ERIC Educational Resources Information Center
Post, Donald F.; Parikh, Sanjai J.; Papp, Rae Ann; Ferriera, Laerta
2006-01-01
Precise and accurate pedon descriptions prepared by field scientists using standard techniques with defined terminology and methodology are essential in describing soil pedons. The accuracy of field measurements generally are defined in terms of how well they agree with objective criteria (e.g., laboratory analysis), such as mechanical analysis…
ERIC Educational Resources Information Center
Loehlin, James H.; Norton, Alexandra P.
1988-01-01
Describes a crystallography experiment using both diffraction-angle and diffraction-intensity information to determine the lattice constant and a lattice independent molecular parameter, while still employing standard X-ray powder diffraction techniques. Details the method, experimental details, and analysis for this activity. (CW)
ERIC Educational Resources Information Center
Okazaki, Shintaro; Alonso Rivas, Javier
2002-01-01
Discussion of research methodology for evaluating the degree of standardization in multinational corporations' online communication strategies across differing cultures focuses on a research framework for cross-cultural comparison of corporate Web pages, applying traditional advertising content study techniques. Describes pre-tests that examined…
Measuring the Impact of Education on Productivity. Working Paper #261.
ERIC Educational Resources Information Center
Plant, Mark; Welch, Finis
A theoretical and conceptual analysis of techniques used to measure education's contribution to productivity is followed by a discussion of the empirical measures implemented by various researchers. Standard methods of growth accounting make sense for simple measurement of factor contributions where outputs are well measured and when factor growth…
The measurement of dissolved gases such as methane, ethane, and ethylene in ground water is important in determining whether intrinsic bioremediation is occurring in a fuel- or solvent-contaminated aquifer. A simple procedure is described for the collection and subsequent analys...
NASA Astrophysics Data System (ADS)
Nardi, F.; Grimaldi, S.; Petroselli, A.
2012-12-01
Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic modelling approach optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a two-dimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events of channel runoff generation, propagation, and overland flow within the floodplain domain. This physically based model removes the need for synthetic design hyetograph and hydrograph estimation, which constitutes the main source of subjective analysis and uncertainty in standard flood-mapping methods. Selected case studies show the results and performance of the proposed procedure with respect to standard event-based approaches.
Effect size calculation in meta-analyses of psychotherapy outcome research.
Hoyt, William T; Del Re, A C
2018-05-01
Meta-analysis of psychotherapy intervention research normally examines differences between treatment groups and some form of comparison group (e.g., wait list control; alternative treatment group). The effect of treatment is normally quantified as a standardized mean difference (SMD). We describe procedures for computing unbiased estimates of the population SMD from sample data (e.g., group Ms and SDs), and provide guidance about a number of complications that may arise related to effect size computation. These complications include (a) incomplete data in research reports; (b) use of baseline data in computing SMDs and estimating the population standard deviation (σ); (c) combining effect size data from studies using different research designs; and (d) appropriate techniques for analysis of data from studies providing multiple estimates of the effect of interest (i.e., dependent effect sizes). Clinical or Methodological Significance of this article: Meta-analysis is a set of techniques for producing valid summaries of existing research. The initial computational step for meta-analyses of research on intervention outcomes involves computing an effect size quantifying the change attributable to the intervention. We discuss common issues in the computation of effect sizes and provide recommended procedures to address them.
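A minimal sketch of the core computation described, the standardized mean difference from group summary statistics with the usual small-sample (Hedges) correction; the summary values are invented, and the baseline-adjustment and dependent-effect-size handling discussed in the article are not shown:

    # Standardized mean difference from summary statistics, with Hedges'
    # small-sample correction. Inputs are group means, SDs and sample sizes.
    import math

    def hedges_g(m_t, sd_t, n_t, m_c, sd_c, n_c):
        pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
        d = (m_t - m_c) / pooled_sd                     # Cohen's d
        correction = 1 - 3 / (4 * (n_t + n_c) - 9)      # small-sample bias correction
        return d * correction

    print(hedges_g(m_t=12.5, sd_t=6.0, n_t=40, m_c=16.0, sd_c=6.5, n_c=42))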
Surface contamination analysis technology team overview
NASA Technical Reports Server (NTRS)
Burns, H. Dewitt
1995-01-01
A team was established which consisted of representatives from NASA (Marshall Space Flight Center and Langley Research Center), Thiokol Corporation, the University of Alabama in Huntsville, AC Engineering, SAIC, Martin Marietta, and Aerojet. The team's purpose was to bring together the appropriate personnel to determine what surface inspection techniques were applicable to multiprogram bonding surface cleanliness inspection. In order to identify appropriate techniques and their sensitivity to various contaminant families, calibration standards were developed. Producing standards included development of consistent low level contamination application techniques. Oxidation was also considered for effect on inspection equipment response. Ellipsometry was used for oxidation characterization. Verification testing was then accomplished to show that selected inspection techniques could detect subject contaminants at levels found to be detrimental to critical bond systems of interest. Once feasibility of identified techniques was shown, selected techniques and instrumentation could then be incorporated into a multipurpose inspection head and integrated with a robot for critical surface inspection. Inspection techniques currently being evaluated include optically stimulated electron emission (OSEE); near infrared (NIR) spectroscopy utilizing fiber optics; Fourier transform infrared (FTIR) spectroscopy; and ultraviolet (UV) fluorescence. Current plans are to demonstrate an integrated system in MSFC's Productivity Enhancement Complex within five years from initiation of this effort in 1992 assuming appropriate funding levels are maintained. This paper gives an overview of work accomplished by the team and future plans.
Virtual lab demonstrations improve students' mastery of basic biology laboratory techniques.
Maldarelli, Grace A; Hartmann, Erica M; Cummings, Patrick J; Horner, Robert D; Obom, Kristina M; Shingles, Richard; Pearlman, Rebecca S
2009-01-01
Biology laboratory classes are designed to teach concepts and techniques through experiential learning. Students who have never performed a technique must be guided through the process, which is often difficult to standardize across multiple lab sections. Visual demonstration of laboratory procedures is a key element in teaching pedagogy. The main goals of the study were to create videos explaining and demonstrating a variety of lab techniques that would serve as teaching tools for undergraduate and graduate lab courses and to assess the impact of these videos on student learning. Demonstrations of individual laboratory procedures were videotaped and then edited with iMovie. Narration for the videos was edited with Audacity. Undergraduate students were surveyed anonymously prior to and following screening to assess the impact of the videos on student lab performance by completion of two Participant Perception Indicator surveys. A total of 203 and 171 students completed the pre- and posttesting surveys, respectively. Statistical analyses were performed to compare student perceptions of knowledge of, confidence in, and experience with the lab techniques before and after viewing the videos. Eleven demonstrations were recorded. Chi-square analysis revealed a significant increase in the number of students reporting increased knowledge of, confidence in, and experience with the lab techniques after viewing the videos. Incorporation of instructional videos as prelaboratory exercises has the potential to standardize techniques and to promote successful experimental outcomes.
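The pre/post comparison described is a standard chi-square test on response counts; a small sketch with invented category counts (only the survey sizes match the abstract):

    # Chi-square test comparing pre- vs post-video survey responses
    # (counts per confidence category are illustrative, not the study's data).
    import numpy as np
    from scipy.stats import chi2_contingency

    #              low   medium  high   self-rated confidence
    pre_counts  = [ 90,     80,    33]   # n = 203
    post_counts = [ 25,     60,    86]   # n = 171

    chi2, p, dof, expected = chi2_contingency(np.array([pre_counts, post_counts]))
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")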
Friedman, S N; Bambrough, P J; Kotsarini, C; Khandanpour, N; Hoggard, N
2012-12-01
Despite the established role of MRI in the diagnosis of brain tumours, histopathological assessment remains the clinically used technique, especially for the glioma group. Relative cerebral blood volume (rCBV) is a dynamic susceptibility-weighted contrast-enhanced perfusion MRI parameter that has been shown to correlate to tumour grade, but assessment requires a specialist and is time consuming. We developed analysis software to determine glioma gradings from perfusion rCBV scans in a manner that is quick, easy and does not require a specialist operator. MRI perfusion data from 47 patients with different histopathological grades of glioma were analysed with custom-designed software. Semi-automated analysis was performed with a specialist and non-specialist operator separately determining the maximum rCBV value corresponding to the tumour. Automated histogram analysis was performed by calculating the mean, standard deviation, median, mode, skewness and kurtosis of rCBV values. All values were compared with the histopathologically assessed tumour grade. A strong correlation between specialist and non-specialist observer measurements was found. Significantly different values were obtained between tumour grades using both semi-automated and automated techniques, consistent with previous results. The raw (unnormalised) data single-pixel maximum rCBV semi-automated analysis value had the strongest correlation with glioma grade. Standard deviation of the raw data had the strongest correlation of the automated analysis. Semi-automated calculation of raw maximum rCBV value was the best indicator of tumour grade and does not require a specialist operator. Both semi-automated and automated MRI perfusion techniques provide viable non-invasive alternatives to biopsy for glioma tumour grading.
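A hedged sketch of the automated histogram statistics described, computed here over a synthetic rCBV map restricted to a stand-in tumour mask rather than real perfusion data:

    # Automated histogram statistics over an rCBV map restricted to a tumour ROI.
    # rcbv and mask are stand-ins for the registered perfusion map and tumour mask.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rcbv = rng.gamma(shape=2.0, scale=1.5, size=(128, 128))   # synthetic rCBV values
    mask = rcbv > np.percentile(rcbv, 90)                     # stand-in tumour ROI

    values = rcbv[mask]
    summary = {
        "max": values.max(),
        "mean": values.mean(),
        "std": values.std(ddof=1),
        "median": np.median(values),
        "skewness": stats.skew(values),
        "kurtosis": stats.kurtosis(values),
    }
    print({k: round(float(v), 2) for k, v in summary.items()})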
The coming paradigm shift: A transition from manual to automated microscopy.
Farahani, Navid; Monteith, Corey E
2016-01-01
The field of pathology has used light microscopy (LM) extensively since the mid-19th century for examination of histological tissue preparations. This technology has remained the foremost tool in use by pathologists even as other fields have undergone a great change in recent years through new technologies. However, as new microscopy techniques are perfected and made available, this reliance on the standard LM will likely begin to change. Advanced imaging involving both diffraction-limited and subdiffraction techniques is bringing nondestructive, high-resolution, molecular-level imaging to pathology. Some of these technologies can produce three-dimensional (3D) datasets from sampled tissues. In addition, block-face/tissue-sectioning techniques are already providing automated, large-scale 3D datasets of whole specimens. These datasets allow pathologists to see an entire sample with all of its spatial information intact, and furthermore allow image analysis such as detection, segmentation, and classification, which are impossible in standard LM. It is likely that these technologies herald a major paradigm shift in the field of pathology.
Histogram analysis for smartphone-based rapid hematocrit determination
Jalal, Uddin M.; Kim, Sang C.; Shim, Joon S.
2017-01-01
A novel and rapid analysis technique using histograms has been proposed for the colorimetric quantification of blood hematocrit. A smartphone-based "Histogram" app for the detection of hematocrit has been developed, integrating the smartphone's embedded camera with a microfluidic chip via a custom-made optical platform. The developed histogram analysis is effective in the automatic detection of the sample channel, including auto-calibration, and can analyze single-channel as well as multi-channel images. Furthermore, the method is advantageous for quantifying blood hematocrit under both uniform and varying optical conditions. The rapid determination of blood hematocrit provides a wealth of information regarding physiological disorders, and the use of such reproducible, cost-effective, and standardized techniques may effectively help with the diagnosis and prevention of a number of human diseases. PMID:28717569
Setting Standards for Reporting and Quantification in Fluorescence-Guided Surgery.
Hoogstins, Charlotte; Burggraaf, Jan Jaap; Koller, Marjory; Handgraaf, Henricus; Boogerd, Leonora; van Dam, Gooitzen; Vahrmeijer, Alexander; Burggraaf, Jacobus
2018-05-29
Intraoperative fluorescence imaging (FI) is a promising technique that could potentially guide oncologic surgeons toward more radical resections and thus improve clinical outcome. Despite the increase in the number of clinical trials, fluorescent agents and imaging systems for intraoperative FI, a standardized approach for imaging system performance assessment and post-acquisition image analysis is currently unavailable. We conducted a systematic, controlled comparison between two commercially available imaging systems using a novel calibration device for FI systems and various fluorescent agents. In addition, we analyzed fluorescence images from previous studies to evaluate signal-to-background ratio (SBR) and determinants of SBR. Using the calibration device, imaging system performance could be quantified and compared, exposing relevant differences in sensitivity. Image analysis demonstrated a profound influence of background noise and the selection of the background on SBR. In this article, we suggest clear approaches for the quantification of imaging system performance assessment and post-acquisition image analysis, attempting to set new standards in the field of FI.
Deng, Xi; Schröder, Simone; Redweik, Sabine; Wätzig, Hermann
2011-06-01
Gel electrophoresis (GE) is a very common analytical technique for proteome research and protein analysis. Despite being developed decades ago, there is still a considerable need to improve its precision. Using the fluorescence of Colloidal Coomassie Blue-stained proteins in the near-infrared (NIR), the major error source caused by the unpredictable background staining is strongly reduced. This result was generalized for various types of detectors. Since GE is a multi-step procedure, standardization of every single step is required. After detailed analysis of all steps, the staining and destaining were identified as the major source of the remaining variation. By employing standardized protocols, pooled percent relative standard deviations of 1.2-3.1% for band intensities were achieved for one-dimensional separations in repetitive experiments. The analysis of variance suggests that the same batch of staining solution should be used for gels of one experimental series to minimize day-to-day variation and to obtain high precision. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Analytical electron microscopy in mineralogy; exsolved phases in pyroxenes
Nord, G.L.
1982-01-01
Analytical scanning transmission electron microscopy has been successfully used to characterize the structure and composition of lamellar exsolution products in pyroxenes. At operating voltages of 100 and 200 keV, microanalytical techniques of X-ray energy analysis, convergent-beam electron diffraction, and lattice imaging have been used to chemically and structurally characterize exsolution lamellae only a few unit cells wide. Quantitative X-ray energy analysis using ratios of peak intensities has been adopted for the U.S. Geological Survey AEM in order to study the compositions of exsolved phases and changes in compositional profiles as a function of time and temperature. The quantitative analysis procedure involves 1) removal of instrument-induced background, 2) reduction of contamination, and 3) measurement of correction factors obtained from a wide range of standard compositions. The peak-ratio technique requires that the specimen thickness at the point of analysis be thin enough to make absorption corrections unnecessary (i.e., to satisfy the "thin-foil criteria"). In pyroxenes, the calculated "maximum thicknesses" range from 130 to 1400 nm for the ratios Mg/Si, Fe/Si, and Ca/Si; these "maximum thicknesses" have been contoured in pyroxene composition space as a guide during analysis. Analytical spatial resolutions of 50-100 nm have been achieved in AEM at 200 keV from the composition-profile studies, and analytical reproducibility in AEM from homogeneous pyroxene standards is ±1.5 mol% endmember. © 1982.
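The peak-ratio quantification described is commonly formalized as the Cliff-Lorimer relation, C_A/C_B = k_AB (I_A/I_B), valid when the thin-foil criterion holds; a minimal sketch with invented intensities and a hypothetical k-factor, not the USGS calibration values:

    # Cliff-Lorimer thin-film quantification: C_A / C_B = k_AB * (I_A / I_B).
    # The k-factor and peak intensities below are illustrative.
    def concentration_ratio(i_a, i_b, k_ab):
        return k_ab * (i_a / i_b)

    k_mg_si = 1.1   # hypothetical k-factor measured from standards
    print(concentration_ratio(i_a=4200.0, i_b=9100.0, k_ab=k_mg_si))   # Mg/Si ratio ~0.51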
Cross-Calibration of Secondary Electron Multiplier in Noble Gas Analysis
NASA Astrophysics Data System (ADS)
Santato, Alessandro; Hamilton, Doug; Deerberg, Michael; Wijbrans, Jan; Kuiper, Klaudia; Bouman, Claudia
2015-04-01
The latest generation of multi-collector noble gas mass spectrometers has decisively improved the precision of isotopic ratio analysis [1, 2] and helped the scientific community to address new questions [3]. Measuring numerous isotopes simultaneously has two significant advantages: firstly, any fluctuations in signal intensity have no effect on the isotope ratio and, secondly, the analysis time is reduced. This particular point becomes very important in static vacuum mass spectrometry, where during the analysis the signal intensity decays and at the same time the background increases. However, when multi-collector analysis is utilized, it is necessary to pay special attention to the cross-calibration of the detectors. This is a key point in order to obtain accurate and reproducible isotopic ratios. In isotope ratio mass spectrometry, depending on the type of detector (i.e. Faraday or Secondary Electron Multiplier, SEM), analytical technique (TIMS, MC-ICP-MS or IRMS) and isotope system of interest, several techniques are currently applied to cross-calibrate the detectors. Specifically, the gain of the Faraday cups is generally stable and only the associated amplifier must be calibrated. For example, on the Thermo Scientific instrument control systems, the 10^11 and 10^12 ohm amplifiers can easily be calibrated through a fully software-controlled procedure by inputting a constant electric signal to each amplifier sequentially [4]. On the other hand, the yield of the SEMs can drift by up to 0.2% / hour and other techniques such as peak hopping, standard-sample bracketing and multi-dynamic measurement must be used. Peak hopping allows the detectors to be calibrated by measuring an ion beam of constant intensity across the detectors, whereas standard-sample bracketing corrects the drift of the detectors through the analysis of a reference standard of a known isotopic ratio. If at least one isotopic pair of the sample is known, multi-dynamic measurement can be used; in this case the known isotopic ratio is measured on different pairs of detectors and the true value of the isotopic ratio of interest can be determined by a specific equation. In noble gas analysis, due to the decay of the ion beam during the measurement as well as the special isotopic systematics of the gases themselves, the cross-calibration of the SEM using these techniques becomes more complex and other methods should be investigated. In this work we present a comparison between different approaches to cross-calibrate multiple SEMs in noble gas analysis in order to evaluate the most suitable and reliable method. References: [1] Mark et al. (2009) Geochem. Geophys. Geosyst. 10, 1-9. [2] Mark et al. (2011) Geochim. Cosmochim. Acta 75, 7494-7501. [3] Phillips and Matchan (2013) Geochimica et Cosmochimica Acta 121, 229-239. [4] Koornneef et al. (2014) Journal of Analytical Atomic Spectrometry 28, 749-754.
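Standard-sample bracketing, one of the cross-calibration options listed above, corrects detector-yield drift by interpolating the bias measured on reference-standard analyses run before and after the unknown; a minimal sketch assuming linear drift, with invented ratios:

    # Standard-sample bracketing: correct a sample isotope ratio for SEM yield
    # drift using reference-standard analyses run before and after it.
    # All ratios are illustrative.
    def bracket_correct(sample_ratio, std_before, std_after, std_true):
        interpolated_std = 0.5 * (std_before + std_after)   # drift assumed linear in time
        bias = interpolated_std / std_true
        return sample_ratio / bias

    print(bracket_correct(sample_ratio=0.18350,
                          std_before=0.18120, std_after=0.18160, std_true=0.18100))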
A randomized trial of specialized versus standard neck physiotherapy in cervical dystonia.
Counsell, Carl; Sinclair, Hazel; Fowlie, Jillian; Tyrrell, Elaine; Derry, Natalie; Meager, Peter; Norrie, John; Grosset, Donald
2016-02-01
Anecdotal reports suggested that a specialized physiotherapy technique developed in France (the Bleton technique) improved primary cervical dystonia. We evaluated the technique in a randomized trial. A parallel-group, single-blind, two-centre randomized trial compared the specialized outpatient physiotherapy programme given by trained physiotherapists up to once a week for 24 weeks with standard physiotherapy advice for neck problems. Randomization was by a central telephone service. The primary outcome was the change in the total Toronto Western Spasmodic Torticollis Rating (TWSTR) scale, measured before any botulinum injections that were due, between baseline and 24 weeks evaluated by a clinician masked to treatment. Analysis was by intention-to-treat. 110 patients were randomized (55 in each group) with 24 week outcomes available for 84. Most (92%) were receiving botulinum toxin injections. Physiotherapy adherence was good. There was no difference between the groups in the change in TWSTR score over 24 weeks (mean adjusted difference 1.44 [95% CI -3.63, 6.51]) or 52 weeks (mean adjusted difference 2.47 [-2.72, 7.65]) nor in any of the secondary outcome measures (Cervical Dystonia Impact Profile-58, clinician and patient-rated global impression of change, mean botulinum toxin dose). Both groups showed large sustained improvements compared to baseline in the TWSTR, most of which occurred in the first four weeks. There were no major adverse events. Subgroup analysis suggested a centre effect. There was no statistically or clinically significant benefit from the specialized physiotherapy compared to standard neck physiotherapy advice but further trials are warranted. Copyright © 2015 Elsevier Ltd. All rights reserved.
In vivo testing for gold nanoparticle toxicity.
Simpson, Carrie A; Huffman, Brian J; Cliffel, David E
2013-01-01
A technique for measuring the toxicity of nanomaterials using a murine model is described. Blood samples are collected via submandibular bleeding while urine samples are collected on cellophane sheets. Both biosamples are then analyzed by inductively coupled plasma optical emission spectroscopy (ICP-OES) for nanotoxicity. Blood samples are further tested for immunological response using a standard Coulter counter. The major organs of interest for filtration are also digested and analyzed via ICP-OES, producing useful information regarding target specificity of the nanomaterial of interest. Collection of the biosamples and analysis afterward is detailed, and the operation of the technique is described and illustrated by analysis of the nanotoxicity of an injection of a modified tiopronin monolayer-protected cluster.
NASA Astrophysics Data System (ADS)
Casasent, David; Telfer, Brian
1988-02-01
The storage capacity, noise performance, and synthesis of associative memories for image analysis are considered. Associative memory synthesis is shown to be very similar to that of linear discriminant functions used in pattern recognition. These lead to new associative memories and new associative memory synthesis and recollection vector encodings. Heteroassociative memories are emphasized in this paper, rather than autoassociative memories, since heteroassociative memories provide scene analysis decisions, rather than merely enhanced output images. The analysis of heteroassociative memories has been given little attention. Heteroassociative memory performance and storage capacity are shown to be quite different from those of autoassociative memories, with much more dependence on the recollection vectors used and less dependence on M/N. This allows several different and preferable synthesis techniques to be considered for associative memories. These new associative memory synthesis techniques and new techniques to update associative memories are included. We also introduce a new SNR performance measure that is preferable to conventional noise standard deviation ratios.
Braunecker, S; Douglas, B; Hinkelbein, J
2015-07-01
Since astronauts are selected carefully, are usually young, and are intensively observed before and during training, relevant medical problems are rare. Nevertheless, there is a certain risk for a cardiac arrest in space requiring cardiopulmonary resuscitation (CPR). Up to now, there are 5 known techniques to perform CPR in microgravity. The aim of the present study was to analyze different techniques for CPR in microgravity with respect to CPR quality. To identify relevant publications on CPR quality in microgravity, a systematic analysis with defined search criteria was performed in the PubMed database (http://www.pubmed.com). For analysis, the keywords ("reanimation" or "CPR" or "resuscitation") and ("space" or "microgravity" or "weightlessness") and the specific names of the techniques ("Standard-technique" or "Straddling-manoeuvre" or "Reverse-bear-hug-technique" or "Evetts-Russomano-technique" or "Hand-stand-technique") were used. To compare quality and effectiveness of different techniques, we used the compression product (CP), a mathematical estimation for cardiac output. Using the predefined keywords for literature search, 4 different publications were identified (performed during parabolic flight or under simulated conditions on Earth) dealing with CPR efforts in microgravity and giving specific numbers. No study was performed under real-space conditions. Regarding compression depth, the handstand (HS) technique as well as the reverse bear hug (RBH) technique met parameters of the guidelines for CPR in 1G environments best (HS ratio, 0.91 ± 0.07; RBH ratio, 0.82 ± 0.13). Concerning compression rate, 4 of 5 techniques reached the required compression rate (ratio: HS, 1.08 ± 0.11; Evetts-Russomano [ER], 1.01 ± 0.06; standard side straddle, 1.00 ± 0.03; and straddling maneuver, 1.03 ± 0.12). The RBH method did not meet the required criteria (0.89 ± 0.09). The HS method showed the highest cardiac output (69.3% above the required CP), followed by the ER technique (33.0% above the required CP). Concerning CPR quality, the HS technique seems to be the most effective for treating a cardiac arrest. In some environmental conditions where this technique cannot be used, the ER technique is a good alternative because CPR quality is only slightly lower. Copyright © 2015 Elsevier Inc. All rights reserved.
Proceedings of the 21st DOE/NRC Nuclear Air Cleaning Conference; Sessions 1--8
DOE Office of Scientific and Technical Information (OSTI.GOV)
First, M.W.
1991-02-01
Separate abstracts have been prepared for the papers presented at the meeting on nuclear facility air cleaning technology in the following specific areas of interest: air cleaning technologies for the management and disposal of radioactive wastes; Canadian waste management program; radiological health effects models for nuclear power plant accident consequence analysis; filter testing; US standard codes on nuclear air and gas treatment; European community nuclear codes and standards; chemical processing off-gas cleaning; incineration and vitrification; adsorbents; nuclear codes and standards; mathematical modeling techniques; filter technology; safety; containment system venting; and nuclear air cleaning programs around the world. (MB)
On statistical inference in time series analysis of the evolution of road safety.
Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora
2013-11-01
Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations on the applicability of standard methods of statistical inference, which leads to an under- or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident-occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether they are linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches, are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.
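As a minimal illustration of the serial-dependency problem discussed above (a Python sketch with synthetic data, not the paper's road safety series or models), an ordinary least-squares trend fit to autocorrelated counts can be screened with a Durbin-Watson statistic; values well below 2 warn that the naive standard errors are misleading:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual accident counts with a linear trend and AR(1) noise
n_years = 30
t = np.arange(n_years)
noise = np.zeros(n_years)
for i in range(1, n_years):
    noise[i] = 0.7 * noise[i - 1] + rng.normal(scale=20.0)
y = 1200.0 - 15.0 * t + noise

# Ordinary least-squares trend fit (the "standard" approach)
X = np.column_stack([np.ones(n_years), t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Durbin-Watson statistic: values well below 2 flag positive serial
# dependency, i.e. the naive OLS standard errors are too small.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(f"trend estimate: {beta[1]:.2f} per year, Durbin-Watson: {dw:.2f}")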
Slonecker, E. Terrence; Fisher, Gary B.
2014-01-01
Operational problems with site access and information, XRF instrument operation, and imagery collections hampered the effective data collection and analysis process. Of the 24 sites imaged and analyzed, 17 appeared to be relatively clean with no discernible metal contamination, hydrocarbons, or asbestos in the soil. None of the samples for the sites in Louisiana had any result exceeding the appropriate industrial or residential standard for arsenic or lead. One site in South Carolina (North Street Dump) had two samples that exceeded the residential standard for lead. One site in Texas (Cadiz Street), and four sites in Florida (210 North 12th Street, Encore Retail Site, Clearwater Auto, and 22nd Street Mixed Use) were found to have some level of residual metal contamination above the applicable residential or commercial Risk-Based Concentration (RBC) standard. Three of the Florida sites showing metal contamination also showed a pattern of vegetation stress based on standard vegetation analysis techniques.
Report of the panel on international programs
NASA Technical Reports Server (NTRS)
Anderson, Allen Joel; Fuchs, Karl W.; Ganeka, Yasuhiro; Gaur, Vinod; Green, Andrew A.; Siegfried, W.; Lambert, Anthony; Rais, Jacub; Reighber, Christopher; Seeger, Herman
1991-01-01
The panel recommends that NASA participate in and take an active role in the continuous monitoring of existing regional networks, the realization of high resolution geopotential and topographic missions, the establishment of interconnection of the reference frames as defined by different space techniques, the development and implementation of automation for all ground-to-space observing systems, calibration and validation experiments for measuring techniques and data, the establishment of international space-based networks for real-time transmission of high density space data in standardized formats, tracking and support for non-NASA missions, and the extension of state-of-the-art observing and analysis techniques to developing nations.
NASA Astrophysics Data System (ADS)
Morton, Kenneth D., Jr.; Torrione, Peter A.; Collins, Leslie
2011-05-01
Laser induced breakdown spectroscopy (LIBS) can provide rapid, minimally destructive, chemical analysis of substances with the benefit of little to no sample preparation. Therefore, LIBS is a viable technology for the detection of substances of interest in near real-time fielded remote sensing scenarios. Of particular interest to military and security operations is the detection of explosive residues on various surfaces. It has been demonstrated that LIBS is capable of detecting such residues, however, the surface or substrate on which the residue is present can alter the observed spectra. Standard chemometric techniques such as principal components analysis and partial least squares discriminant analysis have previously been applied to explosive residue detection, however, the classification techniques developed on such data perform best against residue/substrate pairs that were included in model training but do not perform well when the residue/substrate pairs are not in the training set. Specifically residues in the training set may not be correctly detected if they are presented on a previously unseen substrate. In this work, we explicitly model LIBS spectra resulting from the residue and substrate to attempt to separate the response from each of the two components. This separation process is performed jointly with classifier design to ensure that the classifier that is developed is able to detect residues of interest without being confused by variations in the substrates. We demonstrate that the proposed classification algorithm provides improved robustness to variations in substrate compared to standard chemometric techniques for residue detection.
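For orientation only, the sketch below shows the two standard chemometric baselines named above, PCA and PLS-DA, applied to hypothetical spectra with scikit-learn; it is not the substrate-robust classifier proposed in this work, and the array shapes and labels are placeholders:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Hypothetical LIBS data: rows are spectra, columns are wavelength channels;
# labels are 1 for residue-present, 0 for clean substrate.
rng = np.random.default_rng(1)
spectra = rng.random((60, 500))
labels = rng.integers(0, 2, size=60)

# PCA for unsupervised dimensionality reduction / visualization
pca = PCA(n_components=5).fit(spectra)
print("variance explained by 5 PCs:", pca.explained_variance_ratio_.sum())

# PLS-DA: partial least squares regression against the class label,
# thresholded at 0.5 to give a discriminant decision.
pls = PLSRegression(n_components=5).fit(spectra, labels)
pred = (pls.predict(spectra).ravel() > 0.5).astype(int)
print("training accuracy:", np.mean(pred == labels))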
MacLean, Adam L; Harrington, Heather A; Stumpf, Michael P H; Byrne, Helen M
2016-01-01
The last decade has seen an explosion in models that describe phenomena in systems medicine. Such models are especially useful for studying signaling pathways, such as the Wnt pathway. In this chapter we use the Wnt pathway to showcase current mathematical and statistical techniques that enable modelers to gain insight into (models of) gene regulation and generate testable predictions. We introduce a range of modeling frameworks, but focus on ordinary differential equation (ODE) models since they remain the most widely used approach in systems biology and medicine and continue to offer great potential. We present methods for the analysis of a single model, comprising applications of standard dynamical systems approaches such as nondimensionalization, steady state, asymptotic and sensitivity analysis, and more recent statistical and algebraic approaches to compare models with data. We present parameter estimation and model comparison techniques, focusing on Bayesian analysis and coplanarity via algebraic geometry. Our intention is that this (non-exhaustive) review may serve as a useful starting point for the analysis of models in systems medicine.
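As a toy example of the ODE workflow described above (numerical steady state plus a finite-difference sensitivity), the sketch below integrates a minimal production/saturable-degradation model; it is not the chapter's Wnt model, and all parameter values are hypothetical:

import numpy as np
from scipy.integrate import solve_ivp

# Toy activation-degradation model (not the chapter's Wnt model):
# dB/dt = s - d * B / (k + B): constant production with saturable,
# destruction-complex-like degradation of a species B.
def rhs(t, y, s, d, k):
    B = y[0]
    return [s - d * B / (k + B)]

params = dict(s=1.0, d=4.0, k=0.5)  # hypothetical parameter values

# Numerical steady state: integrate long enough for transients to decay
sol = solve_ivp(rhs, (0.0, 50.0), [0.0], args=tuple(params.values()),
                rtol=1e-8, atol=1e-10)
B_ss = sol.y[0, -1]

# Local sensitivity of the steady state to the production rate s,
# via a simple finite-difference perturbation
eps = 1e-3
sol_p = solve_ivp(rhs, (0.0, 50.0), [0.0],
                  args=(params["s"] + eps, params["d"], params["k"]),
                  rtol=1e-8, atol=1e-10)
sens = (sol_p.y[0, -1] - B_ss) / eps
print(f"steady state B = {B_ss:.3f}, dB_ss/ds = {sens:.3f}")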
NASA Astrophysics Data System (ADS)
Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro
2003-06-01
In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
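To make the contrast concrete, the sketch below applies the GLUE recipe (prior sampling, a likelihood measure, a behavioral cut, likelihood-weighted predictions) to a deliberately simple stand-in model; it is not the Zimapan groundwater model, and the likelihood measure and acceptance threshold are illustrative choices only:

import numpy as np

rng = np.random.default_rng(42)

# Toy "model": predicted heads are a linear function of log-conductivity;
# this stands in for a full groundwater flow model (illustration only).
def toy_model(log10_K, x):
    return 100.0 - 8.0 * (log10_K + 6.5) - 0.05 * x

obs_x = np.linspace(0.0, 100.0, 10)
obs_heads = toy_model(-6.5, obs_x) + rng.normal(scale=0.3, size=obs_x.size)

# GLUE: sample parameters from a prior range, score each set with a
# likelihood measure (here an inverse error variance), keep "behavioral"
# sets above a threshold, and weight predictions by likelihood.
samples = rng.uniform(-8.0, -5.0, size=5000)          # prior on log10(K)
errors = np.array([np.mean((toy_model(k, obs_x) - obs_heads) ** 2)
                   for k in samples])
likelihood = 1.0 / errors
behavioral = likelihood > np.quantile(likelihood, 0.90)  # acceptance cut

weights = likelihood[behavioral] / likelihood[behavioral].sum()
pred = np.array([toy_model(k, 50.0) for k in samples[behavioral]])
mean_pred = np.sum(weights * pred)
lo, hi = np.percentile(pred, [5, 95])   # unweighted bounds, for simplicity
print(f"weighted head at x=50: {mean_pred:.2f} m (90% range {lo:.2f}-{hi:.2f})")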
DART-MS: A New Analytical Technique for Forensic Paint Analysis.
Marić, Mark; Marano, James; Cody, Robert B; Bridge, Candice
2018-06-05
Automotive paint evidence is one of the most significant forms of evidence obtained in automotive-related incidents. Therefore, the analysis of automotive paint evidence is imperative in forensic casework. Most analytical schemes for automotive paint characterization involve optical microscopy, followed by infrared spectroscopy and pyrolysis-gas chromatography mass spectrometry (py-GCMS) if required. The main drawback with py-GCMS, aside from its destructive nature, is that this technique is relatively time intensive in comparison to other techniques. Direct analysis in real time coupled to time-of-flight mass spectrometry (DART-TOFMS) may provide an alternative to py-GCMS, as the rapidity of analysis and minimal sample preparation affords a significant advantage. In this study, automotive clear coats from four vehicles were characterized by DART-TOFMS and a standard py-GCMS protocol. Principal component analysis was utilized to interpret the resultant data and suggested the two techniques provided analogous sample discrimination. Moreover, in some instances DART-TOFMS was able to identify components not observed by py-GCMS and vice versa, which indicates that the two techniques may provide complementary information. Additionally, a thermal desorption/pyrolysis DART-TOFMS methodology was also evaluated to characterize the intact paint chips from the vehicles to ascertain if the linear temperature gradient provided additional discriminatory information. All the paint samples could be discriminated based on the distinctive thermal desorption plots afforded by this technique. On the basis of the results, DART-TOFMS may provide an additional tool to the forensic paint examiner.
O'Rourke, Matthew B; Padula, Matthew P
2016-01-01
Since emerging in the late 19th century, formaldehyde fixation has become a standard method for preservation of tissues from clinical samples. The advantage of formaldehyde fixation is that fixed tissues can be stored at room temperature for decades without concern for degradation. This has led to the generation of huge tissue banks containing thousands of clinically significant samples. Here we review techniques for proteomic analysis of formalin-fixed, paraffin-embedded (FFPE) tissue samples with a specific focus on the methods used to extract and break formaldehyde crosslinks. We also discuss an error-of-interpretation associated with the technique known as "antigen retrieval." We have discovered that this term has been mistakenly applied to two disparate molecular techniques; therefore, we argue that a terminology change is needed to ensure accurate reporting of experimental results. Finally, we suggest that more investigation is required to fully understand the process of formaldehyde fixation and its subsequent reversal.
Dabarakis, Nikolaos N; Alexander, Veis; Tsirlis, Anastasios T; Parissis, Nikolaos A; Nikolaos, Maroufidis
2007-01-01
To clinically evaluate the jet injection Injex (Rösch AG Medizintechnik) using 2 different anesthetic solutions, and to compare the jet injection and the standard needle injection techniques. Of the 32 patients in the study, 10 received mepivacaine 3% anesthetic solution by means of the jet injection technique, while the remaining 22 patients received lidocaine 2% with epinephrine 1:80,000 by the same method. The 14 patients in whom pulp anesthesia was achieved were selected for an additional evaluation of the pulp reaction using standard needle injection anesthesia. The differences between the 2 compounds with Injex were statistically evaluated by means of independent-samples t test analysis. The differences between subgroups receiving both jet injection and needle injection anesthesia were evaluated by means of paired t test analysis. The administration of mepivacaine 3% using Injex did not achieve pulp anesthesia in any of the 10 patients, although the soft tissue anesthesia was successful. The administration of lidocaine with epinephrine using Injex resulted in pulp anesthesia in only 14 patients; soft tissue anesthesia was observed in all patients of this group. There was no statistically significant difference between Injex and the needle injection technique in onset of anesthesia. However, the duration of anesthesia was significantly longer for the needle infiltration group than for the Injex injection group. The anesthetic solution should be combined with a vasoconstriction agent when the Injex technique is implemented.
Chapiro, Julius; Wood, Laura D.; Lin, MingDe; Duran, Rafael; Cornish, Toby; Lesage, David; Charu, Vivek; Schernthaner, Rüdiger; Wang, Zhijun; Tacher, Vania; Savic, Lynn Jeanette; Kamel, Ihab R.
2014-01-01
Purpose To evaluate the diagnostic performance of three-dimensional (3D) quantitative enhancement-based and diffusion-weighted volumetric magnetic resonance (MR) imaging assessment of hepatocellular carcinoma (HCC) lesions in determining the extent of pathologic tumor necrosis after transarterial chemoembolization (TACE). Materials and Methods This institutional review board–approved retrospective study included 17 patients with HCC who underwent TACE before surgery. Semiautomatic 3D volumetric segmentation of target lesions was performed at the last MR examination before orthotopic liver transplantation or surgical resection. The amount of necrotic tumor tissue on contrast material–enhanced arterial phase MR images and the amount of diffusion-restricted tumor tissue on apparent diffusion coefficient (ADC) maps were expressed as a percentage of the total tumor volume. Visual assessment of the extent of tumor necrosis and tumor response according to European Association for the Study of the Liver (EASL) criteria was performed. Pathologic tumor necrosis was quantified by using slide-by-slide segmentation. Correlation analysis was performed to evaluate the predictive values of the radiologic techniques. Results At histopathologic examination, the mean percentage of tumor necrosis was 70% (range, 10%–100%). Both 3D quantitative techniques demonstrated a strong correlation with tumor necrosis at pathologic examination (R² = 0.9657 and R² = 0.9662 for quantitative EASL and quantitative ADC, respectively) and a strong intermethod agreement (R² = 0.9585). Both methods showed a significantly lower discrepancy with pathologically measured necrosis (residual standard error [RSE] = 6.38 and 6.33 for quantitative EASL and quantitative ADC, respectively), when compared with non-3D techniques (RSE = 12.18 for visual assessment). Conclusion This radiologic-pathologic correlation study demonstrates the diagnostic accuracy of 3D quantitative MR imaging techniques in identifying pathologically measured tumor necrosis in HCC lesions treated with TACE. © RSNA, 2014 Online supplemental material is available for this article. PMID:25028783
A novel CT acquisition and analysis technique for breathing motion modeling
NASA Astrophysics Data System (ADS)
Low, Daniel A.; White, Benjamin M.; Lee, Percy P.; Thomas, David H.; Gaudio, Sergio; Jani, Shyam S.; Wu, Xiao; Lamb, James M.
2013-06-01
To report on a novel technique for providing artifact-free quantitative four-dimensional computed tomography (4DCT) image datasets for breathing motion modeling. Commercial clinical 4DCT methods have difficulty managing irregular breathing. The resulting images contain motion-induced artifacts that can distort structures and inaccurately characterize breathing motion. We have developed a novel scanning and analysis method for motion-correlated CT that utilizes standard repeated fast helical acquisitions, a simultaneous breathing surrogate measurement, deformable image registration, and a published breathing motion model. The motion model differs from the CT-measured motion by an average of 0.65 mm, indicating the precision of the motion model. The integral of the divergence of one of the motion model parameters is predicted to be a constant 1.11 and is found in this case to be 1.09, indicating the accuracy of the motion model. The proposed technique shows promise for providing motion-artifact free images at user-selected breathing phases, accurate Hounsfield units, and noise characteristics similar to non-4D CT techniques, at a patient dose similar to or less than current 4DCT techniques.
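Assuming the published motion model has the commonly cited form x(v, f) = x0 + alpha*v + beta*f, with v the surrogate tidal volume and f the airflow (an assumption for illustration; the paper should be consulted for the exact model), the per-voxel parameters can be fit by least squares from the registered displacements, as in the hypothetical sketch below:

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical surrogate measurements and registered displacements for
# one tissue element over repeated fast helical scans
n_scans = 25
v = rng.uniform(0.0, 0.8, n_scans)            # tidal volume (L)
f = rng.uniform(-0.5, 0.5, n_scans)           # airflow (L/s)
true_alpha, true_beta = 12.0, 3.0             # mm per L, mm per (L/s)
x = 2.0 + true_alpha * v + true_beta * f + rng.normal(scale=0.5, size=n_scans)

# Least-squares fit of the assumed model x = x0 + alpha*v + beta*f
A = np.column_stack([np.ones(n_scans), v, f])
(x0, alpha, beta), *_ = np.linalg.lstsq(A, x, rcond=None)
residual_rms = np.sqrt(np.mean((A @ [x0, alpha, beta] - x) ** 2))
print(f"alpha = {alpha:.2f} mm/L, beta = {beta:.2f} mm/(L/s), "
      f"rms = {residual_rms:.2f} mm")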
Locating Stardust-like Particles in Aerogel Using X-Ray Techniques
NASA Technical Reports Server (NTRS)
Jurewicz, A. J. G.; Jones, S. M.; Tsapin, A.; Mih, D. T.; Connolly, H. C., Jr.; Graham, G. A.
2003-01-01
Silica aerogel is the material that the spacecraft STARDUST is using to collect interstellar and cometary silicates. Anticipating the return of the samples to Earth in January of 2006, many individual investigators, and especially the investigators in NASA's SRLIDAP program, are studying means of both in situ analysis of particles and particle extraction. To help individual PIs with extraction of particles from aerogel in their own laboratories, we are exploring the use of standard laboratory x-ray equipment and commercial techniques for precisely locating specific particles in aerogel. We approached the evaluation of commercial x-ray techniques as follows. First, we determined the most appropriate detector for use with aerogel and particulates. Then, we compared and contrasted techniques useful for university laboratories.
The measurement of linear frequency drift in oscillators
NASA Astrophysics Data System (ADS)
Barnes, J. A.
1985-04-01
A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a straight line; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regression techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
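A minimal Python sketch of one of the estimators listed above, regressing frequency on time and then checking residual whiteness with the lag-1 autocorrelation (synthetic white-noise data; real oscillator noise is rarely this well behaved):

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily fractional-frequency data with a linear drift plus
# white frequency noise
days = np.arange(200)
drift_per_day = 1e-13
y = 5e-12 + drift_per_day * days + rng.normal(scale=5e-13, size=days.size)

# Estimate the drift rate by regressing frequency on time
A = np.column_stack([np.ones(days.size), days])
(offset, drift_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ [offset, drift_hat]

# Simple whiteness check on the residuals: the lag-1 autocorrelation of
# white residuals should be near zero (roughly within 2/sqrt(N)).
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(f"drift estimate: {drift_hat:.2e} per day, lag-1 autocorr: {r1:+.2f}")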
Modeling and Hazard Analysis Using STPA
NASA Astrophysics Data System (ADS)
Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka
2010-09-01
A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness on software-intensive systems, which describe most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios including those in which no failures occur but the problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analysts than traditional fault tree analysis. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process in a safety-driven design process where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis and following the NASA standards for safety-critical systems, the results of our experimental application of STPA can be compared with these more traditional safety engineering approaches in terms of the problems identified and the resources required to use it.
Qualitative computer aided evaluation of dental impressions in vivo.
Luthardt, Ralph G; Koch, Rainer; Rudolph, Heike; Walter, Michael H
2006-01-01
Clinical investigations dealing with the precision of different impression techniques are rare. The objective of the present study was to develop and evaluate a procedure for the qualitative analysis of the three-dimensional impression precision based on an established in-vitro procedure. The null hypothesis to be tested was that the precision of impressions does not differ depending on the impression technique used (single-step, monophase, and two-step techniques) and on clinical variables. Digital surface data of patients' teeth prepared for crowns were gathered from standardized manufactured master casts after impressions with three different techniques were taken in a randomized order. Data-sets were analyzed for each patient in comparison with the single-step impression chosen as the reference. The qualitative analysis was limited to data-points within the 99.5%-range. Based on the color-coded representation, areas with maximum deviations were determined (preparation margin and the mantle and occlusal surface). To qualitatively analyze the precision of the impression techniques, the hypothesis was tested in linear models for repeated measures factors (p < 0.05). For the positive 99.5% deviations no variables with significant influence were determined in the statistical analysis. In contrast, the impression technique and the position of the preparation margin significantly influenced the negative 99.5% deviations. The influence of clinical parameters on the deviations between impression techniques can be determined reliably using the 99.5th percentile of the deviations. An analysis regarding the areas with maximum deviations showed high clinical relevance. The preparation margin was pointed out as the weak spot of impression taking.
Schoeller, D A; Colligan, A S; Shriver, T; Avak, H; Bartok-Olson, C
2000-09-01
The doubly labeled water method is commonly used to measure total energy expenditure in free-living subjects. The method, however, requires accurate and precise deuterium abundance determinations, which can be laborious. The aim of this study was to evaluate a fully automated, high-throughput, chromium reduction technique for the measurement of deuterium abundances in physiological fluids. The chromium technique was compared with an off-line zinc bomb reduction technique and also subjected to test-retest analysis. Analysis of international water standards demonstrated that the chromium technique was accurate and had a within-day precision of <1 per thousand. Addition of organic matter to water samples demonstrated that the technique was sensitive to interference at levels between 2 and 5 g l^-1. Physiological samples could be analyzed without this interference, plasma by 10,000 Da exclusion filtration, saliva by sedimentation and urine by decolorizing with carbon black. Chromium reduction of urine specimens from doubly labeled water studies indicated no bias relative to zinc reduction with a mean difference in calculated energy expenditure of -0.2 +/- 3.9%. Blinded reanalysis of urine specimens from a second doubly labeled water study demonstrated a test-retest coefficient of variation of 4%. The chromium reduction method was found to be a rapid, accurate and precise method for the analysis of urine specimens from doubly labeled water. Copyright 2000 John Wiley & Sons, Ltd.
The gravitational self-interaction of the Earth's tidal bulge
NASA Astrophysics Data System (ADS)
Norsen, Travis; Dreese, Mackenzie; West, Christopher
2017-09-01
According to a standard, idealized analysis, the Moon would produce a 54 cm equilibrium tidal bulge in the Earth's oceans. This analysis omits many factors (beyond the scope of the simple idealized model) that dramatically influence the actual height and timing of the tides at different locations, but it is nevertheless an important foundation for more detailed studies. Here, we show that the standard analysis also omits another factor—the gravitational interaction of the tidal bulge with itself—which is entirely compatible with the simple, idealized equilibrium model and which produces a surprisingly non-trivial correction to the predicted size of the tidal bulge. Our analysis uses ideas and techniques that are familiar from electrostatics, and should thus be of interest to teachers and students of undergraduate E&M, Classical Mechanics (and/or other courses that cover the tides), and geophysics courses that cover the closely related topic of Earth's equatorial bulge.
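For reference, the idealized equilibrium bulge quoted above follows from the degree-2 lunar tidal potential, and the self-attraction of a uniform water layer on a rigid Earth rescales it by a simple factor (stated here in LaTeX as a standard textbook-style estimate; the paper's own derivation governs the exact correction):

\[
h_0 \;=\; \frac{3}{2}\,\frac{M_{\mathrm{Moon}}}{M_\oplus}\left(\frac{R_\oplus}{d}\right)^{3} R_\oplus \;\approx\; 0.54\ \mathrm{m},
\qquad
h \;=\; \frac{h_0}{1 - \dfrac{3\rho_w}{5\bar\rho}} \;\approx\; 1.12\,h_0 \;\approx\; 0.60\ \mathrm{m},
\]

where the amplification follows from the equipotential condition \( g\,h = V_{\mathrm{tide}} + \tfrac{3}{5}(\rho_w/\bar\rho)\,g\,h \), with \( \rho_w \) the ocean density and \( \bar\rho \) the Earth's mean density.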
NASA Astrophysics Data System (ADS)
Verma, Shivcharan; Mohanty, Biraja P.; Singh, Karn P.; Kumar, Ashok
2018-02-01
The proton beam facility at the variable energy cyclotron (VEC), Panjab University, Chandigarh, India, is being used for Particle Induced X-ray Emission (PIXE) analysis of different environmental, biological and industrial samples. The PIXE method, however, does not provide any information on low-Z elements such as carbon, nitrogen, oxygen and fluorine. As a result of the increased need for rapid and multi-elemental analysis of biological and environmental samples, the PIXE facility was upgraded and standardized to facilitate simultaneous measurements using PIXE and Proton Elastic Scattering Analysis (PESA). Both PIXE and PESA techniques were calibrated and standardized individually. Finally, the setup was tested by carrying out simultaneous PIXE and PESA measurements using a 2 mm diameter proton beam of 2.7 MeV on a few multilayered thin samples. The results obtained show excellent agreement between PIXE and PESA measurements and confirm adequate sensitivity and precision of the experimental setup.
Large data series: Modeling the usual to identify the unusual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downing, D.J.; Fedorov, V.V.; Lawkins, W.F.
"Standard" approaches such as regression analysis, Fourier analysis, Box-Jenkins procedure, et al., which handle a data series as a whole, are not useful for very large data sets for at least two reasons. First, even with computer hardware available today, including parallel processors and storage devices, there are no effective means for manipulating and analyzing gigabyte, or larger, data files. Second, in general it can not be assumed that a very large data set is "stable" by the usual measures, like homogeneity, stationarity, and ergodicity, that standard analysis techniques require. Both reasons dictate the necessity to use "local" data analysis methods whereby the data is segmented and ordered, where order leads to a sense of "neighbor," and then analyzed segment by segment. The idea of local data analysis is central to the study reported here.
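A minimal Python sketch of the segment-by-segment idea (illustrative only; the segment length, summary statistic and flagging rule below are arbitrary choices, not those of the study):

import numpy as np

rng = np.random.default_rng(11)

# Hypothetical long data series: mostly "usual" behavior, with one
# segment whose variance is anomalously large.
series = rng.normal(size=1_000_000)
series[400_000:410_000] *= 6.0

# Local analysis: split into ordered segments, summarize each segment,
# then flag segments whose summaries deviate from the bulk. This avoids
# treating the whole (possibly non-stationary) series at once.
seg_len = 10_000
segments = series.reshape(-1, seg_len)
seg_std = segments.std(axis=1)

median = np.median(seg_std)
mad = np.median(np.abs(seg_std - median))
unusual = np.where(np.abs(seg_std - median) > 10 * mad)[0]
print("unusual segments:", unusual)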
Computed Tomography Inspection and Analysis for Additive Manufacturing Components
NASA Technical Reports Server (NTRS)
Beshears, Ronald D.
2016-01-01
Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.
Residual-Mean Analysis of the Air-Sea Fluxes and Associated Oceanic Meridional Overturning
2006-12-01
...the adiabatic component of the MOC, which is based entirely on the sea surface data. The coordinate system introduced in this study is somewhat... heat capacity of water. The technique utilizes the observational data based on meteorological re-analysis (density flux at the sea surface) and... [Figure 8 caption: Annual mean and temporal standard deviation of the zonally-averaged mixed-layer depth; the plotted data are based on the Levitus 94 climatology.]
Diaby, Vakaramoko; Sanogo, Vassiki; Moussa, Kouame Richard
2016-01-01
In this paper, the readers are introduced to ELICIT, an imprecise weight elicitation technique for multicriteria decision analysis for healthcare. The application of ELICIT consists of two steps: the rank ordering of evaluation criteria based on decision-makers' (DMs) preferences using the principal component analysis; and the estimation of criteria weights and their descriptive statistics using the variable interdependent analysis and the Monte Carlo method. The application of ELICIT is illustrated with a hypothetical case study involving the elicitation of weights for five criteria used to select the best device for eye surgery. The criteria were ranked from 1-5, based on a strict preference relationship established by the DMs. For each criterion, the deterministic weight was estimated as well as the standard deviation and 95% credibility interval. ELICIT is appropriate in situations where only ordinal DMs' preferences are available to elicit decision criteria weights.
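As a hedged sketch of imprecise weight elicitation from a pure rank order (not the exact ELICIT algorithm, which combines principal component analysis, variable interdependent analysis and Monte Carlo simulation), one can sample weight vectors uniformly among those consistent with the ranking and summarize each criterion's distribution:

import numpy as np

rng = np.random.default_rng(5)

# Given only a strict rank order of five criteria (rank 1 = most
# important), sample weight vectors uniformly from the simplex and sort
# each into descending order, which is equivalent to sampling uniformly
# among weight vectors that respect the ranking.  The summaries mimic a
# deterministic weight, its standard deviation and a 95% credibility
# interval per criterion.
n_criteria, n_samples = 5, 50_000
w = rng.dirichlet(np.ones(n_criteria), size=n_samples)
w = -np.sort(-w, axis=1)                    # enforce w1 > w2 > ... > w5

means, stds = w.mean(axis=0), w.std(axis=0)
lo, hi = np.percentile(w, [2.5, 97.5], axis=0)
for i in range(n_criteria):
    print(f"criterion ranked {i + 1}: weight {means[i]:.3f} "
          f"+/- {stds[i]:.3f} (95% CrI {lo[i]:.3f}-{hi[i]:.3f})")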
Temperature analysis of laser ignited metalized material using spectroscopic technique
NASA Astrophysics Data System (ADS)
Bassi, Ishaan; Sharma, Pallavi; Daipuriya, Ritu; Singh, Manpreet
2018-05-01
The temperature measurement of laser-ignited aluminized nanoenergetic mixtures using spectroscopy has great scope in analysing material characteristics and combustion behaviour. Spectroscopic analysis enables an in-depth study of the combustion of materials that is difficult to achieve using standard pyrometric methods. Laser ignition was used because it consumes less energy than electric ignition, while the ignited material dissipates the same energy, and with the same impact, as with electric ignition. The presented research is primarily focused on the temperature analysis of an energetic material comprising explosive material mixed with nanomaterial and ignited with the help of a laser. A spectroscopic technique is used here to estimate the temperature during the ignition process. The nanoenergetic mixture used in the research does not contain any material that is sensitive to high impact.
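One standard spectroscopic route to an emission temperature, shown here only as a generic illustration (the Boltzmann-plot method; the study's actual estimation procedure may differ, and every line constant and intensity below is a made-up placeholder, not a tabulated value):

import numpy as np

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

# Boltzmann plot: for optically thin emission lines of one species,
# ln(I * lambda / (g * A)) falls on a straight line versus the
# upper-level energy E_u with slope -1/(k_B * T).
E_u = np.array([3.14, 4.02, 4.83, 5.61])          # upper-level energies (eV)
gA = np.array([5.0e8, 7.4e8, 3.1e8, 9.8e8])       # degeneracy * A coefficient
lam = np.array([396.2, 394.4, 309.3, 308.2])      # wavelengths (nm)
T_true = 4500.0                                    # K, used to fake intensities
I = gA / lam * np.exp(-E_u / (K_B_EV * T_true))

y = np.log(I * lam / gA)
slope, intercept = np.polyfit(E_u, y, 1)
T_est = -1.0 / (K_B_EV * slope)
print(f"estimated temperature: {T_est:.0f} K")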
Cognitive task analysis: harmonizing tasks to human capacities.
Neerincx, M A; Griffioen, E
1996-04-01
This paper presents the development of a cognitive task analysis that assesses the task load of jobs and provides indicators for the redesign of jobs. General principles of human task performance were selected and, subsequently, integrated into current task modelling techniques. The resulting cognitive task analysis centres around four aspects of task load: the number of actions in a period, the ratio between knowledge- and rule-based actions, lengthy uninterrupted actions, and momentary overloading. The method consists of three stages: (1) construction of a hierarchical task model, (2) a time-line analysis and task load assessment, and (3), if necessary, adjustment of the task model. An application of the cognitive task analysis in railway traffic control showed its benefits over the 'old' task load analysis of the Netherlands Railways. It provided a provisional standard for traffic control jobs, conveyed two load risks -- momentary overloading and underloading -- and resulted in proposals to satisfy the standard and to diminish the two load risks.
Quaranta, Alessandro; DʼIsidoro, Orlando; Bambini, Fabrizio; Putignano, Angelo
2016-02-01
To compare the available potential bone-implant contact (PBIC) area of standard and short dental implants by micro-computed tomography (μCT) assessment. Three short implants with different diameters (4.5 × 6 mm, 4.1 × 7 mm, and 4.1 × 6 mm) and 2 standard implants (3.5 × 10 mm and 3.3 × 9 mm) with diverse design and surface features were scanned with μCT. Cross-sectional images were obtained. Image data were manually processed to find the plane that corresponds to the most coronal contact point between the crestal bone and implant. The available PBIC was calculated for each sample. Later on, the cross-sectional slices were processed by 3-dimensional (3D) software, and 3D images of each sample were used for descriptive analysis and to display the microtopography and macrotopography. The wide-diameter short implant (4.5 × 6 mm) showed the highest PBIC value (210.89 mm²), followed by the standard (178.07 mm² and 185.37 mm²) and short implants (130.70 mm² and 110.70 mm²). Wide-diameter short implants show a surface area comparable with that of standard implants. Micro-CT analysis is a promising technique to evaluate surface area in dental implants with different macrodesign, microdesign, and surface features.
Interior moisture design loads for residences
Anton TenWolde; Iain S. Walker
2001-01-01
This paper outlines a methodology to obtain design values for indoor boundary conditions for moisture design calculations for residences. This is part of a larger effort by ASHRAE Standard Project Committee 160P, Design Criteria for Moisture Control in Buildings, to formulate criteria for moisture design loads, analysis techniques, and material and building performance...
ERIC Educational Resources Information Center
Huang, Francis L.; Cornell, Dewey G.
2016-01-01
Advances in multilevel modeling techniques now make it possible to investigate the psychometric properties of instruments using clustered data. Factor models that overlook the clustering effect can lead to underestimated standard errors, incorrect parameter estimates, and model fit indices. In addition, factor structures may differ depending on…
ERIC Educational Resources Information Center
Hoz, Ron; Bowman, Dan; Chacham, Tova
1997-01-01
Students (N=14) in a geomorphology course took an objective geomorphology test, the tree construction task, and the Standardized Concept Structuring Analysis Technique (SConSAT) version of concept mapping. Results suggest that the SConSAT knowledge structure dimensions have moderate to good construct validity. Contains 82 references. (DDR)
A Rapid PCR-RFLP Method for Monitoring Genetic Variation among Commercial Mushroom Species
ERIC Educational Resources Information Center
Martin, Presley; Muruke, Masoud; Hosea, Kenneth; Kivaisi, Amelia; Zerwas, Nick; Bauerle, Cynthia
2004-01-01
We report the development of a simplified procedure for restriction fragment length polymorphism (RFLP) analysis of mushrooms. We have adapted standard molecular techniques to be amenable to an undergraduate laboratory setting in order to allow students to explore basic questions about fungal diversity and relatedness among mushroom species. The…
John Hogland; Nedret Billor; Nathaniel Anderson
2013-01-01
Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
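A minimal scikit-learn sketch of the comparison set up above, using synthetic stand-ins for spectral-band training data (linear discriminant analysis as the Gaussian maximum-likelihood analogue versus multinomial logistic regression); accuracies on real imagery will of course differ:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical "spectral band" features with three land-cover classes,
# standing in for remote sensing training data.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Discriminant analysis (analogue of maximum likelihood classification)
lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()

# Polytomous (multinomial) logistic regression: fewer distributional
# assumptions, coefficients interpretable as log-odds per unit feature.
plr_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

print(f"LDA accuracy: {lda_acc:.3f}, PLR accuracy: {plr_acc:.3f}")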
ERIC Educational Resources Information Center
Maxwell, James R.; Gilberti, Anthony F.; Mupinga, Davison M.
2006-01-01
This paper will study some of the problems associated with case studies and make recommendations using standard and innovative methodologies effectively. Human resource management (HRM) and resource development cases provide context for analysis and decision-making designs in different industries. In most HRM development and training courses…
This abstract is included for completeness of documentation. The technique described in the SOP title was planned in writing the QSIP. It was subsequently not used, and the SOP was not written.
The National Human Exposure Assessment Survey (NHEXAS) is a federal interagency r...
A real-time compliance mapping system using standard endoscopic surgical forceps.
Fakhry, Morkos; Bello, Fernando; Hanna, George B
2009-04-01
In endoscopic surgery, the use of long surgical instruments through access ports diminishes tactile feedback and degrades the surgeon's ability to identify hidden tissue abnormalities. To overcome this constraint, we developed a real-time compliance mapping system that is composed of: 1) a standard surgical instrument with a high-precision sensor configuration design; 2) real-time objective interpretation of the output signals for tissue identification; and 3) a novel human-computer interaction technique using interactive voice and handle force monitoring techniques to suit the operating theater working environment. The system was calibrated and used in clinical practice in four routine endoscopic human procedures. In a laboratory-based experiment to compare the tissue discriminatory power of the system with that of surgeons' hands, the system's tissue discriminatory power was three times more sensitive and 10% less specific. The data acquisition precision was tested using principal component analysis (R²X = 0.975, Q²(cum) = 0.808) and partial least squares discriminant analysis (R²X = 0.903, R²Y = 0.729, Q²(cum) = 0.572).
Chatake, Toshiyuki; Fujiwara, Satoru
2016-01-01
A difference in the neutron scattering length between hydrogen and deuterium leads to a high density contrast in neutron Fourier maps. In this study, a technique for determining the deuterium/hydrogen (D/H) contrast map in neutron macromolecular crystallography is developed and evaluated using ribonuclease A. The contrast map between the D2O-solvent and H2O-solvent crystals is calculated in real space, rather than in reciprocal space as performed in previous neutron D/H contrast crystallography. The present technique can thus utilize all of the amplitudes of the neutron structure factors for both D2O-solvent and H2O-solvent crystals. The neutron D/H contrast maps clearly demonstrate the powerful detectability of H/D exchange in proteins. In fact, alternative protonation states and alternative conformations of hydroxyl groups are observed at medium resolution (1.8 Å). Moreover, water molecules can be categorized into three types according to their tendency towards rotational disorder. These results directly indicate improvement in the neutron crystal structure analysis. This technique is suitable for incorporation into the standard structure-determination process used in neutron protein crystallography; consequently, more precise and efficient determination of the D-atom positions is possible using a combination of this D/H contrast technique and standard neutron structure-determination protocols.
NASA Astrophysics Data System (ADS)
Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.
2018-01-01
A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on b-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the b-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains on the samples. A new behavior was observed with the b-scan analysis technique where the amplitude of the surface wave decayed dramatically for certain crystallographic orientations. The new technique was also compared with previous results and has been found to be much more reliable and to have higher contrast than previously possible with impulse excitation.
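A hedged Python sketch of envelope-based b-scan processing (assuming the commonly used time-resolved relation delta_t(z) = (2z/c_w)(1 - cos(theta_R)) with sin(theta_R) = c_w/c_R; the paper's algorithm is more advanced, and the couplant velocity, echo-picking rule and synthetic data below are illustrative only):

import numpy as np
from scipy.signal import hilbert

C_WATER = 1480.0  # m/s, assumed couplant (water) velocity

def rayleigh_velocity_from_bscan(bscan, dt, defocus):
    """Estimate the Rayleigh wave velocity from a b-scan whose rows are
    A-scans recorded at increasing defocus toward the sample.  Envelope
    detection (Hilbert transform) picks the specular and Rayleigh echoes;
    the assumed relation converts the slope of delta_t versus z."""
    delta_t = []
    for a_scan in bscan:
        env = np.abs(hilbert(a_scan))                # echo envelope
        order = np.argsort(env)[::-1]                # samples by amplitude
        first = order[0]                             # strongest echo peak
        second = next(i for i in order if abs(i - first) > 20)
        delta_t.append(abs(second - first) * dt)
    slope = np.polyfit(defocus, delta_t, 1)[0]       # d(delta_t)/dz
    cos_theta = 1.0 - slope * C_WATER / 2.0
    return C_WATER / np.sqrt(1.0 - cos_theta ** 2)

# Minimal synthetic check: two tone-burst echoes per defocus position
dt = 2e-9
t = np.arange(2000) * dt
defocus = np.linspace(0.5e-3, 2.0e-3, 9)             # m, toward the sample
c_r_true = 3100.0
cos_th = np.sqrt(1.0 - (C_WATER / c_r_true) ** 2)
bscan = []
for z in defocus:
    t1 = 1.0e-6                                      # specular echo time
    t2 = t1 + 2.0 * z / C_WATER * (1.0 - cos_th)     # trailing Rayleigh echo
    bscan.append(np.exp(-((t - t1) / 20e-9) ** 2) * np.cos(2e8 * np.pi * (t - t1)) +
                 0.6 * np.exp(-((t - t2) / 20e-9) ** 2) * np.cos(2e8 * np.pi * (t - t2)))
print(rayleigh_velocity_from_bscan(np.array(bscan), dt, defocus))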
The Effects of Different Representations on Static Structure Analysis of Computer Malware Signatures
Narayanan, Ajit; Chen, Yi; Pang, Shaoning; Tao, Ban
2013-01-01
The continuous growth of malware presents a problem for internet computing due to increasingly sophisticated techniques for disguising malicious code through mutation and the time required to identify signatures for use by antiviral software systems (AVS). Malware modelling has focused primarily on semantics due to the intended actions and behaviours of viral and worm code. The aim of this paper is to evaluate a static structure approach to malware modelling using the growing malware signature databases now available. We show that, if malware signatures are represented as artificial protein sequences, it is possible to apply standard sequence alignment techniques in bioinformatics to improve accuracy of distinguishing between worm and virus signatures. Moreover, aligned signature sequences can be mined through traditional data mining techniques to extract metasignatures that help to distinguish between viral and worm signatures. All bioinformatics and data mining analysis were performed on publicly available tools and Weka. PMID:23983644
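For readers unfamiliar with the alignment machinery, the sketch below implements the classic Needleman-Wunsch global alignment score on toy "signature" strings; the scoring parameters and sequences are placeholders, and the paper's analysis relies on established bioinformatics tools rather than this fragment:

import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global sequence alignment score (Needleman-Wunsch), the standard
    bioinformatics technique of the kind applied to malware signatures
    rendered as artificial protein sequences."""
    n, m = len(a), len(b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = gap * np.arange(n + 1)
    score[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i, j] = max(diag, score[i - 1, j] + gap, score[i, j - 1] + gap)
    return score[n, m]

# Hypothetical "signature" strings over an amino-acid-like alphabet
sig_worm = "MKVLAHTRWGA"
sig_virus = "MKVLHTRWGSA"
print(needleman_wunsch(sig_worm, sig_virus))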
Space Shuttle and Space Station Radio Frequency (RF) Exposure Analysis
NASA Technical Reports Server (NTRS)
Hwu, Shian U.; Loh, Yin-Chung; Sham, Catherine C.; Kroll, Quin D.
2005-01-01
This paper outlines the modeling techniques and important parameters to define a rigorous but practical procedure that can verify the compliance of RF exposure to the NASA standards for astronauts and electronic equipment. The electromagnetic modeling techniques are applied to analyze RF exposure in Space Shuttle and Space Station environments with reasonable computing time and resources. The modeling techniques are capable of taking into account the field interactions with Space Shuttle and Space Station structures. The obtained results illustrate the multipath effects due to the presence of the space vehicle structures. It is necessary to include the field interactions with the space vehicle in the analysis for an accurate assessment of the RF exposure. Based on the obtained results, the RF keep-out zones are identified for appropriate operational scenarios, flight rules and necessary RF transmitter constraints to ensure a safe operating environment and mission success.
Novel permutation measures for image encryption algorithms
NASA Astrophysics Data System (ADS)
Abd-El-Hafiz, Salwa K.; AbdElHaleem, Sherif H.; Radwan, Ahmed G.
2016-10-01
This paper proposes two measures for the evaluation of permutation techniques used in image encryption. First, a general mathematical framework for describing the permutation phase used in image encryption is presented. Using this framework, six different permutation techniques, based on chaotic and non-chaotic generators, are described. The two new measures are, then, introduced to evaluate the effectiveness of permutation techniques. These measures are (1) Percentage of Adjacent Pixels Count (PAPC) and (2) Distance Between Adjacent Pixels (DBAP). The proposed measures are used to evaluate and compare the six permutation techniques in different scenarios. The permutation techniques are applied on several standard images and the resulting scrambled images are analyzed. Moreover, the new measures are used to compare the permutation algorithms on different matrix sizes irrespective of the actual parameters used in each algorithm. The analysis results show that the proposed measures are good indicators of the effectiveness of the permutation technique.
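A hedged sketch in the spirit of the two measures (the exact PAPC and DBAP definitions are given in the paper): for originally adjacent pixel pairs, count how many remain adjacent after the permutation and how far apart they are moved on average:

import numpy as np

rng = np.random.default_rng(9)

N = 64
perm = rng.permutation(N * N)        # stand-in permutation phase: pixel k -> perm[k]

idx = np.arange(N * N).reshape(N, N)
pairs = np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1)  # horizontal neighbours

new_r, new_c = np.divmod(perm[pairs], N)        # new (row, col) of each pair member
dr = np.abs(new_r[:, 0] - new_r[:, 1])
dc = np.abs(new_c[:, 0] - new_c[:, 1])

papc_like = 100.0 * np.mean(np.maximum(dr, dc) <= 1)   # % of pairs still adjacent
dbap_like = np.mean(np.hypot(dr, dc))                   # mean distance between them
print(f"still adjacent: {papc_like:.2f}%, mean separation: {dbap_like:.1f} pixels")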
NASA Astrophysics Data System (ADS)
Vanvyve, E.; Magontier, P.; Vandenberghe, F. C.; Delle Monache, L.; Dickinson, K.
2012-12-01
Wind energy is amongst the fastest growing sources of renewable energy in the U.S. and could supply up to 20% of the U.S. power production by 2030. An accurate and reliable wind resource assessment for prospective wind farm sites is a challenging task, yet is crucial for evaluating the long-term profitability and feasibility of a potential development. We have developed an accurate and computationally efficient wind resource assessment technique for prospective wind farm sites, which incorporates innovative statistical techniques and the new NASA Earth science dataset MERRA. This technique produces a wind resource estimate that is more accurate than that obtained by the wind energy industry's standard technique, while providing a reliable quantification of its uncertainty. The focus now is on evaluating the socio-economic value of this new technique relative to the industry's standard technique. Would it yield lower financing costs? Could it result in lower electricity prices? Are there further down-the-line positive consequences, e.g. job creation, time saved, greenhouse gas reductions? Ultimately, we expect our results will inform efforts to refine and disseminate the new technique to support the development of the U.S. renewable energy infrastructure. In order to address the above questions, we are carrying out a cost-benefit analysis based on the net present worth of the technique. We will describe this approach, including the cash-flow process of wind farm financing, how the wind resource assessment factors in, and will present current results for various hypothetical candidate wind farm sites.
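To make the net-present-worth framing concrete, a minimal cash-flow sketch is given below; the capital cost, energy price, capacity factors, discount rates and project life are all hypothetical placeholders, not results of the analysis:

import numpy as np

def npv(rate, cashflows):
    """Net present value of a cash-flow series (year 0 first)."""
    years = np.arange(len(cashflows))
    return np.sum(np.asarray(cashflows) / (1.0 + rate) ** years)

# Hypothetical wind farm cash flow: capital cost up front, then annual
# revenue proportional to the estimated capacity factor.  A more accurate
# resource estimate changes the assumed capacity factor (and, via reduced
# uncertainty, the discount rate) and hence the net present worth.
capex = -150e6                      # USD, assumed
energy_price = 45.0                 # USD/MWh, assumed
capacity_mw, hours = 100.0, 8760.0
for label, capacity_factor, rate in [("standard estimate", 0.34, 0.09),
                                     ("improved estimate", 0.36, 0.08)]:
    revenue = capacity_factor * capacity_mw * hours * energy_price
    flows = [capex] + [revenue] * 20            # 20-year project life
    print(f"{label}: NPV = {npv(rate, flows) / 1e6:.1f} M USD")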
Economic and outcomes consequences of TachoSil®: a systematic review.
Colombo, Giorgio L; Bettoni, Daria; Di Matteo, Sergio; Grumi, Camilla; Molon, Cinzia; Spinelli, Daniela; Mauro, Gaetano; Tarozzo, Alessia; Bruno, Giacomo M
2014-01-01
TachoSil(®) is a medicated sponge coated with human fibrinogen and human thrombin. It is indicated as a support treatment in adult surgery to improve hemostasis, promote tissue sealing, and support sutures when standard surgical techniques are insufficient. This review systematically analyses the international scientific literature relating to the use of TachoSil in hemostasis and as a surgical sealant, from the point of view of its economic impact. We carried out a systematic review of the PubMed literature up to November 2013. Based on the selection criteria, papers were grouped according to the following outcomes: reduction of time to hemostasis; decrease in length of hospital stay; and decrease in postoperative complications. Twenty-four scientific papers were screened, 13 (54%) of which were randomized controlled trials and included a total of 2,116 patients, 1,055 of whom were treated with TachoSil. In the clinical studies carried out in patients undergoing hepatic, cardiac, or renal surgery, the time to hemostasis obtained with TachoSil was lower (1-4 minutes) than the time measured with other techniques and hemostatic drugs, with statistically significant differences. Moreover, in 13 of 15 studies, TachoSil showed a statistically significant reduction in postoperative complications in comparison with the standard surgical procedure. The range of the observed decrease in the length of hospital stay for TachoSil patients was 2.01-3.58 days versus standard techniques, with a statistically significant difference in favor of TachoSil in eight of 15 studies. This analysis shows that TachoSil has a role as a supportive treatment in surgery to improve hemostasis and promote tissue sealing when standard techniques are insufficient, with a consequent decrease in postoperative complications and hospital costs.
Berger, Cezar Augusto Sarraf; Freitas, Renato da Silva; Malafaia, Osvaldo; Pinto, José Simão de Paula; Macedo Filho, Evaldo Dacheux; Mocellin, Marcos; Fagundes, Marina Serrato Coelho
2014-01-01
Introduction The knowledge and study of surgical techniques and anthropometric measurements of the nose make possible a qualitative and quantitative analysis of surgical results. Objective Study the main technique used in rhinoplasty on Caucasian noses and compare preoperative and postoperative anthropometric measurements of the nose. Methods A prospective study with 170 patients was performed at a private hospital. Data were collected using the Electronic System Integrated of Protocols software (Sistema Integrado de Protocolos Eletrônicos, SINPE©). The surgical techniques used in the nasal dorsum and tip were evaluated. Preoperative and 12-month follow-up photos as well as the measurements compared with the ideal aesthetic standard of a Caucasian nose were analyzed objectively. Student t test and standard deviation test were applied. Results There was a predominance of endonasal access (94.4%). The most common dorsum technique was hump removal (33.33%), and the predominance of sutures (24.76%) was observed on the nasal tip, with the lateral intercrural the most frequent (32.39%). Comparison between preoperative and postoperative photos found statistically significant alterations on the anthropometric measurements of the noses. Conclusion The main surgical techniques on Caucasian noses were evaluated, and a great variety was found. The evaluation of anthropometric measurements of the nose proved the efficiency of the performed procedures. PMID:25992149
Electron-Excited X-Ray Microanalysis at Low Beam Energy: Almost Always an Adventure!
Newbury, Dale E; Ritchie, Nicholas W M
2016-08-01
Scanning electron microscopy with energy-dispersive spectrometry has been applied to the analysis of various materials at low-incident beam energies, E0 ≤ 5 keV, using peak fitting and following the measured standards/matrix corrections protocol embedded in the National Institute of Standards and Technology Desktop Spectrum Analyzer-II analytical software engine. Low beam energy analysis provides improved spatial resolution laterally and in-depth. The lower beam energy restricts the atomic shells that can be ionized, reducing the number of X-ray peak families available to the analyst. At E0 = 5 keV, all elements of the periodic table except H and He can be measured. As the beam energy is reduced below 5 keV, elements become inaccessible due to lack of excitation of useful characteristic X-ray peaks. The shallow sampling depth of low beam energy microanalysis makes the technique more sensitive to surface compositional modification due to formation of oxides and other reaction layers. Accurate and precise analysis is possible with the use of appropriate standards and by accumulating high count spectra of unknowns and standards (>1 million counts integrated from 0.1 keV to E0).
Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M
2015-06-21
The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
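The following sketch illustrates the matched-filtering step described above on synthetic data: the received signal is cross-correlated with the known chirp and the two strongest, well-separated correlation peaks are read off as transit times. The sampling rate, chirp parameters, and path delays are assumed values; the study's active-set deconvolution is not reproduced here.

```python
import numpy as np
from scipy.signal import chirp, correlate, find_peaks

fs = 10e6                                     # sampling rate in Hz (assumed)
t = np.arange(0, 50e-6, 1 / fs)               # 50 us coded chirp excitation
excitation = chirp(t, f0=0.5e6, t1=t[-1], f1=2.5e6)

# Synthetic received signal: a direct path and a weaker, later reflected path.
received = np.zeros(t.size + int(20e-6 * fs))
for delay, amp in [(12e-6, 1.0), (15e-6, 0.6)]:
    start = int(delay * fs)
    received[start:start + t.size] += amp * excitation
received += 0.05 * np.random.default_rng(1).standard_normal(received.size)

# Matched filtering: cross-correlate with the known excitation (pulse
# compression), then keep the two most prominent, well-separated peaks.
envelope = np.abs(correlate(received, excitation, mode="valid"))
peaks, _ = find_peaks(envelope, distance=int(1e-6 * fs))
top_two = peaks[np.argsort(envelope[peaks])[-2:]]
print("estimated transit times (us):", np.sort(top_two) / fs * 1e6)
```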
Cheng, Cynthia; Lee, Chadd W; Daskalakis, Constantine
2015-10-27
Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data (1). This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique.
NASA Astrophysics Data System (ADS)
Ding, Xiang; Li, Fei; Zhang, Jiyan; Liu, Wenli
2016-10-01
Raman spectrometers are usually calibrated periodically to ensure the measurement accuracy of the Raman shift. A combination of a monocrystalline silicon chip and a low-pressure discharge lamp is proposed as a candidate for the reference standard of Raman shift. A high-precision calibration technique is developed to accurately determine the standard value of the silicon's Raman shift around 520 cm⁻¹. The technique is described and illustrated by measuring a silicon chip against three atomic spectral lines of a neon lamp. A commercial Raman spectrometer is employed and its Raman shift error characteristics are investigated. Error sources are evaluated based on theoretical analysis and experiments, including the sample factor, the instrumental factor, the laser factor and random factors. Experimental results show that the expanded uncertainty of the silicon's Raman shift around 520 cm⁻¹ can reach 0.3 cm⁻¹ (k = 2), which is more accurate than most currently used reference materials. The results are validated by comparison measurements among three Raman spectrometers. It is shown that the technique can markedly enhance the accuracy of Raman shift measurement, making it possible to use the silicon chip and the lamp to calibrate Raman spectrometers.
Laboratory techniques and rhythmometry
NASA Technical Reports Server (NTRS)
Halberg, F.
1973-01-01
Some of the procedures used for the analysis of rhythms are illustrated, notably as these apply to current medical and biological practice. For a quantitative approach to medical and broader socio-ecologic goals, the chronobiologist gathers numerical objective reference standards for rhythmic biophysical, biochemical, and behavioral variables. These biological reference standards can be derived by specialized computer analyses of largely self-measured (until eventually automatically recorded) time series (autorhythmometry). Objective numerical values for individual and population parameters of reproductive cycles can be obtained concomitantly with characteristics of about-yearly (circannual), about-daily (circadian) and other rhythms.
Kühnemund, Malte; Hernández-Neuta, Iván; Sharif, Mohd Istiaq; Cornaglia, Matteo; Gijs, Martin A.M.
2017-01-01
Single molecule quantification assays provide the ultimate sensitivity and precision for molecular analysis. However, most digital analysis techniques, such as droplet PCR, require sophisticated and expensive instrumentation for molecule compartmentalization, amplification and analysis. Rolling circle amplification (RCA) provides a simpler means for digital analysis. Nevertheless, the sensitivity of RCA assays has until now been limited by inefficient detection methods. We have developed a simple microfluidic strategy for enrichment of RCA products into a single field of view of a low-magnification fluorescent sensor, enabling ultra-sensitive digital quantification of nucleic acids over a dynamic range from 1.2 aM to 190 fM. We prove the broad applicability of our analysis platform by demonstrating 5-plex detection of as little as ∼1 pg (∼300 genome copies) of pathogenic DNA with simultaneous antibiotic resistance marker detection, and the analysis of rare oncogene mutations. Our method is simpler, more cost-effective and faster than other digital analysis techniques and provides the means to implement digital analysis in any laboratory equipped with a standard fluorescent microscope. PMID:28077562
NASA Astrophysics Data System (ADS)
Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.
2017-07-01
Micro X-ray fluorescence (micro-XRF) analysis is performed repeatedly across a sample as a means of producing elemental maps. In some cases, however, the XRF images obtained for trace elements are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to the XRF spectra, focusing on improving the quality of the XRF images. XRF images of the dried residue of a standard solution on a glass substrate were taken, and the XRF intensities for the dried residue were analyzed before and after PCA. The standard deviations of the XRF intensities in the PCA-filtered images were improved, leading to clearer image contrast. This improvement of the XRF images was effective in cases where the XRF intensity was weak.
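A minimal sketch of the kind of PCA filtering described above, on synthetic spectra: reconstructing each pixel spectrum from only the leading principal components suppresses uncorrelated channel noise before the elemental map is formed. The peak position, noise level, and number of retained components are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_pixels, n_channels = 32 * 32, 1024            # small synthetic micro-XRF map
energy = np.arange(n_channels)

# Each pixel spectrum: a weak trace-element peak whose height varies across the
# map, on top of a smooth background plus counting noise.
peak_shape = np.exp(-0.5 * ((energy - 400) / 5.0) ** 2)
concentration = rng.random(n_pixels)
spectra = (15 * concentration[:, None] * peak_shape
           + 20 * np.exp(-energy / 600)
           + rng.normal(0, 4, (n_pixels, n_channels)))

# PCA filtering: keep only the leading components and reconstruct the spectra;
# the discarded components carry mostly uncorrelated channel noise.
pca = PCA(n_components=5)
filtered = pca.inverse_transform(pca.fit_transform(spectra))

# Elemental "image" = net intensity in the peak window for every pixel.
window = slice(385, 415)
for name, stack in [("raw", spectra), ("PCA-filtered", filtered)]:
    elemental_map = stack[:, window].sum(axis=1)
    r = np.corrcoef(concentration, elemental_map)[0, 1]
    print(f"{name} map: correlation with true concentration = {r:.3f}")
```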
Ferrero, A; Campos, J; Rabal, A M; Pons, A; Hernanz, M L; Corróns, A
2011-09-26
The Bidirectional Reflectance Distribution Function (BRDF) is essential to characterize an object's reflectance properties. This function depends both on the illumination-observation geometry and on the wavelength. As a result, the comprehensive interpretation of the data becomes rather complex. In this work we assess the use of the multivariate analysis technique of Principal Components Analysis (PCA) applied to the experimental BRDF data of a ceramic colour standard. It will be shown that the result may be linked to the various reflection processes occurring on the surface, assuming that the incoming spectral distribution is affected by each one of these processes in a specific manner. Moreover, this procedure facilitates the task of interpolating a series of BRDF measurements obtained for a particular sample. © 2011 Optical Society of America
48 CFR 9904.401-50 - Techniques for application.
Code of Federal Regulations, 2010 CFR
2010-10-01
Section 9904.401-50, Techniques for application. Federal Acquisition Regulations System; Cost Accounting Standards Board, Office of Federal Procurement Policy, Office of Management and Budget; Procurement Practices and Cost Accounting Standards. (a) The standard...
NASA Technical Reports Server (NTRS)
Edwards, S. F.; Kantsios, A. G.; Voros, J. P.; Stewart, W. F.
1975-01-01
The development of a radiometric technique for determining the spectral and total normal emittance of materials heated to temperatures of 800, 1100, and 1300 K by direct comparison with National Bureau of Standards (NBS) reference specimens is discussed. Emittances are measured over the spectral range of 1 to 15 microns and are statistically compared with NBS reference specimens. Results are included for NBS reference specimens, Rene 41, alundum, zirconia, AISI type 321 stainless steel, nickel 201, and a space-shuttle reusable surface insulation.
Yu, Xiaodong; Li, Yang; Gu, Xiaofeng; Bao, Jiming; Yang, Huizhong; Sun, Li
2014-12-01
Water quality monitoring is a critical part of environmental management and protection, and the ability to qualitatively and quantitatively determine contamination and impurity levels in water is especially important. Compared to the currently available water quality monitoring methods and techniques, laser-induced breakdown spectroscopy (LIBS) has several advantages, including no need for sample preparation, fast and easy operation, and a chemical-free process. Therefore, it is of great importance to understand the fundamentals of aqueous LIBS analysis and effectively apply this technique to environmental monitoring. This article reviews the research conducted on LIBS analysis of liquid samples; the content includes LIBS theory, history and applications, quantitative analysis of metallic species in liquids, LIBS signal enhancement methods and data processing, characteristics of laser-generated plasma in water, and the factors affecting the accuracy of analysis results. Although many research works have focused on aqueous LIBS analysis, the detection limit and stability of this technique still need to be improved to satisfy the requirements of environmental monitoring standards. In addition, determination of nonmetallic species in liquid by LIBS is equally important and needs immediate attention from the community. This comprehensive review will assist readers in better understanding the aqueous LIBS technique and help to identify current research needs for environmental monitoring of water quality.
Development of a New VLBI Data Analysis Software
NASA Technical Reports Server (NTRS)
Bolotin, Sergei; Gipson, John M.; MacMillan, Daniel S.
2010-01-01
We present an overview of new VLBI analysis software under development at NASA GSFC. The new software will replace CALC/SOLVE and many related utility programs. It will have the capabilities of the current system as well as incorporate new models and data analysis techniques. In this paper we give a conceptual overview of the new software and formulate its main goals. The software should be flexible and modular to implement models and estimation techniques that currently exist or will appear in the future. On the other hand, it should be reliable and possess production quality for processing standard VLBI sessions. Also, it needs to be capable of processing observations from a fully deployed network of VLBI2010 stations in a reasonable time. We describe the software development process and outline the software architecture.
Prosa, T J; Alvis, R; Tsakalakos, L; Smentkowski, V S
2010-08-01
Three-dimensional quantitative compositional analysis of nanowires is a challenge for standard techniques such as secondary ion mass spectrometry because of specimen size and geometry considerations; however, it is precisely the size and geometry of nanowires that makes them attractive candidates for analysis via atom probe tomography. The resulting boron composition of various trimethylboron vapour-liquid-solid grown silicon nanowires were measured both with time-of-flight secondary ion mass spectrometry and pulsed-laser atom probe tomography. Both characterization techniques yielded similar results for relative composition. Specialized specimen preparation for pulsed-laser atom probe tomography was utilized and is described in detail whereby individual silicon nanowires are first protected, then lifted out, trimmed, and finally wet etched to remove the protective layer for subsequent three-dimensional analysis.
Determination of neutron spectra within the energy of 1 keV to 1 MeV by means of reactor dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sergeyeva, Victoria; Destouches, Christophe; Lyoussi, Abdallah
2015-07-01
The standard procedure for neutron reactor dosimetry is based on neutron irradiation of a target and its post-irradiation analysis by gamma and/or X-ray spectrometry. Nowadays, the neutron spectra can be easily characterized for thermal and fast energies (respectively 0.025 eV and >1 MeV). In this work we propose a new target and an innovative post-irradiation analysis technique in order to detect the neutron spectrum within the energy range of 1 keV to 1 MeV. This article presents the calculations performed for the selection of a suitable nuclear reaction and isotope, the results predicted by simulations, the proposed irradiation campaign, and the post-irradiation analysis technique. (authors)
Quantification of lithium at ppm level in geological samples using nuclear reaction analysis.
De La Rosa, Nathaly; Kristiansson, Per; Nilsson, E J Charlotta; Ros, Linus; Pallon, Jan; Skogby, Henrik
2018-01-01
The proton-induced (p,α) reaction is one type of nuclear reaction analysis (NRA) especially suitable for light-element quantification. In the case of the lithium quantification presented in this work, accelerated protons with an energy of about 850 keV were used to induce the 7Li(p,α)4He reaction in standard reference and geological samples such as tourmaline and other Li-minerals. It is shown that this technique for lithium quantification allowed for measurement of concentrations down to below one ppm. The possibility of relating the lithium content to the boron content in a single analysis was also demonstrated using tourmaline samples, both in absolute concentration and in lateral distribution. In addition, particle-induced X-ray emission (PIXE) was utilized as a complementary IBA technique for simultaneous mapping of elements heavier than sodium.
Costa, Francesco; Ortolina, Alessandro; Galbusera, Fabio; Cardia, Andrea; Sala, Giuseppe; Ronchi, Franco; Uccelli, Carlo; Grosso, Rossella; Fornari, Maurizio
2016-02-01
Pedicle screws with polymethyl methacrylate (PMMA) cement augmentation have been shown to significantly improve the fixation strength in a severely osteoporotic spine. However, the efficacy of screw fixation for different cement augmentation techniques remains unknown. This study aimed to determine the difference in pullout strength between different cement augmentation techniques. Uniform synthetic bones simulating severe osteoporosis were used to provide a platform for each augmentation technique. In all cases a polyaxial screw and acrylic cement (PMMA) at medium viscosity were used. Five groups were analyzed: I) screw only, without PMMA (control group); II) retrograde cement pre-filling of the tapped area; III) cannulated and fenestrated screw with cement injection through the perforations; IV) PMMA injection using a standard trocar (vertebroplasty) plus retrograde pre-filling of the tapped area; V) injection through a fenestrated trocar plus retrograde pre-filling of the tapped area. Standard X-rays were taken in order to visualize cement distribution in each group. Pedicle screws at full insertion were then tested for axial pullout failure using a mechanical testing machine. A total of 30 screws were tested. The pullout analysis revealed better results for all augmented groups with respect to the control group. In particular, the statistical analysis showed a difference between Group V and all other groups (p = 0.001). These results confirm that cement augmentation improves resistance to axial pullout forces. Moreover, they suggest better resistance to axial loads when the PMMA is distributed along the entire screw by combining the fenestration and pre-filling augmentation techniques. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Ringuet, Stephanie; Sassano, Lara; Johnson, Zackary I
2011-02-01
A sensitive, accurate and rapid analysis of major nutrients in aquatic systems is essential for monitoring and maintaining healthy aquatic environments. In particular, monitoring ammonium (NH4+) concentrations is necessary for maintenance of many fish stocks, while accurate monitoring and regulation of ammonium, orthophosphate (PO43-), silicate (Si(OH)4) and nitrate (NO3-) concentrations are required for regulating algae production. Monitoring of wastewater streams is also required for many aquaculture, municipal and industrial wastewater facilities to comply with local, state or federal water quality effluent regulations. Traditional methods for quantifying these nutrient concentrations often require laborious techniques or expensive specialized equipment, making these analyses difficult. Here we present four alternative microcolorimetric assays, based on a standard 96-well microplate format and microplate reader, that simplify the quantification of each of these nutrients. Each method uses small sample volumes (200 µL), has a detection limit ≤ 1 µM in freshwater and ≤ 2 µM in saltwater, has a precision of at least 8% and compares favorably with standard analytical procedures. Routine use of these techniques in the laboratory and at an aquaculture facility to monitor nutrient concentrations associated with microalgae growth demonstrates that they are rapid, accurate and highly reproducible among different users. These techniques offer an alternative to standard nutrient analyses and, because they are based on the standard 96-well format, they significantly decrease the cost and time of processing while maintaining high precision and sensitivity.
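As a hedged illustration of how microplate readings of this kind are typically converted to concentrations, the sketch below fits a linear standard curve and estimates a 3-sigma detection limit from blank wells. All absorbance and concentration values are invented; the abstract does not specify the assays' actual calibration procedure.

```python
import numpy as np

# Hypothetical standard curve: absorbances of nutrient standards (in uM) and blanks.
standards_uM = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])
absorbance   = np.array([0.050, 0.071, 0.093, 0.158, 0.264, 0.472])
blank_reads  = np.array([0.049, 0.051, 0.050, 0.052, 0.048, 0.050])

slope, intercept = np.polyfit(standards_uM, absorbance, 1)

def to_concentration(a):
    """Convert an absorbance reading to concentration via the standard curve."""
    return (a - intercept) / slope

# Common 3-sigma detection-limit estimate based on blank-well variability.
detection_limit = 3 * blank_reads.std(ddof=1) / slope
print(f"slope = {slope:.4f} AU/uM, LOD ~ {detection_limit:.2f} uM")
print("sample at A = 0.120 ->", round(to_concentration(0.120), 2), "uM")
```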
Ulibarri, Roy M.; Bonar, Scott A.; Rees, Christopher B.; Amberg, Jon J.; Ladell, Bridget; Jackson, Craig
2017-01-01
Analysis of environmental DNA (eDNA) is an emerging technique used to detect aquatic species through water sampling and the extraction of biological material for amplification. Our study compared the efficacy of eDNA methodology to American Fisheries Society (AFS) standard snorkeling surveys with regard to detecting the presence of rare fish species. Knowing which method is more efficient at detecting target species will help managers to determine the best way to sample when both traditional sampling methods and eDNA sampling are available. Our study site included three Navajo Nation streams that contained Navajo Nation Genetic Subunit Bluehead Suckers Catostomus discobolus and Zuni Bluehead Suckers C. discobolus yarrowi. We first divided the entire wetted area of streams into consecutive 100-m reaches and then systematically selected 10 reaches/stream for snorkel and eDNA surveys. Surface water samples were taken in 10-m sections within each 100-m reach, while fish presence was noted via snorkeling in each 10-m section. Quantitative PCR was run on each individual water sample in quadruplicate to test for the presence or absence of the target species. With eDNA sampling techniques, we were able to positively detect both species in two out of the three streams. Snorkeling resulted in positive detection of both species in all three streams. In streams where the target species were detected with eDNA sampling, snorkeling detected fish at 11–29 sites/stream, whereas eDNA detected fish at 3–12 sites/stream. Our results suggest that AFS standard snorkeling is more effective than eDNA for detecting target fish species. To improve our eDNA procedures, the amount of water collected and tested should be increased. Additionally, filtering water on-site may improve eDNA techniques for detecting fish. Future research should focus on standardization of eDNA sampling to provide a widely operational sampling tool.
Rigatelli, Gianluca; Zuin, Marco; Dell'Avvocata, Fabio; Cardaioli, Paolo; Vassiliev, Dobrin; Ferenc, Miroslaw; Nghia, Nguyen Tuan; Nguyen, Thach; Foin, Nicholas
2018-04-01
Multiple bioabsorbable scaffolds (BRSs), and specifically the Absorb scaffold (BVS; Abbott Vascular, Santa Clara, CA, USA), have often been used to treat long diffuse coronary artery lesions. We evaluate, by a computational fluid dynamics (CFD) study, the impact on the intravascular fluid rheology of implanting multiple BRSs with the standard overlapping versus the edge-to-edge technique. We simulated the treatment of a real long significant coronary lesion (>70% luminal narrowing) involving the left anterior descending artery (LAD) treated with the standard or the edge-to-edge technique, respectively. Simulations were performed after BVS implantation in two different conditions: 1) edge-to-edge technique, where the scaffolds are kissed but not overlapped, resulting in a luminal encroachment of 0.015 cm (150 μm); 2) standard overlapping, where the scaffolds are overlapped, resulting in a luminal encroachment of 0.030 cm (300 μm). After positioning the BVS across the long lesion, the implantation procedure was performed in silico following all the usual procedural steps. Analysis of the wall shear stress (WSS) suggested that at the vessel wall level the WSS was lower in the overlapping zones compared to the edge-to-edge zone (Δ = 0.061 Pa, p = 0.01). At the strut level the difference between the two WSS values was more striking (Δ = 1.065e-4, p = 0.01), favouring the edge-to-edge zone. Our study suggested that at both the vessel wall and scaffold strut levels, WSS was lower when multiple BVSs were implanted with the standard overlapping technique compared to the "edge-to-edge" technique. This lower WSS might represent a substrate for restenosis and for early and late BVS thrombosis, potentially explaining at least in part the recent evidence of poor device performance. Copyright © 2017 Elsevier Inc. All rights reserved.
2012-01-01
Background Aseptic loosening is one of the greatest problems in hip replacement surgery. The rotation center of the hip is believed to influence the longevity of fixation. The aim of this study was to compare the influence of cemented and cementless cup fixation techniques on the position of the center of rotation, because cemented cup fixation requires the removal of more bone for solid fixation than the cementless technique. Methods We retrospectively compared pre- and post-operative positions of the hip rotation center in 25 and 68 patients who underwent artificial hip replacement in our department in 2007 using cemented or cementless cup fixation, respectively, with digital radiographic image analysis. Results The mean horizontal and vertical distances between the rotation center and the acetabular teardrop were compared in radiographic images taken pre- and post-operatively. The mean horizontal difference was −2.63 mm (range: −11.00 mm to 10.46 mm, standard deviation 4.23 mm) for patients who underwent cementless fixation, and −2.84 mm (range: −10.87 mm to 5.30 mm, standard deviation 4.59 mm) for patients who underwent cemented fixation. The mean vertical difference was 0.60 mm (range: −20.15 mm to 10.00 mm, standard deviation 3.93 mm) and 0.41 mm (range: −9.26 mm to 6.54 mm, standard deviation 3.58 mm) for the cementless and cemented fixation groups, respectively. The two fixation techniques showed no significant difference in the position of the hip rotation center in the 93 patients in this study. Conclusions The hip rotation center was similarly restored using either the cemented or cementless fixation technique in this patient cohort, indicating that the fixation technique itself does not interfere with the position of the center of rotation. To completely answer this question, further studies with more patients are needed. PMID:22686355
48 CFR 9904.413-50 - Techniques for application.
Code of Federal Regulations, 2010 CFR
2010-10-01
Section 9904.413-50, Techniques for application. Federal Acquisition Regulations System; Cost Accounting Standards Board; Cost Accounting Standards. (a) Assignment of actuarial gains and losses. (1) In accordance with the provisions of Cost Accounting Standard 9904.412...
Theoretical basis of the DOE-2 building energy use analysis program
NASA Astrophysics Data System (ADS)
Curtis, R. B.
1981-04-01
A user-oriented, public-domain computer program was developed to enable architects and engineers to perform design and retrofit studies of the energy use of buildings under realistic weather conditions. DOE-2.1A has been named by the US DOE as the standard evaluation technique for the Congressionally mandated building energy performance standards (BEPS). A number of program design decisions were made that determine the breadth of applicability of DOE-2.1. Such design decisions are intrinsic to all building energy-use analysis computer programs and determine the types of buildings and the kinds of HVAC systems that can be modeled. In particular, the weighting-factor method used in DOE-2 has both advantages and disadvantages relative to other computer programs.
Predicting ESI/MS Signal Change for Anions in Different Solvents.
Kruve, Anneli; Kaupmees, Karl
2017-05-02
LC/ESI/MS is a technique widely used for qualitative and quantitative analysis in various fields. However, quantification is currently possible only for compounds for which standard substances are available, as the ionization efficiency of different compounds in the ESI source differs by orders of magnitude. In this paper we present an approach for quantitative LC/ESI/MS analysis without standard substances. This approach relies on accurately predicting the ionization efficiencies in the ESI source based on a model that uses physicochemical parameters of the analytes. Furthermore, the model has been made transferable between different mobile phases and instrument setups by using a suitable set of calibration compounds. This approach has been validated both in flow-injection and chromatographic mode with gradient elution.
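The abstract does not give the model's functional form, so the sketch below only illustrates the general workflow it describes: regress log ionization efficiency on physicochemical descriptors, then use a few calibration compounds measured on the target instrument and mobile phase to shift predictions onto that setup's response scale. The descriptors, coefficients, and data are placeholders, not the authors' model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder training set: physicochemical descriptors (e.g. logP, pKa,
# polar surface area) and measured log ionization efficiencies (invented).
X_train = np.array([[1.2, 4.5, 60.0], [2.8, 3.9, 45.0], [0.5, 5.2, 90.0],
                    [3.4, 2.8, 30.0], [1.9, 4.1, 70.0], [2.2, 3.5, 55.0]])
log_ie_train = np.array([2.1, 3.0, 1.4, 3.6, 2.4, 2.8])

model = LinearRegression().fit(X_train, log_ie_train)

# A small set of calibration compounds measured on the target instrument and
# mobile phase anchors the predictions to that setup's response scale.
predicted_cal = model.predict(np.array([[1.5, 4.0, 65.0], [3.0, 3.2, 40.0]]))
measured_cal = np.array([2.0, 3.5])
offset = (measured_cal - predicted_cal).mean()

analyte = np.array([[2.5, 3.7, 50.0]])
print("predicted log ionization efficiency on this setup:",
      round(model.predict(analyte)[0] + offset, 2))
```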
TWave: High-Order Analysis of Functional MRI
Barnathan, Michael; Megalooikonomou, Vasileios; Faloutsos, Christos; Faro, Scott; Mohamed, Feroze B.
2011-01-01
The traditional approach to functional image analysis models images as matrices of raw voxel intensity values. Although such a representation is widely utilized and heavily entrenched both within neuroimaging and in the wider data mining community, the strong interactions among space, time, and categorical modes such as subject and experimental task inherent in functional imaging yield a dataset with “high-order” structure, which matrix models are incapable of exploiting. Reasoning across all of these modes of data concurrently requires a high-order model capable of representing relationships between all modes of the data in tandem. We thus propose to model functional MRI data using tensors, which are high-order generalizations of matrices equivalent to multidimensional arrays or data cubes. However, several unique challenges exist in the high-order analysis of functional medical data: naïve tensor models are incapable of exploiting spatiotemporal locality patterns, standard tensor analysis techniques exhibit poor efficiency, and mixtures of numeric and categorical modes of data are very often present in neuroimaging experiments. Formulating the problem of image clustering as a form of Latent Semantic Analysis and using the WaveCluster algorithm as a baseline, we propose a comprehensive hybrid tensor and wavelet framework for clustering, concept discovery, and compression of functional medical images which successfully addresses these challenges. Our approach reduced runtime and dataset size on a 9.3 GB finger opposition motor task fMRI dataset by up to 98% while exhibiting improved spatiotemporal coherence relative to standard tensor, wavelet, and voxel-based approaches. Our clustering technique was capable of automatically differentiating between the frontal areas of the brain responsible for task-related habituation and the motor regions responsible for executing the motor task, in contrast to a widely used fMRI analysis program, SPM, which only detected the latter region. Furthermore, our approach discovered latent concepts suggestive of subject handedness nearly 100x faster than standard approaches. These results suggest that a high-order model is an integral component to accurate scalable functional neuroimaging. PMID:21729758
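The TWave framework itself is not reproduced here; the following sketch only illustrates the underlying idea of clustering voxel time series on coarse wavelet coefficients rather than raw intensities, using synthetic data (PyWavelets and scikit-learn are assumed to be available).

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 500, 128

# Synthetic fMRI-like data: two voxel populations follow different slow signals
# (a periodic "task" response versus a decaying "habituation" response).
task = np.sin(2 * np.pi * np.arange(n_timepoints) / 32)
habituation = np.exp(-np.arange(n_timepoints) / 40)
labels_true = rng.integers(0, 2, n_voxels)
signals = np.where(labels_true[:, None] == 0, task, habituation)
data = signals + 0.5 * rng.standard_normal((n_voxels, n_timepoints))

# Wavelet-domain features: keep only the coarse approximation coefficients,
# which retain the slow structure and discard most fine-scale noise.
features = np.array([pywt.wavedec(ts, "db4", level=4)[0] for ts in data])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
agreement = max(np.mean(clusters == labels_true), np.mean(clusters != labels_true))
print(f"cluster vs. ground-truth agreement: {agreement:.2f}")
```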
1984-05-01
AD-A142 961, Report USAFSAM-TR-84-17: USAFSAM Review and Analysis of Radiofrequency Radiation Bioeffects Literature ... (1983a) AUTHOR ABSTRACT: Normal mouse B lymphocytes were tested for the ability to cap plasma antigen-antibody complexes following exposure to 2.45-GHz ... treatment, the irradiated cells and the nonirradiated controls were tested for capping by the direct immunofluorescence technique. First, the cells ...
Radar cross section studies/compact range research
NASA Technical Reports Server (NTRS)
Burnside, W. D.; Dominek, A. K.; Gupta, I. J.; Newman, E. H.; Pathak, P. H.; Peters, L., Jr.
1989-01-01
Achievements in advancing the state-of-the-art in the measurement, control, and analysis of electromagnetic scattering from general aerodynamic targets are summarized. The major topics associated with this study include: (1) electromagnetic scattering analysis; (2) indoor scattering measurement systems; (3) RCS control; (4) waveform processing techniques; (5) material scattering and design studies; (6) design and evaluation of standard targets; and (7) antenna studies. Progress in each of these areas is reported and related publications are listed.
γγ coincidence spectrometer for instrumental neutron-activation analysis
NASA Astrophysics Data System (ADS)
Tomlin, B. E.; Zeisler, R.; Lindstrom, R. M.
2008-05-01
Neutron-activation analysis (NAA) is an important technique for the accurate and precise determination of trace and ultra-trace elemental compositions. The application of γγ coincidence counting to NAA in order to enhance specificity was first explored over 40 years ago but has not evolved into a regularly used technique. A γγ coincidence spectrometer has been constructed at the National Institute of Standards and Technology, using two HPGe γ-ray detectors and an all-digital data-acquisition system, for the purpose of exploring coincidence NAA and its value in characterizing reference materials. This paper describes the initial evaluation of the quantitative precision of coincidence counting versus singles spectrometry, based upon a sample of neutron-irradiated bovine liver material.
Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.
Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J
2018-06-01
Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.
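This is not the TIPA algorithm itself; the sketch below merely illustrates the baseline problem it addresses, estimating a phase shift by fitting sinusoids with a free period before and after a stimulus. The record, period, and shift values are synthetic and invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def cosine(t, amplitude, period, phase, offset):
    return amplitude * np.cos(2 * np.pi * (t - phase) / period) + offset

def fit_phase(t, y, period_guess=24.0):
    """Fit a cosine and return (phase in hours, fitted period in hours)."""
    # Rough initial phase: time of the highest point within the first cycle.
    first_cycle = t < t[0] + period_guess
    p0 = ((y.max() - y.min()) / 2, period_guess,
          t[first_cycle][np.argmax(y[first_cycle])], y.mean())
    params, _ = curve_fit(cosine, t, y, p0=p0)
    return params[2] % params[1], params[1]

# Synthetic circadian record (period 23.5 h) with a 2 h phase delay imposed
# after a stimulus at t = 120 h; all numbers are illustrative only.
rng = np.random.default_rng(0)
t_pre = np.arange(0.0, 120.0, 0.5)
t_post = np.arange(120.0, 240.0, 0.5)
y_pre = cosine(t_pre, 1.0, 23.5, 5.0, 0.0) + 0.2 * rng.standard_normal(t_pre.size)
y_post = cosine(t_post, 1.0, 23.5, 7.0, 0.0) + 0.2 * rng.standard_normal(t_post.size)

phase_pre, period_pre = fit_phase(t_pre, y_pre)
phase_post, period_post = fit_phase(t_post, y_post)
shift = (phase_post - phase_pre) % period_post
print(f"fitted periods: {period_pre:.2f} h and {period_post:.2f} h")
print(f"estimated phase shift: {shift:.2f} h (simulated shift was 2.0 h)")
```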
Detection of Bladder CA by Microsatellite Analysis (MSA) — EDRN Public Portal
Goal 1: To determine sensitivity and specificity of microsatellite analysis (MSA) of urine sediment, using a panel of 15 microsatellite markers, in detecting bladder cancer in participants requiring cystoscopy. This technique will be compared to the diagnostic standard of cystoscopy, as well as to urine cytology. Goal 2: To determine the temporal performance characteristics of microsatellite analysis of urine sediment. Goal 3: To determine which of the 15 individual markers or combination of markers that make up the MSA test are most predictive of the presence of bladder cancer.
1982-01-01
Wastewaters in Hoboken and North Bergen, New Jersey. P00 757: An In-Depth Compliance and Performance Analysis of the RBC (Rotating Biological Contactor). P00 770: Inhibition of Nitrification by Chromium in a Biodisc System. P00 771: Scale-Up and Process Analysis Techniques for Plastic ... with "Standard Methods for the Examination of Water and Wastewater" or "Methods for Chemical Analysis of Water and Wastes".
Search for a standard model Higgs boson in WH → lνbb in pp collisions at √s = 1.96 TeV.
Aaltonen, T; Adelman, J; Akimoto, T; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burke, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Chwalek, T; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cordelli, M; Cortiana, G; Cox, C A; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hays, C; Heck, M; Heijboer, A; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Hussein, M; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; 
Lazzizzera, I; Lecompte, T; Lee, E; Lee, H S; Lee, S W; Leone, S; Lewis, J D; Lin, C-S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lucchesi, D; Luci, C; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Nett, J; Neu, C; Neubauer, M S; Neubauer, S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Peiffer, T; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Renz, M; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Trovato, M; Tsai, S-Y; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wagner-Kuhr, J; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Weinelt, J; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Würthwein, F; Xie, S; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U 
K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zhang, X; Zheng, Y; Zucchelli, S
2009-09-04
We present a search for a standard model Higgs boson produced in association with a W boson using 2.7 fb⁻¹ of integrated luminosity of pp collision data taken at √s = 1.96 TeV. Limits on the Higgs boson production rate are obtained for masses between 100 and 150 GeV/c². Through the use of multivariate techniques, the analysis achieves an observed (expected) 95% confidence level upper limit of 5.6 (4.8) times the theoretically expected production cross section for a standard model Higgs boson with a mass of 115 GeV/c².
NASA Astrophysics Data System (ADS)
Basye, Austin T.
A matrix element method analysis of the Standard Model Higgs boson, produced in association with two top quarks and decaying in the lepton-plus-jets channel, is presented. Based on 20.3 fb⁻¹ of √s = 8 TeV data, produced at the Large Hadron Collider and collected by the ATLAS detector, this analysis utilizes multiple advanced techniques to search for ttH signatures with a 125 GeV Higgs boson decaying to two b-quarks. After categorizing selected events based on their jet and b-tag multiplicities, signal-rich regions are analyzed using the matrix element method. The resulting variables are then propagated to two parallel multivariate analyses utilizing Neural Networks and Boosted Decision Trees, respectively. As no significant excess is found, an observed (expected) limit of 3.4 (2.2) times the Standard Model cross-section is determined at 95% confidence, using the CLs method, for the Neural Network analysis. For the Boosted Decision Tree analysis, an observed (expected) limit of 5.2 (2.7) times the Standard Model cross-section is determined at 95% confidence, using the CLs method. Corresponding unconstrained fits of the Higgs boson signal strength to the observed data yield measured ratios of the signal cross-section to the Standard Model prediction of μ = 1.2 ± 1.3 (total) ± 0.7 (stat.) for the Neural Network analysis, and μ = 2.9 ± 1.4 (total) ± 0.8 (stat.) for the Boosted Decision Tree analysis.
Codex Alimentarius: food quality and safety standards for international trade.
Randell, A W; Whitehead, A J
1997-08-01
Since 1962, the Codex Alimentarius Commission (CAC) of the Food and Agriculture Organisation/World Health Organisation has been responsible for developing standards, guidelines and other recommendations on the quality and safety of food to protect the health of consumers and to ensure fair practices in food trade. The mission of the CAC remains relevant, but a number of factors have shown the need for new techniques to form the basis of food standards, the most important of which is risk analysis. The authors give a brief description of the role and work of the CAC and the efforts deployed by the Commission to respond to the challenges posed by new approaches to government regulation, harmonisation of national requirements based on international standards and the role of civil society.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
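A minimal sketch of the ABC idea described above, under assumed settings: draw candidate (mean, SD) pairs from vague priors, simulate normal samples of the reported size, and keep candidates whose simulated median, minimum, and maximum fall close to the reported values. The priors, tolerance, and distance measure are placeholders, not the authors' choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reported summary statistics from a hypothetical study: median, minimum,
# maximum, and sample size (values invented for illustration).
observed = np.array([12.0, 4.0, 20.0])
n = 40
scale = observed[2] - observed[1]          # scale distances by the reported range

# Draw candidate (mean, SD) pairs from vague priors and simulate a normal
# sample of size n for each candidate.
n_draws = 200_000
mu = rng.uniform(0.0, 40.0, n_draws)
sigma = rng.uniform(0.1, 20.0, n_draws)
sims = rng.normal(mu[:, None], sigma[:, None], (n_draws, n))

simulated = np.column_stack([np.median(sims, axis=1),
                             sims.min(axis=1),
                             sims.max(axis=1)])

# Accept candidates whose simulated summaries are close to the reported ones.
distance = np.max(np.abs(simulated - observed) / scale, axis=1)
accepted = distance < 0.08
print(f"accepted {accepted.sum()} of {n_draws} draws")
print("approximate posterior mean of (mu, sigma):",
      round(float(mu[accepted].mean()), 2), round(float(sigma[accepted].mean()), 2))
```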
Windowed and Wavelet Analysis of Marine Stratocumulus Cloud Inhomogeneity
NASA Technical Reports Server (NTRS)
Gollmer, Steven M.; Harshvardhan; Cahalan, Robert F.; Snider, Jack B.
1995-01-01
To improve radiative transfer calculations for inhomogeneous clouds, a consistent means of modeling inhomogeneity is needed. One current method of modeling cloud inhomogeneity is through the use of fractal parameters. This method is based on the supposition that cloud inhomogeneity over a large range of scales is related. An analysis technique named wavelet analysis provides a means of studying the multiscale nature of cloud inhomogeneity. In this paper, the authors discuss the analysis and modeling of cloud inhomogeneity through the use of wavelet analysis. Wavelet analysis and other windowed analysis techniques are used to study liquid water path (LWP) measurements obtained during the marine stratocumulus phase of the First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment. Statistics obtained using analysis windows, which are translated to span the LWP dataset, are used to study the local (small scale) properties of the cloud field as well as their time dependence. The LWP data are transformed onto an orthogonal wavelet basis that represents the data as a number of time series. Each of these time series lies within a frequency band and has a mean frequency that is half the frequency of the previous band. Wavelet analysis combined with translated analysis windows reveals that the local standard deviation of each frequency band is correlated with the local standard deviation of the other frequency bands. The ratio between the standard deviation of adjacent frequency bands is 0.9 and remains constant with respect to time. This ratio, defined as the variance coupling parameter, is applicable to all of the frequency bands studied and appears to be related to the slope of the data's power spectrum. Similar analyses are performed on two cloud inhomogeneity models, which use fractal-based concepts to introduce inhomogeneity into a uniform cloud field. The bounded cascade model does this by iteratively redistributing LWP at each scale using the value of the local mean. This model is reformulated into a wavelet multiresolution framework, thereby presenting a number of variants of the bounded cascade model. One variant introduced in this paper is the 'variance coupled model,' which redistributes LWP using the local standard deviation and the variance coupling parameter. While the bounded cascade model provides an elegant two-parameter model for generating cloud inhomogeneity, the multiresolution framework provides more flexibility at the expense of model complexity. Comparisons are made with the results from the LWP data analysis to demonstrate both the strengths and weaknesses of these models.
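The following sketch illustrates the variance-coupling idea on a synthetic power-law series rather than the FIRE liquid water path data: decompose the series on an orthogonal wavelet basis and take the ratio of standard deviations between adjacent frequency bands, which is roughly constant for a scaling process. The wavelet choice, series construction, and spectral slope are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Synthetic series with a power-law spectrum (a stand-in for an LWP record).
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)
amplitude = np.zeros_like(freqs)
amplitude[1:] = freqs[1:] ** (-5.0 / 6.0)          # power spectrum ~ f^(-5/3)
phases = rng.uniform(0, 2 * np.pi, freqs.size)
series = np.fft.irfft(amplitude * np.exp(1j * phases), n)

# Orthogonal wavelet decomposition: detail bands whose mean frequency halves
# from one level to the next.
coeffs = pywt.wavedec(series, "db4", level=8)
detail_std = [c.std() for c in coeffs[1:]]         # coeffs[0] is the approximation

# Variance coupling: ratio of standard deviations of adjacent frequency bands,
# approximately constant for a scaling (power-law) series.
ratios = [detail_std[i + 1] / detail_std[i] for i in range(len(detail_std) - 1)]
print("band std ratios (coarse -> fine):", np.round(ratios, 2))
```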
A manual for microcomputer image analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rich, P.M.; Ranken, D.M.; George, J.S.
1989-12-01
This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.
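IMAGE itself is a 1989 microcomputer program; purely as a modern analogue of a few of the capabilities listed above (histogram calculation, edge detection, line profiles, area measurement), the sketch below uses present-day open-source tools on synthetic data and is unrelated to the IMAGE code.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic 8-bit image: a bright rectangular object on a noisy background.
image = rng.normal(40, 5, (256, 256))
image[80:160, 96:176] += 120
image = image.clip(0, 255).astype(np.uint8)

# Histogram calculation (grey-level distribution).
histogram, _ = np.histogram(image, bins=256, range=(0, 255))

# Edge detection via Sobel gradient magnitude.
grad = np.hypot(ndimage.sobel(image.astype(float), axis=0),
                ndimage.sobel(image.astype(float), axis=1))

# Measurement along a line: intensity profile across a single row.
profile = image[120, :]

# Area measurement: threshold, label connected regions, count pixels per region.
labels, n_regions = ndimage.label(image > 100)
areas = np.bincount(labels.ravel())[1:]            # skip label 0 (background)

print(f"{n_regions} bright region(s); largest area = {areas.max()} pixels")
print(f"profile peak = {profile.max()}, modal grey level = {histogram.argmax()}")
print(f"edge pixels with gradient magnitude > 100: {(grad > 100).sum()}")
```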
A two-step electrodialysis method for DNA purification from polluted metallic environmental samples.
Rodríguez-Mejía, José Luis; Martínez-Anaya, Claudia; Folch-Mallol, Jorge Luis; Dantán-González, Edgar
2008-08-01
Extracting DNA from samples of polluted environments using standard methods often results in low yields of poor-quality material unsuited to subsequent manipulation and analysis by molecular biological techniques. Here, we report a novel two-step electrodialysis-based method for the extraction of DNA from environmental samples. This technique permits the rapid and efficient isolation of high-quality DNA based on its acidic nature, and without the requirement for phenol-chloroform-isoamyl alcohol cleanup and ethanol precipitation steps. Subsequent PCR, endonuclease restriction, and cloning reactions were successfully performed utilizing DNA obtained by electrodialysis, whereas some or all of these techniques failed using DNA extracted with two alternative methods. We also show that this technique is applicable to the purification of DNA from a range of polluted and nonpolluted samples.
Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C
2015-08-01
Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.
Extraction and quantitative analysis of iodine in solid and solution matrixes.
Brown, Christopher F; Geiszler, Keith N; Vickerman, Tanya S
2005-11-01
129I is a contaminant of interest in the vadose zone and groundwater at numerous federal and privately owned facilities. Several techniques have been utilized to extract iodine from solid matrixes; however, all of them rely on one of two fundamental approaches: liquid extraction or chemical/heat-facilitated volatilization. While these methods are typically chosen for their ease of implementation, they do not totally dissolve the solid. We defined a method that produces complete solid dissolution and conducted laboratory tests to assess its efficacy in extracting iodine from solid matrixes. Testing consisted of potassium nitrate/potassium hydroxide fusion of the sample, followed by sample dissolution in a mixture of sulfuric acid and sodium bisulfite. The fusion extraction method resulted in complete sample dissolution of all solid matrixes tested. Quantitative analysis of 127I and 129I via inductively coupled plasma mass spectrometry showed better than +/-10% accuracy for certified reference standards, with the linear operating range extending more than 3 orders of magnitude (0.005-5 microg/L). Extraction and analysis of four replicates of standard reference material containing 5 microg/g 127I resulted in an average recovery of 98% with a relative deviation of 6%. This simple and cost-effective technique can be applied to solid samples of varying matrixes with little or no adaptation.
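As a simple illustration of the accuracy and precision figures quoted above, the sketch below works through the recovery and relative-standard-deviation arithmetic; the replicate concentrations are invented placeholders, not the study's data.

```python
import numpy as np

certified = 5.0                                  # certified 127I content, microg/g
replicates = np.array([4.9, 5.2, 4.7, 4.8])      # hypothetical replicate results, microg/g

recovery = replicates.mean() / certified * 100.0           # average recovery, %
rsd = replicates.std(ddof=1) / replicates.mean() * 100.0   # relative standard deviation, %
print(f"average recovery = {recovery:.0f}%, RSD = {rsd:.0f}%")
```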
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Schmauch, Preston
2011-01-01
Turbine blades in rocket and jet engine turbomachinery experience enormous harmonic loading conditions. These loads result from the integer number of upstream and downstream stator vanes as well as the other turbine stages. Assessing the blade structural integrity is a complex task requiring an initial characterization of whether resonance is possible and then performing a forced response analysis if that condition is met. The standard technique for forced response analysis in rocket engines is to decompose a CFD-generated flow field into its harmonic components, and to then perform a frequency response analysis at the problematic natural frequencies. Recent CFD analysis and water-flow testing at NASA/MSFC, though, indicate that this technique may miss substantial harmonic and non-harmonic excitation sources that become present in complex flows. A substantial effort has been made to account for this denser spatial Fourier content in frequency response analysis (described in another paper by the author), but the question still remains whether the frequency response analysis itself is capable of capturing the excitation content sufficiently. Two studies comparing frequency response analysis with transient response analysis of bladed disks undergoing this complex flow environment have therefore been performed. The first is of a bladed disk with each blade modeled by simple beam elements. Six loading cases were generated by varying a baseline harmonic excitation in different ways based upon cold-flow testing from the Heritage Fuel Air Turbine Test. It was hypothesized that the randomness and other variation from the standard harmonic excitation would reduce the blade structural response, but the results showed little reduction. The second study was of a realistic model of a bladed-disk excited by the same CFD used in the J2X engine program. It was hypothesized that enforcing periodicity in the CFD (inherent in the frequency response technique) would overestimate the response. The results instead showed that the transient analysis results were up to 10% higher for "clean" nodal diameter excitations and six times larger for "messy" excitations, where substantial Fourier content around the main harmonic exists. Because the bulk of resonance problems are due to the "clean" excitations, a 10% underprediction is not necessarily a problem, especially since the average response in the transient is similar to the frequency response result, and so in a realistic finite life calculation, the life would be the same. However, in the rare cases when the "messy" excitation harmonics are identified as the source of potential resonance concerns, this research does indicate that frequency response analysis is inadequate for accurate characterization of blade structural capability.
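As a toy illustration of the two analysis routes contrasted above, the sketch below drives a single-degree-of-freedom "blade mode" with a periodic forcing, synthesizes the steady-state response from its frequency-response function harmonic by harmonic, and compares the peak with direct time integration. The modal parameters and forcing are arbitrary assumptions, not the engine hardware analyzed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

fn, zeta, m = 2000.0, 0.002, 0.05                 # modal frequency [Hz], damping ratio, mass [kg]
wn = 2.0 * np.pi * fn
T = 0.05                                          # one forcing period [s]
t = np.linspace(0.0, T, 2048, endpoint=False)
f = 10.0 * np.sin(2 * np.pi * 1960.0 * t) + 3.0 * np.sin(2 * np.pi * 3920.0 * t)

# Frequency-response route: pass each FFT harmonic of the forcing through the modal FRF.
w = 2.0 * np.pi * np.fft.rfftfreq(t.size, t[1] - t[0])
H = 1.0 / (m * (wn**2 - w**2 + 2j * zeta * wn * w))
x_frf = np.fft.irfft(np.fft.rfft(f) * H, n=t.size)

# Transient route: integrate m x'' + 2 zeta wn m x' + m wn^2 x = f(t) over many periods.
force = lambda tt: np.interp(tt % T, t, f)
rhs = lambda tt, y: [y[1], (force(tt) - 2 * zeta * wn * m * y[1] - m * wn**2 * y[0]) / m]
sol = solve_ivp(rhs, (0.0, 10 * T), [0.0, 0.0], max_step=(t[1] - t[0]))

print("steady-state peak, FRF synthesis:", np.abs(x_frf).max())
print("transient peak, last two periods:", np.abs(sol.y[0][sol.t > 8 * T]).max())
```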
Hancewicz, Thomas M; Xiao, Chunhong; Zhang, Shuliang; Misra, Manoj
2013-12-01
In vivo confocal Raman spectroscopy has become the measurement technique of choice for the skin health and skin care related communities as a way of measuring functional chemistry aspects of skin that are key indicators for care and treatment of various skin conditions. Chief among these measurements are stratum corneum water content, a critical health indicator for severe skin conditions related to dryness, and natural moisturizing factor components that are associated with skin protection and barrier health. In addition, in vivo Raman spectroscopy has proven to be a rapid and effective method for quantifying component penetration in skin for topically applied skin care formulations. The benefit of such a capability is that noninvasive analytical chemistry can be performed in vivo in a clinical setting, significantly simplifying studies aimed at evaluating product performance. This presumes, however, that the data and analysis methods used are compatible and appropriate for the intended purpose. The standard analysis method used by most researchers for in vivo Raman data is ordinary least squares (OLS) regression. The focus of work described in this paper is the applicability of OLS for in vivo Raman analysis, with particular attention given to non-ideal data that often violate the assumptions underlying proper application of OLS. We then describe a newly developed in vivo Raman spectroscopic analysis methodology called multivariate curve resolution-augmented ordinary least squares (MCR-OLS), a relatively simple route to addressing many of the issues with OLS. The method is compared with the standard OLS method using the same in vivo Raman data set and using both qualitative and quantitative comparisons based on model fit error, adherence to known data constraints, and performance against calibration samples. A clear improvement is shown in each comparison for MCR-OLS over standard OLS, thus supporting the premise that the MCR-OLS method is better suited for general-purpose multicomponent analysis of in vivo Raman spectral data. This suggests that the methodology is more readily adaptable to a wide range of component systems and is thus more generally applicable than standard OLS.
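For readers unfamiliar with the OLS step discussed above, the following sketch fits a measured spectrum as a linear combination of pure-component reference spectra by ordinary least squares. The Gaussian "component spectra" are fabricated for illustration; note that nothing in plain OLS prevents physically meaningless negative weights, one of the non-idealities that motivates the MCR-augmented approach.

```python
import numpy as np

wavenumber = np.linspace(400.0, 1800.0, 700)
band = lambda center, width: np.exp(-0.5 * ((wavenumber - center) / width) ** 2)
K = np.column_stack([band(1004, 8), band(1450, 25), band(1650, 30)])   # reference spectra

true_weights = np.array([0.7, 1.2, 0.4])
measured = K @ true_weights + 0.02 * np.random.default_rng(1).standard_normal(wavenumber.size)

weights, *_ = np.linalg.lstsq(K, measured, rcond=None)   # OLS: minimize ||K w - measured||^2
print("estimated component weights:", np.round(weights, 3))
```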
Multiphoton spectral analysis of benzo[a]pyrene uptake and metabolism in a rat liver cell line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barhoumi, Rola, E-mail: rmouneimne@cvm.tamu.edu; Mouneimne, Youssef; Ramos, Ernesto
2011-05-15
Dynamic analysis of the uptake and metabolism of polycyclic aromatic hydrocarbons (PAHs) and their metabolites within live cells in real time has the potential to provide novel insights into genotoxic and non-genotoxic mechanisms of cellular injury caused by PAHs. The present work, combining the use of metabolite spectra generated from metabolite standards using multiphoton spectral analysis and an 'advanced unmixing process', identifies and quantifies the uptake, partitioning, and metabolite formation of one of the most important PAHs (benzo[a]pyrene, BaP) in viable cultured rat liver cells over a period of 24 h. The application of the advanced unmixing process resulted in the simultaneous identification of 8 metabolites in live cells at any single time. The accuracy of this unmixing process was verified using specific microsomal epoxide hydrolase inhibitors, glucuronidation and sulfation inhibitors as well as several mixtures of metabolite standards. Our findings prove that the two-photon microscopy imaging surpasses the conventional fluorescence imaging techniques and the unmixing process is a mathematical technique that seems applicable to the analysis of BaP metabolites in living cells especially for analysis of changes of the ultimate carcinogen benzo[a]pyrene-r-7,t-8-dihydrodiol-t-9,10-epoxide. Therefore, the combination of the two-photon acquisition with the unmixing process should provide important insights into the cellular and molecular mechanisms by which BaP and other PAHs alter cellular homeostasis.
Use of Tc-99m-galactosyl-neoglycoalbumin (Tc-NGA) to determine hepatic blood flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stadalnik, R.C.; Vera, D.R.; Woodle, E.S.
1984-01-01
Tc-NGA is a new liver radiopharmaceutical which binds to a hepatocyte-specific membrane receptor. Three characteristics of Tc-NGA can be exploited in the measurement of hepatic blood flow (HBF): 1) ability to alter the affinity of Tc-NGA for its receptor by changing the galactose:albumin ratio; 2) ability to achieve a high specific activity with Tc-99m labeling; and 3) ability to administer a high molar dose of Tc-NGA without physiologic side effects. In addition, kinetic modeling of Tc-NGA dynamic data can provide estimates of hepatic receptor concentration. In experimental studies in young pigs, HBF was determined using two techniques: 1) kinetic modeling of dynamic data using moderate affinity, low specific activity Tc-NGA (Group A, n=12); and 2) clearance (CL) technique using high affinity, high specific activity Tc-NGA (Group B, n=4). In both groups, HBF was determined simultaneously by continuous infusion of indocyanine green (CI-ICG) with hepatic vein sampling. Regression analysis of HBF measurements obtained with the Tc-NGA kinetic modeling technique and the CI-ICG technique (Group A) revealed good correlation between the two techniques (r=0.802, p=0.02). Similarly, HBF determination by the clearance technique (Group B) provided highly accurate measurements when compared to the CI-ICG technique. Hepatic blood flow measurements by the clearance technique (CL-NGA) fell within one standard deviation of the error associated with each CI-ICG HBF measurement (all CI-ICG standard deviations were less than 10%).
SeeSway - A free web-based system for analysing and exploring standing balance data.
Clark, Ross A; Pua, Yong-Hao
2018-06-01
Computerised posturography can be used to assess standing balance, and can predict poor functional outcomes in many clinical populations. A key limitation is the disparate signal filtering and analysis techniques, with many methods requiring custom computer programs. This paper discusses the creation of a freely available web-based software program, SeeSway (www.rehabtools.org/seesway), which was designed to provide powerful tools for pre-processing, analysing and visualising standing balance data in an easy-to-use and platform-independent website. SeeSway links an interactive web platform with file upload capability to software systems including LabVIEW, Matlab, Python and R to perform the data filtering, analysis and visualisation of standing balance data. Input data can consist of any signal that comprises an anterior-posterior and medial-lateral coordinate trace such as center of pressure or mass displacement. This allows it to be used with systems including criterion-reference commercial force platforms and three-dimensional motion analysis, smartphones, accelerometers and low-cost technology such as the Nintendo Wii Balance Board and Microsoft Kinect. Filtering options include Butterworth, weighted and unweighted moving average, and discrete wavelet transforms. Analysis methods include standard techniques such as path length, amplitude, and root mean square in addition to less common but potentially promising methods such as sample entropy, detrended fluctuation analysis and multiresolution wavelet analysis. These data are visualised using scalograms, which chart the change in frequency content over time, scatterplots and standard line charts. This provides the user with a detailed understanding of their results, and how their different pre-processing and analysis method selections affect their findings. An example of the data analysis techniques is provided in the paper, with graphical representation of how advanced analysis methods can better discriminate between someone with neurological impairment and a healthy control. The goal of SeeSway is to provide a simple yet powerful educational and research tool to explore how standing balance is affected in aging and clinical populations. Copyright © 2018 Elsevier B.V. All rights reserved.
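Two of the "standard" sway measures mentioned above, path length and root-mean-square amplitude, reduce to a few lines of array arithmetic; the random-walk trace below is only a placeholder for a real anterior-posterior/medial-lateral center-of-pressure recording.

```python
import numpy as np

rng = np.random.default_rng(2)
ap = np.cumsum(rng.standard_normal(3000)) * 0.01   # anterior-posterior COP [cm], 30 s at 100 Hz
ml = np.cumsum(rng.standard_normal(3000)) * 0.01   # medial-lateral COP [cm]

path_length = np.sum(np.hypot(np.diff(ap), np.diff(ml)))     # total excursion of the COP
rms_ap = np.sqrt(np.mean((ap - ap.mean()) ** 2))
rms_ml = np.sqrt(np.mean((ml - ml.mean()) ** 2))
print(f"path length = {path_length:.1f} cm, RMS AP = {rms_ap:.2f} cm, RMS ML = {rms_ml:.2f} cm")
```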
Interpreting international governance standards for health IT use within general medical practice.
Mahncke, Rachel J; Williams, Patricia A H
2014-01-01
General practices in Australia recognise the importance of comprehensive protective security measures. Some elements of information security governance are incorporated into recommended standards; however, the governance component of information security is still insufficiently addressed in practice. The International Organization for Standardization (ISO) released a new global standard in May 2013 entitled ISO/IEC 27014:2013 Information technology - Security techniques - Governance of information security. This standard, applicable to organisations of all sizes, offers a framework against which to assess and implement the governance components of information security. The standard demonstrates the relationship between governance and the management of information security, provides strategic principles and processes, and forms the basis for establishing a positive information security culture. An analysis and interpretation of this standard for use in Australian general practice was performed. This work is unique, as such an interpretation for the Australian healthcare environment has not been undertaken before. It demonstrates an application of the standard at a strategic level to inform existing development of an information security governance framework.
Assessing estimation techniques for missing plot observations in the U.S. forest inventory
Grant M. Domke; Christopher W. Woodall; Ronald E. McRoberts; James E. Smith; Mark A. Hatfield
2012-01-01
The U.S. Forest Service, Forest Inventory and Analysis Program made a transition from state-by-state periodic forest inventories--with reporting standards largely tailored to regional requirements--to a nationally consistent, annual inventory tailored to large-scale strategic requirements. Lack of measurements on all forest land during the periodic inventory, along...
ERIC Educational Resources Information Center
Tian, Wei; Yin, Heng; Redett, Richard J.; Shi, Bing; Shi, Jin; Zhang, Rui; Zheng, Qian
2010-01-01
Purpose: Recent applications of the magnetic resonance imaging (MRI) technique introduced accurate 3-dimensional measurements of the velopharyngeal mechanism. Further standardization of the data acquisition and analysis protocol was successfully applied to imaging adults at rest and during phonation. This study was designed to test and modify a…
Effective Report Preparation: Streamlining the Reporting Process. AIR 1999 Annual Forum Paper.
ERIC Educational Resources Information Center
Dalrymple, Margaret; Wang, Mindy; Frost, Jacquelyn
This paper describes the processes and techniques used to improve and streamline the standard student reports used at Purdue University (Indiana). Various models for analyzing reporting processes are described, especially the model used in the study, the Shewhart or Deming Cycle, a method that aids in continuous analysis and improvement through a…
A Content Analysis of Themes That Emerge from School Principals' Web2.0 Conversations
ERIC Educational Resources Information Center
Manning, Rory
2011-01-01
The purpose of this qualitative study was to analyze the self-initiated conversations held by school principals on web2.0 platforms, such as blogs, through the lens of current leadership standards. The online writings of thirteen school principals were analyzed using grounded theory techniques (Strauss and Corbin, 1998) to elucidate emerging…
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch, as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher-order polynomial NUC methods.
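The sketch below illustrates the kind of per-pixel polynomial correction evaluated above: third-order polynomials are fit that map each pixel's raw response at several flat-field calibration levels back onto the known reference levels, and the residual non-uniformity of a corrected test frame is compared with the raw frame. The synthetic gain/offset/nonlinearity camera model is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 32, 32
levels = np.linspace(0.1, 1.0, 8)                                # known flat-field radiance levels
gain = 1.0 + 0.05 * rng.standard_normal((h, w))
offset = 0.02 * rng.standard_normal((h, w))
respond = lambda L: gain * L + offset + 0.1 * (gain * L) ** 2    # toy nonlinear pixel response

raw_cal = np.stack([respond(L) for L in levels]).reshape(len(levels), -1)   # (levels, pixels)

# Fit a cubic "raw counts -> radiance" polynomial independently for every pixel.
coeffs = np.array([np.polyfit(raw_cal[:, p], levels, 3) for p in range(h * w)])

test = respond(0.55).ravel()                                     # uncorrected frame at a new level
corrected = np.array([np.polyval(coeffs[p], test[p]) for p in range(h * w)])
print(f"non-uniformity (std/mean): raw {test.std() / test.mean():.3%}, "
      f"corrected {corrected.std() / corrected.mean():.3%}")
```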
Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C
2017-09-07
Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh-frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall, digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.
Davari, Seyyed Ali; Hu, Sheng; Mukherjee, Dibyendu
2017-03-01
Intermetallic nanoalloys (NAs) and nanocomposites (NCs) have increasingly gained prominence as efficient catalytic materials in electrochemical energy conversion and storage systems. But their morphology and chemical composition play a critical role in tuning their catalytic activities and precious metal contents. While advanced microscopy techniques facilitate morphological characterizations, traditional chemical characterizations are either qualitative or extremely involved. In this study, we apply Laser Induced Breakdown Spectroscopy (LIBS) for quantitative compositional analysis of NAs and NCs synthesized with varied elemental ratios by our in-house built pulsed laser ablation technique. Specifically, elemental ratios of binary PtNi, PdCo (NAs) and PtCo (NCs) of different compositions are determined from LIBS measurements employing an internal calibration scheme using the bulk matrix species as internal standards. Morphology and qualitative elemental compositions of the aforesaid NAs and NCs are confirmed from Transmission Electron Microscopy (TEM) images and Energy Dispersive X-ray Spectroscopy (EDX) measurements. LIBS experiments are carried out in ambient conditions with the NA and NC samples drop cast on silicon wafers after centrifugation to increase their concentrations. The technique does not call for cumbersome sample preparations including acid digestions and external calibration standards commonly required in Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES) techniques. Yet the quantitative LIBS results are in good agreement with the results from ICP-OES measurements. Our results indicate the feasibility of using LIBS in future for rapid and in-situ quantitative chemical characterizations of wide classes of synthesized NAs and NCs. Copyright © 2016 Elsevier B.V. All rights reserved.
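The internal-calibration idea described above can be sketched as a simple intensity-ratio regression: analyte emission lines are normalized to a line of the bulk matrix species and the ratio is calibrated against standards of known composition. All intensities and compositions below are invented placeholders.

```python
import numpy as np

known_ratio = np.array([0.25, 0.50, 1.00, 2.00])    # e.g. Ni/Pt molar ratio of calibration samples
I_analyte = np.array([120.0, 235.0, 480.0, 950.0])  # analyte emission-line intensity (a.u.)
I_matrix = np.array([1000.0, 990.0, 1010.0, 995.0]) # matrix line used as internal standard (a.u.)

slope, intercept = np.polyfit(known_ratio, I_analyte / I_matrix, 1)

unknown = (300.0 / 1002.0 - intercept) / slope      # intensity ratio measured on an unknown sample
print(f"estimated elemental ratio of unknown sample: {unknown:.2f}")
```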
Sowa, Mandy; Hiemann, Rico; Schierack, Peter; Reinhold, Dirk; Conrad, Karsten; Roggenbuck, Dirk
2017-08-01
Occurrence of autoantibodies (autoAbs) is a hallmark of autoimmune diseases, and the analysis thereof is an essential part in the diagnosis of organ-specific autoimmune and systemic autoimmune rheumatic diseases (SARD), especially connective tissue diseases (CTDs). Due to the appearance of autoAb profiles in SARD patients and the complexity of the corresponding serological diagnosis, different diagnostic strategies have been suggested for appropriate autoAb testing. Thus, evolving assay techniques and the continuous discovery of novel autoantigens have greatly influenced the development of these strategies. Antinuclear antibody (ANA) analysis by indirect immunofluorescence (IIF) on tissue and later cellular substrates was one of the first tests introduced into clinical routine and is still an indispensable tool for CTD serology. Thus, screening for ANA by IIF is recommended to be followed by confirmatory testing of positive findings employing different assay techniques. Given the continuous growth in the demand for autoAb testing, IIF has been challenged as the standard method for ANA and other autoAb analyses due to lacking automation, standardization, modern data management, and human bias in IIF pattern interpretation. To address these limitations of autoAb testing, the CytoBead® technique has been introduced recently which enables automated interpretation of cell-based IIF and quantitative autoAb multiplexing by addressable microbead immunoassays in one reaction environment. Thus, autoAb screening and confirmatory testing can be combined for the first time. The present review discusses the history of autoAb assay techniques in this context and gives an overview and outlook of the recent progress in emerging technologies.
Prediction models for clustered data: comparison of a random intercept and standard regression model
2013-01-01
Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest is in predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne
2013-02-15
When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest is in predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
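The discrimination argument made in the two abstracts above can be illustrated with a small simulation: when outcomes are clustered, a linear predictor that includes the cluster (random-intercept) effect achieves a higher c-index in the development setting than one based on patient-level predictors alone. The intraclass correlation, effect sizes, and cluster count below are arbitrary illustrative choices, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_clusters, n_per = 19, 90                              # e.g. 19 anesthesiologists
cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(0.0, 0.8, n_clusters)                    # cluster-specific (random) intercepts
x = rng.standard_normal(n_clusters * n_per)             # patient-level predictor

lin_standard = -0.5 + 1.0 * x                           # "standard" linear predictor
lin_cluster = lin_standard + u[cluster]                 # random-intercept linear predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_cluster))) # clustered binary outcome

print("c-index, predictors only:          ", round(roc_auc_score(y, lin_standard), 3))
print("c-index, predictors + cluster term:", round(roc_auc_score(y, lin_cluster), 3))
```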
A method for measuring low-weight carboxylic acids from biosolid compost.
Himanen, Marina; Latva-Kala, Kyösti; Itävaara, Merja; Hänninen, Kari
2006-01-01
Concentration of low-weight carboxylic acids (LWCA) is one of the important parameters that should be taken into consideration when compost is applied as a soil improver for plant cultivation, because high amounts of LWCA can be toxic to plants. The present work describes a method for analysis of LWCA in compost as a useful tool for monitoring compost quality and safety. The method was tested on compost samples of two different ages: 3 (immature) and 6 (mature) months old. Acids from compost samples were extracted at high pH, filtered, and freeze-dried. The dried sodium salts were derivatized with a sulfuric acid-methanol mixture and concentrations of 11 low-weight fatty acids (C1-C10) were analyzed using headspace gas chromatography. The material was analyzed with two analytical techniques: the external calibration method (tested on 11 LWCA) and the standard addition method (tested only on formic, acetic, propionic, butyric, and iso-butyric acids). The two techniques were compared for efficiency of acid quantification. The method allowed good separation and quantification of a wide range of individual acids with high sensitivity at low concentrations. The detection limit for propionic, butyric, caproic, caprylic, and capric acids was 1 mg kg(-1) compost; for formic, acetic, valeric, enanthoic and pelargonic acids it was 5 mg kg(-1) compost; and for iso-butyric acid it was 10 mg kg(-1) compost. Recovery rates of LWCA were higher in 3-mo-old compost (57-99%) than in 6-mo-old compost (29-45%). In comparison with the external calibration technique, the standard addition technique proved to be three to four times more precise for the older compost and two times more precise for the younger compost. Disadvantages of the standard addition technique are that it is more time-consuming and laborious.
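The standard-addition quantification compared above amounts to regressing the instrument response on the spiked concentration and reading the native concentration off the x-intercept. The sketch below shows that arithmetic with invented acetic-acid numbers.

```python
import numpy as np

added = np.array([0.0, 10.0, 20.0, 40.0])       # acid spiked into the extract, mg/kg compost
signal = np.array([52.0, 93.0, 131.0, 215.0])   # corresponding GC peak areas (a.u.)

slope, intercept = np.polyfit(added, signal, 1)
native = intercept / slope                      # |x-intercept| = concentration already in the sample
print(f"native concentration ≈ {native:.1f} mg/kg")
```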
Quantitative Image Analysis Techniques with High-Speed Schlieren Photography
NASA Technical Reports Server (NTRS)
Pollard, Victoria J.; Herron, Andrew J.
2017-01-01
Optical flow visualization techniques such as schlieren and shadowgraph photography are essential to understanding fluid flow when interpreting acquired wind tunnel test data. Output of the standard implementations of these visualization techniques in test facilities is often limited to qualitative interpretation of the resulting images. Although various quantitative optical techniques have been developed, these techniques often require special equipment or are focused on obtaining very precise and accurate data about the visualized flow. These systems are not practical in small, production wind tunnel test facilities. However, high-speed photography capability has become a common upgrade to many test facilities in order to better capture images of unsteady flow phenomena such as oscillating shocks and flow separation. This paper describes novel techniques utilized by the authors to analyze captured high-speed schlieren and shadowgraph imagery from wind tunnel testing for quantification of observed unsteady flow frequency content. Such techniques have applications in parametric geometry studies and in small facilities where more specialized equipment may not be available.
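A minimal version of the frequency-content quantification described above is to reduce the intensity history of a pixel (or small interrogation window) in the high-speed image sequence to a power spectral density and report the dominant peak. The 5 kHz frame rate and the synthetic 800 Hz "shock oscillation" below are assumptions, not data from the facility.

```python
import numpy as np
from scipy.signal import welch

fs = 5000.0                                      # camera frame rate [frames/s]
t = np.arange(20000) / fs
pixel = (0.3 * np.sin(2 * np.pi * 800.0 * t)     # oscillating shock sweeping past the pixel
         + 0.1 * np.random.default_rng(5).standard_normal(t.size))

freq, psd = welch(pixel, fs=fs, nperseg=2048)    # Welch-averaged power spectral density
print(f"dominant unsteady frequency ≈ {freq[np.argmax(psd)]:.0f} Hz")
```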
Data Mining of Macromolecular Structures.
van Beusekom, Bart; Perrakis, Anastassis; Joosten, Robbie P
2016-01-01
The use of macromolecular structures is widespread for a variety of applications, from teaching protein structure principles all the way to ligand optimization in drug development. Applying data mining techniques on these experimentally determined structures requires a highly uniform, standardized structural data source. The Protein Data Bank (PDB) has evolved over the years toward becoming the standard resource for macromolecular structures. However, the process of selecting the data most suitable for specific applications is still very much based on personal preferences and understanding of the experimental techniques used to obtain these models. In this chapter, we will first explain the challenges with data standardization, annotation, and uniformity in the PDB entries determined by X-ray crystallography. We then discuss the specific effect that crystallographic data quality and model optimization methods have on structural models and how validation tools can be used to make informed choices. We also discuss specific advantages of using the PDB_REDO databank as a resource for structural data. Finally, we will provide guidelines on how to select the most suitable protein structure models for detailed analysis and how to select a set of structure models suitable for data mining.
Mishell, Daniel R; Guillebaud, John; Westhoff, Carolyn; Nelson, Anita L; Kaunitz, Andrew M; Trussell, James; Davis, Ann Jeanette
2007-01-01
Initially approved for use in the United States nearly 50 years ago, oral hormonal contraceptives containing both estrogen and progestin have undergone steady improvements in safety and convenience. Concurrent with improvements in safety associated with decreasing doses of both steroids, there has been an increased incidence of unscheduled bleeding and spotting. There are no standards for data collection techniques and methods, or for the reporting and analysis of bleeding and spotting events during combined hormonal contraceptive (CHC) trials. For the regulatory review of hormonal contraceptives, data regarding the incidence of bleeding and spotting events are not included in either of the traditional categories of efficacy and safety. Standardization of methods for collecting and analyzing data about cycle control in all clinical trials of CHCs is long overdue. Until such standards are developed and implemented, clinicians need to familiarize themselves with the techniques used in each study in order to provide correct information to their patients about the frequency of bleeding and spotting associated with different formulations and delivery systems.
A thioacidolysis method tailored for higher‐throughput quantitative analysis of lignin monomers
Foster, Cliff; Happs, Renee M.; Doeppke, Crissa; Meunier, Kristoffer; Gehan, Jackson; Yue, Fengxia; Lu, Fachuang; Davis, Mark F.
2016-01-01
Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β‐O‐4 linkages. Current thioacidolysis methods are low‐throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment and uses a non‐chlorinated organic solvent and is tailored for higher‐throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1–2 mg of biomass per assay and has been quantified using fast‐GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying have been eliminated to help aid in consecutive day‐to‐day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. The method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses. PMID:27534715
A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.
Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment and uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying have been eliminated to help aid in consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. As a result, the method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses.
A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers
Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.; ...
2016-09-14
Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment and uses a non-chlorinated organic solvent and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying have been eliminated to help aid in consecutive day-to-day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. As a result, the method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses.
Garbarino, John R.; Struzeski, Tedmund M.
1998-01-01
Inductively coupled plasma-optical emission spectrometry (ICP-OES) and inductively coupled plasma-mass spectrometry (ICP-MS) can be used to determine 26 elements in whole-water digests. Both methods have distinct advantages and disadvantages--ICP-OES is capable of analyzing samples with higher elemental concentrations without dilution; however, ICP-MS is more sensitive and capable of determining much lower elemental concentrations. Both techniques gave accurate results for spike recoveries, digested standard reference-water samples, and whole-water digests. Average spike recoveries in whole-water digests were 100 plus/minus 10 percent, although recoveries for digests with high dissolved-solid concentrations were lower for selected elements by ICP-MS. Results for standard reference-water samples were generally within 1 standard deviation of the most probable values. Statistical analysis of the results from 43 whole-water digests indicated that there was no significant difference among ICP-OES, ICP-MS, and former official methods of analysis for 24 of the 26 elements evaluated.
Preliminary Analysis of Photoreading
NASA Technical Reports Server (NTRS)
McNamara, Danielle S.
2000-01-01
The purpose of this project was to provide a preliminary analysis of a reading strategy called PhotoReading. PhotoReading is a technique developed by Paul Scheele that claims to increase reading rate to 25,000 words per minute (Scheele, 1993). PhotoReading itself involves entering a "relaxed state" and looking at, but not reading, each page of a text for a brief moment (about 1 to 2 seconds). While this technique has received attention in the popular press, there had been no objective examinations of the technique's validity. To examine the effectiveness of PhotoReading, the principal investigator (i.e., trainee) participated in a PhotoReading workshop to learn the technique. Parallel versions of two standardized and three experimenter-created reading comprehension tests were administered to the trainee and an expert user of the PhotoReading technique to compare the use of normal reading strategies and the PhotoReading technique by both readers. The results for all measures yielded no benefits of using the PhotoReading technique. The extremely rapid reading rates claimed by PhotoReaders were not observed; indeed, the reading rates were generally comparable to those for normal reading. Moreover, the PhotoReading expert generally showed an increase in reading time when using the PhotoReading technique in comparison to when using normal reading strategies to process text. This increase in reading time when using PhotoReading was accompanied by a decrease in text comprehension.
Critical analysis and systematization of rat pancreatectomy terminology.
Eulálio, José Marcus Raso; Bon-Habib, Assad Charbel Chequer; Soares, Daiane de Oliveira; Corrêa, Paulo Guilherme Antunes; Pineschi, Giovana Penna Firme; Diniz, Victor Senna; Manso, José Eduardo Ferreira; Schanaider, Alberto
2016-10-01
To critically analyze and standardize the rat pancreatectomy nomenclature variants. A review of manuscripts indexed in PUBMED from 01/01/1945 to 31/12/2015 was performed using the combined keywords "rat pancreatectomy" and "rat pancreas resection". The following parameters were considered: A. Frequency of publications; B. Purpose of the pancreatectomy in each article; C. Bibliographic references; D. Nomenclature of techniques according to the pancreatic parenchyma resection percentage. Among the 468 manuscripts, the main objectives were to surgically induce diabetes and to study gene regulation and expression. Five rat pancreatectomy technique references received 15 or more citations. Twenty different terminologies were identified for pancreas resection: according to the resected parenchyma percentage (30 to 95%); to the procedure type (total, subtotal and partial); or based on the selected anatomical region (distal, longitudinal and segmental). A systematized nomenclature was assembled by cross-checking information among the main surgical techniques, the anatomical descriptions and the resected parenchyma percentages. The subtotal pancreatectomy nomenclature for parenchymal resection between 80 and 95% establishes a surgical parameter that also defines the total and partial pancreatectomy limits and standardizes these surgical procedures in rats.
Upright Imaging of Drosophila Egg Chambers
Manning, Lathiena; Starz-Gaiano, Michelle
2015-01-01
Drosophila melanogaster oogenesis provides an ideal context for studying varied developmental processes since the ovary is relatively simple in architecture, is well-characterized, and is amenable to genetic analysis. Each egg chamber consists of germ-line cells surrounded by a single epithelial layer of somatic follicle cells. Subsets of follicle cells undergo differentiation during specific stages to become several different cell types. Standard techniques primarily allow for a lateral view of egg chambers, and therefore a limited view of follicle cell organization and identity. The upright imaging protocol describes a mounting technique that enables a novel, vertical view of egg chambers with a standard confocal microscope. Samples are first mounted between two layers of glycerin jelly in a lateral (horizontal) position on a glass microscope slide. The jelly with encased egg chambers is then cut into blocks, transferred to a coverslip, and flipped to position egg chambers upright. Mounted egg chambers can be imaged on either an upright or an inverted confocal microscope. This technique enables the study of follicle cell specification, organization, molecular markers, and egg development with new detail and from a new perspective. PMID:25867882
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification, and in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both of the proposed sampling methods. Regarding summarization of the feature activation, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are examined, including the analysis frame size, the dictionary size, and the type of frequency scaling as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
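The pooling step compared above is simple to state in code: frame-level feature activations are collapsed over time with max-, average-, or standard-deviation-pooling to give one clip-level vector per method. The random activation matrix below is only a placeholder for learned sparse features.

```python
import numpy as np

rng = np.random.default_rng(6)
activations = np.maximum(rng.standard_normal((400, 128)), 0.0)   # (frames, dictionary atoms)

max_pool = activations.max(axis=0)
avg_pool = activations.mean(axis=0)
std_pool = activations.std(axis=0)        # the pooling reported above to work best
clip_feature = np.concatenate([max_pool, avg_pool, std_pool])
print(clip_feature.shape)                 # (384,) clip-level feature vector
```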
Demonstration of Wavelet Techniques in the Spectral Analysis of Bypass Transition Data
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.; Sohn, Ki-Hyeon
1997-01-01
A number of wavelet-based techniques for the analysis of experimental data are developed and illustrated. A multiscale analysis based on the Mexican hat wavelet is demonstrated as a tool for acquiring physical and quantitative information not obtainable by standard signal analysis methods. Experimental data for the analysis came from simultaneous hot-wire velocity traces in a bypass transition of the boundary layer on a heated flat plate. A pair of traces (two components of velocity) at one location was excerpted. A number of ensemble and conditional statistics related to dominant time scales for energy and momentum transport were calculated. The analysis revealed a lack of energy-dominant time scales inside turbulent spots but identified transport-dominant scales inside spots that account for the largest part of the Reynolds stress. Momentum transport was much more intermittent than were energetic fluctuations. This work is the first step in a continuing study of the spatial evolution of these scale-related statistics, the goal being to apply the multiscale analysis results to improve the modeling of transitional and turbulent industrial flows.
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
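The core of the linear prediction/singular value decomposition idea can be sketched briefly: a dwell-time signal that is a sum of decaying exponentials produces a Hankel data matrix whose number of singular values above the noise floor estimates the number of exponential components. The two rates and amplitudes below are arbitrary.

```python
import numpy as np

t = np.arange(200) * 0.1
signal = 1.0 * np.exp(-t / 2.0) + 0.5 * np.exp(-t / 12.0)            # two exponential components
signal += 0.005 * np.random.default_rng(7).standard_normal(t.size)   # measurement noise

L = t.size // 2
hankel = np.array([signal[i:i + L] for i in range(t.size - L + 1)])  # Hankel (prediction) matrix
s = np.linalg.svd(hankel, compute_uv=False)
print("leading singular values:", np.round(s[:5], 3))  # two values stand clear of the noise floor
```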
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2008-01-01
Stereo Imaging Velocimetry (SIV) is a NASA Glenn Research Center (GRC) developed fluid physics technique for measuring three-dimensional (3-D) velocities in any optically transparent fluid that can be seeded with tracer particles. SIV provides a means to measure 3-D fluid velocities quantitatively and qualitatively at many points. This technique provides full-field 3-D analysis of any optically clear fluid or gas experiment, using standard off-the-shelf CCD cameras to produce accurate and reproducible 3-D velocity profiles. A flame ball is a steady flame in a premixed combustible atmosphere which, due to the transport properties (low Lewis-number) of the mixture, does not propagate but is instead supplied by diffusive transport of the reactants, forming a premixed flame. This flame geometry presents a unique environment for testing combustion theory. We present our analysis of flame ball phenomena utilizing SIV technology in order to accurately calculate the 3-D position of a flame ball(s) during an experiment, which can be used for direct comparison with numerical simulations.
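The 3-D position recovery at the heart of a stereo technique like SIV can be illustrated with generic linear triangulation: given the projection matrices of two calibrated cameras and a matched image coordinate in each view, the world point follows from a homogeneous least-squares problem. The camera geometry below is an assumed toy example, not the SIV calibration.

```python
import numpy as np

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])    # camera 2 offset along x

X_true = np.array([0.2, -0.1, 4.0, 1.0])                         # homogeneous world point
u1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]                        # matched image coordinates
u2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]

# Stack the DLT constraints and take the null-space vector via SVD.
A = np.vstack([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
               u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
X = np.linalg.svd(A)[2][-1]
print("recovered 3-D position:", np.round(X[:3] / X[3], 3))
```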
NASA Astrophysics Data System (ADS)
Coronel, Juan; Varón, Margarita; Rissons, Angélique
2016-09-01
The optical injection locking (OIL) technique is proposed to reduce the phase noise of a carrier generated by a vertical-cavity surface-emitting laser (VCSEL)-based optoelectronic oscillator. The OIL technique permits the enhancement of the VCSEL direct modulation bandwidth as well as the stabilization of the optical noise of the laser. A 2-km delay line, 10-GHz optical injection-locked VCSEL-based optoelectronic oscillator (OILVBO) was implemented. The internal noise sources of the optoelectronic oscillator components were characterized and analyzed to understand the noise conversion of the system into phase noise in the oscillator carrier. The implemented OILVBO phase noise was -105.7 dBc/Hz at 10 kHz from the carrier; this value agrees well with the simulated analysis performed. From the computed and measured phase noise curves, it is possible to infer the noise processes that take place inside the OILVBO. As a second measurement of the oscillation quality, a time-domain analysis was done through the Allan standard deviation, reported for the first time for an optoelectronic oscillator using the OIL technique.
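For readers unfamiliar with the time-domain stability measure mentioned above, the sketch below computes a plain (non-overlapping) Allan deviation from fractional-frequency samples; the white-noise record is synthetic rather than measured oscillator data.

```python
import numpy as np

def allan_deviation(y, m):
    """Allan deviation at averaging factor m from fractional-frequency samples y."""
    n = y.size // m
    ybar = y[:n * m].reshape(n, m).mean(axis=1)        # block averages over m samples
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

tau0 = 1e-3                                            # sample interval [s]
y = 1e-11 * np.random.default_rng(8).standard_normal(100_000)   # white frequency noise
for m in (1, 10, 100, 1000):
    print(f"tau = {m * tau0:7.3f} s   sigma_y(tau) = {allan_deviation(y, m):.2e}")
```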
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept-wing unmanned aerial combat vehicle. Differences in response predictions, parameter estimates, and standard errors are compared and discussed.
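The parameter-plus-standard-error comparison described above can be mimicked with an ordinary least-squares fit of a linear rolling-moment model to oscillatory data; the model structure, signals, and coefficient values below are illustrative assumptions, not the paper's vehicle model.

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.linspace(0.0, 10.0, 500)
p_hat = 0.05 * np.sin(2 * np.pi * 0.5 * t)              # nondimensional roll rate
delta_a = 0.10 * np.sin(2 * np.pi * 0.5 * t + 0.4)      # aileron deflection [rad]
Cl = -0.40 * p_hat + 0.08 * delta_a + 0.002 * rng.standard_normal(t.size)

X = np.column_stack([p_hat, delta_a])
theta, *_ = np.linalg.lstsq(X, Cl, rcond=None)          # least-squares parameter estimates
sigma2 = np.sum((Cl - X @ theta) ** 2) / (t.size - X.shape[1])
stderr = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
for name, est, se in zip(["C_lp", "C_l_delta_a"], theta, stderr):
    print(f"{name}: {est:+.4f} (s.e. {se:.4f})")
```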
NASA Astrophysics Data System (ADS)
Goldberg, Robert R.; Goldberg, Michael R.
1999-05-01
A previous paper by the authors presented an algorithm that successfully segmented organs grown in vitro from their surroundings. It was noticed that one difficulty in standard dyeing techniques for the analysis of contours in organs was due to the fact that the antigen necessary to bind with the fluorescent dye was not uniform throughout the cell borders. To address these concerns, a new fluorescent technique was utilized. A transgenic mouse line was genetically engineered utilizing the hoxb7/gfp (green fluorescent protein). Whereas the original technique (fixed and blocking) required numerous noise-removal filtering steps and sophisticated segmentation techniques, segmentation of the GFP kidney required only an adaptive binary threshold technique, which yielded excellent results without the need for specific noise reduction. This is important for tracking kidney development through time.
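As a stand-in for the adaptive binary threshold mentioned above (not the authors' implementation), the sketch below applies a simple automatic global threshold, Otsu's method written directly in NumPy, to a synthetic two-level image resembling a bright GFP region on a dim background.

```python
import numpy as np

rng = np.random.default_rng(10)
img = rng.normal(40.0, 8.0, (128, 128))
img[32:96, 32:96] += 80.0                          # bright "GFP-expressing organ" region

hist, edges = np.histogram(img, bins=256)
centers = 0.5 * (edges[:-1] + edges[1:])
w0 = np.cumsum(hist)                               # pixels at or below each candidate threshold
w1 = w0[-1] - w0                                   # pixels above it
csum = np.cumsum(hist * centers)
m0 = csum / np.maximum(w0, 1)
m1 = (csum[-1] - csum) / np.maximum(w1, 1)
between_var = w0 * w1 * (m0 - m1) ** 2             # Otsu's between-class variance
threshold = centers[np.argmax(between_var)]
mask = img > threshold
print(f"threshold = {threshold:.1f}, foreground fraction = {mask.mean():.2f}")
```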
Integrating medical devices in the operating room using service-oriented architectures.
Ibach, Bastian; Benzko, Julia; Schlichting, Stefan; Zimolong, Andreas; Radermacher, Klaus
2012-08-01
With the increasing documentation requirements and communication capabilities of medical devices in the operating room, the integration and modular networking of these devices have become more and more important. Commercial integrated operating room systems are mainly proprietary developments, usually using proprietary communication standards and interfaces, which reduce the possibility of integrating devices from different vendors. To overcome these limitations, there is a need for an open standardized architecture that is based on standard protocols and interfaces enabling the integration of devices from different vendors based on heterogeneous software and hardware components. Starting with an analysis of the requirements for device integration in the operating room and the techniques used for integrating devices in other industrial domains, a new concept for an integration architecture for the operating room based on the paradigm of a service-oriented architecture is developed. Standardized communication protocols and interface descriptions are used. As risk management is an important factor in the field of medical engineering, a risk analysis of the developed concept has been carried out and the first prototypes have been implemented.
Prakash, Amol; Peterman, Scott; Ahmad, Shadab; Sarracino, David; Frewen, Barbara; Vogelsang, Maryann; Byram, Gregory; Krastins, Bryan; Vadali, Gouri; Lopez, Mary
2014-12-05
Data-dependent acquisition (DDA) and data-independent acquisition (DIA) strategies have both resulted in improved understanding of proteomics samples. Both strategies have well-documented advantages and disadvantages: DDA is typically applied for deep discovery, while DIA may be used to create sample records. In this paper, we present a hybrid data acquisition and processing strategy (pSMART) that combines the strengths of both techniques and provides significant benefits for qualitative and quantitative peptide analysis. The performance of pSMART is compared to published DIA strategies in an experiment that allows the objective assessment of DIA performance with respect to interrogation of previously acquired MS data. The results of this experiment demonstrate that pSMART creates fewer decoy hits than a standard DIA strategy. Moreover, we show that pSMART is more selective, sensitive, and reproducible than either standard DIA or DDA strategies alone.
NASA Astrophysics Data System (ADS)
Anikushina, T. A.; Naumov, A. V.
2013-12-01
This article demonstrates the principal advantages of the technique of analyzing the long-term spectral evolution of single molecules (SM) in the study of the microscopic nature of dynamic processes in low-temperature polymers. We performed a detailed analysis of the spectral trail of a single tetra-tert-butylterrylene (TBT) molecule in an amorphous polyisobutylene matrix, measured over 5 hours at T = 7 K. It was shown that the slow temporal dynamics is in qualitative agreement with the standard model of two-level systems and the stochastic sudden-jump model. At the same time, the distributions of the first four moments (cumulants) of the spectra of the selected SM, measured at different time points, were found to be inconsistent with the prediction of the standard theory. This was taken as evidence that, over the given time interval, the system is not ergodic.
NASA Astrophysics Data System (ADS)
Thawinkarn, Dawruwan
2018-01-01
This research aims to analyze factors of science teacher leadership in the Thailand World-Class Standard Schools. The research instrument was a five-point rating-scale questionnaire with a reliability of 0.986. The sample group included 500 science teachers from World-Class Standard Schools who had been selected using the stratified random sampling technique. Factor analysis of science teacher leadership in the Thailand World-Class Standard Schools was conducted using Mplus for Windows. The results are as follows: The confirmatory factor analysis of science teacher leadership in the Thailand World-Class Standard Schools revealed that the model significantly correlated with the empirical data. The consistency index values were χ2 = 105.655, df = 88, p-value = 0.086, TLI = 0.997, CFI = 0.999, RMSEA = 0.022, and SRMR = 0.019. The factor loadings of science teacher leadership were positive, with statistical significance at the 0.01 level. The value of the six factors was between 0.880 and 0.996. The highest factor loading was the professional learning community, followed by child-centered instruction, participation in development, being a role model in teaching, transformational leadership, and self-development, with factor loadings of 0.996, 0.928, 0.911, 0.907, 0.901, and 0.871, respectively. The reliability of each factor was 99.1%, 86.0%, 83.0%, 82.2%, 81.0%, and 75.8%, respectively.
1988-11-01
Structured analysis of a system, using graphic techniques which enable users, analysts, and designers to get a clear and common picture of the system and how its parts fit ... boxes into hierarchies suitable for computer implementation. Structured Design uses tools, especially graphic ones, to render systems readily ... Keywords: LSA, processes, data flows, data stores, external entities, overall systems design process.
Fusion of multiscale wavelet-based fractal analysis on retina image for stroke prediction.
Che Azemin, M Z; Kumar, Dinesh K; Wong, T Y; Wang, J J; Kawasaki, R; Mitchell, P; Arjunan, Sridhar P
2010-01-01
In this paper, we present a novel method of analyzing retinal vasculature that uses the Fourier fractal dimension to extract the complexity of the retinal vasculature enhanced at different wavelet scales. Logistic regression was used as a fusion method to build the classifier for 5-year stroke prediction. The efficacy of this technique was tested using standard pattern recognition performance evaluation, receiver operating characteristic (ROC) analysis, and a medical prediction statistic, the odds ratio. A stroke prediction model was developed using the proposed system.
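The sketch below is a minimal, hypothetical illustration of the fusion step described above: multiscale fractal-dimension features are combined in a logistic regression classifier and evaluated with ROC analysis and odds ratios. The feature values and outcomes are synthetic; the actual study used measured retinal data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: Fourier fractal dimension of the retinal vasculature
# measured at four wavelet scales for 200 subjects (one column per scale).
X = rng.normal(loc=1.45, scale=0.05, size=(200, 4))
risk = 5.0 * (X[:, 0] - 1.45) + 3.0 * (X[:, 2] - 1.45)   # synthetic association
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-risk)))          # 5-year stroke outcome

# Logistic regression fuses the per-scale complexity measures into one classifier.
clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]

print("AUC:", roc_auc_score(y, scores))
# Per-scale odds ratios (exponentiated coefficients), the kind of medical
# prediction statistic reported by the authors.
print("odds ratios:", np.exp(clf.coef_).ravel())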
NASA Astrophysics Data System (ADS)
Nielsen, S. Suzanne
Investigations in food science and technology, whether by the food industry, governmental agencies, or universities, often require determination of food composition and characteristics. Trends and demands of consumers, the food industry, and national and international regulations challenge food scientists as they work to monitor food composition and to ensure the quality and safety of the food supply. All food products require analysis as part of a quality management program throughout the development process (including raw ingredients), through production, and after a product is in the market. In addition, analysis is done of problem samples and competitor products. The characteristics of foods (i.e., chemical composition, physical properties, sensory properties) are used to answer specific questions for regulatory purposes and typical quality control. The nature of the sample and the specific reason for the analysis commonly dictate the choice of analytical methods. Speed, precision, accuracy, and ruggedness often are key factors in this choice. Validation of the method for the specific food matrix being analyzed is necessary to ensure usefulness of the method. Making an appropriate choice of the analytical technique for a specific application requires a good knowledge of the various techniques (Fig. 1.1). For example, your choice of method to determine the salt content of potato chips would be different if it is for nutrition labeling than for quality control. The success of any analytical method relies on the proper selection and preparation of the food sample, carefully performing the analysis, and doing the appropriate calculations and interpretation of the data. Methods of analysis developed and endorsed by several nonprofit scientific organizations allow for standardized comparisons of results between different laboratories and for evaluation of less standard procedures. Such official methods are critical in the analysis of foods, to ensure that they meet the legal requirements established by governmental agencies. Government regulations and international standards most relevant to the analysis of foods are mentioned here but covered in more detail in Chap. 2, and nutrition labeling regulations in the USA are covered in Chap. 3. Internet addresses for many of the organizations and government agencies discussed are given at the end of this chapter.
Wyatt, S K; Barck, K H; Kates, L; Zavala-Solorio, J; Ross, J; Kolumam, G; Sonoda, J; Carano, R A D
2015-11-01
The ability to non-invasively measure body composition in mouse models of obesity and obesity-related disorders is essential for elucidating mechanisms of metabolic regulation and monitoring the effects of novel treatments. These studies aimed to develop a fully automated, high-throughput micro-computed tomography (micro-CT)-based image analysis technique for longitudinal quantitation of adipose, non-adipose and lean tissue as well as bone, and to demonstrate its utility for assessing the effects of two distinct treatments. An initial validation study was performed in diet-induced obesity (DIO) and control mice on a vivaCT 75 micro-CT system. Subsequently, four groups of DIO mice were imaged pre- and post-treatment with an experimental agonistic antibody specific for fibroblast growth factor receptor 1 (anti-FGFR1, R1MAb1), a control immunoglobulin G antibody, a known anorectic antiobesity drug (rimonabant, SR141716), or solvent control. The body composition analysis technique was then ported to a faster micro-CT system (CT120) to markedly increase throughput as well as to evaluate the use of micro-CT image intensity for hepatic lipid content in DIO and control mice. Ex vivo chemical analysis and colorimetric analysis of liver triglycerides were performed as the standard metrics for correlation with body composition and hepatic lipid status, respectively. Micro-CT-based body composition measures correlate with ex vivo chemical analysis metrics and enable distinction between DIO and control mice. R1MAb1 and rimonabant have differing effects on body composition as assessed by micro-CT. High-throughput body composition imaging is possible using a modified CT120 system. Micro-CT also provides a non-invasive assessment of hepatic lipid content. This work describes, validates and demonstrates the utility of a fully automated image analysis technique to quantify in vivo micro-CT-derived measures of adipose, non-adipose and lean tissue, as well as bone. These body composition metrics correlate highly with standard ex vivo chemical analysis and enable longitudinal evaluation of body composition and therapeutic efficacy monitoring.
The quantitative analysis of silicon carbide surface smoothing by Ar and Xe cluster ions
NASA Astrophysics Data System (ADS)
Ieshkin, A. E.; Kireev, D. S.; Ermakov, Yu. A.; Trifonov, A. S.; Presnov, D. E.; Garshev, A. V.; Anufriev, Yu. V.; Prokhorova, I. G.; Krupenin, V. A.; Chernysh, V. S.
2018-04-01
The gas cluster ion beam (GCIB) technique was used for smoothing of a silicon carbide crystal surface. The effects of processing with two inert cluster ion species, argon and xenon, were quantitatively compared. While argon is a standard element for GCIB, results for xenon clusters had not previously been reported. Scanning probe microscopy and high-resolution transmission electron microscopy were used for the analysis of the surface roughness and the quality of the surface crystal layer. Gas cluster ion beam processing results in smoothing of the surface relief down to an average roughness of about 1 nm for both elements. It was shown that xenon as the working gas is more effective: the sputtering rate for xenon clusters is 2.5 times higher than for argon at the same beam energy. High-resolution transmission electron microscopy analysis of the surface defect layer gives values of 7 ± 2 nm and 8 ± 2 nm for treatment with argon and xenon clusters, respectively.
NASA Astrophysics Data System (ADS)
Hartman, John; Kirby, Brian
2017-03-01
Nanoparticle tracking analysis, a multiprobe single particle tracking technique, is a widely used method to quickly determine the concentration and size distribution of colloidal particle suspensions. Many popular tools remove non-Brownian components of particle motion by subtracting the ensemble-average displacement at each time step, which is termed dedrifting. Though critical for accurate size measurements, dedrifting is shown here to introduce significant biasing error and can fundamentally limit the dynamic range of particle size that can be measured for dilute heterogeneous suspensions such as biological extracellular vesicles. We report a more accurate estimate of particle mean-square displacement, which we call decorrelation analysis, that accounts for correlations between individual and ensemble particle motion, which are spuriously introduced by dedrifting. Particle tracking simulation and experimental results show that this approach more accurately determines particle diameters for low-concentration polydisperse suspensions when compared with standard dedrifting techniques.
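To make the dedrifting step concrete, the following is a minimal sketch (with synthetic trajectories, not the authors' data or their decorrelation method) of the standard approach: subtract the ensemble-average displacement at each time step and compute the mean-square displacement from the drift-corrected tracks.

import numpy as np

def msd_with_dedrift(tracks, max_lag):
    """Mean-square displacement after standard dedrifting.

    tracks: array of shape (n_particles, n_frames, 2) of x, y positions.
    Dedrifting subtracts the ensemble-average displacement at each time step;
    as discussed above, this can bias the MSD of dilute heterogeneous samples.
    """
    disp = np.diff(tracks, axis=1)                 # frame-to-frame displacements
    drift = disp.mean(axis=0, keepdims=True)       # ensemble-average displacement
    dedrifted = np.cumsum(disp - drift, axis=1)    # drift-corrected trajectories
    msd = []
    for lag in range(1, max_lag + 1):
        d = dedrifted[:, lag:, :] - dedrifted[:, :-lag, :]
        msd.append((d ** 2).sum(axis=2).mean())
    return np.array(msd)

# Synthetic example: 50 Brownian particles plus a common drift.
rng = np.random.default_rng(1)
steps = rng.normal(scale=0.05, size=(50, 300, 2)) + np.array([0.02, 0.0])
tracks = np.cumsum(steps, axis=1)
print(msd_with_dedrift(tracks, max_lag=10))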
Polynuclear aromatic hydrocarbon analysis using the synchronous scanning luminoscope
NASA Astrophysics Data System (ADS)
Hyfantis, George J., Jr.; Teglas, Matthew S.; Wilbourn, Robert G.
2001-02-01
The Synchronous Scanning Luminoscope (SSL) is a field-portable, synchronous luminescence spectrofluorometer that was developed for on-site analysis of contaminated soil and ground water. The SSL is capable of quantitative analysis of total polynuclear aromatic hydrocarbons (PAHs) using phosphorescence and fluorescence techniques with a high correlation to laboratory data as illustrated by this study. The SSL is also capable of generating benzo(a)pyrene equivalency results, based on seven carcinogenic PAHs and Navy risk numbers, with a high correlation to laboratory data as illustrated by this study. These techniques allow rapid field assessments of total PAHs and benzo(a)pyrene equivalent concentrations. The Luminoscope is capable of detecting total PAHs to the parts per billion range. This paper describes standard field methods for using the SSL and describes the results of field/laboratory testing of PAHs. SSL results from two different hazardous waste sites are discussed.
"TuNa-saving" endoscopic medial maxillectomy: a surgical technique for maxillary inverted papilloma.
Pagella, Fabio; Pusateri, Alessandro; Matti, Elina; Avato, Irene; Zaccari, Dario; Emanuelli, Enzo; Volo, Tiziana; Cazzador, Diego; Citraro, Leonardo; Ricci, Giampiero; Tomacelli, Giovanni Leo
2017-07-01
The maxillary sinus is the most common site of sinonasal inverted papilloma. Endoscopic sinus surgery, in particular endoscopic medial maxillectomy, is currently the gold standard for treatment of maxillary sinus papilloma. Although a common technique, complications such as stenosis of the lacrimal pathway and consequent development of epiphora are still possible. To avoid these problems, we propose a modification of this surgical technique that preserves the head of the inferior turbinate and the nasolacrimal duct. A retrospective analysis was performed on patients treated for maxillary inverted papilloma in three tertiary medical centres between 2006 and 2014. Pedicle-oriented endoscopic surgery principles were applied and, in select cases where the tumour pedicle was located on the anterior wall, a modified endoscopic medial maxillectomy was carried out as described in this paper. From 2006 to 2014 a total of 84 patients were treated. A standard endoscopic medial maxillectomy was performed in 55 patients (65.4%), while the remaining 29 (34.6%) had a modified technique performed. Three recurrences (3/84; 3.6%) were observed after a minimum follow-up of 24 months. A new surgical approach for select cases of maxillary sinus inverted papilloma is proposed in this paper. In this technique, the endoscopic medial maxillectomy was performed while preserving the head of the inferior turbinate and the nasolacrimal duct ("TuNa-saving"). This technique allowed for good visualization of the maxillary sinus, good oncological control and a reduction in the rate of complications.
Description of MSFC engineering photographic analysis
NASA Technical Reports Server (NTRS)
Earle, Jim; Williams, Frank
1988-01-01
Drawing on a background that includes the development of basic launch and test photographic coverage and analysis procedures, the MSFC Photographic Evaluation Group has built a body of experience that enables it to effectively satisfy MSFC's engineering photographic analysis needs. By combining the basic soundness of reliable, proven techniques of the past with newer technical advances in computers and computer-related devices, the MSFC Photo Evaluation Group is positioned to continue providing photo and video analysis services center-wide and NASA-wide, to supply an improving photo analysis product that meets the photo evaluation needs of the future, and to set new standards in the state of the art of photo analysis of dynamic events.
LaPrè, A K; Price, M A; Wedge, R D; Umberger, B R; Sup, Frank C
2018-04-01
Musculoskeletal modeling and marker-based motion capture techniques are commonly used to quantify the motions of body segments, and the forces acting on them during human gait. However, when these techniques are applied to analyze the gait of people with lower limb loss, the clinically relevant interaction between the residual limb and prosthesis socket is typically overlooked. It is known that there is considerable motion and loading at the residuum-socket interface, yet traditional gait analysis techniques do not account for these factors due to the inability to place tracking markers on the residual limb inside of the socket. In the present work, we used a global optimization technique and anatomical constraints to estimate the motion and loading at the residuum-socket interface as part of standard gait analysis procedures. We systematically evaluated a range of parameters related to the residuum-socket interface, such as the number of degrees of freedom, and determined the configuration that yields the best compromise between faithfully tracking experimental marker positions while yielding anatomically realistic residuum-socket kinematics and loads that agree with data from the literature. Application of the present model to gait analysis for people with lower limb loss will deepen our understanding of the biomechanics of walking with a prosthesis, which should facilitate the development of enhanced rehabilitation protocols and improved assistive devices. Copyright © 2017 John Wiley & Sons, Ltd.
Analytical methods for determination of mycotoxins: a review.
Turner, Nicholas W; Subrahmanyam, Sreenath; Piletsky, Sergey A
2009-01-26
Mycotoxins are small (MW approximately 700), toxic chemical products formed as secondary metabolites by a few fungal species that readily colonise crops and contaminate them with toxins in the field or after harvest. Ochratoxins and aflatoxins are mycotoxins of major significance, and hence there has been significant research on a broad range of analytical and detection techniques that could be useful and practical. Due to the variety of structures of these toxins, it is impossible to use one standard technique for analysis and/or detection. Practical requirements for high-sensitivity analysis and the need for a specialist laboratory setting create challenges for routine analysis. Several existing analytical techniques, which offer flexible and broad-based methods of analysis and in some cases detection, are discussed in this manuscript. There are a number of methods in use, many of them lab-based, but to our knowledge no single technique stands out above the rest, although analytical liquid chromatography, commonly coupled with mass spectrometry, is likely to be the most popular. This review discusses (a) sample pre-treatment methods such as liquid-liquid extraction (LLE), supercritical fluid extraction (SFE), and solid phase extraction (SPE); (b) separation methods such as thin-layer chromatography (TLC), high performance liquid chromatography (HPLC), gas chromatography (GC), and capillary electrophoresis (CE); and (c) other methods such as ELISA. Current trends, advantages and disadvantages, and future prospects of these methods are also discussed.
FT-IR spectroscopy characterization of schwannoma: a case study
NASA Astrophysics Data System (ADS)
Ferreira, Isabelle; Neto, Lazaro P. M.; das Chagas, Maurilio José; Carvalho, Luís. Felipe C. S.; dos Santos, Laurita; Ribas, Marcelo; Loddi, Vinicius; Martin, Airton A.
2016-03-01
Schwannomas are rare benign neural neoplasms. Their clinical diagnosis could be improved if novel optical techniques were applied. Among these techniques, FT-IR spectroscopy is currently being applied for sample discrimination using biochemical information with minimal sample preparation. In this work, we report a case of a schwannoma in the cervical region. A histological examination described a benign process. An immunohistochemical examination demonstrated positivity for the anti-S100 protein antibody, indicating a diagnosis of schwannoma. The aim of this analysis was to characterize the FT-IR spectrum of the neoplastic and normal tissue in the fingerprint (1000-1800 cm-1) and high-wavenumber (2800-3600 cm-1) regions. The IR spectra were collected from tumor tissue and normal nerve samples with an FT-IR spectrophotometer (Spotlight Perkin Elmer 400, USA) using 64 scans and a resolution of 4 cm-1. A total of twenty spectra were recorded (10 from schwannoma and 10 from nerve). Multivariate analysis was used to classify the data. Through mean and standard deviation analysis we observed that the main spectral changes occur at ≈1600 cm-1 (amide I) and ≈1400 cm-1 (amide III) in the fingerprint region, and in the CH2/CH3 protein-lipid and OH-water vibrations in the high-wavenumber region. In conclusion, FT-IR could be used as a technique for schwannoma analysis, helping to establish a specific diagnosis.
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1974-01-01
Two study areas in a cratonic platform underlain by flat-lying sedimentary rocks were analyzed to determine if a quantitative relationship exists between fracture trace patterns and their frequency distributions and subsurface structural closures which might contain petroleum. Fracture trace lengths and frequency (number of fracture traces per unit area) were analyzed by trend surface analysis and length frequency distributions also were compared to a standard Gaussian distribution. Composite rose diagrams of fracture traces were analyzed using a multivariate analysis method which grouped or clustered the rose diagrams and their respective areas on the basis of the behavior of the rays of the rose diagram. Analysis indicates that the lengths of fracture traces are log-normally distributed according to the mapping technique used. Fracture trace frequency appeared higher on the flanks of active structures and lower around passive reef structures. Fracture trace log-mean lengths were shorter over several types of structures, perhaps due to increased fracturing and subsequent erosion. Analysis of rose diagrams using a multivariate technique indicated lithology as the primary control for the lower grouping levels. Groupings at higher levels indicated that areas overlying active structures may be isolated from their neighbors by this technique while passive structures showed no differences which could be isolated.
Laparoscopic versus Open Peritoneal Dialysis Catheter Insertion: A Meta-Analysis
Hagen, Sander M.; Lafranca, Jeffrey A.; Steyerberg, Ewout W.; IJzermans, Jan N. M.; Dor, Frank J. M. F.
2013-01-01
Background Peritoneal dialysis is an effective treatment for end-stage renal disease. Key to successful peritoneal dialysis is a well-functioning catheter. The different insertion techniques may be of great importance. Mostly, the standard operative approach is the open technique; however, laparoscopic insertion is increasingly popular. Catheter malfunction is reported up to 35% for the open technique and up to 13% for the laparoscopic technique. However, evidence is lacking to definitely conclude that the laparoscopic approach is to be preferred. This review and meta-analysis was carried out to investigate if one of the techniques is superior to the other. Methods Comprehensive searches were conducted in MEDLINE, Embase and CENTRAL (the Cochrane Library 2012, issue 10). Reference lists were searched manually. The methodology was in accordance with the Cochrane Handbook for interventional systematic reviews, and written based on the PRISMA-statement. Results Three randomized controlled trials and eight cohort studies were identified. Nine postoperative outcome measures were meta-analyzed; of these, seven were not different between operation techniques. Based on the meta-analysis, the proportion of migrating catheters was lower (odds ratio (OR) 0.21, confidence interval (CI) 0.07 to 0.63; P = 0.006), and the one-year catheter survival was higher in the laparoscopic group (OR 3.93, CI 1.80 to 8.57; P = 0.0006). Conclusions Based on these results there is some evidence in favour of the laparoscopic insertion technique for having a higher one-year catheter survival and less migration, which would be clinically relevant. PMID:23457554
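As an illustration of how study-level results of this kind are pooled, the sketch below computes a fixed-effect (inverse-variance) pooled odds ratio with a 95% confidence interval from hypothetical 2x2 counts; the published meta-analysis may have used a different pooling model (e.g. Mantel-Haenszel or random effects).

import numpy as np

def pooled_odds_ratio(events_a, total_a, events_b, total_b):
    """Fixed-effect (inverse-variance) pooling of study-level odds ratios.

    events/total arrays: laparoscopic (a) vs open (b) groups per study.
    Illustrative only; not the exact model of the published meta-analysis.
    """
    a = np.asarray(events_a, float)
    b = np.asarray(total_a, float) - a
    c = np.asarray(events_b, float)
    d = np.asarray(total_b, float) - c
    log_or = np.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d      # variance of each log odds ratio
    w = 1.0 / var                            # inverse-variance weights
    pooled = (w * log_or).sum() / w.sum()
    se = np.sqrt(1.0 / w.sum())
    ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
    return np.exp(pooled), ci

# Hypothetical counts of migrating catheters in three studies.
print(pooled_odds_ratio([2, 1, 3], [40, 35, 50], [8, 6, 9], [42, 30, 55]))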
UIAGM Ropehandling Techniques.
ERIC Educational Resources Information Center
Cloutier, K. Ross
The Union Internationale des Associations des Guides de Montagne's (UIAGM) rope handling techniques are intended to form the standard for guiding ropework worldwide. These techniques have become the legal standard for instructional institutions and commercial guiding organizations in UIAGM member countries: Austria, Canada, France, Germany, Great…
Henry, Anna E; Story, Mary
2009-01-01
To identify food and beverage brand Web sites featuring designated children's areas, assess marketing techniques present on those industry Web sites, and determine nutritional quality of branded food items marketed to children. Systematic content analysis of food and beverage brand Web sites and nutrient analysis of food and beverages advertised on these Web sites. The World Wide Web. One-hundred thirty Internet Web sites of food and beverage brands with top media expenditures based on the America's Top 2000 Brands section of Brandweek magazine's annual "Superbrands" report. A standardized content analysis rating form to determine marketing techniques used on the food and beverage brand Web sites. Nutritional analysis of food brands was conducted. Of 130 Web sites analyzed, 48% featured designated children's areas. These Web sites featured a variety of Internet marketing techniques, including advergaming on 85% of the Web sites and interactive programs on 92% of the Web sites. Branded spokescharacters and tie-ins to other products were featured on the majority of the Web sites, as well. Few food brands (13%) with Web sites that market to children met the nutrition criteria set by the National Alliance for Nutrition and Activity. Nearly half of branded Web sites analyzed used designated children's areas to market food and beverages to children, 87% of which were of low nutritional quality. Nutrition professionals should advocate the use of advertising techniques to encourage healthful food choices for children.
Bhattacharjee, Sulagna; Maitra, Souvik; Baidya, Dalim K
2018-06-01
Possible advantages and risks associated with ultrasound-guided radial artery cannulation in comparison to the digital palpation-guided method in adult patients are not fully known. We have compared ultrasound-guided radial artery cannulation with the digital palpation technique in this meta-analysis. Meta-analysis of randomized controlled trials. Trials conducted in the operating room, emergency department, and cardiac catheterization laboratory. PubMed and the Cochrane Central Register of Controlled Trials (CENTRAL) were searched (from 1946 to 20th November 2017) to identify prospective randomized controlled trials in adult patients. Two-dimensional ultrasound-guided radial artery catheterization versus digital palpation-guided radial artery cannulation. Overall cannulation success rate, first-attempt success rate, time to cannulation and mean number of attempts to successful cannulation. Odds ratio (OR) and standardized mean difference (SMD) or mean difference (MD) with 95% confidence interval (CI) were calculated for categorical and continuous variables, respectively. Data from 1895 patients from 10 studies have been included in this meta-analysis. Overall cannulation success rate was similar between the ultrasound-guided technique and digital palpation [OR (95% CI) 2.01 (1.00, 4.06); p = 0.05]. Ultrasound-guided radial artery cannulation is associated with a higher first-attempt success rate of radial artery cannulation in comparison to digital palpation [OR (95% CI) 2.76 (1.86, 4.10); p < 0.001]. No difference was seen in time to cannulation [SMD (95% CI) -0.31 (-0.65, 0.04); p = 0.30] or mean number of attempts [MD (95% CI) -0.65 (-1.32, 0.02); p = 0.06] between the ultrasound-guided technique and the palpation technique. Radial artery cannulation by ultrasound guidance may increase the first-attempt success rate but not the overall cannulation success when compared to the digital palpation technique. However, the results of this meta-analysis should be interpreted with caution due to the presence of heterogeneity. Copyright © 2018 Elsevier Inc. All rights reserved.
Bhat, Riyaz A; Lahaye, Thomas; Panstruga, Ralph
2006-01-01
Non-invasive fluorophore-based protein interaction assays like fluorescence resonance energy transfer (FRET) and bimolecular fluorescence complementation (BiFC, also referred to as "split YFP") have proven to be invaluable tools for studying protein-protein interactions in living cells. Both methods are now frequently used in the plant sciences and are likely to develop into standard techniques for the identification, verification and in-depth analysis of polypeptide interactions. In this review, we address the individual strengths and weaknesses of both approaches and provide an outlook on new directions and possible future developments for both techniques. PMID:16800872
Resource recycling technique of abandoned TNT-RDX-AL mixed explosive
NASA Astrophysics Data System (ADS)
Chen, Siyang; Ding, Yukui
2017-08-01
TNT-RDX-AL mixed explosive is a kind of high-energy mixed explosive. It retains its detonation characteristics even after reaching the scrapping standard, and inappropriate disposal often causes serious accidents. Employing resource recycling techniques, abandoned TNT-RDX-AL mixed explosive can be recovered and reused. This paper summarizes the progress of recycling of abandoned mixed explosives and introduces three technological processes for resource recycling of abandoned TNT-RDX-AL mixed explosive. The authors analyze the current recovery processes and provide a reference for the recycling of other explosives of the same type.
Applications of remote sensing, volume 3
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator)
1977-01-01
The author has identified the following significant results. Of the four change detection techniques (post-classification comparison, delta data, spectral/temporal, and layered spectral-temporal), post-classification comparison was selected for further development. This was based upon the test performance of the four change detection methods, the straightforwardness of the procedures, and the output products desired. A standardized, modified, supervised classification procedure for analyzing the Texas coastal zone data was compiled. This procedure was developed so that all quadrangles in the study area would be classified using similar analysis techniques, to allow for meaningful comparisons and evaluations of the classifications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harold S. Blackman; Ronald Boring; Julie L. Marble
This panel will discuss what new directions are necessary to maximize the usefulness of HRA techniques across different areas of application. HRA has long been a part of Probabilistic Risk Assessment in the nuclear industry as it offers a superior standard for risk-based decision-making. These techniques are continuing to be adopted by other industries including oil & gas, cybersecurity, nuclear, and aviation. Each participant will present his or her ideas concerning industry needs followed by a discussion about what research is needed and the necessity to achieve cross industry collaboration.
Analysis of enamel rod end patterns on tooth surface for personal identification--ameloglyphics.
Manjunath, Krishnappa; Sivapathasundharam, Balasundharam; Saraswathi, Thillai R
2012-05-01
Ameloglyphics is the study of enamel rod end patterns on a tooth surface. Our aim was to study the in vivo analysis of enamel rod end patterns on tooth surfaces for personal identification. In this study, the maxillary left canine and 1st premolar of 30 men and 30 women were included. The cellulose acetate peel technique was used to record enamel rod endings on tooth surfaces. Photomicrographs of the acetate peel imprint were subjected to VeriFinger Standard SDK v5.0 software for obtaining enamel rod end patterns. All 120 enamel rod end patterns were subjected to visual analysis and biometric analysis. Biometric analysis revealed that the enamel rod end pattern is unique for each tooth in an individual. It shows both intra- and interindividual variation. Enamel rod end patterns were unique between the male and female subjects. Visual analysis showed that wavy branched subpattern was the predominant subpattern observed among examined teeth. Hence, ameloglyphics is a reliable technique for personal identification. © 2012 American Academy of Forensic Sciences.
Trace analysis of high-purity graphite by LA-ICP-MS.
Pickhardt, C; Becker, J S
2001-07-01
Laser-ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has been established as a very efficient and sensitive technique for the direct analysis of solids. In this work the capability of LA-ICP-MS was investigated for the determination of trace elements in high-purity graphite. Synthetic laboratory standards with a graphite matrix were prepared for the purpose of quantifying the analytical results. Doped trace elements at a concentration of 0.5 microg g(-1) in a laboratory standard were determined with an accuracy of ±1% to ±7% and a relative standard deviation (RSD) of 2-13%. Solution-based calibration was also used for quantitative analysis of high-purity graphite. It was found that such calibration led to analytical results for trace-element determination in graphite with accuracy similar to that obtained by use of synthetic laboratory standards for quantification. Results from the quantitative determination of trace impurities in a real reactor-graphite sample, using both quantification approaches, were in good agreement. Detection limits for all elements of interest were in the low ng g(-1) concentration range. A tenfold improvement in detection limits was achieved for analyses of high-purity graphite by LA-ICP-MS under wet plasma conditions, because of the lower background signal and increased element sensitivity.
Adamski, Mateusz G; Gumann, Patryk; Baird, Alison E
2014-01-01
Over the past decade rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reaction (qPCR) has become the gold standard for quantifying gene expression. Microfluidic next-generation, high-throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard qPCR and high-throughput qPCR. This technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and, with the use of a universal reference sample (commercial complementary DNA (cDNA)), permits the normalization of results between different batches and between different instruments, regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) the achievement of absolute quantification and (2) a non-efficiency-corrected analysis. When compared to other commonly used algorithms the input quantity method proved to be valid. This method is of particular value for clinical studies of whole blood and circulating leukocytes where cell counts are readily available.
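The snippet below is a minimal sketch of the kind of efficiency-corrected, input-quantity-normalized calculation described above; it is not the authors' exact algorithm, and the Cq values, efficiency and cell count are hypothetical.

def expression_per_input(cq_sample, cq_reference, efficiency, input_cells):
    """Efficiency-corrected expression normalized to input quantity.

    A minimal sketch, not the authors' exact algorithm: transcript quantity
    is taken proportional to E**(-Cq), expressed relative to a universal
    reference cDNA measured on the same plate, then divided by the number of
    input cells. efficiency: amplification efficiency E (2.0 = perfect
    doubling per cycle).
    """
    rel_to_reference = efficiency ** (cq_reference - cq_sample)
    return rel_to_reference / input_cells

# Example: Cq 24.1 in the sample, 21.3 for the commercial reference cDNA,
# a 92%-efficient assay (E = 1.92), and 10,000 leukocytes as input.
print(expression_per_input(24.1, 21.3, 1.92, 1e4))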
NASA Astrophysics Data System (ADS)
Shen, Chuan-Chou; Lin, Huei-Ting; Chu, Mei-Fei; Yu, Ein-Fen; Wang, Xianfeng; Dorale, Jeffrey A.
2006-09-01
A new analytical technique using inductively coupled plasma-quadrupole mass spectrometry (ICP-QMS) has been developed that produces permil-level precision in the measurement of uranium concentration ([U]) and isotopic composition (δ234U) in natural materials. A 233U-236U double spike was used to correct for mass fractionation during analysis. To correct for ratio drift, samples were bracketed by uranium standard measurements. A sensitivity of 6-7 × 10⁸ cps/ppm was obtained with a sample solution uptake rate of 30 μL/min. With a measurement time of 15-20 min, standards containing 30 ng of uranium produced a within-run precision better than 3‰ (±2 R.S.D.) for δ234U and better than 2‰ for [U]. Replicate measurements made on standards show that a between-run reproducibility of 3.5‰ for δ234U and 2‰ for [U] can be achieved. ICP-QMS data for δ234U and [U] in seawater, coral, and speleothem materials are consistent with data measured by other ICP-MS and TIMS techniques. Advantages of the ICP-QMS method include low cost, easy maintenance, simple instrumental operation, and few sample preparation steps. Sample size requirements are small, such as 10-14 mg of coral material. The results demonstrate that this technique can be applied to natural samples with various matrices.
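As a simple illustration of the standard-bracketing drift correction mentioned above, the sketch below divides a sample's measured ratio by the drift factor interpolated from the standards run before and after it; the ratio values are hypothetical, and the double-spike fractionation correction is not reproduced here.

def bracket_correct(sample_ratio, std_before, std_after, std_true):
    """Standard-sample bracketing correction for instrumental drift.

    A minimal sketch of the bracketing described above: the sample's measured
    isotope ratio is divided by the drift factor estimated from the standards
    run immediately before and after it. std_true is the accepted ratio of
    the bracketing standard.
    """
    drift = 0.5 * (std_before + std_after) / std_true
    return sample_ratio / drift

# Hypothetical numbers: the standards read about 0.4% high on average.
print(bracket_correct(5.52e-5, 5.300e-5, 5.308e-5, 5.286e-5))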
Valente, Roberto; Sutcliffe, Robert; Levesque, Eric; Costa, Mara; De' Angelis, Nicola; Tayar, Claude; Cherqui, Daniel; Laurent, Alexis
2018-04-01
Laparoscopic left hemihepatectomy (LLH) may be an alternative to open left hemihepatectomy (OLH). There are several original variations in the technical aspects of LLH, and no accepted standard. The aim of this study is to assess the safety and effectiveness of the technique developed at Henri Mondor Hospital since 1996. The technique of LLH was conceived for safety and for the training of two mature generations of lead surgeons. The technique includes full laparoscopy, a ventral approach to the common trunk, extrahepatic pedicle dissection, CUSA® parenchymal transection, division of the left hilar plate laterally to the Arantius ligament, and ventral transection of the left hepatic vein. The outcomes of LLH and OLH were compared. Perioperative analysis included intraoperative, postoperative, and histology variables. Propensity score matching was undertaken on background covariates including age, ASA, BMI, fibrosis, steatosis, tumour size, and specimen weight. 17 LLH and 51 OLH were performed from 1996 to 2014, with perioperative mortality rates of 0% and 6%, respectively. In the LLH group, two patients underwent conversion to open surgery. Propensity matching selected 10 LLH/OLH pairs. The LLH group had a higher proportion of procedures for benign disease. LLH was associated with a longer operating time and less blood loss. Perioperative complications occurred in 30% (LLH) and 10% (OLH) (p = 1). Mortality and ITU stay were similar. This technique is recommended as a possible technical reference for standard LLH. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
Steering of Frequency Standards by the Use of Linear Quadratic Gaussian Control Theory
NASA Technical Reports Server (NTRS)
Koppang, Paul; Leland, Robert
1996-01-01
Linear quadratic Gaussian control is a technique that uses Kalman filtering to estimate a state vector used for input into a control calculation. A control correction is calculated by minimizing a quadratic cost function that is dependent on both the state vector and the control amount. Different penalties, chosen by the designer, are assessed by the controller as the state vector and control amount vary from given optimal values. With this feature controllers can be designed to force the phase and frequency differences between two standards to zero either more or less aggressively depending on the application. Data will be used to show how using different parameters in the cost function analysis affects the steering and the stability of the frequency standards.
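A minimal sketch of the approach is given below: a steady-state Kalman filter estimates the phase and frequency offsets of a two-state clock model, and an LQR gain computed from designer-chosen penalty matrices produces the steering correction. The model, noise levels and measurements are hypothetical and in arbitrary units; they are not the parameters used in the study.

import numpy as np
from scipy.linalg import solve_discrete_are

# Two-state clock model: x = [phase offset, frequency offset], sampled every tau.
tau = 1.0
A = np.array([[1.0, tau],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])              # the control input adjusts frequency
C = np.array([[1.0, 0.0]])         # only the phase difference is measured
Q_proc = np.diag([1e-6, 1e-8])     # hypothetical process noise (arbitrary units)
R_meas = np.array([[1e-4]])        # hypothetical measurement noise

# Kalman gain from the steady-state estimation Riccati equation (dual DARE).
P = solve_discrete_are(A.T, C.T, Q_proc, R_meas)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R_meas)

# LQR gain from the control Riccati equation; Qc and Rc are the designer-chosen
# penalties on state error and control effort discussed above.
Qc = np.diag([1.0, 1e2])
Rc = np.array([[1e3]])
S = solve_discrete_are(A, B, Qc, Rc)
L = np.linalg.inv(Rc + B.T @ S @ B) @ B.T @ S @ A

x_hat = np.zeros((2, 1))
for z in [2.0e-3, 1.9e-3, 2.1e-3]:             # hypothetical phase measurements
    u = -L @ x_hat                             # steering correction
    x_hat = A @ x_hat + B @ u                  # predict
    x_hat = x_hat + K @ (z - C @ x_hat)        # update with the new measurement
    print("steer:", u.item(), "estimate:", x_hat.ravel())

Larger entries in Qc force the phase and frequency differences to zero more aggressively, while a larger Rc makes the steering gentler, which is the trade-off the abstract describes.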
Analysis of view synthesis prediction architectures in modern coding standards
NASA Astrophysics Data System (ADS)
Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang
2013-09-01
Depth-based 3D formats are currently being developed as extensions to both the AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is concluded that block-based VSP prediction for multiview video signals provides attractive coding gains with complexity comparable to that of traditional motion/disparity compensation.
Nevada Applied Ecology Group procedures handbook for environmental transuranics
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, M.G.; Dunaway, P.B.
The activities of the Nevada Applied Ecology Group (NAEG) integrated research studies of environmental plutonium and other transuranics at the Nevada Test Site have required many standardized field and laboratory procedures. These include sampling techniques, collection and preparation, radiochemical and wet chemistry analysis, data bank storage and reporting, and statistical considerations for environmental samples of soil, vegetation, resuspended particles, animals, and others. This document, printed in two volumes, includes most of the Nevada Applied Ecology Group standard procedures, with explanations as to the specific applications involved in the environmental studies. Where there is more than one document concerning a procedure, it has been included to indicate special studies or applications perhaps more complex than the routine standard sampling procedures utilized.
Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed
2017-11-01
There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, the measured concentration and composition of NAs vary depending on the method used. This study compared different common sample preparation techniques, analytical instrument methods, and analytical standards to measure NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely ultra-performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently had the highest NA concentrations and the greatest standard error. Total NA concentration was not statistically different between solid phase extraction and liquid-liquid extraction sample preparation. Calibration standards influenced quantitation results. This work provides a comprehensive understanding of the inherent differences among the various techniques available to measure NAs, and hence the potential differences in measured amounts of NAs in samples. Results from this study will contribute to analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Campbell, Ian S.; Ton, Alain T.; Mulligan, Christopher C.
2011-07-01
An ambient mass spectrometric method based on desorption electrospray ionization (DESI) has been developed to allow rapid, direct analysis of contaminated water samples, and the technique was evaluated through analysis of a wide array of pharmaceutical and personal care product (PPCP) contaminants. Incorporating direct infusion of aqueous sample and thermal assistance into the source design has allowed low ppt detection limits for the target analytes in drinking water matrices. With this methodology, mass spectral information can be collected in less than 1 min, consuming ~100 μL of total sample. Quantitative ability was also demonstrated without the use of an internal standard, yielding decent linearity and reproducibility. Initial results suggest that this source configuration is resistant to carryover effects and robust towards multi-component samples. The rapid, continuous analysis afforded by this method offers advantages in terms of sample analysis time and throughput over traditional hyphenated mass spectrometric techniques.
Diaby, Vakaramoko; Sanogo, Vassiki; Moussa, Kouame Richard
2015-01-01
Objective In this paper, the readers are introduced to ELICIT, an imprecise weight elicitation technique for multicriteria decision analysis for healthcare. Methods The application of ELICIT consists of two steps: the rank ordering of evaluation criteria based on decision-makers' (DMs) preferences using principal component analysis; and the estimation of criteria weights and their descriptive statistics using variable interdependent analysis and the Monte Carlo method. The application of ELICIT is illustrated with a hypothetical case study involving the elicitation of weights for five criteria used to select the best device for eye surgery. Results The criteria were ranked from 1 to 5, based on a strict preference relationship established by the DMs. For each criterion, the deterministic weight was estimated, as well as the standard deviation and 95% credibility interval. Conclusions ELICIT is appropriate in situations where only ordinal DMs' preferences are available to elicit decision criteria weights. PMID:26361235
NASA Astrophysics Data System (ADS)
Zhang, Tie-Yan; Zhao, Yan; Xie, Xiang-Peng
2012-12-01
This paper is concerned with the problem of stability analysis of nonlinear Roesser-type two-dimensional (2D) systems. First, the fuzzy modeling method for the usual one-dimensional (1D) systems is extended to the 2D case so that the underlying nonlinear 2D system can be represented by a 2D Takagi-Sugeno (TS) fuzzy model, which is convenient for implementing the stability analysis. Second, a new kind of fuzzy Lyapunov function, which is homogeneously polynomially parameter-dependent on the fuzzy membership functions, is developed to obtain less conservative stability conditions for the TS Roesser-type 2D system. In the process of stability analysis, the obtained stability conditions approach exactness in the sense of convergence by applying some novel relaxation techniques. Moreover, the obtained result is formulated in the form of linear matrix inequalities, which can be easily solved via standard numerical software. Finally, a numerical example is given to demonstrate the effectiveness of the proposed approach.
Zimmerman, Heather A; Meizel-Lambert, Cayli J; Schultz, John J; Sigman, Michael E
2015-03-01
Forensic anthropologists are generally able to identify skeletal materials (bone and tooth) using gross anatomical features; however, highly fragmented or taphonomically altered materials may be problematic to identify. Several chemical analysis techniques have been shown to be reliable laboratory methods that can be used to determine if questionable fragments are osseous, dental, or non-skeletal in nature. The purpose of this review is to provide a detailed background of chemical analysis techniques focusing on elemental compositions that have been assessed for use in differentiating osseous, dental, and non-skeletal materials. More recently, chemical analysis studies have also focused on using the elemental composition of osseous/dental materials to evaluate species and provide individual discrimination, but have generally been successful only in small, closed groups, limiting their use forensically. Despite significant advances incorporating a variety of instruments, including handheld devices, further research is necessary to address issues in standardization, error rates, and sample size/diversity. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies be orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation; detection of neoclassical tearing modes, including locked-mode precursors; automatic clustering of modes; and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.
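The following is a toy, covariance-driven sketch of the SSI idea (a block Hankel matrix of output covariances, SVD, shift invariance, and a small eigenvalue problem), applied to synthetic two-probe data; it is not the deployed DIII-D algorithm, and all parameters are illustrative.

import numpy as np

def ssi_frequencies(y, fs, order, n_lags=40):
    """Simplified covariance-driven stochastic subspace identification.

    y: (n_channels, n_samples) array of magnetic probe signals.
    Returns estimated mode frequencies (Hz) and damping ratios.
    A toy sketch of the SSI idea, not the deployed tokamak algorithm.
    """
    nc, ns = y.shape
    # Output covariance matrices R_k = E[y(t+k) y(t)^T]
    R = [y[:, k:] @ y[:, :ns - k].T / (ns - k) for k in range(2 * n_lags)]
    # Block Hankel matrix of output covariances
    H = np.block([[R[i + j + 1] for j in range(n_lags)] for i in range(n_lags)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    U1 = U[:, :order] * np.sqrt(s[:order])          # observability matrix estimate
    # Shift invariance: upper/lower block rows give the state matrix A
    A = np.linalg.pinv(U1[:-nc, :]) @ U1[nc:, :]
    lam = np.linalg.eigvals(A)                      # small eigenvalue problem
    s_cont = np.log(lam) * fs                       # continuous-time poles
    freqs = np.abs(s_cont.imag) / (2 * np.pi)
    damp = -s_cont.real / np.abs(s_cont)
    return freqs, damp

# Synthetic test: two probes seeing a 3.2 kHz mode in noise.
fs = 100e3
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(2)
y = np.vstack([np.sin(2 * np.pi * 3.2e3 * t) + 0.3 * rng.standard_normal(t.size),
               0.8 * np.cos(2 * np.pi * 3.2e3 * t) + 0.3 * rng.standard_normal(t.size)])
print(ssi_frequencies(y, fs, order=2))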
BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.
2013-03-20
Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ≈0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric detrending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
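As a hedged illustration of the blind-source-separation idea, the sketch below mixes a synthetic transit signal with a non-Gaussian systematic across several hypothetical spectral channels and separates them with FastICA; it is not the pipeline used on the Hubble/NICMOS data.

import numpy as np
from sklearn.decomposition import FastICA

# Toy illustration of the non-parametric idea described above: each spectral
# channel's raw light curve is modelled as a mixture of the astrophysical
# transit signal and non-Gaussian instrumental systematics, and FastICA
# separates them using only their statistical independence.
rng = np.random.default_rng(3)
n_frames = 600
t = np.linspace(-0.1, 0.1, n_frames)
transit = 1.0 - 0.0025 * (np.abs(t) < 0.04)              # box-shaped transit
systematic = 0.002 * np.sign(np.sin(2 * np.pi * 7 * t))  # non-Gaussian jitter proxy

# Eleven hypothetical spectral channels, each a different mix plus noise.
mix = rng.uniform(0.5, 1.5, size=(11, 2))
X = mix @ np.vstack([transit, systematic])
X += 1e-4 * rng.standard_normal(X.shape)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X.T)           # columns are the estimated sources
# The component most correlated with a transit template is kept as the signal;
# the other is treated as the systematic trend and removed.
corr = [abs(np.corrcoef(s, transit)[0, 1]) for s in sources.T]
print("transit-like component index:", int(np.argmax(corr)))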
Method comparison for forest soil carbon and nitrogen estimates in the Delaware River basin
B. Xu; Yude Pan; A.H. Johnson; A.F. Plante
2016-01-01
The accuracy of forest soil C and N estimates is hampered by forest soils that are rocky, inaccessible, and spatially heterogeneous. A composite coring technique is the standard method used in Forest Inventory and Analysis, but its accuracy has been questioned. Quantitative soil pits provide direct measurement of rock content and soil mass from a larger, more...
Neural and Behavioral Sequelae of Blast-Related Traumatic Brain Injury
2012-11-01
testing and advanced MRI techniques [task-activated functional MRI (fMRI) and diffusion tensor imaging (DTI)] to gain a comprehensive understanding of ... DTI fiber tracking) and neurobehavioral testing (computerized assessment and standard neuropsychological testing) on 60 chronic trauma patients ... data analysis. Subject terms: blast-related traumatic brain injury (TBI), fMRI, DTI, cognition.
ERIC Educational Resources Information Center
Jeffries, Rhonda; Jeffries, Devair
2014-01-01
This article explored the role of hair in Sylviane Diouf's "Bintou's Braids" and focused on the impact of hair as a cultural signifier on girls and the curriculum. The article examined the ability of this children's text to address female beauty standards and suggests the use of literary techniques, such as reader's theatre, to recognize…
ERIC Educational Resources Information Center
Pinkerton, Steven D.; Benotsch, Eric G.; Mikytuck, John
2007-01-01
The "gold standard" for evaluating human immunodeficiency virus (HIV) prevention programs is a partner-by-partner sexual behavior assessment that elicits information about each sex partner and the activities engaged in with that partner. When collection of detailed partner-by-partner data is not feasible, aggregate data (e.g., total…
Data needs for X-ray astronomy satellites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kallman, T.
I review the current status of atomic data for X-ray astronomy satellites. This includes some of the astrophysical issues which can be addressed, current modeling and analysis techniques, computational tools, the limitations imposed by currently available atomic data, and the validity of standard assumptions. I also discuss the future: challenges associated with future missions and goals for atomic data collection.
Analysis of tincal ore waste by energy dispersive X-ray fluorescence (EDXRF) Technique
NASA Astrophysics Data System (ADS)
Kalfa, Orhan Murat; Üstündağ, Zafer; Özkırım, Ilknur; Kagan Kadıoğlu, Yusuf
2007-01-01
Etibank Borax Plant is located in Kırka-Eskişehir, Turkey. The borax waste from this plant was analyzed by means of energy dispersive X-ray fluorescence (EDXRF). The standard addition method was used for the determination of the concentration of Al, Fe, Zn, Sn, and Ba. The results are presented and discussed in this paper.
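The calculation behind the standard addition method is straightforward; the sketch below fits the measured signal versus added concentration and extrapolates to obtain the analyte concentration in the unspiked sample. The element, spike levels and count rates are hypothetical.

import numpy as np

def standard_addition(added_conc, signal):
    """Analyte concentration in the original sample by standard addition.

    Fits signal = m * c_added + b and extrapolates to zero signal; the
    unknown concentration is b / m (magnitude of the x-axis intercept).
    """
    m, b = np.polyfit(added_conc, signal, 1)
    return b / m

# Hypothetical EDXRF intensities for Fe after spiking the waste with
# 0, 50, 100 and 150 mg/kg of an Fe standard.
added = [0.0, 50.0, 100.0, 150.0]
counts = [420.0, 805.0, 1190.0, 1575.0]
print("Fe in sample (mg/kg):", standard_addition(added, counts))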
USDA-ARS?s Scientific Manuscript database
Stir bar sorptive extraction (SBSE) is a technique for extraction and analysis of organic compounds in aqueous matrices, similar in theory to solid phase microextraction (SPME). SBSE has been successfully used to analyze several organic compounds, including food matrices. When compared with SPME, ...
Some computational techniques for estimating human operator describing functions
NASA Technical Reports Server (NTRS)
Levison, W. H.
1986-01-01
Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
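As a sketch of the ensemble statistics referred to above, the code below computes the mean gain and phase shift at each sum-of-sines frequency across trials, together with their standard errors; the input format and numbers are hypothetical, not those of the report.

import numpy as np

def gain_phase_stats(describing_fns):
    """Mean gain/phase and their standard errors across an ensemble of trials.

    describing_fns: complex array of shape (n_trials, n_frequencies) holding
    the operator describing function estimated at each sum-of-sines frequency
    on each experimental trial (hypothetical input format).
    """
    gain_db = 20.0 * np.log10(np.abs(describing_fns))
    phase_deg = np.unwrap(np.angle(describing_fns), axis=1) * 180.0 / np.pi
    n = describing_fns.shape[0]
    stats = {}
    for name, x in [("gain_dB", gain_db), ("phase_deg", phase_deg)]:
        stats[name] = (x.mean(axis=0), x.std(axis=0, ddof=1) / np.sqrt(n))
    return stats

# Five hypothetical trials, three forcing-function frequencies.
rng = np.random.default_rng(4)
Y = (1.5 + 0.1 * rng.standard_normal((5, 3))) * np.exp(
    1j * (-0.6 + 0.05 * rng.standard_normal((5, 3))))
for key, (mean, se) in gain_phase_stats(Y).items():
    print(key, mean.round(2), "+/-", se.round(3))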
Strain-gage bridge calibration and flight loads measurements on a low-aspect-ratio thin wing
NASA Technical Reports Server (NTRS)
Peele, E. L.; Eckstrom, C. V.
1975-01-01
Strain-gage bridges were used to make in-flight measurements of bending moment, shear, and torque loads on a low-aspect-ratio, thin, swept wing having a full-depth honeycomb sandwich type structure. Standard regression analysis techniques were employed in the calibration of the strain bridges. Comparisons of the measured loads with theoretical loads are included.
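A minimal, hypothetical sketch of the regression-based calibration is shown below: bridge outputs from simulated ground calibration loadings are related to applied shear, bending moment and torque by least squares, and the resulting coefficient matrix converts a flight measurement to loads. The number of bridges and load cases is illustrative.

import numpy as np

def calibrate_bridges(mu, loads):
    """Least-squares calibration of strain-gage bridge outputs to applied loads.

    mu: (n_cases, n_bridges) bridge outputs from ground calibration loadings.
    loads: (n_cases, 3) applied shear, bending moment and torque.
    Returns the coefficient matrix used to convert flight bridge outputs to
    loads (a generic sketch of the regression approach, with hypothetical
    numbers of bridges and load cases).
    """
    coeff, *_ = np.linalg.lstsq(mu, loads, rcond=None)
    return coeff

rng = np.random.default_rng(5)
true = rng.normal(size=(4, 3))                   # 4 bridges -> 3 load components
mu = rng.normal(size=(25, 4))                    # 25 calibration load cases
loads = mu @ true + 0.01 * rng.standard_normal((25, 3))
coeff = calibrate_bridges(mu, loads)
flight_mu = rng.normal(size=(1, 4))              # one in-flight measurement
print("flight loads estimate:", flight_mu @ coeff)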
ERIC Educational Resources Information Center
Karunanayake, Akila G.; Dewage, Narada Bombuwala; Todd, Olivia Adele; Essandoh, Matthew; Anderson, Renel; Mlsna, Todd; Mlsna, Deb
2016-01-01
Adsorption studies of salicylic acid (SA) and 4-nitroaniline (4NA) from aqueous solutions were performed with magnetic biochar (MBC) in order to train students in analytical techniques such as standard calibration curves, UV-vis spectrophotometry, and chemical separations within the context of wastewater purification. Analysis of samples purified…
ERIC Educational Resources Information Center
Aybek, Birsel; Aslan, Serkan
2016-01-01
Problem Statement: Various research studies have been conducted investigating the quality and quantity of textbooks, such as wording, content, design, visuality, physical properties, activities, methods and techniques, questions and experiments, events, misconceptions, organizations, pictures, text selection, end-of-unit questions and assessments, indexes…
Fast charging technique for high power LiFePO4 batteries: A mechanistic analysis of aging
NASA Astrophysics Data System (ADS)
Anseán, D.; Dubarry, M.; Devie, A.; Liaw, B. Y.; García, V. M.; Viera, J. C.; González, M.
2016-07-01
One of the major issues hampering the acceptance of electric vehicles (EVs) is the anxiety associated with long charging times. Hence, the ability to fast-charge lithium-ion battery (LIB) systems is gaining notable interest. However, fast charging is not tolerated by all LIB chemistries because it affects battery functionality and accelerates aging processes. Here, we investigate the long-term effects of multistage fast charging on a commercial high-power LiFePO4-based cell and compare it to another cell tested under standard charging. Coupling incremental capacity (IC) and IC peak area analysis with mechanistic model simulations (the 'Alawa' toolbox with harvested half-cell data), we quantify the degradation modes that cause aging of the tested cells. The results show that the proposed fast charging technique caused aging effects similar to those of standard charging. The degradation is caused by a linear loss of lithium inventory, coupled with a lesser degree of linear loss of active material on the negative electrode. This study validates fast charging as a feasible means of operation for this particular LIB chemistry and cell architecture. It also illustrates the benefits of a mechanistic approach to understanding cell degradation in commercial cells.
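To make the incremental capacity analysis concrete, the sketch below computes a dQ/dV curve from a synthetic constant-current charge curve by resampling onto a uniform voltage grid before differentiating; the curve shape and parameters are illustrative, not data from the tested cells.

import numpy as np

def incremental_capacity(voltage, capacity, dv=0.005):
    """Incremental capacity (dQ/dV) curve from a constant-current charge.

    voltage, capacity: monotonically increasing arrays from one charge cycle.
    The data are resampled on a uniform voltage grid (dv, in volts) before
    differentiating, a common way to obtain IC peaks whose positions and
    areas can then be tracked, as in the peak-area analysis mentioned above.
    """
    v_grid = np.arange(voltage.min(), voltage.max(), dv)
    q_grid = np.interp(v_grid, voltage, capacity)
    dqdv = np.gradient(q_grid, v_grid)
    return v_grid, dqdv

# Synthetic LFP-like charge curve: a flat plateau near 3.4 V with steep ends.
q = np.linspace(0.0, 1.1, 500)                         # capacity in Ah
v = (3.40 + 0.02 * (q - 0.55)
     + 0.15 * np.exp((q - 1.1) / 0.03)
     - 0.15 * np.exp(-q / 0.03))
v_grid, dqdv = incremental_capacity(v, q)
print("IC peak at %.3f V, %.1f Ah/V" % (v_grid[np.argmax(dqdv)], dqdv.max()))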