A Comparison of Interactional Aerodynamics Methods for a Helicopter in Low Speed Flight
NASA Technical Reports Server (NTRS)
Berry, John D.; Letnikov, Victor; Bavykina, Irena; Chaffin, Mark S.
1998-01-01
Recent advances in computing subsonic flow have been applied to helicopter configurations with various degrees of success. This paper is a comparison of two specific methods applied to a particularly challenging regime of helicopter flight, very low speeds, where the interaction of the rotor wake and the fuselage is most significant. Comparisons are made between different methods of predicting the interactional aerodynamics associated with a simple generic helicopter configuration. These comparisons are made using fuselage pressure data from a Mach-scaled powered model helicopter with a rotor diameter of approximately 3 meters. The data shown are for an advance ratio of 0.05 with a thrust coefficient of 0.0066. The results of this comparison show that in this type of complex flow both analytical techniques have regions where they are more accurate in matching the experimental data.
ERIC Educational Resources Information Center
Köksal, Mustafa Serdar
2013-01-01
This study aims to compare academically advanced science students and gifted students in terms of attitude toward science and motivation toward science learning. The survey method was used for data collection with the help of two different instruments: the "Attitude Toward Science" scale and the "motivation toward science…
A Comparison of Signal Enhancement Methods for Extracting Tonal Acoustic Signals
NASA Technical Reports Server (NTRS)
Jones, Michael G.
1998-01-01
The measurement of pure tone acoustic pressure signals in the presence of masking noise, often generated by mean flow, is a continual problem in the field of passive liner duct acoustics research. In support of the Advanced Subsonic Technology Noise Reduction Program, methods were investigated for conducting measurements of advanced duct liner concepts in harsh, aeroacoustic environments. This report presents the results of a comparison study of three signal extraction methods for acquiring quality acoustic pressure measurements in the presence of broadband noise (used to simulate the effects of mean flow). The performance of each method was compared to a baseline measurement of a pure tone acoustic pressure 3 dB above a uniform, broadband noise background.
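One family of extraction techniques alluded to above is synchronous (lock-in style) demodulation, which recovers a tone of known frequency by correlating the measurement against quadrature references so that uncorrelated broadband noise averages away. The sketch below is an illustrative toy, not one of the three methods actually compared in the report; the sample rate, tone frequency, amplitude, and noise level are all made-up values.

```python
import numpy as np

# Hypothetical sketch: estimate the amplitude of a known-frequency tone
# buried in broadband noise via synchronous (lock-in style) demodulation.
rng = np.random.default_rng(0)

fs = 8192.0          # sample rate, Hz (illustrative)
f0 = 500.0           # known tone frequency, Hz (illustrative)
amp = 1.0            # true tone amplitude
t = np.arange(0, 2.0, 1.0 / fs)   # 2 seconds of samples

measured = amp * np.sin(2 * np.pi * f0 * t) \
    + rng.normal(scale=0.7, size=t.size)   # tone + broadband masking noise

# Correlate against quadrature references at f0; averaging over many
# cycles suppresses the noise, which is uncorrelated with the references.
i_comp = 2.0 * np.mean(measured * np.sin(2 * np.pi * f0 * t))
q_comp = 2.0 * np.mean(measured * np.cos(2 * np.pi * f0 * t))
est_amp = np.hypot(i_comp, q_comp)

print(f"true amplitude: {amp:.3f}, estimated: {est_amp:.3f}")
```

With 2 seconds of averaging the estimate lands within a few percent of the true amplitude even though the per-sample noise is comparable to the tone itself.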
ERIC Educational Resources Information Center
Hoelscher, Michael
2017-01-01
This article argues that strong interrelations between methodological and theoretical advances exist. Progress in, especially comparative, methods may have important impacts on theory evaluation. By using the example of the "Varieties of Capitalism" approach and an international comparison of higher education systems, it can be shown…
NASA Technical Reports Server (NTRS)
Mcgowan, David M.; Bostic, Susan W.; Camarda, Charles J.
1993-01-01
The development of two advanced reduced-basis methods, the force derivative method and the Lanczos method, and two widely used modal methods, the mode displacement method and the mode acceleration method, for transient structural analysis of unconstrained structures is presented. Two example structural problems are studied: an undamped, unconstrained beam subject to a uniformly distributed load which varies as a sinusoidal function of time and an undamped high-speed civil transport aircraft subject to a normal wing tip load which varies as a sinusoidal function of time. These example problems are used to verify the methods and to compare the relative effectiveness of each of the four reduced-basis methods for performing transient structural analyses on unconstrained structures. The methods are verified with a solution obtained by integrating directly the full system of equations of motion, and they are compared using the number of basis vectors required to obtain a desired level of accuracy and the associated computational times as comparison criteria.
An entropy and viscosity corrected potential method for rotor performance prediction
NASA Technical Reports Server (NTRS)
Bridgeman, John O.; Strawn, Roger C.; Caradonna, Francis X.
1988-01-01
An unsteady Full-Potential Rotor code (FPR) has been enhanced with modifications directed at improving its drag prediction capability. The shock-generated entropy has been included to provide solutions comparable to the Euler equations. A weakly interacting integral boundary layer method has also been coupled to FPR in order to estimate skin-friction drag. Pressure distributions, shock positions, and drag comparisons are made with various data sets derived from two-dimensional airfoil, hovering, and advancing high-speed rotor tests. In all these comparisons, the nonisentropic modification weakens the predicted shock strength and reduces the wave drag, improving agreement with the data. In addition, the boundary layer method yields reasonable estimates of skin-friction drag. Airfoil drag and hover torque data comparisons are excellent, as are the predicted shock strengths and positions for a high-speed advancing rotor.
Computational aspects of real-time simulation of rotary-wing aircraft. M.S. Thesis
NASA Technical Reports Server (NTRS)
Houck, J. A.
1976-01-01
A study was conducted to determine the effects of degrading a rotating blade element rotor mathematical model suitable for real-time simulation of rotorcraft. Three methods of degradation were studied, reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments; (2) reduction of number of blades; and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which a different rotor mathematical model should be used.
Effects of rotor model degradation on the accuracy of rotorcraft real time simulation
NASA Technical Reports Server (NTRS)
Houck, J. A.; Bowles, R. L.
1976-01-01
The effects of degrading a rotating blade element rotor mathematical model to meet various real-time simulation requirements of rotorcraft are studied. Three methods of degradation were studied: reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments, (2) reduction of number of blades, and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which the rotating blade element rotor mathematical model should not be used.
USDA-ARS?s Scientific Manuscript database
Background: Availability of a large number of data sets in public repositories and the advances in integrating multi-omics methods have greatly advanced our understanding of biological organisms and microbial associates, as well as large subcellular organelles, such as mitochondria. Mitochondrial ...
Recent advances in the sequencing of relevant water intrusion fungi by the EPA, combined with the development of probes and primers have allowed for the unequivocal quantitative and qualitative identification of fungi in selected matrices.
In this pilot study, quantitative...
Reflections on experimental research in medical education.
Cook, David A; Beckman, Thomas J
2010-08-01
As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently stronger than the pretest-posttest design, provided the study is randomized and the sample is sufficiently large. Third, demonstrating the superiority of an educational intervention in comparison to no intervention does little to advance the art and science of education. Fourth, comparisons involving multifactorial interventions are hopelessly confounded, have limited application to new settings, and do little to advance our understanding of education. Fifth, single-group pretest-posttest studies are susceptible to numerous validity threats. Finally, educational interventions (including the comparison group) must be described in detail sufficient to allow replication.
Test Method Designed to Evaluate Cylinder Liner-Piston Ring Coatings for Advanced Heat Engines
NASA Technical Reports Server (NTRS)
Radil, Kevin C.
1997-01-01
Research on advanced heat engine concepts, such as the low-heat-rejection engine, has shown the potential for increased thermal efficiency, reduced emissions, lighter weight, simpler design, and longer life in comparison to current diesel engine designs. A major obstacle in the development of a functional advanced heat engine is overcoming the problems caused by the high combustion temperatures at the piston ring/cylinder liner interface, specifically at top ring reversal (TRR). Therefore, advanced cylinder liner and piston ring materials are needed that can survive under these extreme conditions. To address this need, researchers at the NASA Lewis Research Center have designed a tribological test method to help evaluate candidate piston ring and cylinder liner materials for advanced diesel engines.
A Comparison of Cut Scores Using Multiple Standard Setting Methods.
ERIC Educational Resources Information Center
Impara, James C.; Plake, Barbara S.
This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…
Linguistic Markers of Stance in Early and Advanced Academic Writing: A Corpus-Based Comparison
ERIC Educational Resources Information Center
Aull, Laura L.; Lancaster, Zak
2014-01-01
This article uses corpus methods to examine linguistic expressions of stance in over 4,000 argumentative essays written by incoming first-year university students in comparison with the writing of upper-level undergraduate students and published academics. The findings reveal linguistic stance markers shared across the first-year essays despite…
Propeller flow visualization techniques
NASA Technical Reports Server (NTRS)
Stefko, G. L.; Paulovich, F. J.; Greissing, J. P.; Walker, E. D.
1982-01-01
Propeller flow visualization techniques were tested. The actual operating blade shape, which determines the actual propeller performance and noise, was established. The ability to photographically determine advanced propeller blade tip deflections and local flow field conditions, and to gain insight into aeroelastic instability, is demonstrated. The analytical prediction methods which are being developed can be compared with experimental data. These comparisons contribute to the verification of these improved methods and give improved capability for designing future advanced propellers with enhanced performance and noise characteristics.
NASA Technical Reports Server (NTRS)
Bobbitt, P. J.; Manro, M. E.; Kulfan, R. M.
1980-01-01
Wind tunnel tests of an arrow wing body configuration consisting of flat, twisted, and cambered twisted wings were conducted at Mach numbers from 0.40 to 2.50 to provide an experimental data base for comparison with theoretical methods. A variety of leading and trailing edge control surface deflections were included in these tests, and in addition, the cambered twisted wing was tested with an outboard vertical fin to determine its effect on wing and control surface loads. Theory-experiment comparisons show that current state-of-the-art linear and nonlinear attached flow methods were adequate at small angles of attack typical of cruise conditions. The incremental effects of outboard fin, wing twist, and wing camber are most accurately predicted by the advanced panel method PANAIR. Results of the advanced panel separated flow method, obtained with an early version of the program, show promise that accurate detailed pressure predictions may soon be possible for an aeroelastically deformed wing at high angles of attack.
NASA Technical Reports Server (NTRS)
Stern, Martin O.
1992-01-01
This report describes a study to evaluate the benefits of advanced propulsion technologies for transporting materials between low Earth orbit and the Moon. A relatively conventional reference transportation system, and several other systems, each of which includes one advanced technology component, are compared in terms of how well they perform a chosen mission objective. The evaluation method is based on a pairwise life-cycle cost comparison of each of the advanced systems with the reference system. Somewhat novel and economically important features of the procedure are the inclusion not only of mass payback ratios based on Earth launch costs, but also of repair and capital acquisition costs, and of adjustments in the latter to reflect the technological maturity of the advanced technologies. The required input information is developed by panels of experts. The overall scope and approach of the study are presented in the introduction. The bulk of the paper describes the evaluation method; the reference system and an advanced transportation system, including a spinning tether in an eccentric Earth orbit, are used to illustrate it.
Analytical methods for determining individual aldehyde, ketone, and alcohol emissions from gasoline-, methanol-, and variable-fueled vehicles are described. These methods were used in the Auto/Oil Air quality Improvement Research Program to provide emission data for comparison of...
Somatic cell nuclear transfer: pros and cons.
Sumer, Huseyin; Liu, Jun; Tat, Pollyanna; Heffernan, Corey; Jones, Karen L; Verma, Paul J
2009-01-01
Even though the technique of mammalian SCNT is just over a decade old it has already resulted in numerous significant advances. Despite the recent advances in the reprogramming field, SCNT remains the bench-mark for the generation of both genetically unmodified autologous pluripotent stem cells for transplantation and for the production of cloned animals. In this review we will discuss the pros and cons of SCNT, drawing comparisons with other reprogramming methods.
Advances in the use of observed spatial patterns of catchment hydrological response
NASA Astrophysics Data System (ADS)
Grayson, Rodger B.; Blöschl, Günter; Western, Andrew W.; McMahon, Thomas A.
Over the past two decades there have been repeated calls for the collection of new data for use in developing hydrological science. The last few years have begun to bear fruit from the seeds sown by these calls, through increases in the availability and utility of remote sensing data, as well as the execution of campaigns in research catchments aimed at providing new data for advancing hydrological understanding and predictive capability. In this paper we discuss some philosophical considerations related to model complexity, data availability and predictive performance, highlighting the potential of observed patterns in moving the science and practice of catchment hydrology forward. We then review advances that have arisen from recent work on spatial patterns, including in the characterisation of spatial structure and heterogeneity, and the use of patterns for developing, calibrating and testing distributed hydrological models. We illustrate progress via examples using observed patterns of snow cover, runoff occurrence and soil moisture. Methods for the comparison of patterns are presented, illustrating how they can be used to assess hydrologically important characteristics of model performance. These methods include point-to-point comparisons, spatial relationships between errors and landscape parameters, transects, and optimal local alignment. It is argued that the progress made to date augurs well for future developments, but there is scope for improvements in several areas. These include better quantitative methods for pattern comparisons, better use of pattern information in data assimilation and modelling, and a call for improved archiving of data from field studies to assist in comparative studies for generalising results and developing fundamental understanding.
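The simplest of the pattern-comparison methods mentioned, point-to-point comparison, reduces to computing cell-by-cell statistics between an observed and a simulated field. The sketch below is a hedged illustration on synthetic data (the grid, the linear "wetness" gradient, and the model bias are all invented, not values from the paper).

```python
import numpy as np

# Hypothetical point-to-point comparison of an observed vs. a simulated
# spatial field (e.g. gridded soil moisture). Fields are synthetic.
rng = np.random.default_rng(1)

ny, nx = 20, 30
gradient = np.tile(np.linspace(0.0, 1.0, nx), (ny, 1))   # spatial structure
observed = gradient + 0.05 * rng.normal(size=(ny, nx))
# Simulated field: reproduces the pattern but carries a systematic bias.
simulated = observed + 0.1 + 0.05 * rng.normal(size=(ny, nx))

bias = float(np.mean(simulated - observed))               # systematic error
rmse = float(np.sqrt(np.mean((simulated - observed) ** 2)))
r = float(np.corrcoef(observed.ravel(), simulated.ravel())[0, 1])

print(f"bias={bias:.3f}  rmse={rmse:.3f}  r={r:.3f}")
```

Separating the bias from the RMSE and the pattern correlation, as here, is what makes such comparisons diagnostic: a model can capture the spatial pattern well (high r) while still being systematically wet or dry everywhere.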
Comparisons of several aerodynamic methods for application to dynamic loads analyses
NASA Technical Reports Server (NTRS)
Kroll, R. I.; Miller, R. D.
1976-01-01
The results of a study are presented in which the applicability at subsonic speeds of several aerodynamic methods for predicting dynamic gust loads on aircraft, including active control systems, was examined and compared. These aerodynamic methods varied from steady state to an advanced unsteady aerodynamic formulation. Brief descriptions of the structural and aerodynamic representations and of the motion and load equations are presented. Comparisons of numerical results achieved using the various aerodynamic methods are shown in detail. From these results, aerodynamic representations for dynamic gust analyses are identified. It was concluded that several aerodynamic methods are satisfactory for dynamic gust analyses of configurations having either fixed controls or active control systems that primarily affect the low-frequency rigid-body aircraft response.
NASA Technical Reports Server (NTRS)
Sawyer, W. C.; Allen, J. M.; Hernandez, G.; Dillenius, M. F. E.; Hemsch, M. J.
1982-01-01
This paper presents a survey of engineering computational methods and experimental programs used for estimating the aerodynamic characteristics of missile configurations. Emphasis is placed on those methods which are suitable for preliminary design of conventional and advanced concepts. An analysis of the technical approaches of the various methods is made in order to assess their suitability to estimate longitudinal and/or lateral-directional characteristics for different classes of missile configurations. Some comparisons between the predicted characteristics and experimental data are presented. These comparisons are made for a large variation in flow conditions and model attitude parameters. The paper also presents known experimental research programs developed for the specific purpose of validating analytical methods and extending the capability of data-base programs.
Evaluation of advanced regenerator systems
NASA Technical Reports Server (NTRS)
Cook, J. A.; Fucinari, C. A.; Lingscheit, J. N.; Rahnke, C. J.
1978-01-01
The major considerations are discussed which will affect the selection of a ceramic regenerative heat exchanger for an improved 100 hp automotive gas turbine engine. The regenerator considered for this application is about 36 cm in diameter. Regenerator comparisons are made on the basis of material, method of fabrication, cost, and performance. A regenerator inlet temperature of 1000 °C is assumed for performance comparisons, and laboratory test results are discussed for material comparisons at 1100 and 1200 °C. Engine test results using the Ford 707 industrial gas turbine engine are also discussed.
Spacecraft applications of advanced global positioning system technology
NASA Technical Reports Server (NTRS)
Huth, Gaylord; Dodds, James; Udalov, Sergei; Austin, Richard; Loomis, Peter; Duboraw, I. Newton, III
1988-01-01
The purpose of this study was to evaluate potential uses of the Global Positioning System (GPS) in spacecraft applications in the following areas: attitude control and tracking; structural control; traffic control; and time base definition (synchronization). Each of these functions is addressed. Also addressed are the hardware-related issues concerning the application of GPS technology, and comparisons are provided with alternative instrumentation methods for specific functions required for an advanced low Earth orbit spacecraft.
Recent advances in materials toxicology
NASA Technical Reports Server (NTRS)
Russo, D. M.
1979-01-01
An overview of the fire toxicology program, its principal objectives and approach, is outlined. The laboratory methods of assessing pyrolysis product toxicity for two experiments are presented. The two experiments are: a comparison of test end points; and an evaluation of operant techniques. A third experiment is outlined for a comparison of full-scale and laboratory toxicity tests, with the purpose of determining animal survivability in full-scale tests. Future research plans are also outlined.
[Development of an Excel spreadsheet for meta-analysis of indirect and mixed treatment comparisons].
Tobías, Aurelio; Catalá-López, Ferrán; Roqué, Marta
2014-01-01
Meta-analyses in clinical research usually aim to evaluate treatment efficacy and safety in direct comparison with a unique comparator. Indirect comparisons, using Bucher's method, can summarize primary data when information from direct comparisons is limited or nonexistent. Mixed comparisons allow combining estimates from direct and indirect comparisons, increasing statistical power. There is a need for simple applications for meta-analysis of indirect and mixed comparisons. These can easily be conducted using a Microsoft Office Excel spreadsheet. We developed a spreadsheet for indirect and mixed comparisons that is easy to use for clinical researchers who are interested in systematic reviews but unfamiliar with more advanced statistical packages. The use of the proposed Excel spreadsheet for indirect and mixed comparisons can be of great use in clinical epidemiology to extend the knowledge provided by traditional meta-analysis when evidence from direct comparisons is limited or nonexistent.
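The arithmetic behind the spreadsheet is compact: Bucher's adjusted indirect comparison of A versus C via a common comparator B is d_AC = d_AB − d_CB with variances adding, and the mixed estimate is an inverse-variance weighted average of the direct and indirect results. A minimal sketch (the log odds ratios and standard errors below are made-up illustrative numbers, not data from any real trials):

```python
import math

def indirect(d_ab, se_ab, d_cb, se_cb):
    """Bucher's adjusted indirect comparison of A vs C via comparator B."""
    d = d_ab - d_cb
    se = math.sqrt(se_ab ** 2 + se_cb ** 2)   # variances add
    return d, se

def mixed(d_dir, se_dir, d_ind, se_ind):
    """Inverse-variance weighted combination of direct and indirect."""
    w_dir, w_ind = 1.0 / se_dir ** 2, 1.0 / se_ind ** 2
    d = (w_dir * d_dir + w_ind * d_ind) / (w_dir + w_ind)
    se = math.sqrt(1.0 / (w_dir + w_ind))
    return d, se

# Hypothetical A-vs-B and C-vs-B results (log odds ratios, SE):
d_ind, se_ind = indirect(d_ab=-0.5, se_ab=0.20, d_cb=-0.2, se_cb=0.15)
# Combine with a hypothetical small direct A-vs-C trial:
d_mix, se_mix = mixed(-0.35, 0.30, d_ind, se_ind)

print(f"indirect A vs C: {d_ind:.3f} (SE {se_ind:.3f})")
print(f"mixed    A vs C: {d_mix:.3f} (SE {se_mix:.3f})")
```

Note that the mixed standard error is smaller than either the direct or the indirect one alone, which is the "increasing statistical power" point in the abstract.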
Gupta, Sayan; Feng, Jun; Chance, Mark; Ralston, Corie
2016-01-01
Synchrotron X-ray footprinting coupled with mass spectrometry (XFMS) is a powerful in situ hydroxyl radical labeling method for analysis of protein structure, interactions, folding, and conformation change in solution. In this method, water is ionized by high-flux-density broadband synchrotron X-rays to produce a steady-state concentration of hydroxyl radicals, which then react with solvent-accessible side chains. The resulting stable modification products are analyzed by liquid chromatography coupled to mass spectrometry. A comparative reactivity rate between known and unknown states of a protein provides local as well as global information on structural changes, which is then used to develop structural models for protein function and dynamics. In this review we describe the XFMS method, its unique capabilities, and its recent technical advances at the Advanced Light Source. We provide a comparison of other hydroxyl radical and mass spectrometry based methods with XFMS. We also discuss some of the latest developments in its usage for studying bound water, transmembrane proteins, and photosynthetic protein components, and the synergy of the method with other synchrotron-based structural biology methods.
COMPARISON OF ADVANCED DISINFECTING METHODS FOR MUNICIPAL WASTEWATER REUSE IN AGRICULTURE. (R825362)
Acoustic analysis of the propfan
NASA Technical Reports Server (NTRS)
Farassat, F.; Succi, G. P.
1979-01-01
A review of propeller noise prediction technology is presented. Two methods for the prediction of the noise from conventional and advanced propellers in forward flight are described. These methods are based on different time domain formulations. Brief descriptions of the computer algorithms based on these formulations are given. The output of the programs (the acoustic pressure signature) was Fourier analyzed to get the acoustic pressure spectrum. The main difference between the two programs is that one can handle propellers with supersonic tip speed while the other is for subsonic tip speed propellers. Comparisons of the calculated and measured acoustic data for a conventional and an advanced propeller show good agreement in general.
NASA Astrophysics Data System (ADS)
Rolla, L. Barrera; Rice, H. J.
2006-09-01
In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on interpolation of wave functions known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one-way) parabolic equation method (PEM) (usually discretized using standard finite difference or finite element methods). These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using the FWEM are presented for two propagation problems, and comparisons with analytical and theoretical solutions show this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.
A mean curvature model for capillary flows in asymmetric containers and conduits
NASA Astrophysics Data System (ADS)
Chen, Yongkang; Tavan, Noël; Weislogel, Mark M.
2012-08-01
Capillarity-driven flows resulting from a critical geometric wetting criterion are observed to yield significant shifts of the bulk fluid from one side of the container to the other during "zero gravity" experiments. For wetting fluids, such bulk shift flows consist of advancing and receding menisci sometimes separated by secondary capillary flows such as rivulet-like flows along gaps. Here we study the mean curvature of an advancing meniscus in hopes of approximating a critical boundary condition for fluid dynamics solutions. It is found that the bulk shift flows behave as if the bulk menisci are either "connected" or "disconnected." For the connected case, an analytic method is developed to calculate the mean curvature of the advancing meniscus in an asymptotic sense. In contrast, for the disconnected case the method to calculate the mean curvature of the advancing and receding menisci uses a well-established procedure. Both disconnected and connected bulk shifts can occur as the first-tier flow of more complex compound capillary flows. Preliminary comparisons between the analytic method and the results of drop tower experiments are encouraging.
Evaluation of 3 cotton trash measurement methods by visible/near-infrared reflectance spectroscopy
USDA-ARS?s Scientific Manuscript database
Currently, three types of instruments have been developed to assess the trash content in lint cotton fibers, namely, the Shirley analyzer (SA), the advanced fiber information system (AFIS), and high volume instrumentation (HVI). Each of these devices has its unique advantages, and comprehensive comparison...
ERIC Educational Resources Information Center
Bronstein, Laura R.; Ball, Annahita; Mellin, Elizabeth A.; Wade-Mdivanian, Rebecca; Anderson-Butcher, Dawn
2011-01-01
The purpose of this article is to share results of a mixed-methods research study designed to shed light on similarities and differences between school-employed and agency-employed school-based social workers' preparation and practice as a precursor for collaboration in expanded school mental health. Online survey data from a national sample of…
NASA Technical Reports Server (NTRS)
Barisa, B. B.; Flinchbaugh, G. D.; Zachary, A. T.
1989-01-01
This paper compares the cost of the Space Shuttle Main Engine (SSME) and the Space Transportation Main Engine (STME) proposed by the Advanced Launch System Program. A brief description of the SSME and STME engines is presented, followed by a comparison of these engines that illustrates the impact of focusing on acceptable performance at minimum cost (as for the STME) or on maximum performance (as for the SSME). Several examples of cost reduction methods are presented.
Touati, Nassera; Maillet, Lara; Gaboury, Isabelle
2017-01-01
Introduction Advanced access is an organizational model that has shown promise in improving timely access to primary care. In Quebec, it has recently been introduced in several family medicine units (FMUs) with a teaching mission. The objectives of this paper are to analyze the principles of advanced access implemented in FMUs and to identify which factors influenced their implementation. Methods A multiple case study of four purposefully selected FMUs was conducted. Data included document analysis and 40 semistructured interviews with health professionals and staff. Cross-case comparison and thematic analysis were performed. Results Three out of four FMUs implemented the key principles of advanced access at various levels. One scheduling pattern was observed: 90% of open appointment slots over three- to four-week periods and 10% of prebooked appointments. Structural and organizational factors facilitated the implementation: training of staff to support change, collective leadership, and openness to change. Conversely, family physicians practicing in multiple clinical settings, lack of team resources, turnover of clerical staff, rotation of medical residents, and management capacity were reported as major barriers to implementing the model. Conclusion Our results call for multilevel implementation strategies to improve the design of the advanced access model in academic teaching settings. PMID:28775899
Churchwell, Mona I; Twaddle, Nathan C; Meeker, Larry R; Doerge, Daniel R
2005-10-25
Recent technological advances have made available reverse phase chromatographic media with a 1.7 μm particle size along with a liquid handling system that can operate such columns at much higher pressures. This technology, termed ultra performance liquid chromatography (UPLC), offers significant theoretical advantages in resolution, speed, and sensitivity for analytical determinations, particularly when coupled with mass spectrometers capable of high-speed acquisitions. This paper explores the differences in LC-MS performance by conducting a side-by-side comparison of UPLC for several methods previously optimized for HPLC-based separation and quantification of multiple analytes with maximum throughput. In general, UPLC produced significant improvements in method sensitivity, speed, and resolution. Sensitivity increases with UPLC, which were found to be analyte-dependent, were as large as 10-fold, and improvements in method speed were as large as 5-fold under conditions of comparable peak separations. Improvements in chromatographic resolution with UPLC were apparent from generally narrower peak widths and from a separation of diastereomers not possible using HPLC. Overall, the improvements in LC-MS method sensitivity, speed, and resolution provided by UPLC show that further advances can be made in analytical methodology to add significant value to hypothesis-driven research.
Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A
2017-12-19
As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.
Comparison of Two Entry Methods for Laparoscopic Port Entry: Technical Point of View
Toro, Adriana; Mannino, Maurizio; Cappello, Giovanni; Di Stefano, Andrea; Di Carlo, Isidoro
2012-01-01
Laparoscopic entry is a blind procedure, and it often represents a problem because of the related complications. In the last three decades, rapid advances in laparoscopic surgery have made it an invaluable part of general surgery, but there remains no clear consensus on an optimal method of entry into the peritoneal cavity. The aim of this paper is to focus on the evolution of two commonly used methods of entry into the peritoneal cavity in laparoscopic surgery. PMID:22761542
NASA Technical Reports Server (NTRS)
Aniversario, R. B.; Harvey, S. T.; Mccarty, J. E.; Parsons, J. T.; Peterson, D. C.; Pritchett, L. D.; Wilson, D. R.; Wogulis, E. R.
1983-01-01
The horizontal stabilizer of the 737 transport was redesigned. Five shipsets were fabricated using composite materials. Weight reduction greater than the 20% goal was achieved. Parts and assemblies were readily produced on production-type tooling. Quality assurance methods were demonstrated. Repair methods were developed and demonstrated. Strength and stiffness analytical methods were substantiated by comparison with test results. Cost data was accumulated in a semiproduction environment. FAA certification was obtained.
COMPARISON OF METALS IN HUMAN MILK AND URINE USING TRACE MULTIELEMENT ANALYSES
Healthy, nonsmoking women from 18-38 years old twice donated milk and urine (2-7 weeks and 3-4 months postpartum) as part of the EPA's Methods Advancement for Milk Analysis study, a pilot for the National Children's Study (NCS). Our goals were to determine 1) if routine high thro...
NASA Astrophysics Data System (ADS)
Dutton, Gregory
Forensic science is a collection of applied disciplines that draws from all branches of science. A key question in forensic analysis is: to what degree do a piece of evidence and a known reference sample share characteristics? Quantification of similarity, estimation of uncertainty, and determination of relevant population statistics are of current concern. A 2016 PCAST report questioned the foundational validity and the validity in practice of several forensic disciplines, including latent fingerprints, firearms comparisons, and DNA mixture interpretation. One recommendation was the advancement of objective, automated comparison methods based on image analysis and machine learning. These concerns parallel the National Institute of Justice's ongoing R&D investments in applied chemistry, biology, and physics. NIJ maintains a funding program spanning fundamental research with potential for forensic application to the validation of novel instruments and methods. Since 2009, NIJ has funded over $179M in external research to support the advancement of accuracy, validity, and efficiency in the forensic sciences. An overview of NIJ's programs will be presented, with examples of relevant projects from fluid dynamics, 3D imaging, acoustics, and materials science.
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple-frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, MVM will always underestimate the width, and can misplace spectral lines in some circumstances. Large filters can be used to improve results with multiple-frequency signals, but are computationally inefficient. Significant biases can occur when using MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are among these impacts.
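The trade-off described above, where MVM resolution depends on the filter's degrees of freedom, can be sketched numerically. Below is a minimal, hypothetical Python comparison of a Fourier periodogram and a textbook Capon (minimum variance) estimator on a two-tone signal; the order parameter `m` plays the role of the filter degrees of freedom, and the diagonal loading term is an assumption added for numerical stability, not something taken from the paper.

```python
import numpy as np

def capon_spectrum(x, freqs, m=20):
    """Minimum variance (Capon) spectral estimate with filter order m.
    Tones closer together than roughly 1/m are not resolved, illustrating
    the degrees-of-freedom concern discussed in the abstract."""
    N = len(x)
    # Estimate the m x m autocorrelation matrix from overlapping snapshots.
    snaps = np.array([x[i:i + m] for i in range(N - m + 1)])
    R = (snaps.conj().T @ snaps) / snaps.shape[0]
    Rinv = np.linalg.inv(R + 1e-6 * np.eye(m))  # diagonal loading (assumption)
    n = np.arange(m)
    P = []
    for f in freqs:
        e = np.exp(2j * np.pi * f * n)          # steering vector at frequency f
        P.append(1.0 / np.real(e.conj() @ Rinv @ e))
    return np.array(P)

# Two closely spaced tones plus weak noise.
rng = np.random.default_rng(0)
t = np.arange(512)
x = (np.sin(2 * np.pi * 0.20 * t) + np.sin(2 * np.pi * 0.23 * t)
     + 0.1 * rng.standard_normal(512))

freqs = np.linspace(0.05, 0.45, 400)
P_mvm = capon_spectrum(x, freqs, m=40)      # m=40 gives ~0.025 resolution
P_fft = np.abs(np.fft.rfft(x)) ** 2         # Fourier periodogram for comparison
```

With `m = 40` the MVM filter has more degrees of freedom than the two spectral lines, so both tones appear; reducing `m` below about 33 (1/0.03) merges them, which is the failure mode the authors describe.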
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology made sub-second and multi-energy tomographic data collection possible [1], but also increased the demand to develop new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Data Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists, and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
An unusual method of forensic human identification: use of selfie photographs.
Miranda, Geraldo Elias; Freitas, Sílvia Guzella de; Maia, Luiza Valéria de Abreu; Melani, Rodolfo Francisco Haltenhoff
2016-06-01
As with other methods of identification, in forensic odontology, antemortem data are compared with postmortem findings. In the absence of dental documentation, photographs of the smile play an important role in this comparison. As yet, there are no reports of the use of the selfie photograph for identification purposes. Owing to advancements in technology, electronic devices, and social networks, this type of photograph has become increasingly common. This paper describes a case in which selfie photographs were used to identify a carbonized body, by using the smile line and image superimposition. This low-cost, rapid, and easy to analyze technique provides highly reliable results. Nevertheless, there are disadvantages, such as the limited number of teeth that are visible in a photograph, low image quality, possibility of morphological changes in the teeth after the antemortem image was taken, and difficulty of making comparisons depending on the orientation of the photo. In forensic odontology, new methods of identification must be sought to accompany technological evolution, particularly when no traditional methods of comparison, such as clinical record charts or radiographs, are available. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Aeroacoustics of advanced propellers
NASA Technical Reports Server (NTRS)
Groeneweg, John F.
1990-01-01
The aeroacoustics of advanced, high speed propellers (propfans) are reviewed from the perspective of NASA research conducted in support of the Advanced Turboprop Program. Aerodynamic and acoustic components of prediction methods for near and far field noise are summarized for both single and counterrotation propellers in uninstalled and installed configurations. Experimental results from tests at both takeoff/approach and cruise conditions are reviewed with emphasis on: (1) single and counterrotation model tests in the NASA Lewis 9 by 15 (low speed) and 8 by 6 (high speed) wind tunnels, and (2) full scale flight tests of a 9 ft (2.74 m) diameter single rotation wing mounted tractor and an 11.7 ft (3.57 m) diameter counterrotation aft mounted pusher propeller. Comparisons of model data projected to flight with full scale flight data show good agreement, validating the scale model wind tunnel approach. Likewise, comparisons of measured and predicted noise levels show excellent agreement for both single and counterrotation propellers. Progress in describing angle of attack and installation effects is also summarized. Finally, the aeroacoustic issues associated with ducted propellers (very high bypass fans) are discussed.
The Vortex Lattice Method for the Rotor-Vortex Interaction Problem
NASA Technical Reports Server (NTRS)
Padakannaya, R.
1974-01-01
The rotor blade-vortex interaction problem and the resulting impulsive airloads which generate undesirable noise levels are discussed. A numerical lifting surface method to predict unsteady aerodynamic forces induced on a finite aspect ratio rectangular wing by a straight, free vortex placed at an arbitrary angle in a subsonic incompressible free stream is developed first. Using a rigid wake assumption, the wake vortices are assumed to move downstream with the free stream velocity. Unsteady load distributions are obtained which compare favorably with the results of planar lifting surface theory. The vortex lattice method has been extended to a single bladed rotor operating at high advance ratios and encountering a free vortex from a fixed wing upstream of the rotor. The predicted unsteady load distributions on the model rotor blade are generally in agreement with the experimental results. This method has also been extended to full scale rotor flight cases in which vortex induced loads near the tip of a rotor blade were indicated. In both the model and the full scale rotor blade airload calculations, a flat planar wake was assumed, which is a good approximation at large advance ratios because the downwash is small in comparison to the free stream. The large fluctuations in the measured airloads near the tip of the rotor blade on the advancing side are predicted closely by the vortex lattice method.
Kerwin, Leonard Y; El Tal, Abdel Kader; Stiff, Mark A; Fakhouri, Tarek M
2014-08-01
Cosmetic, functional, and structural sequelae of scarring are innumerable, and measures exist to optimize and ultimately minimize these sequelae. To evaluate the many methods available to decrease the cosmetic, functional, and structural repercussions of scarring, a PubMed search of the English literature with the key words scar, scar revision, scar prevention, scar treatment, scar remodeling, cicatrix, cicatrix treatment, and cicatrix remodeling was done. Original articles and reviews were examined and included. Seventy-nine manuscripts were reviewed. Techniques, comparisons, and results were reviewed and tabulated. Overall, though topical modalities are easier to use and are usually more attractive to the patient, the surgical approaches still prove to be superior and more reliable. However, advances in topical medications for scar modification are on the rise, and a change towards medical treatment of scars may emerge as the next best approach. Comparison studies of the innumerable specific modalities for scar revision and prevention are impossible. Standardization of techniques is lacking. Scarring, the body's natural response to a wound, can create many adverse effects. At this point, the practice of sound surgical fundamentals still trumps the most advanced preventative methods and revision techniques. Advances in medical approaches are available, however, to assist the scarring process that even the most advanced surgical fundamentals will ultimately lead to. Whether through newer topical therapies, light treatment, or classical surgical intervention, our treatment armamentarium for scars has expanded and will allow us to maximize scar prevention and to minimize scar morbidity. © 2014 The International Society of Dermatology.
Advanced Image Processing for Defect Visualization in Infrared Thermography
NASA Technical Reports Server (NTRS)
Plotnikov, Yuri A.; Winfree, William P.
1997-01-01
Results of a defect visualization process based on pulse infrared thermography are presented. Algorithms have been developed to reduce the amount of operator participation required in the process of interpreting thermographic images. The algorithms determine the defect's depth and size from the temporal and spatial thermal distributions that exist on the surface of the investigated object following thermal excitation. A comparison of the results from thermal contrast, time derivative, and phase analysis methods for defect visualization are presented. These comparisons are based on three dimensional simulations of a test case representing a plate with multiple delaminations. Comparisons are also based on experimental data obtained from a specimen with flat bottom holes and a composite panel with delaminations.
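The phase-analysis approach mentioned above can be illustrated with a toy pulse-phase thermography calculation. The sketch below assumes the standard 1/sqrt(t) flash-cooling model for a defect-free surface and a hypothetical defect response term; it illustrates the general technique of taking per-pixel FFT phase of the cooling history, not the authors' 3-D simulations.

```python
import numpy as np

# Pulse-phase thermography sketch: after a flash heat pulse, the surface of a
# semi-infinite solid cools roughly as T ~ 1/sqrt(t).  Over a delamination,
# reflected heat makes the decay depart from this curve once the thermal front
# reaches the defect depth (the exp(-1/t) factor below is an illustrative
# assumption, not a calibrated model).
t = np.linspace(0.05, 5.0, 256)                        # s, frames after flash
sound = 1.0 / np.sqrt(t)                               # defect-free cooling
defect = (1.0 / np.sqrt(t)) * (1 + 0.3 * np.exp(-1.0 / t))  # delayed extra heat

def phase_at_bin(signal, k=1):
    """Phase of the k-th FFT bin of a pixel's cooling history."""
    return np.angle(np.fft.rfft(signal))[k]

# Phase contrast between a defect pixel and a sound pixel: this is the
# quantity a phase image maps across the surface.
contrast = phase_at_bin(defect) - phase_at_bin(sound)
```

Phase images built this way are less sensitive to non-uniform heating than raw thermal contrast, which is one reason phase analysis is compared against contrast and time-derivative methods in the abstract.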
Advanced Mitigation Process (AMP) for Improving Laser Damage Threshold of Fused Silica Optics
NASA Astrophysics Data System (ADS)
Ye, Xin; Huang, Jin; Liu, Hongjie; Geng, Feng; Sun, Laixi; Jiang, Xiaodong; Wu, Weidong; Qiao, Liang; Zu, Xiaotao; Zheng, Wanguo
2016-08-01
The laser damage precursors in the subsurface of fused silica (e.g., photosensitive impurities, scratches, and redeposited silica compounds) were mitigated by mineral acid leaching and HF etching with multi-frequency ultrasonic agitation, respectively. Scratch morphologies after static etching and after high-frequency ultrasonic agitation etching were compared, and the laser-induced damage resistance of scratched and non-scratched fused silica surfaces after HF etching with high-frequency ultrasonic agitation was also investigated in this study. The global laser-induced damage resistance increased significantly after the laser damage precursors were mitigated. Redeposition of reaction products was avoided by combining multi-frequency ultrasonic agitation with the chemical leaching process. These methods made the increase in laser damage threshold more stable. In addition, no scratch-related damage initiation was found on the samples treated by the Advanced Mitigation Process.
Frank, Lawrence D; Fox, Eric H; Ulmer, Jared M; Chapman, James E; Kershaw, Suzanne E; Sallis, James F; Conway, Terry L; Cerin, Ester; Cain, Kelli L; Adams, Marc A; Smith, Graham R; Hinckson, Erica; Mavoa, Suzanne; Christiansen, Lars B; Hino, Adriano Akira F; Lopes, Adalberto A S; Schipperijn, Jasper
2017-01-23
Advancements in geographic information systems over the past two decades have increased the specificity by which an individual's neighborhood environment may be spatially defined for physical activity and health research. This study investigated how different types of street network buffering methods compared in measuring a set of commonly used built environment measures (BEMs) and tested their performance on associations with physical activity outcomes. An internationally-developed set of objective BEMs using three different spatial buffering techniques were used to evaluate the relative differences in resulting explanatory power on self-reported physical activity outcomes. BEMs were developed in five countries using 'sausage,' 'detailed-trimmed,' and 'detailed' network buffers at a distance of 1 km around participant household addresses (n = 5883). BEM values were significantly different (p < 0.05) for 96% of sausage versus detailed-trimmed buffer comparisons and 89% of sausage versus detailed network buffer comparisons. Results showed that BEM coefficients in physical activity models did not differ significantly across buffering methods, and in most cases BEM associations with physical activity outcomes had the same level of statistical significance across buffer types. However, BEM coefficients differed in significance for 9% of the sausage versus detailed models, which may warrant further investigation. Results of this study inform the selection of spatial buffering methods to estimate physical activity outcomes using an internationally consistent set of BEMs. Across the three network-based buffering methods, the findings indicate significant variation among BEM values; however, associations with physical activity outcomes were similar for each buffering technique.
The study advances knowledge by presenting consistently assessed relationships between three different network buffer types and utilitarian travel, sedentary behavior, and leisure-oriented physical activity outcomes.
Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves
NASA Astrophysics Data System (ADS)
Liu, Shukui; Papanikolaou, Apostolos D.
2011-03-01
Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in the way of combining a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of application of the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results of the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL, and available experimental data, and good agreement has been observed in all studied cases.
Examining the Inclusion of Quantitative Research in a Meta-Ethnographic Review
ERIC Educational Resources Information Center
Booker, Rhae-Ann Richardson
2010-01-01
This study explored how one might extend meta-ethnography to quantitative research for the advancement of interpretive review methods. Using the same population of 139 studies on racial-ethnic matching as data, my investigation entailed an extended meta-ethnography (EME) and comparison of its results to a published meta-analysis (PMA). Adhering to…
ERIC Educational Resources Information Center
McLean, Carmen P.; Miller, Nathan A.
2010-01-01
We assessed changes in paranormal beliefs and general critical thinking skills among students (n = 23) enrolled in an experimental course designed to teach distinguishing science from pseudoscience and a comparison group of students (n = 30) in an advanced research methods course. On average, both courses were successful in reducing paranormal…
Advanced ballistic range technology
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1994-01-01
The research conducted supported two facilities at NASA Ames Research Center: the Hypervelocity Free-Flight Aerodynamic Facility and the 16-Inch Shock Tunnel. During the grant period, a computerized film-reading system was developed, and five- and six-degree-of-freedom parameter-identification routines were written and successfully implemented. Studies of flow separation were conducted, and methods to extract phase shift information from finite-fringe interferograms were developed. Methods for constructing optical images from Computational Fluid Dynamics solutions were also developed, and these methods were used for one-to-one comparisons of experiment and computations.
Prediction of noise field of a propfan at angle of attack
NASA Technical Reports Server (NTRS)
Envia, Edmane
1991-01-01
A method for predicting the noise field of a propfan operating at an angle of attack to the oncoming flow is presented. The method takes advantage of the high-blade-count of the advanced propeller designs to provide an accurate and efficient formula for predicting their noise field. The formula, which is written in terms of the Airy function and its derivative, provides a very attractive alternative to the use of numerical integration. A preliminary comparison shows rather favorable agreement between the predictions from the present method and the experimental data.
Development, implementation and evaluation of satellite-aided agricultural monitoring systems
NASA Technical Reports Server (NTRS)
Cicone, R. (Principal Investigator); Crist, E.; Metzler, M.; Parris, T.
1982-01-01
Research supporting the use of remote sensing for inventory and assessment of agricultural commodities is summarized. Three task areas are described: (1) corn and soybean crop spectral/temporal signature characterization; (2) efficient area estimation technology development; and (3) advanced satellite and sensor system definition. Studies include an assessment of alternative green measures from MSS variables; the evaluation of alternative methods for identifying, labeling, or classifying targets in an automated procedural context; and a comparison of MSS, the advanced very high resolution radiometer, and the coastal zone color scanner, as well as a critical assessment of thematic mapper dimensionality and spectral structure.
Applications of alignment-free methods in epigenomics.
Pinello, Luca; Lo Bosco, Giosuè; Yuan, Guo-Cheng
2014-05-01
Epigenetic mechanisms play an important role in the regulation of cell type-specific gene activities, yet how epigenetic patterns are established and maintained remains poorly understood. Recent studies have supported a role of DNA sequences in recruitment of epigenetic regulators. Alignment-free methods have been applied to identify distinct sequence features that are associated with epigenetic patterns and to predict epigenomic profiles. Here, we review recent advances in such applications, including the methods to map DNA sequence to feature space, sequence comparison and prediction models. Computational studies using these methods have provided important insights into the epigenetic regulatory mechanisms.
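As a concrete illustration of "mapping DNA sequence to feature space," the simplest alignment-free comparison embeds each sequence as a k-mer frequency vector and compares vectors by cosine similarity. The sketch below is a generic example of that idea; the sequences and the choice k=3 are illustrative assumptions, not taken from the review.

```python
from itertools import product
import numpy as np

def kmer_vector(seq, k=3):
    """Map a DNA string to its normalized k-mer frequency vector
    (the alignment-free feature space described in the review)."""
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1
    total = v.sum()
    return v / total if total else v

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

a = "ACGTACGTACGTACGT"   # illustrative sequences
b = "ACGTACGTACGTACGA"   # one substitution relative to a
c = "GGGGCCCCGGGGCCCC"   # compositionally distinct
sim_ab = cosine(kmer_vector(a), kmer_vector(b))
sim_ac = cosine(kmer_vector(a), kmer_vector(c))
```

Vectors like these are what downstream prediction models consume: two near-identical sequences score close to 1, while sequences with disjoint k-mer content score 0, with no alignment step required.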
Heinz, Hendrik; Ramezani-Dakhel, Hadi
2016-01-21
Natural and man-made materials often rely on functional interfaces between inorganic and organic compounds. Examples include skeletal tissues and biominerals, drug delivery systems, catalysts, sensors, separation media, energy conversion devices, and polymer nanocomposites. Current laboratory techniques are limited in their ability to monitor and manipulate assembly on the 1 to 100 nm scale, and are time-consuming and costly. Computational methods have become increasingly reliable to understand materials assembly and performance. This review explores the merit of simulations in comparison to experiment at the 1 to 100 nm scale, including connections to smaller length scales of quantum mechanics and larger length scales of coarse-grain models. First, current simulation methods, advances in the understanding of chemical bonding, in the development of force fields, and in the development of chemically realistic models are described. Then, the recognition mechanisms of biomolecules on nanostructured metals, semimetals, oxides, phosphates, carbonates, sulfides, and other inorganic materials are explained, including extensive comparisons between modeling and laboratory measurements. Depending on the substrate, the role of soft epitaxial binding mechanisms, ion pairing, hydrogen bonds, hydrophobic interactions, and conformation effects is described. Applications of the knowledge from simulation to predict binding of ligands and drug molecules to the inorganic surfaces, crystal growth and shape development, catalyst performance, as well as electrical properties at interfaces are examined. The quality of estimates from molecular dynamics and Monte Carlo simulations is validated in comparison to measurements, and design rules are described where available. The review further describes applications of simulation methods to polymer composite materials, surface modification of nanofillers, and interfacial interactions in building materials.
The complexity of functional multiphase materials creates opportunities to further develop accurate force fields, including reactive force fields, and chemically realistic surface models, to enable materials discovery at a million times lower computational cost compared to quantum mechanical methods. The impact of modeling and simulation could further be increased by the advancement of a uniform simulation platform for organic and inorganic compounds across the periodic table and new simulation methods to evaluate system performance in silico.
Comparison of Spatiotemporal Mapping Techniques for Enormous Etl and Exploitation Patterns
NASA Astrophysics Data System (ADS)
Deiotte, R.; La Valley, R.
2017-10-01
The need to extract, transform, and exploit enormous volumes of spatiotemporal data has exploded with the rise of social media, advanced military sensors, wearables, automotive tracking, etc. However, current methods of spatiotemporal encoding and exploitation simultaneously limit the use of that information and increase computing complexity. Current spatiotemporal encoding methods from Niemeyer and Usher rely on a Z-order space-filling curve, a relative of Peano's 1890 space-filling curve, for spatial hashing, and interleave temporal hashes to generate a spatiotemporal encoding. However, there exist other space-filling curves that provide different manifold coverings, which could promote better hashing techniques for spatial data and have the potential to map spatiotemporal data without interleaving. The concatenation of Niemeyer's and Usher's techniques provides a highly efficient space-time index, but other methods have advantages and disadvantages regarding computational cost, efficiency, and utility. This paper explores several such methods using a range of data set sizes from 1K to 10M observations and provides a comparison of the methods.
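The Z-order (Morton) hashing that Niemeyer- and Usher-style encodings build on can be sketched in a few lines: interleave the bits of the two quantized coordinates, then attach a time hash. The bit ordering, quantization, and time-appending scheme below are illustrative assumptions for a generic Z-order index, not the exact encodings compared in the paper.

```python
def interleave_bits(a, b, nbits=16):
    """Morton (Z-order) interleave of two non-negative integers:
    bit i of `a` lands at position 2i+1, bit i of `b` at position 2i."""
    code = 0
    for i in range(nbits):
        code |= ((a >> i) & 1) << (2 * i + 1)
        code |= ((b >> i) & 1) << (2 * i)
    return code

def spatiotemporal_key(lat, lon, t, nbits=16):
    """Hypothetical space-time key: quantize lat/lon to nbits each,
    Z-order them, then concatenate a truncated time hash (concatenation
    rather than full space-time interleaving is one of the design choices
    the paper weighs)."""
    qa = int((lat + 90.0) / 180.0 * ((1 << nbits) - 1))
    qb = int((lon + 180.0) / 360.0 * ((1 << nbits) - 1))
    return (interleave_bits(qa, qb, nbits) << nbits) | (t & ((1 << nbits) - 1))
```

Because nearby points share high-order bits of their Morton code, range scans over the key approximate spatial locality, which is the property that makes such encodings attractive for the ETL workloads described above.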
2015-03-01
Lobel DA, Elder JB, Schirmer CM, Bowyer MW, Rezai AR. A novel craniotomy simulator provides a validated method to enhance education in the management of traumatic brain injury. 71(2):193-7.
[Figure residue: procedure comparisons by specialty (thoracotomy in ED, hepatic laceration repair/drainage, open neck exploration); comparisons significant at p<0.05, Wilcoxon matched pairs.]
Comparison of different treatments for unresectable esophageal cancer.
Reed, C E
1995-01-01
Many patients with esophageal cancer have advanced disease that is not amenable to curative treatment. For these individuals, the relief of dysphagia is of utmost importance to the quality of their remaining survival time. This article reviews and compares the methods of palliation with focus on indications and contraindications, advantages as well as disadvantages of each technique, success rates, and complications. Tumor characteristics, the physician's experience, the institution's capabilities, cost, and patient preference will influence the choice of palliation. Methods are often complementary rather than competitive.
Battery Test Manual For 48 Volt Mild Hybrid Electric Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Lee Kenneth
2017-03-01
This manual details the U.S. Advanced Battery Consortium and U.S. Department of Energy Vehicle Technologies Program goals, test methods, and analysis techniques for a 48 Volt Mild Hybrid Electric Vehicle system. The test methods are outlined, starting with characterization tests, followed by life tests. The final section details standardized analysis techniques for 48 V systems that allow for the comparison of different programs that use this manual. An example test plan is included, along with guidance for filling in gap table numbers.
Advanced instrumentation for aircraft icing research
NASA Technical Reports Server (NTRS)
Bachalo, W.; Smith, J.; Rudoff, R.
1990-01-01
A compact and rugged probe based on the phase Doppler method was evaluated as a means for characterizing icing clouds using airborne platforms and for advancing aircraft icing research in large scale wind tunnels. The Phase Doppler Particle Analyzer (PDPA), upon which the new probe was based, is now widely recognized as an accurate method for the complete characterization of sprays. The prototype fiber optic-based probe was evaluated in simulated aircraft icing clouds and found to have the qualities essential to providing information that will advance aircraft icing research. Measurement comparisons of the size and velocity distributions made with the standard PDPA and the fiber optic probe were in excellent agreement, as were the measurements of number density and liquid water content. Preliminary testing in the NASA Lewis Icing Research Tunnel (IRT) produced reasonable results but revealed some problems with vibration and signal quality at high speeds. The causes of these problems were identified, and design changes were proposed to eliminate the shortcomings of the probe.
Blomqvist, J E; Ahlborg, G; Isaksson, S; Svartz, K
1997-06-01
Two different methods of rigid fixation were compared for postoperative stability 6 months after mandibular advancement for treatment of Class II malocclusion. Sixty (30 + 30) patients from two different oral and maxillofacial units, treated for a Class II malocclusion by bilateral sagittal split osteotomy (BSSO) with two different methods of internal rigid fixation, were prospectively investigated. Two groups (S1, n = 15; S2, n = 15) had bicortical noncompressive screws inserted in the gonial area through a transcutaneous approach, and the other two groups (P1, n = 15; P2, n = 15) had the bone segments fixed with unicortical screws and miniplates on the lateral surface of the mandibular body. Cephalograms were taken preoperatively, 2 days postoperatively, and 6 months after the operation. A computer program was used to superimpose the three cephalograms and to register the mandibular advancement and postoperative change both sagittally and vertically. There were minor differences in the advancement and postoperative changes between the four groups, but no statistically significant difference was shown in either sagittal or vertical directions. However, statistically verified differences proved that increasing age was associated with a smaller amount of postsurgical relapse. Low-angle cases (ML/NSL < 25 degrees) had a greater amount of surgical (P = .0008) and postsurgical (P = .0195) movement compared with the patients in the high-angle group (ML/NSL > 38 degrees). Using a multiple regression test, a positive correlation was also shown between the amount of surgical advancement and the amount of postsurgical instability (P = .018). This prospective dual-center study indicates that the two different methods of internal rigid fixation after surgical advancement of the mandible by BSSO did not significantly differ from each other, and it is up to the individual operator to choose the method for internal rigid fixation.
Finite-difference computations of rotor loads
NASA Technical Reports Server (NTRS)
Caradonna, F. X.; Tung, C.
1985-01-01
This paper demonstrates the current and future potential of finite-difference methods for solving real rotor problems which now rely largely on empiricism. The demonstration consists of a simple means of combining existing finite-difference, integral, and comprehensive loads codes to predict real transonic rotor flows. These computations are performed for hover and high-advance-ratio flight. Comparisons are made with experimental pressure data.
Advances in superresolution optical fluctuation imaging (SOFI)
Dertinger, Thomas; Pallaoro, Alessia; Braun, Gary; Ly, Sonny; Laurence, Ted A.; Weiss, Shimon
2013-01-01
We review the concept of superresolution optical fluctuation imaging (SOFI), discuss its attributes and trade-offs (in comparison with other superresolution methods), and present superresolved images taken on samples stained with quantum dots, organic dyes, and plasmonic metal nanoparticles. We also discuss the prospects of SOFI for live cell superresolution imaging and for imaging with other (non-fluorescent) contrasts. PMID:23672771
High Fidelity Modeling of Field Reversed Configuration (FRC) Thrusters
2017-04-22
signatures which can be used for direct, non-invasive comparison with experimental diagnostics can be produced. This research will be directly... experimental campaign is critical to developing general design philosophies for low-power plasmoid formation, the complexity of non-linear plasma processes... advanced space propulsion. The work consists of numerical method development, physical model development, and systematic studies of the non-linear
Advanced quantitative measurement methodology in physics education research
NASA Astrophysics Data System (ADS)
Wang, Jing
The ultimate goal of physics education research (PER) is to develop a theoretical framework to understand and improve the learning process. In this journey of discovery, assessment serves as our headlamp and alpenstock. It sometimes detects signals in student mental structures, and sometimes presents the difference between expert understanding and novice understanding. Quantitative assessment is an important area in PER. Developing research-based effective assessment instruments and making meaningful inferences based on these instruments have always been important goals of the PER community. Quantitative studies are often conducted to provide bases for test development and result interpretation. Statistics are frequently used in quantitative studies. The selection of statistical methods, and the interpretation of their results, should be connected to the educational context. In this connecting process, the issues of educational models are often raised. Many widely used statistical methods do not make assumptions about the mental structure of subjects, nor do they provide explanations tailored to the educational audience. There are also other methods that consider the mental structure and are tailored to provide strong connections between statistics and education. These methods often involve model assumptions and parameter estimation, and are mathematically complicated. The dissertation provides a practical view of some advanced quantitative assessment methods. The common feature of these methods is that they all make educational/psychological model assumptions beyond the minimum mathematical model. The purpose of the study is to provide a comparison between these advanced methods and the pure mathematical methods. The comparison is based on the performance of the two types of methods under physics education settings. In particular, the comparison uses both physics content assessments and scientific ability assessments.
The dissertation includes three parts. The first part involves the comparison between item response theory (IRT) and classical test theory (CTT). The two theories both provide test item statistics for educational inferences and decisions. Both are applied to Force Concept Inventory data obtained from students enrolled at The Ohio State University. Effort was made to examine the similarities and differences between the two theories, and possible explanations for the differences. The study suggests that item response theory is more sensitive to the context and conceptual features of the test items than classical test theory. The IRT parameters provide a better measure than CTT parameters for the educational audience to investigate item features. The second part of the dissertation is on the measure of association for binary data. In quantitative assessment, binary data is often encountered because of its simplicity. The current popular measures of association fail under some extremely unbalanced conditions. However, the occurrence of these conditions is not rare in educational data. Two popular association measures, Pearson's correlation and the tetrachoric correlation, are examined. A new method, model-based association, is introduced, and an educational testing constraint is discussed. The existing popular methods are compared with the model-based association measure with and without the constraint. Connections between the value of association and the context and conceptual features of questions are discussed in detail. Results show that all the methods have their advantages and disadvantages. Special attention to the test and data conditions is necessary. The last part of the dissertation is focused on exploratory factor analysis (EFA). The theoretical advantages of EFA are discussed. Typical misunderstandings and misuses of EFA are explored.
The EFA is performed on Lawson's Classroom Test of Scientific Reasoning (LCTSR), a widely used assessment on scientific reasoning skills. The reasoning ability structures for U.S. and Chinese students at different educational levels are given by the analysis. A final discussion on the advanced quantitative assessment methodology and the pure mathematical methodology is presented at the end.
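As a concrete illustration of the classical-test-theory side of this comparison, the item statistics CTT relies on (item difficulty and point-biserial discrimination) can be computed directly from a binary response matrix. The sketch below is illustrative only, not the dissertation's code, and the response data in the usage example are hypothetical:

```python
import math

def ctt_item_stats(responses):
    """Classical test theory (CTT) item statistics for binary scores.

    responses: one list of 0/1 item scores per student.
    Returns (difficulty, discrimination): difficulty is the proportion
    correct per item; discrimination is the point-biserial correlation
    of each item score with the total test score.
    """
    n = len(responses)
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    difficulty, discrimination = [], []
    for j in range(len(responses[0])):
        scores = [row[j] for row in responses]
        p = sum(scores) / n
        difficulty.append(p)
        if 0 < p < 1 and sd_t > 0:
            # mean total score among students answering item j correctly
            mean_1 = sum(t for t, s in zip(totals, scores) if s) / sum(scores)
            r_pb = (mean_1 - mean_t) / sd_t * math.sqrt(p / (1 - p))
        else:
            r_pb = 0.0  # undefined for all-correct or all-wrong items
        discrimination.append(r_pb)
    return difficulty, discrimination

# Hypothetical 4-student, 3-item response matrix
diff, disc = ctt_item_stats([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]])
```

IRT goes further by modeling each student's latent ability and each item's parameters jointly, which is what makes it more sensitive to item features than these sample-dependent statistics.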
NASA Technical Reports Server (NTRS)
Stoner, Mary Cecilia; Hehir, Austin R.; Ivanco, Marie L.; Domack, Marcia S.
2016-01-01
This cost-benefit analysis assesses the benefits of the Advanced Near Net Shape Technology (ANNST) manufacturing process for fabricating integrally stiffened cylinders. These preliminary, rough order-of-magnitude results report a 46 to 58 percent reduction in production costs and a 7-percent reduction in weight over the conventional metallic manufacturing technique used in this study for comparison. Production cost savings of 35 to 58 percent were reported over the composite manufacturing technique used in this study for comparison; however, the ANNST concept was heavier. In this study, the predicted return on investment of equipment required for the ANNST method was ten cryogenic tank barrels when compared with conventional metallic manufacturing. The ANNST method was compared with the conventional multi-piece metallic construction and composite processes for fabricating integrally stiffened cylinders. A case study compared these three alternatives for manufacturing a cylinder of specified geometry, with particular focus placed on production costs and process complexity, with cost analyses performed by the analogy and parametric methods. Furthermore, a scalability study was conducted for three tank diameters to assess the highest potential payoff of the ANNST process for manufacture of large-diameter cryogenic tanks. The analytical hierarchy process (AHP) was subsequently used with a group of selected subject matter experts to assess the value of the various benefits achieved by the ANNST method for potential stakeholders. The AHP study results revealed that decreased final cylinder mass and quality assurance were the most valued benefits of cylinder manufacturing methods, therefore emphasizing the relevance of the benefits achieved with the ANNST process for future projects.
Technological advances in perioperative monitoring: Current concepts and clinical perspectives
Chilkoti, Geetanjali; Wadhwa, Rachna; Saxena, Ashok Kumar
2015-01-01
Minimal mandatory monitoring in the perioperative period, as recommended by the Association of Anaesthetists of Great Britain and Ireland and the American Society of Anesthesiologists, is universally acknowledged and has become an integral part of anesthesia practice. The technologies in perioperative monitoring have advanced, and their availability and clinical applications have multiplied exponentially. Newer monitoring techniques include depth-of-anesthesia monitoring, goal-directed fluid therapy, transesophageal echocardiography, advanced neurological monitoring, improved alarm systems, and technological advances in objective pain assessment. Factors that need to be considered with the use of improved monitoring techniques are their validation data, effect on patient outcome, safety profile, cost-effectiveness, awareness of possible adverse events, knowledge of the underlying technical principles, and suitability for convenient routine handling. In this review, we discuss the new monitoring techniques in anesthesia, their advantages, deficiencies, and limitations, their comparison to conventional methods, and their effect on patient outcome, if any. PMID:25788767
Identifying Differentially Abundant Metabolic Pathways in Metagenomic Datasets
NASA Astrophysics Data System (ADS)
Liu, Bo; Pop, Mihai
Enabled by rapid advances in sequencing technology, metagenomic studies aim to characterize entire communities of microbes, bypassing the need for culturing individual bacterial members. One major goal of such studies is to identify specific functional adaptations of microbial communities to their habitats. Here we describe a powerful analytical method (MetaPath) that can identify differentially abundant pathways in metagenomic datasets, relying on a combination of metagenomic sequence data and prior metabolic pathway knowledge. We show that MetaPath outperforms other common approaches when evaluated on simulated datasets. We also demonstrate the power of our method in analyzing two publicly available metagenomic datasets: a comparison of the gut microbiome of obese and lean twins, and a comparison of the gut microbiome of infant and adult subjects. We demonstrate that the subpathways identified by our method provide valuable insights into the biological activities of the microbiome.
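MetaPath itself combines sequence data with pathway structure, but the basic notion of a differentially abundant pathway can be illustrated with a simpler per-pathway test: a one-sided Fisher's exact test on raw read counts hitting a pathway in two samples. This is a simplified stand-in, not the MetaPath algorithm, and the counts below are hypothetical:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test P(X >= a) for the 2x2 table
    [[a, b], [c, d]] -- e.g. reads hitting vs. missing a pathway in
    two metagenomes.  Uses the hypergeometric distribution directly.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, col1)
    p = 0.0
    # sum hypergeometric tail probabilities for tables at least as extreme
    for x in range(a, min(row1, col1) + 1):
        p += comb(row1, x) * comb(row2, col1 - x) / denom
    return p

# Hypothetical counts: 3/4 pathway hits in sample A vs. 1/4 in sample B
p_value = fisher_one_sided(3, 1, 1, 3)
```

Approaches like this treat each pathway independently; MetaPath's advantage is precisely that it scores connected subpathways rather than isolated count tables.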
Comparative policy analysis for alcohol and drugs: Current state of the field.
Ritter, Alison; Livingston, Michael; Chalmers, Jenny; Berends, Lynda; Reuter, Peter
2016-05-01
A central policy research question concerns the extent to which specific policies produce certain effects - and cross-national (or between state/province) comparisons appear to be an ideal way to answer such a question. This paper explores the current state of comparative policy analysis (CPA) with respect to alcohol and drugs policies. We created a database of journal articles published between 2010 and 2014 as the body of CPA work for analysis. We used this database of 57 articles to clarify, extract and analyse the ways in which CPA has been defined. Quantitative and qualitative analysis of the CPA methods employed, the policy areas that have been studied, and differences between alcohol CPA and drug CPA are explored. There is a lack of clear definition as to what counts as a CPA. The two criteria for a CPA (explicit study of a policy, and comparison across two or more geographic locations) exclude descriptive epidemiology and single-state comparisons. With the strict definition, most CPAs were with reference to alcohol (42%), although the most common policy to be analysed was medical cannabis (23%). The vast majority of papers undertook quantitative data analysis, with a variety of advanced statistical methods. We identified five approaches to the policy specification: classification or categorical coding of policy as present or absent; the use of an index; implied policy differences; described policy differences; and data-driven policy coding. Each of these has limitations, but perhaps the most common limitation was the inability of the methods to account for the differences between policy-as-stated versus policy-as-implemented. There is significant diversity in CPA methods for analysis of alcohol and drugs policy, and some substantial challenges with the currently employed methods.
The absence of clear boundaries to a definition of what counts as a 'comparative policy analysis' may account for the methodological plurality but also appears to stand in the way of advancing the techniques.
A Validation Summary of the NCC Turbulent Reacting/non-reacting Spray Computations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Liu, N.-S. (Technical Monitor)
2000-01-01
This paper provides a validation summary of the spray computations performed as part of the NCC (National Combustion Code) development activity. NCC is being developed with the aim of advancing the current prediction tools used in the design of advanced technology combustors based on multidimensional computational methods. The solution procedure combines the novelty of applying the scalar Monte Carlo PDF (Probability Density Function) method to the modeling of turbulent spray flames with the ability to perform the computations on unstructured grids with parallel computing. The calculation procedure was applied to predict the flow properties of three different spray cases: a nonswirling unconfined reacting spray, a nonswirling unconfined nonreacting spray, and a confined swirl-stabilized spray flame. The comparisons, involving both gas-phase and droplet velocities, droplet size distributions, and gas-phase temperatures, show reasonable agreement with the available experimental data. The comparisons cover both the results obtained with the Monte Carlo PDF method and those obtained from the conventional computational fluid dynamics (CFD) solution. Detailed comparisons in the case of a reacting nonswirling spray clearly highlight the importance of chemistry/turbulence interactions in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that most of the combustion occurs in a predominantly diffusion-flame environment, whereas the non-PDF solution predicts, incorrectly, that the combustion occurs in a predominantly vaporization-controlled regime. The Monte Carlo temperature distribution shows that the functional form of the PDF for the temperature fluctuations varies substantially from point to point.
The results also bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations.
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.; Schifer, Nicholas A.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including testing validation hardware, known as the Thermal Standard, to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. This validation hardware provided a comparison for scrutinizing and improving empirical correlations and numerical models of ASC-E2 net heat input. This hardware simulated the characteristics of an ASC-E2 convertor in both an operating and non-operating mode. This paper describes the Thermal Standard testing and the conclusions of the validation effort applied to the empirical correlation methods used by the Radioisotope Power System (RPS) team at NASA Glenn.
TomoBank: a tomographic data repository for computational x-ray science
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; ...
2018-02-08
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology made sub-second and multi-energy tomographic data collection possible [1], but also increased the demand to develop new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Data Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists, and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
Azemard, Sabine; Vassileva, Emilia
2015-06-01
In this paper, we present a simple, fast and cost-effective method for the determination of methyl mercury (MeHg) in marine samples. All important parameters influencing the sample preparation process were investigated and optimized. Full validation of the method was performed in accordance with ISO-17025 (ISO/IEC, 2005) and Eurachem guidelines. Blanks, selectivity, working range (0.09-3.0 ng), recovery (92-108%), intermediate precision (1.7-4.5%), traceability, limit of detection (0.009 ng), limit of quantification (0.045 ng) and expanded uncertainty (15%, k=2) were assessed. Estimation of the uncertainty contribution of each parameter and demonstration of the traceability of measurement results were provided as well. Furthermore, the selectivity of the method was studied by analyzing the same sample extracts by advanced mercury analyzer (AMA) and gas chromatography-atomic fluorescence spectrometry (GC-AFS). Additional validation of the proposed procedure was provided by participation in the IAEA-461 worldwide inter-laboratory comparison exercises.
Dong, Skye T; Costa, Daniel S J; Butow, Phyllis N; Lovell, Melanie R; Agar, Meera; Velikova, Galina; Teckle, Paulos; Tong, Allison; Tebbutt, Niall C; Clarke, Stephen J; van der Hoek, Kim; King, Madeleine T; Fayers, Peter M
2016-01-01
Symptom clusters in advanced cancer can influence patient outcomes. There is large heterogeneity in the methods used to identify symptom clusters. To investigate the consistency of symptom cluster composition in advanced cancer patients using different statistical methodologies for all patients across five primary cancer sites, and to examine which clusters predict functional status, a global assessment of health and global quality of life. Principal component analysis and exploratory factor analysis (with different rotation and factor selection methods) and hierarchical cluster analysis (with different linkage and similarity measures) were used on a data set of 1562 advanced cancer patients who completed the European Organization for the Research and Treatment of Cancer Quality of Life Questionnaire-Core 30. Four clusters consistently formed for many of the methods and cancer sites: tense-worry-irritable-depressed (emotional cluster), fatigue-pain, nausea-vomiting, and concentration-memory (cognitive cluster). The emotional cluster was a stronger predictor of overall quality of life than the other clusters. Fatigue-pain was a stronger predictor of overall health than the other clusters. The cognitive cluster and fatigue-pain predicted physical functioning, role functioning, and social functioning. The four identified symptom clusters were consistent across statistical methods and cancer types, although there were some noteworthy differences. Statistical derivation of symptom clusters is in need of greater methodological guidance. A psychosocial pathway in the management of symptom clusters may improve quality of life. Biological mechanisms underpinning symptom clusters need to be delineated by future research. A framework for evidence-based screening, assessment, treatment, and follow-up of symptom clusters in advanced cancer is essential.
Production of oxygen from lunar ilmenite
NASA Technical Reports Server (NTRS)
Zhao, Y.; Shadman, F.
1990-01-01
The following subjects are addressed: (1) the mechanism and kinetics of carbothermal reduction of simulated lunar ilmenite using carbon and, particularly, CO as reducing agents; (2) the determination of the rate-limiting steps; (3) the investigation of the effect of impurities, particularly magnesium; (4) the search for catalysts suitable for enhancement of the rate-limiting step; (5) the comparison of the kinetics of carbothermal reduction with those of hydrogen reduction; (6) the study of the combined use of CO and hydrogen as products of gasification of carbonaceous solids; (7) the development of reduction methods based on the use of waste carbonaceous compounds for the process; (8) the development of a carbothermal reaction path that utilizes gasification of carbonaceous solids to reducing gaseous species (hydrocarbons and carbon monoxide) to facilitate the reduction reaction kinetics and make the process more flexible in using various forms of carbonaceous feeds; (9) the development of advanced gas separation techniques, including the use of high-temperature ceramic membranes; (10) the development of an optimum process flow sheet for carbothermal reduction, and comparison of this process with the hydrogen reduction scheme, as well as a general comparison with other leading oxygen production schemes; and (11) the use of new and advanced material processing and separation techniques.
Ba-Sang, Dan-Zeng; Long, Zi-Wen; Teng, Hao; Zhao, Xu-Peng; Qiu, Jian; Li, Ming-Shan
2016-01-01
Objective A network meta-analysis was conducted comparing the short-term efficacies of 16 targeted drugs in combination with chemotherapy for the treatment of advanced/metastatic colorectal cancer (CRC). Results Twenty-seven RCTs were ultimately incorporated into this network meta-analysis. Compared with chemotherapy alone, bevacizumab + chemotherapy, panitumumab + chemotherapy and conatumumab + chemotherapy had a higher PR rate. Bevacizumab + chemotherapy, cetuximab + chemotherapy, panitumumab + chemotherapy, trebananib + chemotherapy and conatumumab + chemotherapy had a higher ORR in comparison to chemotherapy alone. Furthermore, bevacizumab + chemotherapy had a higher DCR than chemotherapy alone. The results of our cluster analysis showed that chemotherapy combined with bevacizumab, cetuximab, panitumumab, conatumumab, ganitumab, or brivanib + cetuximab had better efficacy for the treatment of advanced/metastatic CRC in comparison to chemotherapy alone. Materials and Methods Electronic databases were comprehensively searched for potentially relevant randomized controlled trials (RCTs). Direct and indirect evidence were incorporated for evaluation of stable disease (SD), progressive disease (PD), complete response (CR), partial response (PR), disease control rate (DCR) and overall response rate (ORR) by calculating odds ratios (OR) and 95% confidence intervals (CI), and using the surface under the cumulative ranking curve (SUCRA). Conclusions These results indicated that bevacizumab + chemotherapy, panitumumab + chemotherapy, conatumumab + chemotherapy and brivanib + cetuximab + chemotherapy may have better efficacies for the treatment of advanced/metastatic CRC. PMID:27806321
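For any single pairwise comparison in such a meta-analysis, the odds ratio and its 95% confidence interval follow from the standard log-OR normal approximation on a 2x2 table. A minimal sketch (the counts are hypothetical; the actual network analysis pools direct and indirect evidence across many trials):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and ~95% CI for a 2x2 table [[a, b], [c, d]]
    (e.g. responders/non-responders under targeted therapy plus
    chemotherapy vs. chemotherapy alone), using the log-OR normal
    approximation with Woolf's standard error.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical trial: 10/20 responders with combination, 5/25 with
# chemotherapy alone
or_, lo, hi = odds_ratio_ci(10, 10, 5, 20)
```

A network meta-analysis then combines many such log-OR estimates on a consistent scale before ranking treatments with SUCRA.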
NASA Astrophysics Data System (ADS)
Eck, Brendan; Fahmi, Rachid; Brown, Kevin M.; Raihani, Nilgoun; Wilson, David L.
2014-03-01
Model observers were created and compared to human observers for the detection of low-contrast targets in computed tomography (CT) images reconstructed with an advanced, knowledge-based, iterative image reconstruction method for low x-ray dose imaging. A 5-channel Laguerre-Gauss channelized Hotelling observer (CHO) was used with internal noise added to the decision variable (DV) and/or channel outputs (CO). Models were defined by parameters: (k1) DV-noise with standard deviation (std) proportional to DV std; (k2) DV-noise with constant std; (k3) CO-noise with constant std across channels; and (k4) CO-noise in each channel with std proportional to CO variance. Four-alternative forced choice (4AFC) human observer studies were performed on sub-images extracted from phantom images with and without a "pin" target. Model parameters were estimated using maximum likelihood comparison to human probability correct (PC) data. PC in human and all model observers increased with dose, contrast, and size, and was much higher for advanced iterative reconstruction (IMR) than for filtered back projection (FBP). Detection in IMR was better than in FBP at 1/3 dose, suggesting significant dose savings. Model(k1,k2,k3,k4) gave the best overall fit to humans across independent variables (dose, size, contrast, and reconstruction) at a fixed display window. However, Model(k1) performed better when considering model complexity using the Akaike information criterion. Model(k1) fit the extraordinary detectability difference between IMR and FBP, despite the different noise quality. It is anticipated that the model observer will predict results from iterative reconstruction methods having similar noise characteristics, enabling rapid comparison of methods.
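The internal-noise mechanism described above can be illustrated with a Monte Carlo sketch of 4AFC percent correct. This is not the authors' model: the unit-variance decision variables and the constant-std internal-noise term (a k2-style parameter) are simplifying assumptions for illustration only:

```python
import random

def simulate_4afc_pc(d_prime, dv_noise_std, n_trials=20000, seed=0):
    """Monte Carlo estimate of 4AFC percent correct for a linear observer.

    Each trial draws one signal-present decision variable (mean d_prime)
    and three signal-absent ones (mean 0), all with unit variance, then
    adds zero-mean internal noise of std dv_noise_std to each; the trial
    is scored correct when the signal DV is the largest of the four.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        dv_signal = rng.gauss(d_prime, 1.0) + rng.gauss(0.0, dv_noise_std)
        dv_absent = max(rng.gauss(0.0, 1.0) + rng.gauss(0.0, dv_noise_std)
                        for _ in range(3))
        correct += dv_signal > dv_absent
    return correct / n_trials
```

Fitting a model of this kind means adjusting the internal-noise parameters until the simulated PC-vs-dose/contrast curves match the human 4AFC data, as done (with maximum likelihood) in the study above.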
COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y.; Borland, Michael
Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives are compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.
Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers
NASA Technical Reports Server (NTRS)
Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.
2014-01-01
This paper explores the implementation of the MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.
2018-01-01
This work presents the results of an international interlaboratory comparison on ex situ passive sampling in sediments. The main objectives were to map the state of the science in passively sampling sediments, identify sources of variability, provide recommendations and practical guidance for standardized passive sampling, and advance the use of passive sampling in regulatory decision making by increasing confidence in the use of the technique. The study was performed by a consortium of 11 laboratories and included experiments with 14 passive sampling formats on 3 sediments for 25 target chemicals (PAHs and PCBs). The resulting overall interlaboratory variability was large (a factor of ∼10), but standardization of methods halved this variability. The remaining variability was primarily due to factors not related to passive sampling itself, i.e., sediment heterogeneity and analytical chemistry. Excluding the latter source of variability, by performing all analyses in one laboratory, showed that passive sampling results can have a high precision and a very low intermethod variability (
Keim, Madelaine C; Lehmann, Vicky; Shultz, Emily L; Winning, Adrien M; Rausch, Joseph R; Barrera, Maru; Gilmer, Mary Jo; Murphy, Lexa K; Vannatta, Kathryn A; Compas, Bruce E; Gerhardt, Cynthia A
2017-09-01
To examine parent-child communication (i.e., openness, problems) and child adjustment among youth with advanced or non-advanced cancer and comparison children. Families (n = 125) were recruited after a child's diagnosis/relapse and stratified by advanced (n = 55) or non-advanced (n = 70) disease. Comparison children (n = 60) were recruited from local schools. Children (ages 10-17) reported on communication (Parent-Adolescent Communication Scale) with both parents, while mothers reported on child adjustment (Child Behavior Checklist) at enrollment (T1) and one year (T2). Openness/problems in communication did not differ across groups at T1, but problems with fathers were higher among children with non-advanced cancer versus comparisons at T2. Openness declined for all fathers, while changes in problems varied by group for both parents. T1 communication predicted later adjustment only for children with advanced cancer. Communication plays an important role, particularly for children with advanced cancer. Additional research with families affected by life-limiting conditions is needed.
Propensity Scores in Pharmacoepidemiology: Beyond the Horizon.
Jackson, John W; Schmid, Ian; Stuart, Elizabeth A
2017-12-01
Propensity score methods have become commonplace in pharmacoepidemiology over the past decade. Their adoption has confronted formidable obstacles that arise from pharmacoepidemiology's reliance on large healthcare databases of considerable heterogeneity and complexity. These include identifying clinically meaningful samples, defining treatment comparisons, and measuring covariates in ways that respect sound epidemiologic study design. Additional complexities involve correctly modeling treatment decisions in the face of variation in healthcare practice, and dealing with missing information and unmeasured confounding. In this review, we examine the application of propensity score methods in pharmacoepidemiology with particular attention to these and other issues, with an eye towards standards of practice, recent methodological advances, and opportunities for future progress. Propensity score methods have matured in ways that can advance comparative effectiveness and safety research in pharmacoepidemiology. These include natural extensions for categorical treatments, matching algorithms that can optimize sample size given design constraints, weighting estimators that asymptotically target matched and overlap samples, and the incorporation of machine learning to aid in covariate selection and model building. These recent and encouraging advances should be further evaluated through simulation and empirical studies, but nonetheless represent a bright path ahead for the observational study of treatment benefits and harms.
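As an illustration of the weighting estimators mentioned above, the following sketch fits a propensity model by logistic regression on one simulated confounder and estimates an average treatment effect with stabilized inverse-probability weights. None of this is drawn from the review itself; the data, coefficients, and effect size are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Simulated cohort: one confounder x affects both treatment choice and outcome.
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-0.8 * x))    # true propensity
t = rng.binomial(1, p_true)                # treatment indicator
y = 1.0 * t + 2.0 * x + rng.normal(size=n) # true treatment effect = 1.0

# Naive group comparison is confounded because x drives both t and y.
naive = y[t == 1].mean() - y[t == 0].mean()

# Fit a propensity model P(t=1|x) by logistic regression (Newton-Raphson).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (t - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)
ps = 1.0 / (1.0 + np.exp(-X @ beta))

# Stabilized inverse-probability weights targeting the full cohort (ATE).
w = np.where(t == 1, t.mean() / ps, (1 - t.mean()) / (1 - ps))
ate_ipw = (np.average(y[t == 1], weights=w[t == 1])
           - np.average(y[t == 0], weights=w[t == 0]))
```

With this setup the naive difference is badly inflated by confounding, while the weighted estimate recovers the simulated effect of 1.0.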
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
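The simple score-weighted averaging described above can be sketched as follows. The misfit scores, the exp(-misfit) weighting function, and the equivalent sea-level-rise values are hypothetical stand-ins, not the study's actual 625-member ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical large ensemble: each run has an aggregate model-data misfit
# score and a predicted equivalent sea-level-rise (ESL) contribution in m.
n_runs = 625
misfit = rng.gamma(shape=2.0, scale=1.0, size=n_runs)
esl = 3.0 + 1.5 * misfit + rng.normal(0, 0.5, size=n_runs)

# Score weighting: better-fitting runs (lower misfit) get larger weights.
weights = np.exp(-misfit)
weights /= weights.sum()

esl_mean = np.sum(weights * esl)

# Weighted 5-95% envelope from the weighted empirical distribution.
order = np.argsort(esl)
cdf = np.cumsum(weights[order])
esl_lo = esl[order][np.searchsorted(cdf, 0.05)]
esl_hi = esl[order][np.searchsorted(cdf, 0.95)]
```

Because the weights concentrate on low-misfit runs, the weighted mean and envelope shift toward the better-calibrated part of the ensemble relative to the unweighted average.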
Sensitivity analysis of infectious disease models: methods, advances and their application
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.
2013-01-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, but infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
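A minimal sketch of one of the surveyed methods, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC), applied to a toy transmission-model surrogate. The parameter names, ranges, and output model are invented for illustration, not taken from the cholera or schistosomiasis models in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(n, k, rng):
    """n Latin hypercube samples in k dimensions on [0, 1)."""
    strata = np.tile(np.arange(n), (k, 1))
    return (rng.permuted(strata, axis=1).T + rng.random((n, k))) / n

def prcc(x, y):
    """Partial rank correlation of each column of x with y."""
    rx = np.argsort(np.argsort(x, axis=0), axis=0).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    out = np.empty(x.shape[1])
    for j in range(x.shape[1]):
        # Regress out the ranks of all other parameters, then correlate residuals.
        A = np.column_stack([np.ones(len(y)), np.delete(rx, j, axis=1)])
        res_x = rx[:, j] - A @ np.linalg.lstsq(A, rx[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Toy surrogate: output depends strongly on beta and gamma, not on the inert input.
n = 500
u = latin_hypercube(n, 3, rng)
beta = 0.1 + 0.9 * u[:, 0]    # transmission rate
gamma = 0.05 + 0.1 * u[:, 1]  # recovery rate
inert = u[:, 2]               # parameter with no effect on the model
r0 = beta / gamma + 0.1 * rng.normal(size=n)

sens = prcc(np.column_stack([beta, gamma, inert]), r0)
```

The PRCC values then rank the inputs: strongly positive for beta, strongly negative for gamma, and near zero for the inert parameter.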
The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...
A risk-based approach to robotic mission requirements
NASA Technical Reports Server (NTRS)
Dias, William C.; Bourke, Roger D.
1992-01-01
A NASA Risk Team has developed a method for the application of risk management to the definition of robotic mission requirements for the Space Exploration Initiative. These requirements encompass environmental information, infrastructural emplacement in advance, and either technology testing or system/subsystems demonstration. Attention is presently given to a method for step-by-step consideration and analysis of the risk component inherent in mission architecture, followed by a calculation of the subjective risk level. Mitigation strategies are then applied with the same rules, and a comparison is made.
New advances in the partial-reflection-drifts experiment using microprocessors
NASA Technical Reports Server (NTRS)
Ruggerio, R. L.; Bowhill, S. A.
1982-01-01
Improvements to the partial-reflection drifts experiment have been completed. These improvements enable real-time processing and simultaneous measurements of the D region with coherent scatter. Preliminary results indicate a positive correlation between drift velocities calculated by both methods during a two-day interval. The possibility now exists for extended comparative observations between partial reflection and coherent scatter. In addition, preliminary measurements could be performed between partial reflection and meteor radar to complete a comparison of methods used to determine velocities in the D region.
Advances in biological dosimetry
NASA Astrophysics Data System (ADS)
Ivashkevich, A.; Ohnesorg, T.; Sparbier, C. E.; Elsaleh, H.
2017-01-01
Rapid retrospective biodosimetry methods are essential for the fast triage of persons occupationally or accidentally exposed to ionizing radiation. Identification and detection of a radiation specific molecular ‘footprint’ should provide a sensitive and reliable measurement of radiation exposure. Here we discuss conventional (cytogenetic) methods of detection and assessment of radiation exposure in comparison to emerging approaches such as gene expression signatures and DNA damage markers. Furthermore, we provide an overview of technical and logistic details such as type of sample required, time for sample preparation and analysis, ease of use and potential for a high throughput analysis.
A comparison of three fiber tract delineation methods and their impact on white matter analysis.
Sydnor, Valerie J; Rivas-Grajales, Ana María; Lyall, Amanda E; Zhang, Fan; Bouix, Sylvain; Karmacharya, Sarina; Shenton, Martha E; Westin, Carl-Fredrik; Makris, Nikos; Wassermann, Demian; O'Donnell, Lauren J; Kubicki, Marek
2018-05-19
Diffusion magnetic resonance imaging (dMRI) is an important method for studying white matter connectivity in the brain in vivo in both healthy and clinical populations. Improvements in dMRI tractography algorithms, which reconstruct macroscopic three-dimensional white matter fiber pathways, have allowed for methodological advances in the study of white matter; however, insufficient attention has been paid to comparing post-tractography methods that extract white matter fiber tracts of interest from whole-brain tractography. Here we conduct a comparison of three representative and conceptually distinct approaches to fiber tract delineation: 1) a manual multiple region of interest-based approach, 2) an atlas-based approach, and 3) a groupwise fiber clustering approach, by employing methods that exemplify these approaches to delineate the arcuate fasciculus, the middle longitudinal fasciculus, and the uncinate fasciculus in 10 healthy male subjects. We enable qualitative comparisons across methods, conduct quantitative evaluations of tract volume, tract length, mean fractional anisotropy, and true positive and true negative rates, and report measures of intra-method and inter-method agreement. We discuss methodological similarities and differences between the three approaches and the major advantages and drawbacks of each, and review research and clinical contexts for which each method may be most apposite. Emphasis is given to the means by which different white matter fiber tract delineation approaches may systematically produce variable results, despite utilizing the same input tractography and reliance on similar anatomical knowledge.
Lannin, Timothy B; Thege, Fredrik I; Kirby, Brian J
2016-10-01
Advances in rare cell capture technology have made possible the interrogation of circulating tumor cells (CTCs) captured from whole patient blood. However, locating captured cells in the device by manual counting bottlenecks data processing by being tedious (hours per sample) and compromises the results by being inconsistent and prone to user bias. Some recent work has been done to automate the cell location and classification process to address these problems, employing image processing and machine learning (ML) algorithms to locate and classify cells in fluorescent microscope images. However, the type of machine learning method used is a part of the design space that has not been thoroughly explored. Thus, we have trained four ML algorithms on three different datasets. The trained ML algorithms locate and classify thousands of possible cells in a few minutes rather than a few hours, representing an order of magnitude increase in processing speed. Furthermore, some algorithms have a significantly (P < 0.05) higher area under the receiver operating characteristic curve than do other algorithms. Additionally, significant (P < 0.05) losses to performance occur when training on cell lines and testing on CTCs (and vice versa), indicating the need to train on a system that is representative of future unlabeled data. Optimal algorithm selection depends on the peculiarities of the individual dataset, indicating the need of a careful comparison and optimization of algorithms for individual image classification tasks. © 2016 International Society for Advancement of Cytometry.
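The area under the receiver operating characteristic curve used to compare algorithms can be computed directly from the Mann-Whitney U statistic. The sketch below compares two hypothetical classifiers on simulated scores; it is not the paper's data or models.

```python
import numpy as np

rng = np.random.default_rng(3)

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical per-object scores from two classifiers on the same 1000 candidates.
labels = rng.binomial(1, 0.1, size=1000)       # 1 = true cell, ~10% prevalence
good = labels + rng.normal(0, 0.5, size=1000)  # stronger class separation
weak = labels + rng.normal(0, 2.0, size=1000)  # weaker class separation

auc_good = roc_auc(good, labels)
auc_weak = roc_auc(weak, labels)
```

Ranking candidate objects this way makes the comparison threshold-free, which matters when cell prevalence differs between training and deployment datasets.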
Lowry, Sarah J; Loggers, Elizabeth T; Bowles, Erin J A; Wagner, Edward H
2012-05-01
Although much effort has focused on identifying national comparative effectiveness research (CER) priorities, little is known about the CER priorities of community-based practitioners treating patients with advanced cancer. CER priorities of managed care-based clinicians may be valuable as reflections of both payer and provider research interests. We conducted mixed methods interviews with 10 clinicians (5 oncologists and 5 pharmacists) at 5 health plans within the Health Maintenance Organization Cancer Research Network. We asked, "What evidence do you most wish you had when treating patients with advanced cancer" and questioned participants on their impressions and knowledge of CER and pragmatic clinical trials (PCTs). We conducted qualitative analyses to identify themes across interviews. Ninety percent of participants had heard of CER, 20% had heard of PCTs, and all rated CER/PCTs as highly relevant to patient and health plan decision making. Each participant offered between 3 and 10 research priorities. Half (49%) involved head-to-head treatment comparisons; another 20% involved comparing different schedules or dosing regimens of the same treatment. The majority included alternative outcomes to survival (eg, toxicity, quality of life, noninferiority). Participants cited several limitations to existing evidence, including lack of generalizability, funding biases, and rapid development of new treatments. Head-to-head treatment comparisons remain a major evidence need among community- based oncology clinicians, and CER/PCTs are highly valued methods to address the limitations of traditional randomized trials, answer questions of cost-effectiveness or noninferiority, and inform data-driven dialogue and decision making by all stakeholders.
NASA Technical Reports Server (NTRS)
Farassat, F.; Succi, G. P.
1980-01-01
A review of propeller noise prediction technology is presented, highlighting developments in the field from the successful early attempt of Gutin to current sophisticated techniques. Two methods for predicting the discrete frequency noise of conventional and advanced propellers in forward flight are described. These methods, developed at MIT and NASA Langley Research Center, are based on different time domain formulations. Brief descriptions of the computer algorithms based on these formulations are given. The output of these two programs, the acoustic pressure signature, is Fourier analyzed to obtain the acoustic pressure spectrum. The main difference between the programs as currently coded is that the Langley program can handle propellers with supersonic tip speeds, while the MIT program is restricted to subsonic tip speed propellers. Comparisons of calculated and measured acoustic data for a conventional and an advanced propeller show good agreement in general.
NASA Astrophysics Data System (ADS)
Feltz, M.; Knuteson, R.; Ackerman, S.; Revercomb, H.
2014-05-01
Comparisons of satellite temperature profile products from GPS radio occultation (RO) and hyperspectral infrared (IR)/microwave (MW) sounders are made using a previously developed matchup technique. The profile matchup technique matches GPS RO and IR/MW sounder profiles temporally, within 1 h, and spatially, taking into account the unique RO profile geometry and theoretical spatial resolution by calculating a ray-path averaged sounder profile. The comparisons use the GPS RO dry temperature product. Sounder minus GPS RO differences are computed and used to calculate bias and RMS profile statistics, which are created for global and 30° latitude zones for selected time periods. These statistics are created from various combinations of temperature profile data from the Constellation Observing System for Meteorology, Ionosphere & Climate (COSMIC) network, Global Navigation Satellite System Receiver for Atmospheric Sounding (GRAS) instrument, and the Atmospheric Infrared Sounder (AIRS)/Advanced Microwave Sounding Unit (AMSU), Infrared Atmospheric Sounding Interferometer (IASI)/AMSU, and Crosstrack Infrared Sounder (CrIS)/Advanced Technology Microwave Sounder (ATMS) sounding systems. By overlaying combinations of these matchup statistics for similar time and space domains, comparisons of different sounders' products, sounder product versions, and GPS RO products can be made. The COSMIC GPS RO network has the spatial coverage, time continuity, and stability to provide a common reference for comparison of the sounder profile products. The results of this study demonstrate that GPS RO has potential to act as a common temperature reference and can help facilitate inter-comparison of sounding retrieval methods and also highlight differences among sensor product versions.
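The temporal side of such a matchup, pairing each RO profile with the nearest sounder profile within 1 h and accumulating sounder-minus-RO bias and RMS statistics by level, might be sketched as follows. The profiles are synthetic, and the ray-path spatial averaging step described in the abstract is omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical profile sets: observation times [s of day] and temperature
# profiles [K] on 20 vertical levels.
n_ro, n_snd, n_lev = 50, 400, 20
ro_time = rng.uniform(0, 86400, n_ro)
snd_time = rng.uniform(0, 86400, n_snd)
truth = 220 + 60 * np.linspace(1, 0, n_lev)               # shared "true" profile
ro_T = truth + rng.normal(0, 0.3, (n_ro, n_lev))          # GPS RO dry temperature
snd_T = truth + 0.5 + rng.normal(0, 1.0, (n_snd, n_lev))  # sounder, +0.5 K bias

# Temporal matchup: pair each RO profile with the nearest sounder profile
# inside a 1 h window, then difference the matched profiles.
diffs = []
for i in range(n_ro):
    dt = np.abs(snd_time - ro_time[i])
    j = np.argmin(dt)
    if dt[j] <= 3600.0:
        diffs.append(snd_T[j] - ro_T[i])
diffs = np.array(diffs)

bias = diffs.mean(axis=0)                 # sounder-minus-RO bias per level
rms = np.sqrt((diffs ** 2).mean(axis=0))  # RMS difference per level
```

With enough matchups the per-level bias converges on the injected 0.5 K sounder offset, which is how a stable RO reference exposes systematic differences between sounder product versions.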
NASA Astrophysics Data System (ADS)
Feltz, M.; Knuteson, R.; Ackerman, S.; Revercomb, H.
2014-11-01
Comparisons of satellite temperature profile products from GPS radio occultation (RO) and hyperspectral infrared (IR)/microwave (MW) sounders are made using a previously developed matchup technique. The profile matchup technique matches GPS RO and IR/MW sounder profiles temporally, within 1 h, and spatially, taking into account the unique RO profile geometry and theoretical spatial resolution by calculating a ray-path averaged sounder profile. The comparisons use the GPS RO dry temperature product. Sounder minus GPS RO differences are computed and used to calculate bias and rms profile statistics, which are created for global and 30° latitude zones for selected time periods. These statistics are created from various combinations of temperature profile data from the Constellation Observing System for Meteorology, Ionosphere & Climate (COSMIC) network, Global Navigation Satellite System Receiver for Atmospheric Sounding (GRAS) instrument, and the Atmospheric Infrared Sounder (AIRS)/Advanced Microwave Sounding Unit (AMSU), Infrared Atmospheric Sounding Interferometer (IASI)/AMSU, and Crosstrack Infrared Sounder (CrIS)/Advanced Technology Microwave Sounder (ATMS) sounding systems. By overlaying combinations of these matchup statistics for similar time and space domains, comparisons of different sounders' products, sounder product versions, and GPS RO products can be made. The COSMIC GPS RO network has the spatial coverage, time continuity, and stability to provide a common reference for comparison of the sounder profile products. The results of this study demonstrate that GPS RO has potential to act as a common temperature reference and can help facilitate inter-comparison of sounding retrieval methods and also highlight differences among sensor product versions.
Belanger, Adam R.; Akulian, Jason A.
2017-01-01
Lung cancer remains a common and deadly disease. Many modalities are available to the bronchoscopist to evaluate and stage lung cancer. We review the role of bronchoscopy in the staging of the mediastinum with convex endobronchial ultrasound (EBUS) and discuss emerging role of esophageal ultrasonography as a complementary modality. In addition, we discuss advances in scope technology and elastography. We review the bronchoscopic methods available for the diagnosis of peripheral lung nodules including radial EBUS and navigational bronchoscopy (NB) with a consideration of the basic methodologies and diagnostic accuracies. We conclude with a discussion of the comparison of the various methodologies. PMID:28470104
NASA Astrophysics Data System (ADS)
Nikitin, P. V.; Savinov, A. N.; Bazhenov, R. I.; Sivandaev, S. V.
2018-05-01
The article describes a method of identifying a person in distance learning systems based on keyboard rhythm. An algorithm for organizing access control is proposed that implements authentication, identification, and verification of a person using keyboard rhythm. Because biometric characteristics cannot exist apart from a particular person, authentication methods based on biometric parameters, including keyboard rhythm, can provide higher accuracy, non-repudiation of authorship, and convenience for operators of automated systems in comparison with other methods of identity checking. Methods of continuous hidden keyboard monitoring make it possible to detect the substitution of a student and to block the system.
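A toy sketch of keyboard-rhythm verification under common assumptions: per-key timing features, a per-user template of means and spreads built from enrollment sessions, and a fixed deviation threshold. The timing values and threshold below are invented, not taken from the article.

```python
import numpy as np

# Keystroke timing features in ms for a fixed passphrase, collected over
# several enrollment sessions for the legitimate student.
enroll = np.array([
    [95, 110, 88, 120, 140, 105],
    [92, 115, 90, 118, 150, 101],
    [98, 108, 85, 125, 145, 108],
    [94, 112, 91, 122, 138, 103],
], dtype=float)

template = enroll.mean(axis=0)
spread = enroll.std(axis=0) + 1e-6  # avoid division by zero

def verify(sample, template, spread, threshold=3.0):
    """Accept if the mean normalized deviation from the template is small."""
    return float(np.mean(np.abs(sample - template) / spread)) < threshold

genuine = np.array([96, 111, 89, 121, 142, 104], dtype=float)   # same typist
impostor = np.array([60, 180, 55, 200, 90, 170], dtype=float)   # substitute

ok_genuine = verify(genuine, template, spread)
ok_impostor = verify(impostor, template, spread)
```

Continuous hidden monitoring would repeat this check on a sliding window of keystrokes, flagging a possible substitution when the deviation stays above threshold.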
2013-01-01
Background Informed consent talks are mandatory before invasive interventions. However, the patients’ information recall has been shown to be rather poor. We investigated, whether medical laypersons recalled more information items from a simulated informed consent talk after advanced medical students participated in a communication training aiming to reduce a layperson’s cognitive load. Methods Using a randomized, controlled, prospective cross-over-design, 30 5th and 6th year medical students were randomized into two groups. One group received communication training, followed by a comparison intervention (early intervention group, EI); the other group first received the comparison intervention and then communication training (late intervention group, LI). Before and after the interventions, the 30 medical students performed simulated informed consent talks with 30 blinded medical laypersons using a standardized set of information. We then recorded the number of information items the medical laypersons recalled. Results After the communication training both groups of medical laypersons recalled significantly more information items (EI: 41 ± 9% vs. 23 ± 9%, p < .0001, LI 49 ± 10% vs. 35 ± 6%, p < .0001). After the comparison intervention the improvement was modest and significant only in the LI (EI: 42 ± 9% vs. 40 ± 9%, p = .41, LI 35 ± 6% vs. 29 ± 9%, p = .016). Conclusion Short communication training for advanced medical students improves information recall of medical laypersons in simulated informed consent talks. PMID:23374907
NASA Technical Reports Server (NTRS)
Griswold, M.; Roskam, J.
1980-01-01
An analytical method is presented for predicting lateral-directional aerodynamic characteristics of light twin engine propeller-driven airplanes. This method is applied to the Advanced Technology Light Twin Engine airplane. The calculated characteristics are correlated against full-scale wind tunnel data. The method predicts the sideslip derivatives fairly well, although angle of attack variations are not well predicted. Spoiler performance was predicted somewhat high but was still reasonable. The rudder derivatives were not well predicted, in particular the effect of angle of attack. The predicted dynamic derivatives could not be correlated due to lack of experimental data.
[Detection of lung nodules. New opportunities in chest radiography].
Pötter-Lang, S; Schalekamp, S; Schaefer-Prokop, C; Uffmann, M
2014-05-01
Chest radiography still represents the most commonly performed X-ray examination because it is readily available, requires low radiation doses and is relatively inexpensive. However, as previously published, many initially undetected lung nodules are retrospectively visible in chest radiographs. Great improvements in detector technology, with increasing dose efficiency and improved contrast resolution, provide better image quality at reduced dose requirements. The dual energy acquisition technique and advanced image processing methods (e.g. digital bone subtraction and temporal subtraction) reduce the anatomical background noise by suppressing overlapping structures in chest radiography. Computer-aided detection (CAD) schemes increase the awareness of radiologists for suspicious areas. These advanced image processing methods show clear improvements for the detection of pulmonary nodules in chest radiography and strengthen the role of this method in comparison to 3D acquisition techniques, such as computed tomography (CT). Many of these methods will probably be integrated into routine clinical practice in the near future. Digital software solutions offer advantages because they can be easily incorporated into radiology departments and are often more affordable than hardware solutions.
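The dual energy bone subtraction idea can be illustrated with a weighted log subtraction under a simple two-material Beer-Lambert model. The attenuation coefficients and thickness maps below are invented for illustration; clinical systems calibrate the weighting factor empirically.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy dual-energy model: transmitted intensity through soft-tissue (t_s)
# and bone (t_b) thickness maps, Beer-Lambert attenuation.
t_s = rng.uniform(5, 15, (64, 64))                  # soft-tissue thickness [cm]
t_b = np.zeros((64, 64)); t_b[20:30, 20:30] = 1.0   # a "rib" patch [cm]

mu_s_lo, mu_s_hi = 0.25, 0.20  # soft-tissue attenuation at low/high energy
mu_b_lo, mu_b_hi = 0.90, 0.45  # bone attenuation at low/high energy

I_lo = np.exp(-(mu_s_lo * t_s + mu_b_lo * t_b))
I_hi = np.exp(-(mu_s_hi * t_s + mu_b_hi * t_b))

# Weighted log subtraction: choose w so the soft-tissue term cancels,
# leaving a bone-only image  (mu_b_lo - w * mu_b_hi) * t_b.
w = mu_s_lo / mu_s_hi
bone = -np.log(I_lo) + w * np.log(I_hi)
```

Outside the bone patch the two log images cancel to zero, which is exactly the "reduction of overlapping structures" the abstract describes.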
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
Recent Advances in Experimental Whole Genome Haplotyping Methods
Huang, Mengting; Lu, Zuhong
2017-01-01
Haplotype plays a vital role in diverse fields; however, the sequencing technologies cannot resolve haplotype directly. Pioneers demonstrated several approaches to resolve haplotype in the early years, which was extensively reviewed. Since then, numerous methods have been developed recently that have significantly improved phasing performance. Here, we review experimental methods that have emerged mainly over the past five years, and categorize them into five classes according to their maximum scale of contiguity: (i) encapsulation, (ii) 3D structure capture and construction, (iii) compartmentalization, (iv) fluorography, (v) long-read sequencing. Several subsections of certain methods are attached to each class as instances. We also discuss the relative advantages and disadvantages of different classes and make comparisons among representative methods of each class. PMID:28891974
Evaluation of variability in high-resolution protein structures by global distance scoring.
Anzai, Risa; Asami, Yoshiki; Inoue, Waka; Ueno, Hina; Yamada, Koya; Okada, Tetsuji
2018-01-01
Systematic analysis of the statistical and dynamical properties of proteins is critical to understanding cellular events. Extraction of biologically relevant information from a set of high-resolution structures is important because it can provide mechanistic details behind the functional properties of protein families, enabling rational comparison between families. Most current structural comparisons are pairwise-based, which hampers global analysis of the growing contents of the Protein Data Bank. Additionally, pairing of protein structures introduces uncertainty with respect to reproducibility because it frequently requires additional settings for superimposition. This study introduces intramolecular distance scoring for the global analysis of proteins, for each of which at least several high-resolution structures are available. As a pilot study, we tested 300 human proteins and showed that the method provides a comprehensive atomic-level overview of structural variation within each protein and protein family. This method, together with the interpretation of the model calculations, provides new criteria for understanding specific structural variation in a protein, enabling global comparison of the variability in proteins from different species.
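Intramolecular distance scoring can be illustrated as follows: because only internal pairwise distances are compared, the score needs no superposition settings and is invariant to rotation and translation of either structure. This sketch uses random toy coordinates and a simple mean absolute difference score, not the study's 300 proteins or its exact scoring function.

```python
import numpy as np

rng = np.random.default_rng(5)

def distance_matrix(coords):
    """All intramolecular pairwise distances for one structure (n_atoms x 3)."""
    d = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((d ** 2).sum(axis=-1))

def distance_score(a, b):
    """Mean absolute difference of the two distance matrices (upper triangle)."""
    iu = np.triu_indices(len(a), k=1)
    return float(np.abs(distance_matrix(a) - distance_matrix(b))[iu].mean())

# Toy "structures": a reference chain, a rigidly moved copy, a perturbed copy.
ref = rng.normal(size=(50, 3)) * 5.0
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
rotated = ref @ rot.T + np.array([10.0, -3.0, 2.0])   # same conformation, moved
perturbed = ref + rng.normal(0, 0.5, size=ref.shape)  # genuinely different

score_same = distance_score(ref, rotated)
score_diff = distance_score(ref, perturbed)
```

The rigidly moved copy scores (numerically) zero while the conformationally perturbed copy does not, which is the property that makes such scores reproducible across laboratories.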
Beauchemin, Catherine; Lapierre, Marie-Ève; Letarte, Nathalie; Yelle, Louise; Lachaine, Jean
2016-09-01
This study assessed the use of intermediate endpoints in the economic evaluation of new treatments for advanced cancer and the methodological approaches adopted when overall survival (OS) data are unavailable or of limited use. A systematic literature review was conducted to identify economic evaluations of treatments for advanced cancer published between 2003 and 2013. Cost-effectiveness and cost-utility analyses expressed in cost per life-year gained and cost per quality-adjusted life-year using an intermediate endpoint as an outcome measure were eligible. Characteristics of selected studies were extracted and comprised population, treatment of interest, comparator, line of treatment, study perspective, and time horizon. Use of intermediate endpoints and methods adopted when OS data were lacking were analyzed. In total, 7219 studies were identified and 100 fulfilled the eligibility criteria. Intermediate endpoints mostly used were progression-free survival and time to progression, accounting for 92 % of included studies. OS data were unavailable for analysis in nearly 25 % of economic evaluations. In the absence of OS data, studies most commonly assumed an equal risk of death for all treatment groups. Other methods included use of indirect comparison based on numerous assumptions, use of a proxy for OS, consultation with clinical experts, and use of published external information from different treatment settings. Intermediate endpoints are widely used in the economic evaluation of new treatments for advanced cancer in order to estimate OS. Currently, different methods are used in the absence of suitable OS data and the choice of an appropriate method depends on many factors including the data availability.
Advanced aircraft service life monitoring method via flight-by-flight load spectra
NASA Astrophysics Data System (ADS)
Lee, Hongchul
This research is an effort to understand current method and to propose an advanced method for Damage Tolerance Analysis (DTA) for the purpose of monitoring the aircraft service life. As one of tasks in the DTA, the current indirect Individual Aircraft Tracking (IAT) method for the F-16C/D Block 32 does not properly represent changes in flight usage severity affecting structural fatigue life. Therefore, an advanced aircraft service life monitoring method based on flight-by-flight load spectra is proposed and recommended for IAT program to track consumed fatigue life as an alternative to the current method which is based on the crack severity index (CSI) value. Damage Tolerance is one of aircraft design philosophies to ensure that aging aircrafts satisfy structural reliability in terms of fatigue failures throughout their service periods. IAT program, one of the most important tasks of DTA, is able to track potential structural crack growth at critical areas in the major airframe structural components of individual aircraft. The F-16C/D aircraft is equipped with a flight data recorder to monitor flight usage and provide the data to support structural load analysis. However, limited memory of flight data recorder allows user to monitor individual aircraft fatigue usage in terms of only the vertical inertia (NzW) data for calculating Crack Severity Index (CSI) value which defines the relative maneuver severity. Current IAT method for the F-16C/D Block 32 based on CSI value calculated from NzW is shown to be not accurate enough to monitor individual aircraft fatigue usage due to several problems. The proposed advanced aircraft service life monitoring method based on flight-by-flight load spectra is recommended as an improved method for the F-16C/D Block 32 aircraft. Flight-by-flight load spectra was generated from downloaded Crash Survival Flight Data Recorder (CSFDR) data by calculating loads for each time hack in selected flight data utilizing loads equations. 
Comparison of the fatigue life interpolated using the CSI value with fatigue test results shows that the proposed advanced IAT method via flight-by-flight load spectra is more reliable and accurate than the current IAT method. Therefore, the advanced aircraft service life monitoring method based on flight-by-flight load spectra not only monitors the consumed fatigue life of individual aircraft for inspection but also ensures the structural reliability of aging aircraft throughout their service periods.
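One standard way to accumulate consumed fatigue life from a per-flight load spectrum is Miner's linear damage rule with a Basquin-form S-N curve. This is an assumption for illustration only; the abstract does not specify the damage model, and every constant and spectrum below is invented.

```python
def cycles_to_failure(stress_amp, C=2.0e12, m=3.0):
    """Hypothetical Basquin-form S-N curve: N = C * S**(-m)."""
    return C * stress_amp ** (-m)

def flight_damage(cycle_counts):
    """Miner's rule: damage for one flight from (stress amplitude, count) pairs."""
    return sum(n / cycles_to_failure(s) for s, n in cycle_counts)

# Per-flight load spectra: (stress amplitude [MPa], cycle count).
benign_flight = [(40.0, 200), (80.0, 20), (120.0, 2)]
severe_flight = [(40.0, 150), (120.0, 30), (200.0, 8)]

d_benign = flight_damage(benign_flight)
d_severe = flight_damage(severe_flight)

# Consumed fatigue life over a usage history; failure is expected near D = 1.
history = [d_benign] * 900 + [d_severe] * 100
consumed = sum(history)
```

Because damage is summed flight by flight, a shift toward more severe usage shows up immediately in the consumed-life total, which is the tracking behavior the proposed IAT method aims for.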
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swarin, S.J.; Loo, J.F.; Chladek, E.
1992-01-01
Analytical methods for determining individual aldehyde, ketone, and alcohol emissions from gasoline-, methanol-, and variable-fueled vehicles are described. These methods were used in the Auto/Oil Air Quality Improvement Research Program to provide emission data for comparison of individual reformulated fuels, individual vehicles, and for air modeling studies. The emission samples are collected in impingers which contain either 2,4-dinitrophenylhydrazine solution for the aldehydes and ketones or deionized water for the alcohols. Subsequent analyses by liquid chromatography for the aldehydes and ketones and gas chromatography for the alcohols utilized autoinjectors and computerized data systems which permit high sample throughput with minimal operator intervention. The quality control procedures developed and interlaboratory comparisons conducted as part of the program are also described. (Copyright (c) 1992 Society of Automotive Engineers, Inc.)
Vertebral rotation measurement: a summary and comparison of common radiographic and CT methods
Lam, Gabrielle C; Hill, Doug L; Le, Lawrence H; Raso, Jim V; Lou, Edmond H
2008-01-01
Current research has provided a more comprehensive understanding of Adolescent Idiopathic Scoliosis (AIS) as a three-dimensional spinal deformity, encompassing both lateral and rotational components. Apart from quantifying curve severity using the Cobb angle, vertebral rotation has become increasingly prominent in the study of scoliosis. It demonstrates significance in both preoperative and postoperative assessment, providing better appreciation of the impact of bracing or surgical interventions. In the past, the need for computer resources, digitizers and custom software limited studies of rotation to research performed after a patient left the scoliosis clinic. With advanced technology, however, rotation measurements are now more feasible. While numerous vertebral rotation measurement methods have been developed and tested, thorough comparisons of these methods remain relatively unexplored. This review discusses the advantages and disadvantages of six common measurement techniques based on the technology most pertinent in clinical settings: radiography (Cobb, Nash-Moe, Perdriolle and Stokes' methods) and computed tomography (CT) imaging (Aaro-Dahlborn and Ho's methods). Better insight into the clinical suitability of currently available rotation measurement methods is presented, along with a discussion of critical concerns that should be addressed in future studies and in the development of new methods. PMID:18976498
Alaboina, Pankaj Kumar; Uddin, Md-Jamal; Cho, Sung-Jin
2017-10-26
Nanotechnology-driven development of cathode materials is essential to the evolution of the next generation of lithium-ion batteries. With the progress of nanoprocessing and nanoscale surface modification studies on cathode materials in recent years, the future of advanced battery technology looks very promising thanks to nanotechnology. In this review, an overview of promising nanoscale surface deposition methods and their significance for surface functionalization of cathodes is extensively summarized. Surface-modified cathodes are provided with a protective layer to overcome the electrochemical performance limitations related to side reactions with electrolytes, reduce self-discharge reactions, improve thermal and structural stability, and further enhance overall battery performance. The review addresses the importance of nanoscale surface modification of battery cathodes and concludes with a comparison of the different nanoprocessing techniques discussed, to provide direction in the race to build advanced lithium-ion batteries.
Recent advances in testing of microsphere drug delivery systems.
Andhariya, Janki V; Burgess, Diane J
2016-01-01
This review discusses advances in the field of microsphere testing. In vitro release-testing methods such as sample and separate, dialysis membrane sacs and USP apparatus IV have been used for microspheres. Based on comparisons of these methods, USP apparatus IV is currently the method of choice. Accelerated in vitro release tests have been developed to shorten the testing time for quality control purposes. In vitro-in vivo correlations using real-time and accelerated release data have been developed, to minimize the need to conduct in vivo performance evaluation. Storage stability studies have been conducted to investigate the influence of various environmental factors on microsphere quality throughout the product shelf life. New tests such as the floating test and the in vitro wash-off test have been developed along with advancement in characterization techniques for other physico-chemical parameters such as particle size, drug content, and thermal properties. Although significant developments have been made in microsphere release testing, there is still a lack of guidance in this area. Microsphere storage stability studies should be extended to include microspheres containing large molecules. An agreement needs to be reached on the use of particle sizing techniques to avoid inconsistent data. An approach needs to be developed to determine total moisture content of microspheres.
Modelling an advanced ManPAD with dual band detectors and a rosette scanning seeker head
NASA Astrophysics Data System (ADS)
Birchenall, Richard P.; Richardson, Mark A.; Butters, Brian; Walmsley, Roy
2012-01-01
Man Portable Air Defence Systems (ManPADs) have been a favoured anti-aircraft weapon since their appearance on the military proliferation scene in the mid 1960s. Since this introduction there has been a 'cat and mouse' game between aircraft-protection countermeasures (CMs) and missile counter-countermeasures (CCMs), as missile designers attempt to defeat aircraft platform protection equipment. Magnesium Teflon Viton (MTV) flares protected the target aircraft until missile engineers discovered the art of flare rejection, using techniques including track memory and track angle bias. These early CCMs relied upon triggering techniques such as the rise-rate method, which would simply sense a sudden increase in target energy and assume that a flare CM had been released by the target aircraft. This was not as reliable as first thought, since aspect changes (bringing another engine into the field of view) or glint from the sun could inadvertently trigger a CCM when not needed. The introduction of dual band detectors in the 1980s saw a major advance in CCM capability, allowing comparisons between two distinct IR bands to be made and thus allowing an MTV flare to be recognised with minimal false alarms. The development of the rosette scan seeker in the 1980s complemented this advancement, allowing the scene in the missile field of view (FOV) to be scanned by a much smaller (1/25) instantaneous FOV (IFOV), with the spectral comparisons being made at each scan point. This took the ManPAD from a basic IR energy detector to a pseudo-imaging system capable of analysing individual elements of its overall FOV, allowing more complex and robust CCMs to be developed. This paper continues the work published in [1,2] and describes the method used to model an advanced ManPAD with a rosette scanning seeker head and robust CCMs similar to the Raytheon Stinger RMP.
Nariyama, Nobuteru
2017-12-01
Scanning of dosimeters facilitates dose distribution measurements with fine spatial resolutions. This paper presents a method of converting the scanning results to water-dose profiles and provides an experimental verification. An Advanced Markus chamber and a diamond detector were scanned at a resolution of 6 μm near the beam edges during irradiation with a 25-μm-wide white narrow x-ray beam from a synchrotron radiation source. For comparison, GafChromic films HD-810 and HD-V2 were also irradiated. The conversion procedure for the water dose values was simulated with a Monte Carlo photon-electron transport code as a function of the x-ray incidence position. This method was deduced from nonstandard beam reference-dosimetry protocols used for high-energy x-rays. Among the calculated nonstandard beam correction factors, P_wall, the ratio of the absorbed dose in the sensitive volume of the chamber with a water wall to that with a polymethyl methacrylate wall, was found to be the most influential correction factor in most conditions. The total correction factor ranged from 1.7 to 2.7 for the Advanced Markus chamber and from 1.15 to 1.86 for the diamond detector as a function of the x-ray incidence position. The water dose values obtained with the Advanced Markus chamber and the HD-810 film were in agreement in the vicinity of the beam, within 35% and 18% for the upper and lower sides of the beam, respectively. The beam width obtained from the diamond detector was greater, and the doses outside the beam were smaller, than those of the other detectors. The comparison between the Advanced Markus chamber and HD-810 revealed that the dose obtained with the scanned chamber could be converted to the water dose around the beam by applying nonstandard beam reference-dosimetry protocols. © 2017 American Association of Physicists in Medicine.
Fang, Wenfeng; Yan, Yue; Hu, Zhihuang; Hong, Shaodong; Wu, Xuan; Qin, Tao; Liang, Wenhua; Zhang, Li
2014-01-01
Background: It has been extensively demonstrated that the efficacy of epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs) is superior to that of cytotoxic chemotherapy in advanced non-small cell lung cancer (NSCLC) patients harboring sensitive EGFR mutations. However, the question of whether the efficacy of EGFR-TKIs differs between the exon 19 deletion and the exon 21 L858R mutation has not yet been statistically answered. Methods: Subgroup data on hazard ratio (HR) for progression-free survival (PFS) from correlative studies were extracted and synthesized based on a random-effects model. Outcomes for the specific mutations were compared through indirect and direct methods, respectively. Results: A total of 13 studies of advanced NSCLC patients with either an exon 19 or exon 21 alteration receiving first-line EGFR-TKIs were included. Based on the data from six clinical trials used for indirect meta-analysis, the pooled HR(TKI/chemotherapy) for PFS was 0.28 (95% CI 0.20-0.38, P<0.001) in patients with exon 19 deletion and 0.47 (95% CI 0.35-0.64, P<0.001) in those with exon 21 L858R mutation. Indirect comparison revealed that patients with exon 19 deletion had longer PFS than those with exon 21 L858R mutation (HR(exon 19 deletion / exon 21 L858R) = 0.59, 95% CI 0.38-0.92; P = 0.019). Additionally, direct meta-analysis incorporating another seven studies showed a similar result (HR(exon 19 deletion / exon 21 L858R) = 0.75, 95% CI 0.65-0.85; P<0.001). Conclusions: For advanced NSCLC patients, exon 19 deletion might be associated with longer PFS compared to the L858R mutation at exon 21 after first-line EGFR-TKIs. PMID:25222496
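The indirect comparison reported above (Bucher-style) can be sketched numerically from the pooled subgroup results alone: the ratio of the two hazard ratios, with standard errors recovered from the reported confidence intervals and combined in quadrature on the log scale. A minimal sketch, not the authors' code; the random-effects pooling that produced the two subgroup HRs is taken as given, and small differences from the published 0.59 (0.38-0.92) come from CI rounding:

```python
import math

def se_from_ci(lo, hi, z=1.96):
    """Standard error of a log-HR recovered from its 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

# Pooled subgroup results reported in the abstract (HR vs. chemotherapy)
hr_del19, se_del19 = 0.28, se_from_ci(0.20, 0.38)
hr_l858r, se_l858r = 0.47, se_from_ci(0.35, 0.64)

# Bucher-style indirect comparison: ratio of HRs; SEs add in quadrature
log_ratio = math.log(hr_del19) - math.log(hr_l858r)
se_ratio = math.sqrt(se_del19 ** 2 + se_l858r ** 2)
hr_ratio = math.exp(log_ratio)
ci = (math.exp(log_ratio - 1.96 * se_ratio),
      math.exp(log_ratio + 1.96 * se_ratio))
# close to the reported 0.59 (0.38-0.92); residual gap is CI rounding
print(f"HR = {hr_ratio:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```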
Nissen, Kathrine G; Trevino, Kelly; Lange, Theis; Prigerson, Holly G
2016-12-01
Caring for a family member with advanced cancer strains family caregivers. Classification of family types has been shown to identify patients at risk of poor psychosocial function. However, little is known about how family relationships affect caregiver psychosocial function. This study aimed to investigate family types identified by cluster analysis and to examine the reproducibility of cluster analyses. We also sought to examine the relationship between family types and caregivers' psychosocial function. Data from 622 caregivers of advanced cancer patients (part of the Coping with Cancer Study) were analyzed using Gaussian Mixture Modeling as the primary method to identify family types based on the Family Relationship Index questionnaire. We then examined the relationship between family type and caregiver quality of life (Medical Outcome Survey Short Form), social support (Interpersonal Support Evaluation List), and perceived caregiver burden (Caregiving Burden Scale). Three family types emerged: low-expressive, detached, and supportive. Analyses of variance with post hoc comparisons showed that caregivers of detached and low-expressive family types experienced lower levels of quality of life and perceived social support in comparison to supportive family types. The study identified supportive, low-expressive, and detached family types among caregivers of advanced cancer patients. The supportive family type was associated with the best outcomes and detached with the worst. These findings indicate that family function is related to psychosocial function of caregivers of advanced cancer patients. Therefore, paying attention to family support and family members' ability to share feelings and manage conflicts may serve as an important tool to improve psychosocial function in families affected by cancer. Copyright © 2016 American Academy of Hospice and Palliative Medicine. All rights reserved.
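Gaussian Mixture Modeling of the kind described above can be illustrated with scikit-learn. The three-feature scores, cluster means, and labels below are invented stand-ins loosely patterned on questionnaire subscales, not the Coping with Cancer data, and the component count is chosen in practice with a criterion such as BIC:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for three questionnaire subscale scores; the three
# generating means mimic "supportive", "low-expressive", "detached" profiles.
X = np.vstack([
    rng.normal([8.0, 7.0, 1.0], 1.0, size=(200, 3)),
    rng.normal([5.0, 3.0, 2.0], 1.0, size=(200, 3)),
    rng.normal([2.0, 2.0, 6.0], 1.0, size=(200, 3)),
])

gmm = GaussianMixture(n_components=3, covariance_type="full",
                      n_init=10, random_state=0).fit(X)
labels = gmm.predict(X)
print("cluster sizes:", np.bincount(labels))
print("BIC:", gmm.bic(X))  # criterion often used to pick n_components
```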
NASA Astrophysics Data System (ADS)
Messenger, C.; Bulten, H. J.; Crowder, S. G.; Dergachev, V.; Galloway, D. K.; Goetz, E.; Jonker, R. J. G.; Lasky, P. D.; Meadors, G. D.; Melatos, A.; Premachandra, S.; Riles, K.; Sammut, L.; Thrane, E. H.; Whelan, J. T.; Zhang, Y.
2015-07-01
The low-mass X-ray binary Scorpius X-1 (Sco X-1) is potentially the most luminous source of continuous gravitational-wave radiation for interferometers such as LIGO and Virgo. For low-mass X-ray binaries this radiation would be sustained by active accretion of matter from its binary companion. With the Advanced Detector Era fast approaching, work is underway to develop an array of robust tools for maximizing the science and detection potential of Sco X-1. We describe the plans and progress of a project designed to compare the numerous independent search algorithms currently available. We employ a mock-data challenge in which the search pipelines are tested for their relative proficiencies in parameter estimation, computational efficiency, robustness, and most importantly, search sensitivity. The mock-data challenge data contains an ensemble of 50 Sco X-1 type signals, simulated within a frequency band of 50-1500 Hz. Simulated detector noise was generated assuming the expected best strain sensitivity of Advanced LIGO [1] and Advanced Virgo [2] (4 × 10^-24 Hz^-1/2). A distribution of signal amplitudes was then chosen so as to allow a useful comparison of search methodologies. A factor of 2 in strain separates the quietest detected signal, at 6.8 × 10^-26 strain, from the torque-balance limit at a spin frequency of 300 Hz, although this limit could range from 1.2 × 10^-25 (25 Hz) to 2.2 × 10^-26 (750 Hz) depending on the unknown spin frequency of Sco X-1. With future improvements to the search algorithms and using advanced detector data, our expectations for probing below the theoretical torque-balance strain limit are optimistic.
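The quoted torque-balance values are consistent with the standard scaling of the torque-balance strain limit, h proportional to f^(-1/2). A small sketch reproducing the abstract's numbers under that assumed scaling, anchored to the quoted reference point of 1.2 × 10^-25 at 25 Hz:

```python
import math

def torque_balance_strain(f_spin_hz, h_ref=1.2e-25, f_ref_hz=25.0):
    """Torque-balance strain limit assuming the h ~ f^(-1/2) scaling
    implied by the abstract's quoted endpoints (1.2e-25 at 25 Hz)."""
    return h_ref * math.sqrt(f_ref_hz / f_spin_hz)

for f in (25.0, 300.0, 750.0):
    print(f"{f:5.0f} Hz : h_tb = {torque_balance_strain(f):.2e}")
```

At 750 Hz this reproduces the quoted 2.2 × 10^-26; the 300 Hz value sits between the two quoted endpoints, as the abstract implies.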
Dyvorne, Hadrien A; Jajamovich, Guido H; Bane, Octavia; Fiel, M Isabel; Chou, Hsin; Schiano, Thomas D; Dieterich, Douglas; Babb, James S; Friedman, Scott L; Taouli, Bachir
2016-05-01
Establishing accurate non-invasive methods of liver fibrosis quantification remains a major unmet need. Here, we assessed the diagnostic value of a multiparametric magnetic resonance imaging (MRI) protocol including diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE)-MRI and magnetic resonance elastography (MRE) in comparison with transient elastography (TE) and blood tests [including ELF (Enhanced Liver Fibrosis) and APRI] for liver fibrosis detection. In this single centre cross-sectional study, we prospectively enrolled 60 subjects with liver disease who underwent multiparametric MRI (DWI, DCE-MRI and MRE), TE and blood tests. Correlation was assessed between non-invasive modalities and histopathologic findings including stage, grade and collagen content, while accounting for covariates such as age, sex, BMI, HCV status and MRI-derived fat and iron content. ROC curve analysis evaluated the performance of each technique for detection of moderate-to-advanced liver fibrosis (F2-F4) and advanced fibrosis (F3-F4). Magnetic resonance elastography provided the strongest correlation with fibrosis stage (r = 0.66, P < 0.001), inflammation grade (r = 0.52, P < 0.001) and collagen content (r = 0.53, P = 0.036). For detection of moderate-to-advanced fibrosis (F2-F4), AUCs were 0.78, 0.82, 0.72, 0.79, 0.71 for MRE, TE, DCE-MRI, DWI and APRI, respectively. For detection of advanced fibrosis (F3-F4), AUCs were 0.94, 0.77, 0.79, 0.79 and 0.70, respectively. Magnetic resonance elastography provides the highest correlation with histopathologic markers and yields high diagnostic performance for detection of advanced liver fibrosis and cirrhosis, compared to DWI, DCE-MRI, TE and serum markers. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
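Diagnostic performance figures like the AUCs above come from standard ROC analysis of a continuous marker against a binary stage threshold. A hedged sketch on invented stand-in data (not the study cohort; the stiffness values merely mimic an MRE-style marker in kPa):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Stand-in labels: 1 = advanced fibrosis (F3-F4), 0 = F0-F2.
y = np.concatenate([np.zeros(40, dtype=int), np.ones(20, dtype=int)])
# Stand-in continuous marker loosely mimicking liver stiffness (kPa).
stiffness = np.concatenate([rng.normal(2.5, 0.6, 40),
                            rng.normal(4.5, 0.8, 20)])

auc = roc_auc_score(y, stiffness)
fpr, tpr, thresholds = roc_curve(y, stiffness)
best = np.argmax(tpr - fpr)  # Youden's J picks one operating threshold
print(f"AUC = {auc:.2f}, example cutoff = {thresholds[best]:.2f} kPa")
```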
A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.
Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin
2018-05-01
Objective: Technological advances have provided new alternatives to the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The evaluation of the percentage of viable tissue was performed using the three methods, simultaneously and independently, by two raters. The analysis of interrater reliability and validity was performed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the comparisons of the methods, and no systematic bias was observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flap viability.
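The intraclass correlation used above for interrater reliability can be sketched directly from its two-way ANOVA definition. The two-rater scores below are hypothetical, and ICC(2,1) (two-way random effects, absolute agreement, single rater) is one common choice; the abstract does not state which ICC form the authors used:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_subjects, k_raters) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    mean_subj = ratings.mean(axis=1)
    mean_rater = ratings.mean(axis=0)
    grand = ratings.mean()
    msr = k * np.sum((mean_subj - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((mean_rater - grand) ** 2) / (k - 1)  # between raters
    sse = np.sum((ratings - mean_subj[:, None]
                  - mean_rater[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical % viable-tissue scores from two raters for 8 flaps
scores = np.array([[62, 60], [55, 57], [71, 70], [48, 50],
                   [80, 79], [66, 65], [59, 58], [73, 74]])
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
```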
Functional restoration of cirrhotic liver after partial hepatectomy in the rat.
Hashimoto, Masaji; Watanabe, Goro
2005-01-01
Although cirrhosis is the terminal stage of various liver diseases, recent advances have made it possible to eliminate some causes of liver damage, and the liver has a potent regeneration capacity. It is therefore important to evaluate the regenerating cirrhotic liver after partial hepatectomy, morphologically and functionally, over the long term. We evaluated the functional capacity of rat livers rendered cirrhotic by orally administered thioacetamide, and examined the correlation between morphological and functional restoration after 2/3 hepatectomy in comparison with hepatectomized normal rats and sham-operated cirrhotic rats. Morphological restoration was evaluated by remnant liver weight, proliferating cell nuclear antigen labeling index, and fibrosis ratio. Functional restoration was evaluated by the indocyanine green disappearance rate and aminopyrine clearance. Liver function in cirrhotic rats was impaired in comparison with normal rats, and morphological restoration in cirrhotic rats was delayed in comparison with normal rats. Functional restoration after 2/3 hepatectomy was advanced in comparison with morphological restoration. In comparison with sham-operated cirrhotic rats, functional restoration of the cirrhotic liver was accelerated by partial hepatectomy. In cirrhotic rats, functional restoration of the liver after 2/3 hepatectomy thus outpaced morphological restoration, and partial hepatectomy appeared to promote functional restoration of the cirrhotic liver.
Comparison of parameter-adapted segmentation methods for fluorescence micrographs.
Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas
2011-11-01
Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect for a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the most appropriate segmentation schemes that are usable with little new parameterization and robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to the performance of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that exhibits the dotted surface of cells. Copyright © 2011 International Society for Advancement of Cytometry.
Recent trends in high spin sensitivity magnetic resonance
NASA Astrophysics Data System (ADS)
Blank, Aharon; Twig, Ygal; Ishay, Yakir
2017-07-01
Magnetic resonance is a very powerful methodology that has been employed successfully in many applications for about 70 years now, resulting in a wealth of scientific, technological, and diagnostic data. Despite its many advantages, one major drawback of magnetic resonance is its relatively poor sensitivity and, as a consequence, its bad spatial resolution when examining heterogeneous samples. Contemporary science and technology often make use of very small amounts of material and examine heterogeneity on a very small length scale, both of which are well beyond the current capabilities of conventional magnetic resonance. It is therefore very important to significantly improve both the sensitivity and the spatial resolution of magnetic resonance techniques. The quest for higher sensitivity led in recent years to the development of many alternative detection techniques that seem to rival and challenge the conventional 'old-fashioned' induction-detection approach. The aim of this manuscript is to briefly review recent advances in the field, and to provide a quantitative as well as qualitative comparison between various detection methods with an eye to future potential advances and developments. We first offer a common definition of sensitivity in magnetic resonance to enable proper quantitative comparisons between various detection methods. Following that, up-to-date information about the sensitivity capabilities of the leading recently-developed detection approaches in magnetic resonance is provided, accompanied by a critical comparison between them and induction detection. Our conclusion from this comparison is that induction detection is still indispensable, and as such, it is very important to look for ways to significantly improve it. To do so, we provide expressions for the sensitivity of induction detection, derived from both classical and quantum mechanics, that identify its main limiting factors.
Examples from current literature, as well as a description of new ideas, show how these limiting factors can be mitigated to significantly improve the sensitivity of induction detection. Finally, we outline some directions for the possible applications of high-sensitivity induction detection in the field of electron spin resonance.
Mesoscale Science with High Energy X-ray Diffraction Microscopy at the Advanced Photon Source
NASA Astrophysics Data System (ADS)
Suter, Robert
2014-03-01
Spatially resolved diffraction of monochromatic high energy (> 50 keV) x-rays is used to map microstructural quantities inside of bulk polycrystalline materials. The non-destructive nature of High Energy Diffraction Microscopy (HEDM) measurements allows tracking of responses as samples undergo thermo-mechanical or other treatments. Volumes of the order of a cubic millimeter are probed with micron scale spatial resolution. Data sets allow direct comparisons to computational models of responses that frequently involve long-ranged, multi-grain interactions; such direct comparisons have only become possible with the development of HEDM and other high energy x-ray methods. Near-field measurements map the crystallographic orientation field within and between grains using a computational reconstruction method that simulates the experimental geometry and matches orientations in micron sized volume elements to experimental data containing projected grain images in large numbers of Bragg peaks. Far-field measurements yield elastic strain tensors through indexing schemes that sort observed diffraction peaks into sets associated with individual crystals and detect small radial motions in large numbers of such peaks. Combined measurements, facilitated by a new end station hutch at Advanced Photon Source beamline 1-ID, are mutually beneficial and result in accelerated data reduction. Further, absorption tomography yields density contrast that locates secondary phases, void clusters, and cracks, and tracks sample shape during deformation. A collaboration led by the Air Force Research Laboratory and including the Advanced Photon Source, Lawrence Livermore National Laboratory, Carnegie Mellon University, Petra-III, and Cornell University and CHESS is developing software and hardware for combined measurements. 
Examples of these capabilities include tracking of grain boundary migrations during thermal annealing, tensile deformation of zirconium, and combined measurements of nickel superalloys and a titanium alloy under tensile forces. Work supported by NSF grant DMR-1105173
Tu, Gia Loi; Bui, Thi Hoang Nga; Tran, Thi Thu Tra; Ton, Nu Minh Nguyet
2015-01-01
In this study, ultrasound- and enzyme-assisted extractions of albumin (water-soluble protein group) from defatted pumpkin (Cucurbita pepo) seed powder were compared. Both advanced extraction techniques strongly increased the albumin yield in comparison with conventional extraction. The extraction rate was two times faster in the ultrasonic extraction than in the enzymatic extraction. However, the maximum albumin yield was 16% higher when using enzymatic extraction. Functional properties of the pumpkin seed albumin concentrates obtained using the enzymatic, ultrasonic and conventional methods were then evaluated. Use of hydrolase for degradation of cell wall of the plant material did not change the functional properties of the albumin concentrate in comparison with the conventional extraction. The ultrasonic extraction enhanced water-holding, oil-holding and emulsifying capacities of the pumpkin seed albumin concentrate, but slightly reduced the foaming capacity, and emulsion and foam stability. PMID:27904383
A framework for directional and higher-order reconstruction in photoacoustic tomography
NASA Astrophysics Data System (ADS)
Boink, Yoeri E.; Lagerwerf, Marinus J.; Steenbergen, Wiendelt; van Gils, Stephan A.; Manohar, Srirang; Brune, Christoph
2018-02-01
Photoacoustic tomography is a hybrid imaging technique that combines high optical tissue contrast with high ultrasound resolution. Direct reconstruction methods such as filtered back-projection, time reversal and least squares suffer from curved line artefacts and blurring, especially in the case of limited angles or strong noise. In recent years, there has been great interest in regularised iterative methods. These methods employ prior knowledge of the image to provide higher quality reconstructions. However, easy comparisons between regularisers and their properties are limited, since many tomography implementations heavily rely on the specific regulariser chosen. To overcome this bottleneck, we present a modular reconstruction framework for photoacoustic tomography, which enables easy comparisons between regularisers with different properties, e.g. nonlinear, higher-order or directional. We solve the underlying minimisation problem with an efficient first-order primal-dual algorithm. Convergence rates are optimised by choosing an operator-dependent preconditioning strategy. A variety of reconstruction methods are tested on challenging 2D synthetic and experimental data sets. They outperform direct reconstruction approaches for strong noise levels and limited angle measurements, offering immediate benefits in terms of acquisition time and quality. This work provides a basic platform for the investigation of future advanced regularisation methods in photoacoustic tomography.
Comparison of predictive control methods for high consumption industrial furnace.
Stojanovski, Goran; Stankovski, Mile
2013-01-01
We describe several predictive control approaches for the control of high-consumption industrial furnaces. These furnaces are major consumers in production industries, and reducing their fuel consumption while optimizing product quality is one of the most important engineering tasks. In order to demonstrate the benefits of implementing advanced predictive control algorithms, we have compared several major criteria for furnace control. On the basis of this analysis, some important conclusions have been drawn.
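The receding-horizon idea behind predictive control can be illustrated with a toy example; this is a generic sketch on an invented first-order furnace model, not the paper's furnace model or any of its specific algorithms. At each step the controller searches input sequences over a short horizon, applies only the first input, then re-plans:

```python
import numpy as np
from itertools import product

# Invented first-order furnace model: T[k+1] = a*T[k] + b*u[k].
# Model constants, setpoint, and cost weights are all illustrative.
a, b = 0.9, 2.0
setpoint, horizon = 800.0, 4
u_levels = np.linspace(0.0, 50.0, 11)  # admissible fuel rates

def mpc_step(T, w_fuel=0.01):
    """Return the first input of the best input sequence over the horizon,
    trading tracking error against a quadratic fuel-use penalty."""
    best_u, best_cost = 0.0, np.inf
    for seq in product(u_levels, repeat=horizon):
        Tk, cost = T, 0.0
        for u in seq:
            Tk = a * Tk + b * u
            cost += (Tk - setpoint) ** 2 + w_fuel * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

T = 20.0
for _ in range(30):            # closed loop: re-plan at every step
    T = a * T + b * mpc_step(T)
print(f"temperature after 30 steps: {T:.1f}")
```

Far below the setpoint the controller saturates at maximum fuel rate; near the setpoint the fuel penalty and tracking term balance, which is the qualitative trade-off the abstract's comparison criteria address.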
New electrostatic coal cleaning method cuts sulfur content by 40%
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1985-12-01
An emission control system that electrically charges pollutants and coal particles promises to reduce sulfur 40% at half the cost. The dry coal cleaning processes offer superior performance and better economics than conventional flotation cleaning. Advanced Energy Dynamics, Inc. (AED) is developing both fine and ultra fine processes which increase combustion efficiency and boiler reliability and reduce operating costs. The article gives details from the performance tests and comparisons and summarizes the economic analyses.
Comparison of Variance-to-Mean Ratio Methods for Reparables Inventory Management
2006-03-01
for Recoverable Items in the ALS [Advanced Logistics System] Marginal Analysis Algorithms". Marginal analysis is a microeconomics technique used...in the Demands Workbook. The quantitative expected backorder and aircraft availability percentage result. Each of the 30 simulations is run five...10A, B-2A, C-17A and F-15E aircraft. The data were selected from D200A's Ddb04 tables and flying hour programs respectively. The two workbook (OIM
Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors
Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin
2018-01-01
Disparity calculation is crucial for binocular sensor ranging. Disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method, respectively. Experiments show that our method provides superior performance in these comparisons. PMID:29614028
NASA Technical Reports Server (NTRS)
Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.
2014-01-01
The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self-consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.
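The self-consistency checks described above amount to verifying that credible intervals have the advertised frequentist coverage: across many simulated signals, the true parameter should fall inside the central 90% posterior interval about 90% of the time. A toy illustration with calibrated Gaussian posteriors (purely synthetic, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_coverage(posterior_draws, truths, level):
    """Fraction of events whose true parameter lies inside the central
    `level` credible interval of its posterior samples."""
    lo, hi = (1 - level) / 2, 1 - (1 - level) / 2
    hits = 0
    for samples, truth in zip(posterior_draws, truths):
        a, b = np.quantile(samples, [lo, hi])
        hits += a <= truth <= b
    return hits / len(truths)

# Simulate 500 events: observe truth + unit noise, then draw from a
# correctly calibrated posterior centered on each observation.
truths = rng.normal(size=500)
obs = truths + rng.normal(size=500)
draws = [x + rng.normal(size=2000) for x in obs]
cov = empirical_coverage(draws, truths, 0.90)
```

A well-calibrated pipeline yields cov near 0.90; systematic deviations in such probability-probability comparisons flag biased or over/under-confident parameter estimation.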
CO2 capture in amine solutions: modelling and simulations with non-empirical methods
NASA Astrophysics Data System (ADS)
Andreoni, Wanda; Pietrucci, Fabio
2016-12-01
Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although it suffers from drawbacks that do not allow exploitation on a large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcomes shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics augmented with metadynamics are summarized, showing the crucial role of water, which has so far been strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for advances in computational approaches that are necessary if they are to contribute to the rational design of new solvents.
Ali, Nora A; Mourad, Hebat-Allah M; ElSayed, Hany M; El-Soudani, Magdy; Amer, Hassanein H; Daoud, Ramez M
2016-11-01
Interference is the most important problem in LTE and LTE-Advanced networks. In this paper, interference was investigated in terms of the downlink signal-to-interference-and-noise ratio (SINR). In order to compare the different frequency reuse methods that were developed to enhance the SINR, it would be helpful to have a generalized expression for studying the performance of the different methods. Therefore, this paper introduces general expressions for the SINR in homogeneous and in heterogeneous networks. In homogeneous networks, the expression was applied to the most common types of frequency reuse techniques: soft frequency reuse (SFR) and fractional frequency reuse (FFR). The expression was examined by comparing it with previously developed ones in the literature, and the comparison showed that the expression is valid for any type of frequency reuse scheme and any network topology. Furthermore, the expression was extended to include heterogeneous networks; it covers the problem of co-tier and cross-tier interference in heterogeneous networks (HetNets) and was examined by the same method as the homogeneous one.
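The generalized expressions themselves are not reproduced in this abstract, but the underlying quantity is simple: downlink SINR is the serving-cell received power divided by thermal noise plus summed co-channel interference, and schemes such as SFR improve it at the cell edge by muting part of the interfering power. A toy numeric sketch, with all power values hypothetical:

```python
import math

def sinr_db(p_serving, p_interferers, noise):
    """Downlink SINR in dB: serving-cell received power over thermal
    noise plus the sum of co-channel interferer powers (all in watts)."""
    sinr = p_serving / (noise + sum(p_interferers))
    return 10 * math.log10(sinr)

# Hypothetical received powers at a cell-edge user from two neighbors
full_reuse = sinr_db(1e-9, [4e-10, 3e-10], 1e-12)
# SFR-like scheme: neighbors transmit the edge sub-band at 20% power
sfr_edge = sinr_db(1e-9, [4e-10 * 0.2, 3e-10 * 0.2], 1e-12)
```

The SFR case shows the expected several-dB edge gain, which is the effect the paper's general expression is built to quantify across reuse schemes and topologies.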
Comparison of historical documents for writership
NASA Astrophysics Data System (ADS)
Ball, Gregory R.; Pu, Danjun; Stritmatter, Roger; Srihari, Sargur N.
2010-01-01
Over the last century forensic document science has developed progressively more sophisticated pattern recognition methodologies for ascertaining the authorship of disputed documents. These include advances not only in computer assisted stylometrics, but forensic handwriting analysis. We present a writer verification method and an evaluation of an actual historical document written by an unknown writer. The questioned document is compared against two known handwriting samples of Herman Melville, a 19th century American author who has been hypothesized to be the writer of this document. The comparison led to a high confidence result that the questioned document was written by the same writer as the known documents. Such methodology can be applied to many such questioned documents in historical writing, both in literary and legal fields.
NASA Astrophysics Data System (ADS)
Raulin Cerceau, Florence; Bilodeau, Bénédicte
2012-09-01
Proposals for methods of contacting other planets supposed to be inhabited by "intelligent" civilizations began more than a century and a half ago. The historical question has already been treated in several studies, and the aim of this paper is not to provide details on that aspect. On the other hand, it is interesting to compare the different approaches to contacting planets formulated at different epochs (even if the techniques were obviously not in the same state of advancement). The most important characteristics of the earliest messages, which remained purely theoretical, will be presented. The main features of modern messages, which have actually been realized, will also be emphasized. Drawing a parallel between these two series of projects could reveal what both pioneer and modern message creators considered unavoidable, even though it has not been proved that the former had any influence on the latter. The common points emerging from this comparison could then (perhaps) help to select adequate models for an intelligible message intended for ETs, particularly concerning language forms. Besides this, the differences could illustrate human cultural advances in the field of METI and underline the tendencies chosen in that field over recent decades.
Advanced electron microscopy methods for the analysis of MgB2 superconductor
NASA Astrophysics Data System (ADS)
Birajdar, B.; Peranio, N.; Eibl, O.
2008-02-01
Advanced electron microscopy methods used for the analysis of superconducting MgB2 wires and tapes are described. The wires and tapes were prepared by the powder-in-tube method using different processing technologies and thoroughly characterised for their superconducting properties within the HIPERMAG project. Microstructure analysis on μm to nm length scales is necessary to understand the superconducting properties of MgB2. For MgB2 phase analysis on the μm scale an analytical SEM is used, and for analysis on the nm scale an energy-filtered STEM. Both microscopes were equipped with an EDX detector and a field emission gun. Electron microscopy and spectroscopy of MgB2 are challenging because of the boron analysis, carbon and oxygen contamination, and the presence of a large number of secondary phases. Advanced electron microscopy involves combined SEM, EPMA and TEM analysis with artefact-free sample preparation, elemental mapping, and chemical quantification of point spectra. Details of the acquisition conditions and the achieved accuracy are presented. Ex-situ wires show oxygen-free MgB2 colonies (a colony is a dense arrangement of several MgB2 grains) embedded in a porous and oxygen-rich matrix, introducing structural granularity. In comparison, in-situ wires are generally denser, but show inhibited MgB2 phase formation with a significantly higher fraction of B-rich secondary phases. SiC additives in the in-situ wires form Mg2Si secondary phases. Advanced electron microscopy has been used to extract microstructure parameters such as colony size, B-rich secondary phase fraction, O mole fraction and MgB2 grain size, and to establish a microstructure-critical current density model [1]. In summary, conventional secondary electron imaging in the SEM and diffraction contrast imaging in the TEM are by far not sufficient, and advanced electron microscopy methods are essential for the analysis of superconducting MgB2 wires and tapes.
ERIC Educational Resources Information Center
Coles, Mike; Nelms, Rick
1996-01-01
Describes a study that explores the depth and breadth of scientific facts, principles, and procedures which are required in the Advanced General National Vocational Qualifications (GNVQ) science through comparison with GCE Advanced level. The final report takes account of the updated 1996 version of GNVQ science. (DDR)
TomoBank: a tomographic data repository for computational x-ray science
NASA Astrophysics Data System (ADS)
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; Joost Batenburg, K.; Ludwig, Wolfgang; Mancini, Lucia; Marone, Federica; Mokso, Rajmund; Pelt, Daniël M.; Sijbers, Jan; Rivers, Mark
2018-03-01
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible (Gibbs et al 2015 Sci. Rep. 5 11824), but have also increased the demand to develop new reconstruction methods able to handle in situ (Pelt and Batenburg 2013 IEEE Trans. Image Process. 22 5238-51) and dynamic systems (Mohan et al 2015 IEEE Trans. Comput. Imaging 1 96-111) that can be quickly incorporated in beamline production software (Gürsoy et al 2014 J. Synchrotron Radiat. 21 1188-93). The x-ray tomography data bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
InSAR Tropospheric Correction Methods: A Statistical Comparison over Different Regions
NASA Astrophysics Data System (ADS)
Bekaert, D. P.; Walters, R. J.; Wright, T. J.; Hooper, A. J.; Parker, D. J.
2015-12-01
Observing small magnitude surface displacements through InSAR is highly challenging, and requires advanced correction techniques to reduce noise. In fact, one of the largest obstacles facing the InSAR community is tropospheric noise correction. Spatial and temporal variations in temperature, pressure, and relative humidity result in a spatially variable InSAR tropospheric signal, which masks smaller surface displacements due to tectonic or volcanic deformation. Correction methods applied today include those relying on weather model data, GNSS and/or spectrometer data. Unfortunately, these methods are often limited by the spatial and temporal resolution of the auxiliary data. Alternatively, a correction can be estimated from the high-resolution interferometric phase by assuming a linear or a power-law relationship between the phase and topography. For these methods, the challenge lies in separating deformation from tropospheric signals. We will present results of a statistical comparison of state-of-the-art tropospheric corrections estimated from spectrometer products (MERIS and MODIS), low and high spatial-resolution weather models (ERA-I and WRF), and both the conventional linear and power-law empirical methods. We evaluate the correction capability over Southern Mexico, Italy, and El Hierro, and investigate the impact of increasing cloud cover on the accuracy of the tropospheric delay estimation. We find that each method has its strengths and weaknesses, and suggest that further developments should aim to combine different correction methods. All the presented methods are included in our new open-source software package, TRAIN - Toolbox for Reducing Atmospheric InSAR Noise (Bekaert, D., R. Walters, T. Wright, A. Hooper, and D. Parker, in review, Statistical comparison of InSAR tropospheric correction techniques, Remote Sensing of Environment), which is available to the community.
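The conventional linear empirical method mentioned above can be sketched in a few lines: fit a linear phase-topography relation by least squares and subtract it from the interferogram. The real difficulty the abstract notes, separating deformation from the tropospheric signal, is ignored in this purely synthetic illustration:

```python
import numpy as np

def linear_tropo_correction(phase, height):
    """Estimate the linear phase-topography relation phase ~ K*height + c
    by least squares and return the tropospheric phase to subtract."""
    A = np.column_stack([height, np.ones_like(height)])
    K, c = np.linalg.lstsq(A, phase, rcond=None)[0]
    return K * height + c

rng = np.random.default_rng(1)
height = rng.uniform(0, 2000, 500)            # terrain heights (m)
tropo = -0.01 * height + 3.0                  # synthetic stratified delay (rad)
phase = tropo + rng.normal(0, 0.1, 500)       # interferometric phase + noise
corrected = phase - linear_tropo_correction(phase, height)
```

On this synthetic example the height-correlated signal is removed almost entirely; on real data any deformation correlated with topography would be removed with it, which is why the paper compares the empirical methods against independent weather-model and spectrometer corrections.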
A comparison of the environmental impact of different AOPs: risk indexes.
Giménez, Jaime; Bayarri, Bernardí; González, Óscar; Malato, Sixto; Peral, José; Esplugas, Santiago
2014-12-31
Today, environmental impact associated with pollution treatment is a matter of great concern. A method is proposed for evaluating environmental risk associated with Advanced Oxidation Processes (AOPs) applied to wastewater treatment. The method is based on the type of pollution (wastewater, solids, air or soil) and on materials and energy consumption. An Environmental Risk Index (E), constructed from numerical criteria provided, is presented for environmental comparison of processes and/or operations. The Operation Environmental Risk Index (EOi) for each of the unit operations involved in the process and the Aspects Environmental Risk Index (EAj) for process conditions were also estimated. Relative indexes were calculated to evaluate the risk of each operation (E/NOP) or aspect (E/NAS) involved in the process, and the percentage of the maximum achievable for each operation and aspect was found. A practical application of the method is presented for two AOPs: photo-Fenton and heterogeneous photocatalysis with suspended TiO2 in Solarbox. The results report the environmental risks associated with each process, so that AOPs tested and the operations involved with them can be compared.
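The paper's numerical criteria are not given in this abstract, so the following is only a structural sketch of how an overall Environmental Risk Index E might aggregate per-operation (EOi) and per-aspect (EAj) scores and yield the relative indexes E/NOP and E/NAS; all names and values are hypothetical:

```python
def environmental_risk_index(operation_scores, aspect_scores):
    """Toy aggregation of an overall risk index E from per-operation
    scores (EOi) and per-aspect scores (EAj), with relative indexes
    normalized by the number of operations (NOP) and aspects (NAS)."""
    E = sum(operation_scores) + sum(aspect_scores)
    return {
        "E": E,
        "E_per_operation": E / len(operation_scores),  # E/NOP
        "E_per_aspect": E / len(aspect_scores),        # E/NAS
    }

# Hypothetical scores for a three-operation, two-aspect process
r = environmental_risk_index([2, 3, 5], [1, 4])
```

Such normalized indexes allow processes with different numbers of unit operations, e.g. photo-Fenton versus heterogeneous photocatalysis, to be compared on a common scale.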
NASA Technical Reports Server (NTRS)
Dittmar, J. H.
1985-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from its peak level when going to higher helical tip Mach numbers. This noise reduction points to the use of higher propeller speeds as a possible method of reducing airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement, as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test-bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
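The second-order step described above, in which each Hessian entry follows noniteratively once first derivatives are in hand, can be illustrated by differentiating a gradient numerically. Here an analytic gradient stands in for the reverse-mode (adjoint) derivatives, and the objective is a toy function, not a flow code:

```python
import numpy as np

def hessian_from_gradient(grad, x, eps=1e-6):
    """Sketch of forward-over-reverse second derivatives: given a gradient
    routine (the stand-in for reverse-mode AD output), each Hessian column
    is obtained by differencing the gradient along one coordinate."""
    x = np.asarray(x, float)
    n = x.size
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        H[:, j] = (grad(x + e) - grad(x - e)) / (2 * eps)
    return 0.5 * (H + H.T)  # symmetrize away numerical asymmetry

# Toy objective f(x, y) = x**2 * y + y**3 with its analytic "adjoint" gradient
grad = lambda v: np.array([2 * v[0] * v[1], v[0] ** 2 + 3 * v[1] ** 2])
H = hessian_from_gradient(grad, [1.0, 2.0])
```

In the paper's procedure the directional gradient derivatives come from forward-mode AD rather than finite differences, which removes the truncation error this sketch incurs.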
NASA Astrophysics Data System (ADS)
Xu, Qian; Yang, Zhongshi; Luo, Guang-Nan
2015-09-01
The three-dimensional (3D) Monte Carlo code PIC-EDDY has been utilized to investigate the mechanism of hydrocarbon deposition in gaps of tungsten tiles in the Experimental Advanced Superconducting Tokamak (EAST), where the sheath potential is calculated by a particle-in-cell method that is two-dimensional in space and three-dimensional in velocity. The calculated results for graphite tiles using the same method are also presented for comparison. The calculations show that the amount of carbon deposited in the gaps of carbon tiles is three times larger than that in the gaps of tungsten tiles when the carbon particles from re-erosion on the top surface of monoblocks are taken into account. However, the deposition amount is found to be larger in the gaps of tungsten tiles at the same CH4 flux. When chemical sputtering becomes significant as carbon coverage on tungsten increases with exposure time, the deposition inside the gaps of tungsten tiles becomes considerable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leyva, A.; Cabal, A.; Pinera, I.
The present paper synthesizes the results obtained in the evaluation of a 64-microstrip crystalline silicon detector coupled to the RX64 ASIC, designed for high-energy physics experiments, as a useful X-ray detector in advanced medical radiography, specifically in digital mammography. The research includes the acquisition of two-dimensional radiographs of a mammography phantom using the scanning method, and the comparison of the experimental profile with a mathematically simulated one. The paper also shows experimental images of three biological samples taken from breast biopsies, where it is possible to identify the presence of possibly pathological tissues.
Analysis of woven fabrics for reinforced composite materials
NASA Technical Reports Server (NTRS)
Dow, Norris F.; Ramnath, V.; Rosen, B. Walter
1987-01-01
The use of woven fabrics as reinforcements for composites is considered. Methods of analysis of properties are reviewed and extended, with particular attention paid to three-dimensional constructions having through-the-thickness reinforcements. Methodology developed is used parametrically to evaluate the performance potential of a wide variety of reinforcement constructions including hybrids. Comparisons are made of predicted and measured properties of representative composites having biaxial and triaxial woven, and laminated tape lay-up reinforcements. Overall results are incorporated in advanced weave designs.
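Property-prediction methods for fabric-reinforced composites typically start from unidirectional-ply estimates before accounting for weave geometry. A minimal rule-of-mixtures sketch; the moduli and fiber fraction below are assumed illustrative values, not data from the report:

```python
def rule_of_mixtures(e_fiber, e_matrix, v_fiber):
    """First-order stiffness estimates for a unidirectional ply, a common
    starting point before weave crimp and through-thickness effects are
    added: longitudinal modulus (Voigt bound, fibers and matrix in
    parallel) and transverse modulus (Reuss bound, in series)."""
    e_long = v_fiber * e_fiber + (1 - v_fiber) * e_matrix
    e_trans = 1.0 / (v_fiber / e_fiber + (1 - v_fiber) / e_matrix)
    return e_long, e_trans

# Assumed carbon fiber (230 GPa) in epoxy (3.5 GPa) at 60% fiber volume
e1, e2 = rule_of_mixtures(230.0, 3.5, 0.6)
```

The large gap between the longitudinal and transverse estimates is exactly what triaxial and through-the-thickness reinforcements, as evaluated in the report, are meant to mitigate.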
Thermal conductivity of Rene 41 honeycomb panels
NASA Astrophysics Data System (ADS)
Deriugin, V.
1980-12-01
Effective thermal conductivities of Rene 41 panels suitable for advanced space transportation vehicle structures were determined analytically and experimentally for temperatures between 20.4 K (-423 F) and 1186 K (1675 F). The cryogenic data were obtained using a cryostat, whereas the high temperature data were measured using a heat flow meter and a comparative thermal conductivity instrument, respectively. Comparisons were made between analysis and experimental data. Analytical methods appear to provide a reasonable definition of the honeycomb panel effective thermal conductivities.
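A first-order analytical estimate of through-thickness honeycomb conductivity treats the foil walls and the cell gas as parallel heat paths; radiation across the cells, which matters at the high end of the temperature range studied, is neglected here. All numbers are assumed for illustration, not taken from the report:

```python
def honeycomb_keff(k_solid, k_gas, solid_fraction):
    """Parallel-path estimate of through-thickness effective thermal
    conductivity of a honeycomb core (W/m-K). Illustrative only:
    cell-to-cell radiation, which dominates at high temperature,
    is neglected."""
    return solid_fraction * k_solid + (1 - solid_fraction) * k_gas

# Assumed Rene 41 foil (~11 W/m-K near room temperature) with ~2%
# solid fraction and air in the cells (~0.026 W/m-K)
keff = honeycomb_keff(11.0, 0.026, 0.02)
```

Even this crude model shows why honeycomb panels insulate well: the effective conductivity is dominated by the small solid fraction rather than the alloy's bulk value.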
Advanced composites: Fabrication processes for selected resin matrix materials
NASA Technical Reports Server (NTRS)
Welhart, E. K.
1976-01-01
This design note is based on the present state of the art for epoxy and polyimide matrix composite fabrication technology. Boron/epoxy, boron/polyimide, graphite/epoxy, and graphite/polyimide structural parts can be successfully fabricated. Fabrication cycles for polyimide matrix composites have been shortened to near epoxy cycle times. Nondestructive testing has proven useful in detecting defects and anomalies in composite structural elements. Fabrication methods and tooling materials are discussed, along with the advantages and disadvantages of different tooling materials. Types of honeycomb core, material costs, and fabrication methods are shown in table form for comparison. Fabrication limits based on tooling size, pressure capabilities, and various machining operations are also discussed.
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Elmiligui, A.; Aftosmis, M.; Morgenstern, J.; Durston, D.; Thomas, S.
2012-01-01
An innovative pressure rail concept for wind tunnel sonic boom testing of modern aircraft configurations with very low overpressures was designed with an adjoint-based solution-adapted Cartesian grid method. The computational method requires accurate free-air calculations of a test article as well as solutions modeling the influence of rail and tunnel walls. Specialized grids for accurate Euler and Navier-Stokes sonic boom computations were used on several test articles including complete aircraft models with flow-through nacelles. The computed pressure signatures are compared with recent results from the NASA 9- x 7-foot Supersonic Wind Tunnel using the advanced rail design.
Buciński, Adam; Marszałł, Michał Piotr; Krysiński, Jerzy; Lemieszek, Andrzej; Załuski, Jerzy
2010-07-01
Hodgkin's lymphoma is one of the most curable malignancies and most patients achieve a lasting complete remission. In this study, artificial neural network (ANN) analysis was shown to provide significant factors with regard to 5-year recurrence after lymphoma treatment. Data from 114 patients treated for Hodgkin's disease were available for evaluation and comparison. A total of 31 variables were subjected to ANN analysis. The ANN approach as an advanced multivariate data processing method was shown to provide objective prognostic data. Some of these prognostic factors are consistent or even identical to the factors evaluated earlier by other statistical methods.
Amorphous or Crystalline? A Comparison of Particle Engineering Methods and Selection.
Thakkar, Sachin G; Fathe, Kristin; Smyth, Hugh D C
2015-01-01
This review is intended to provide a critical account of the current goals and technologies of particle engineering regarding the production of crystalline and amorphous particles. The technologies discussed here cover traditional crystallization technologies, supercritical fluid technologies, spray drying, controlled solvent crystallization, and sonocrystallization. Recent advancements in particle engineering, including spray freezing into liquid, thin-film freeze-drying, and PRINT technology, are also presented. The paper also examines the merits and limitations of these technologies with respect to their methods of characterization. Additionally, a section discussing the utility of amorphous and crystalline formulation approaches with regard to bioavailability and formulation is presented.
Martini, Markus; Röhrig, Andreas; Reich, Rudolf Hermann; Messing-Jünger, Martina
2017-03-01
Cranioplasty of patients with craniosynostosis requires rapid, precise and gentle osteotomy of the skull to avoid complications and benefit the healing process. The aim of this prospective clinical study was to compare two different methods of osteotomy. Piezosurgery was compared with conventional osteotomy using an oscillating saw and a high-speed drill while performing cranioplasties with fronto-orbital advancement. Thirty-four children who required cranioplasty with fronto-orbital advancement were recruited consecutively. The operations were conducted using piezosurgery or a conventional surgical technique, alternately. Operative time, blood count, CRP and transfusion rate, as well as soft tissue injuries, postoperative edema, pain development and secondary bone healing were investigated. The average age of patients was 9.7 months. The following forms of craniosynostosis were surgically corrected: trigonocephaly (23), anterior plagiocephaly (8), brachycephaly (1), and syndromic craniosynostosis (2). Piezosurgery was utilized in 18 cases. There were no group differences with regard to the incidence of soft tissue injuries (dura, periorbita), pain, swelling, blood loss or bony integration. The duration of osteotomy was significantly longer in the piezosurgery group, leading to slightly increased blood loss, while the postoperative CRP increase was higher using the conventional method. The piezosurgery method is a comparatively safe surgical method for conducting osteotomy during cranioplasty. With regard to soft tissue protection and postoperative clinical course, the same procedural precautions and controls are necessary as those needed for conventional methods. The osteotomy duration is considerably longer using piezosurgery, although it is accompanied by lower initial postoperative CRP values. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
New methods in iris recognition.
Daugman, John
2007-10-01
This paper presents the following four advances in iris recognition: 1) more disciplined methods for detecting and faithfully modeling the iris inner and outer boundaries with active contours, leading to more flexible embedded coordinate systems; 2) Fourier-based methods for solving problems in iris trigonometry and projective geometry, allowing off-axis gaze to be handled by detecting it and "rotating" the eye into orthographic perspective; 3) statistical inference methods for detecting and excluding eyelashes; and 4) exploration of score normalizations, depending on the amount of iris data that is available in images and the required scale of database search. Statistical results are presented based on 200 billion iris cross-comparisons that were generated from 632500 irises in the United Arab Emirates database to analyze the normalization issues raised in different regions of receiver operating characteristic curves.
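The 200 billion cross-comparisons cited above rest on a simple comparison primitive: the normalized Hamming distance between two iris codes, counting disagreeing bits only where both masks mark the data valid (eyelash and eyelid regions excluded). A minimal sketch with tiny made-up codes:

```python
def normalized_hamming(code_a, code_b, mask_a, mask_b):
    """Daugman-style iris comparison: fraction of disagreeing bits over
    the bits deemed valid (unoccluded) in both iris codes."""
    valid = [ma and mb for ma, mb in zip(mask_a, mask_b)]
    n_valid = sum(valid)
    if n_valid == 0:
        raise ValueError("no overlapping valid bits")
    disagree = sum(v and (a != b)
                   for a, b, v in zip(code_a, code_b, valid))
    return disagree / n_valid

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 1, 1, 0, 0, 0, 1, 1]
mask = [1, 1, 1, 1, 1, 1, 0, 0]   # last two bits occluded, e.g. by eyelash
hd = normalized_hamming(a, b, mask, mask)
```

The score normalizations the paper explores adjust the decision threshold on this distance according to how many valid bits n_valid were actually compared.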
Polly, P David
2015-05-01
Our understanding of the evolution of the dentition has been transformed by advances in the developmental biology, genetics, and functional morphology of teeth, as well as the methods available for studying tooth form and function. The hierarchical complexity of dental developmental genetics combined with dynamic effects of cells and tissues during development allow for substantial, rapid, and potentially non-linear evolutionary changes. Studies of selection on tooth function in the wild and evolutionary functional comparisons both suggest that tooth function and adaptation to diets are the most important factors guiding the evolution of teeth, yet selection against random changes that produce malocclusions (selectional drift) may be an equally important factor in groups with tribosphenic dentitions. These advances are critically reviewed here.
Advanced TIL system for laser beam focusing in a turbulent regime
NASA Astrophysics Data System (ADS)
Sprangle, Phillip A.; Ting, Antonio C.; Kaganovich, Dmitry; Khizhnyak, Anatoliy I.; Tomov, Ivan V.; Markov, Vladimir B.; Korobkin, Dmitriy V.
2014-10-01
This paper discusses an advanced target-in-the-loop (ATIL) system whose performance is based on a nonlinear phase conjugation scheme that rapidly adjusts the laser beam wavefront to mitigate effects associated with atmospheric turbulence along the propagation path. The ATIL method allows positional control of the laser spot (the beacon) on a remote image-resolved target. The size of this beacon is governed by the reciprocity of two counterpropagating beams (one towards the target and another scattered by the target) and the fidelity of the phase conjugation scheme. We present the results of a thorough analysis of ATIL operation, the factors that affect its performance, its focusing efficiency, and a comparison of laboratory experimental validation with computer simulation results.
Perspective: Quantum mechanical methods in biochemistry and biophysics.
Cui, Qiang
2016-10-14
In this perspective article, I discuss several research topics relevant to quantum mechanical (QM) methods in biophysical and biochemical applications. Due to the immense complexity of biological problems, the key is to develop methods that are able to strike the proper balance of computational efficiency and accuracy for the problem of interest. Therefore, in addition to the development of novel ab initio and density functional theory based QM methods for the study of reactive events that involve complex motifs such as transition metal clusters in metalloenzymes, it is equally important to develop inexpensive QM methods and advanced classical or quantal force fields to describe different physicochemical properties of biomolecules and their behaviors in complex environments. Maintaining a solid connection of these more approximate methods with rigorous QM methods is essential to their transferability and robustness. Comparison to diverse experimental observables helps validate computational models and mechanistic hypotheses as well as driving further development of computational methodologies.
Gradient-based interpolation method for division-of-focal-plane polarimeters.
Gao, Shengkui; Gruev, Viktor
2013-01-14
Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
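To make the role of interpolation concrete, the sketch below fills the missing samples of one polarization channel by interpolating along the direction of the smaller intensity gradient, the basic idea behind edge-aware schemes. It is a simplified stand-in for the authors' method, and `edge_aware_interpolate` and its `mask` argument are illustrative names, not from the paper.

```python
import numpy as np

def edge_aware_interpolate(channel, mask):
    """Fill missing samples (mask == False) of one polarization channel by
    averaging the neighbor pair with the smaller intensity gradient, so that
    interpolation runs along edges rather than across them. Border pixels
    are left untouched in this simplified sketch."""
    out = channel.astype(float).copy()
    h, w = out.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if mask[y, x]:
                continue
            gh = abs(out[y, x - 1] - out[y, x + 1])  # horizontal gradient
            gv = abs(out[y - 1, x] - out[y + 1, x])  # vertical gradient
            if gh <= gv:
                out[y, x] = 0.5 * (out[y, x - 1] + out[y, x + 1])
            else:
                out[y, x] = 0.5 * (out[y - 1, x] + out[y + 1, x])
    return out
```

In a real DoFP mosaic each channel is sampled on a stride-2 grid, so a production implementation interpolates at larger offsets and handles all four channels before computing the Stokes parameters.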
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values through the huge power of numerical computation, so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advances have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In this method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically rewrites a given system of differential equations into an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
In-Situ Transfer Standard and Coincident-View Intercomparisons for Sensor Cross-Calibration
NASA Technical Reports Server (NTRS)
Thome, Kurt; McCorkel, Joel; Czapla-Myers, Jeff
2013-01-01
There exist numerous methods for accomplishing on-orbit calibration. These include the reflectance-based approach, which relies on measurements of surface and atmospheric properties at the time of a sensor overpass, as well as invariant-scene approaches, which rely on knowledge of the temporal characteristics of the site. The current work examines typical cross-calibration methods and discusses their expected uncertainties. Data from the Advanced Land Imager (ALI), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Enhanced Thematic Mapper Plus (ETM+), Moderate Resolution Imaging Spectroradiometer (MODIS), and Thematic Mapper (TM) are used to demonstrate the limits of relative sensor-to-sensor calibration as applied to current sensors, while Landsat-5 TM and Landsat-7 ETM+ are used to evaluate the limits of in situ site characterizations for SI-traceable cross-calibration. The current work examines the difficulties in trending results from cross-calibration approaches, taking into account sampling issues, site-to-site variability, and the accuracy of the method. Special attention is given to the differences caused in the cross-comparison of sensors in radiance space as opposed to reflectance space. The results show that cross-calibrations with absolute uncertainties of less than 1.5 percent (1 sigma) are currently achievable even for sensors without coincident views.
A community resource benchmarking predictions of peptide binding to MHC-I molecules.
Peters, Bjoern; Bui, Huynh-Hoa; Frankild, Sune; Nielson, Morten; Lundegaard, Claus; Kostem, Emrah; Basch, Derek; Lamberth, Kasper; Harndahl, Mikkel; Fleri, Ward; Wilson, Stephen S; Sidney, John; Lund, Ole; Buus, Soren; Sette, Alessandro
2006-06-09
Recognition of peptides bound to major histocompatibility complex (MHC) class I molecules by T lymphocytes is an essential part of immune surveillance. Each MHC allele has a characteristic peptide binding preference, which can be captured in prediction algorithms, allowing for the rapid scan of entire pathogen proteomes for peptides likely to bind MHC. Here we make public a large set of 48,828 quantitative peptide-binding affinity measurements relating to 48 different mouse, human, macaque, and chimpanzee MHC class I alleles. We use these data to establish a set of benchmark predictions with one neural network method and two matrix-based prediction methods extensively utilized in our groups. In general, the neural network outperforms the matrix-based predictions, mainly due to its ability to generalize even from a small amount of data. We also retrieved predictions from tools publicly available on the internet. While differences in the data used to generate these predictions hamper direct comparisons, we do conclude that tools based on combinatorial peptide libraries perform remarkably well. The transparent prediction evaluation on this dataset provides tool developers with a benchmark for comparison of newly developed prediction methods. In addition, to generate and evaluate our own prediction methods, we have established an easily extensible web-based prediction framework that allows automated side-by-side comparisons of prediction methods implemented by experts. This is an advance over the current practice of tool developers having to generate reference predictions themselves, which can lead to underestimating the performance of prediction methods they are not as familiar with as their own. The overall goal of this effort is to provide a transparent prediction evaluation allowing bioinformaticians to identify promising features of prediction methods and providing guidance to immunologists regarding the reliability of prediction tools.
Nakagawa, Tateo; Shimada, Mitsuo; Kurita, Nobuhiro; Iwata, Takashi; Nishioka, Masanori; Yoshikawa, Kozo; Higashijima, Jun; Utsunomiya, Tohru
2012-06-01
The role of intratumoral thymidylate synthase (TS) mRNA or protein expression is still controversial, and little has been reported regarding the relation between them in colorectal cancer. Forty-six patients with advanced colorectal cancer who underwent surgical resection were included. TS mRNA expression was determined by the Danenberg tumor profile method based on laser-captured micro-dissection of the tumor cells. TS protein expression was evaluated using immunohistochemical staining. TS mRNA expression tended to correlate with TS protein expression. No statistically significant difference in overall survival was found between the high and low TS mRNA groups, regardless of whether adjuvant chemotherapy was performed. Overall survival in the TS protein-negative group was significantly higher than in the positive group, both among all patients and among those without adjuvant chemotherapy. Multivariate analysis showed that TS protein expression was an independent prognostic factor. TS protein expression tends to be related to TS mRNA expression and is an independent prognostic factor in advanced colorectal cancer.
Producibility aspects of advanced composites for an L-1011 Aileron
NASA Technical Reports Server (NTRS)
Van Hamersveld, J.; Fogg, L. D.
1976-01-01
The design of an advanced composite aileron suitable for long-term service on transport aircraft includes Kevlar 49 fabric skins on honeycomb sandwich covers, hybrid graphite/Kevlar 49 ribs and spars, and graphite/epoxy fittings. Weight and cost savings of 28 and 20 percent, respectively, are predicted by comparison with the production metallic aileron. The structural integrity of the design has been substantiated by analysis and static tests of subcomponents. Producibility considerations played a key role in the selection of design concepts with potential for low-cost production. Simplicity in fabrication is a major factor in achieving low cost using advanced tooling and manufacturing methods such as net molding to size, draping, forming broadgoods, and cocuring components. A broadgoods dispensing machine capable of handling unidirectional and bidirectional prepreg materials in widths ranging from 12 to 42 inches is used for rapid layup of component kits and covers. Existing large autoclaves, platen presses, and shop facilities are fully exploited.
Undergraduate nursing students' level of assertiveness in Greece: a questionnaire survey.
Deltsidou, Anna
2009-09-01
A number of studies of nursing and midwifery have found stress and bullying to be frequent problems. Those suffering from bullying and stress need high levels of assertiveness to resist and cope successfully. Hence, it was considered vital to assess the assertiveness level of nursing students throughout their training curriculum. The study population was composed of nursing students in different semesters at one school in Central Greece (n=298) who agreed to complete a questionnaire on assertiveness level assessment, which had been translated into Greek and adapted to this population. All students present in class completed the questionnaire, representing 80% of the total population of active students. Mean assertiveness scores between semesters were compared by ANOVA, and comparisons between the responses of first-semester students and those of advanced-semester students were made by Pearson's chi-square test. The main finding of this study was that the assertiveness levels displayed by students increase slightly in advanced semesters in comparison to those displayed by first-semester students. Assertive behavior should be encouraged through learning methods. Nurses should preferably obtain this training throughout their studies. Instructors have an essential role in the improvement and achievement of assertiveness training curricula for undergraduate nursing students.
On testing for spatial correspondence between maps of human brain structure and function.
Alexander-Bloch, Aaron F; Shou, Haochang; Liu, Siyuan; Satterthwaite, Theodore D; Glahn, David C; Shinohara, Russell T; Vandekar, Simon N; Raznahan, Armin
2018-06-01
A critical issue in many neuroimaging studies is the comparison between brain maps. Nonetheless, it remains unclear how one should test hypotheses focused on the overlap or spatial correspondence between two or more brain maps. This "correspondence problem" affects, for example, the interpretation of comparisons between task-based patterns of functional activation, resting-state networks or modules, and neuroanatomical landmarks. To date, this problem has been addressed with remarkable variability in terms of methodological approaches and statistical rigor. In this paper, we address the correspondence problem using a spatial permutation framework to generate null models of overlap by applying random rotations to spherical representations of the cortical surface, an approach for which we also provide a theoretical statistical foundation. We use this method to derive clusters of cognitive functions that are correlated in terms of their functional neuroanatomical substrates. In addition, using publicly available data, we formally demonstrate the correspondence between maps of task-based functional activity, resting-state fMRI networks and gyral-based anatomical landmarks. We provide open-access code to implement the methods presented for two commonly-used tools for surface based cortical analysis (https://www.github.com/spin-test). This spatial permutation approach constitutes a useful advance over widely-used methods for the comparison of cortical maps, thereby opening new possibilities for the integration of diverse neuroimaging data. Copyright © 2018 Elsevier Inc. All rights reserved.
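The rotation-based null model can be sketched compactly: rotate the spherical vertex coordinates by a random proper rotation, reassign each rotated vertex to its nearest original vertex, and recompute the map-to-map correlation. The code below is a simplified sketch of that procedure; the released spin-test code additionally handles the two hemispheres and medial-wall masking.

```python
import numpy as np

def random_rotation(rng):
    """Uniform random 3x3 rotation via QR decomposition of a Gaussian matrix."""
    m = rng.standard_normal((3, 3))
    q, r = np.linalg.qr(m)
    q *= np.sign(np.diag(r))      # fix column signs to make the factorization unique
    if np.linalg.det(q) < 0:      # ensure a proper rotation (det = +1)
        q[:, 0] = -q[:, 0]
    return q

def spin_test(coords, map_a, map_b, n_perm=1000, seed=0):
    """Spatial-permutation ('spin') p-value for the correlation between two
    surface maps sampled at unit-sphere vertex coordinates `coords` (n x 3)."""
    rng = np.random.default_rng(seed)
    observed = np.corrcoef(map_a, map_b)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        rotated = coords @ random_rotation(rng).T
        # nearest original vertex for each rotated vertex (max dot product)
        idx = np.argmax(rotated @ coords.T, axis=1)
        null[i] = np.corrcoef(map_a[idx], map_b)[0, 1]
    return observed, float((np.abs(null) >= abs(observed)).mean())
```

Because the rotation moves the whole map rigidly, the null distribution preserves each map's spatial autocorrelation, which is what naive vertex-shuffling permutation tests destroy.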
Document Examination: Applications of Image Processing Systems.
Kopainsky, B
1989-12-01
Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.
HEVC optimizations for medical environments
NASA Astrophysics Data System (ADS)
Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; García, Carlos; Meyer-Baese, Uwe; Meyer-Baese, Anke
2016-05-01
HEVC/H.265 is the most interesting and cutting-edge topic in the world of digital video compression, halving the required bandwidth in comparison with the previous H.264 standard. Telemedicine services, and in general any medical video application, can benefit from these video encoding advances. However, HEVC is computationally expensive to implement. In this paper a method for reducing HEVC complexity in the medical environment is proposed. The sequences that are typically processed in this context contain several homogeneous regions. Leveraging these regions, it is possible to simplify the HEVC flow while maintaining high quality. In comparison with the HM16.2 reference encoder, the encoding time is reduced by up to 75%, with negligible quality loss. Moreover, the algorithm is straightforward to implement on any hardware platform.
Liu, Bing; Li, Lei; Huang, Lixia; Li, Shaoli; Rao, Guanhua; Yu, Yang; Zhou, Yanbin
2017-01-01
Emerging evidence has indicated that circulating tumor DNA (ctDNA) from plasma could be used to analyze EGFR mutation status for NSCLC patients; however, due to the low level of ctDNA in plasma, highly sensitive approaches are required to detect low frequency mutations. In addition, the cutoff for the mutation abundance that can be detected in tumor tissue but cannot be detected in matched ctDNA is still unknown. To assess a highly sensitive method, we evaluated the use of digital PCR in the detection of EGFR mutations in tumor tissue from 47 advanced lung adenocarcinoma patients through comparison with NGS and ARMS. We determined the degree of concordance between tumor tissue DNA and paired ctDNA and analyzed the mutation abundance relationship between them. Digital PCR and Proton had a high sensitivity (96.00% vs. 100%) compared with that of ARMS in the detection of mutations in tumor tissue. Digital PCR outperformed Proton in identifying more low abundance mutations. The ctDNA detection rate of digital PCR was 87.50% in paired tumor tissue with a mutation abundance above 5% and 7.59% in paired tumor tissue with a mutation abundance below 5%. When the DNA mutation abundance of tumor tissue was above 3.81%, it could identify mutations in paired ctDNA with a high sensitivity. Digital PCR will help identify alternative methods for detecting low abundance mutations in tumor tissue DNA and plasma ctDNA. PMID:28978074
Novel Method for Low-Rate DDoS Attack Detection
NASA Astrophysics Data System (ADS)
Chistokhodova, A. A.; Sidorov, I. D.
2018-05-01
The relevance of the work is associated with an increasing number of advanced types of DDoS attacks, in particular, low-rate HTTP-flood. Last year, the power and complexity of such attacks increased significantly. The article is devoted to the analysis of DDoS attacks detecting methods and their modifications with the purpose of increasing the accuracy of DDoS attack detection. The article details low-rate attacks features in comparison with conventional DDoS attacks. During the analysis, significant shortcomings of the available method for detecting low-rate DDoS attacks were found. Thus, the result of the study is an informal description of a new method for detecting low-rate denial-of-service attacks. The architecture of the stand for approbation of the method is developed. At the current stage of the study, it is possible to improve the efficiency of an already existing method by using a classifier with memory, as well as additional information.
Coarse-graining using the relative entropy and simplex-based optimization methods in VOTCA
NASA Astrophysics Data System (ADS)
Rühle, Victor; Jochum, Mara; Koschke, Konstantin; Aluru, N. R.; Kremer, Kurt; Mashayak, S. Y.; Junghans, Christoph
2014-03-01
Coarse-grained (CG) simulations are an important tool to investigate systems on larger time and length scales. Several methods for systematic coarse-graining were developed, varying in complexity and the property of interest. Thus, the question arises which method best suits a specific class of system and desired application. The Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) provides a uniform platform for coarse-graining methods and allows for their direct comparison. We present recent advances of VOTCA, namely the implementation of the relative entropy method and downhill simplex optimization for coarse-graining. The methods are illustrated by coarse-graining SPC/E bulk water and a water-methanol mixture. Both CG models reproduce the pair distributions accurately. SYM is supported by AFOSR under grant 11157642 and by NSF under grant 1264282. CJ was supported in part by the NSF PHY11-25915 at KITP. K. Koschke acknowledges funding by the Nestle Research Center.
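The relative entropy objective at the heart of the newly implemented method is simple to state: S_rel = Σ p_target ln(p_target / p_CG), which vanishes exactly when the coarse-grained model reproduces the target distribution. The toy example below fits the width of a Gaussian "CG" distribution to a reference histogram by minimizing this objective; a brute-force parameter scan stands in for VOTCA's downhill simplex optimizer, so treat it as an illustration of the objective only.

```python
import numpy as np

def relative_entropy(p_target, p_model, eps=1e-12):
    # S_rel = sum p_t * ln(p_t / p_m); zero iff the model matches the target
    p_t, p_m = p_target + eps, p_model + eps
    return float(np.sum(p_t * np.log(p_t / p_m)))

x = np.linspace(-3.0, 3.0, 61)

def gaussian_hist(sigma):
    # normalized Gaussian "distribution" on a fixed grid
    p = np.exp(-x**2 / (2.0 * sigma**2))
    return p / p.sum()

target = gaussian_hist(1.0)          # reference (e.g. all-atom) distribution
sigmas = np.linspace(0.5, 2.0, 301)  # candidate CG model widths
best = min(sigmas, key=lambda s: relative_entropy(target, gaussian_hist(s)))
# best recovers the reference width of 1.0
```

In an actual coarse-graining run the "parameter" is a tabulated CG potential and the distributions are pair correlations sampled from simulation, but the minimized quantity has the same form.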
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
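A minimal worked example shows how equilibrium and compatibility stack into one square system in the forces. Consider three parallel elastic bars of equal length sharing an axial load P (a hypothetical one-degree-indeterminate problem chosen for illustration, not taken from the paper): one equilibrium equation plus two compatibility conditions determine all three bar forces directly, without first solving for displacements.

```python
import numpy as np

P = 12.0                         # applied load (hypothetical units)
L, E = 1.0, 1.0                  # bar length and modulus
A = np.array([1.0, 2.0, 3.0])    # bar cross-sectional areas
g = L / (E * A)                  # flexibility coefficients: e_i = g_i * f_i

# Stack equilibrium and compatibility into one square system S f = b
S = np.array([
    [1.0,   1.0,   1.0 ],        # equilibrium: f1 + f2 + f3 = P
    [g[0], -g[1],  0.0 ],        # compatibility: elongation e1 = e2
    [0.0,   g[1], -g[2]],        # compatibility: elongation e2 = e3
])
forces = np.linalg.solve(S, np.array([P, 0.0, 0.0]))
# forces -> [2. 4. 6.]: stiffer bars carry proportionally more of the load
```

The stiffness method would instead solve for the single joint displacement and recover the forces afterward; here the forces are the primary unknowns throughout.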
Sukino, Shin; Nirengi, Shinsuke; Kawaguchi, Yaeko; Kotani, Kazuhiko; Tsuzaki, Kokoro; Okada, Hiroshi; Suganuma, Akiko; Sakane, Naoki
2018-05-01
Advanced glycation end products (AGEs) are associated with diabetes mellitus. Digested food-derived AGEs have been implicated in the pathogenesis of AGE-related disorders, and restricting diet-derived AGEs improves insulin resistance in animal models. The AGE content in foods changes according to cooking method, and it is higher in baked or oven-fried foods than in those prepared by steaming or simmering. Here, we examined the feasibility of crossover comparison tests for determining how different cooking methods (normal diet vs. low-AGE diet) affect insulin levels in non-diabetic Japanese subjects. Five adult men and women (age, 41 ± 7 years; body mass index (BMI), 21.7 ± 2.6 kg/m²) were enrolled. The following dietary regimen was used: days 1-3, control meal; day 4, test meal (normal diet vs. low-AGE diet); day 5, washout day; and day 6, test meal. On days 4 and 6, blood samples were collected before and at 2, 4, and 6 h after meals. Blood levels of N-(carboxymethyl)lysine (CML) increased with dietary intake, but the increase was similar for the normal diet and low-AGE diet groups. Mean plasma glucose, insulin, triglycerides (TG), and CML did not differ significantly between the two groups. The area under the curve (AUC) for insulin levels was lower in the low-AGE diet group (d = 0.8). The sample size calculated from the effect size of the insulin AUC change was 22. Twenty-two subjects may be needed to investigate the changes in clinical parameters attributable to cooking method in non-diabetic Japanese subjects.
Advances in Optical Fiber-Based Faraday Rotation Diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, A D; McHale, G B; Goerz, D A
2009-07-27
In the past two years, we have used optical fiber-based Faraday Rotation Diagnostics (FRDs) to measure pulsed currents on several dozen capacitively driven and explosively driven pulsed power experiments. We have made simplifications to the necessary hardware for quadrature-encoded polarization analysis, including development of an all-fiber analysis scheme. We have developed a numerical model that is useful for predicting and quantifying deviations from the ideal diagnostic response. We have developed a method of analyzing quadrature-encoded FRD data that is simple to perform and offers numerous advantages over several existing methods. When comparison has been possible, we have seen good agreement between our FRDs and other current sensors.
Implementation and Testing of Turbulence Models for the F18-HARV Simulation
NASA Technical Reports Server (NTRS)
Yeager, Jessie C.
1998-01-01
This report presents three methods of implementing the Dryden power spectral density model for atmospheric turbulence. Included are the equations which define the three methods and computer source code written in Advanced Continuous Simulation Language to implement the equations. Time-history plots and sample statistics of simulated turbulence results from executing the code in a test program are also presented. Power spectral densities were computed for sample sequences of turbulence and are plotted for comparison with the Dryden spectra. The three model implementations were installed in a nonlinear six-degree-of-freedom simulation of the High Alpha Research Vehicle airplane. Aircraft simulation responses to turbulence generated with the three implementations are presented as plots.
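One common way to implement the Dryden model is to pass Gaussian white noise through a shaping filter derived from the power spectral density. The sketch below is a first-order discrete filter for the longitudinal gust component, valid when V·dt/L is small; it is a generic textbook-style implementation, not one of the three methods from the report (which were coded in Advanced Continuous Simulation Language).

```python
import math
import random

def dryden_longitudinal(sigma, scale_length, airspeed, dt, n, seed=0):
    """Generate n samples of longitudinal Dryden gust velocity by driving a
    first-order filter with Gaussian white noise. sigma is the turbulence
    intensity, scale_length the Dryden scale L_u, airspeed the true
    airspeed V; requires airspeed * dt / scale_length << 1."""
    rng = random.Random(seed)
    a = airspeed * dt / scale_length
    gust = [0.0]
    for _ in range(n - 1):
        noise = rng.gauss(0.0, 1.0)
        gust.append((1.0 - a) * gust[-1] + sigma * math.sqrt(2.0 * a) * noise)
    return gust
```

The stationary standard deviation of the output approaches sigma, which is one of the sample statistics that can be checked, alongside comparing the computed power spectral density against the analytic Dryden spectrum.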
Friction Stir Spot Welding of Advanced High Strength Steels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hovanski, Yuri; Grant, Glenn J.; Santella, M. L.
Friction stir spot welding techniques were developed to successfully join several advanced high strength steels. Two distinct tool materials were evaluated to determine the effect of tool materials on the process parameters and joint properties. Welds were characterized primarily via lap shear, microhardness, and optical microscopy. Friction stir spot welds were compared to resistance spot welds in similar strength alloys by using the AWS standard for resistance spot welding high strength steels. As a further comparison, a primitive cost comparison between the two joining processes was developed, which included an evaluation of the future cost prospects of friction stir spot welding in advanced high strength steels.
Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems
NASA Technical Reports Server (NTRS)
Kontos, K. B.; Janardan, B. A.; Gliebe, P. R.
1996-01-01
Recent experience using ANOPP to predict turbofan engine flyover noise suggests that it over-predicts overall EPNL by a significant amount. An improvement in this prediction method is desired for system optimization and assessment studies of advanced UHB engines. An assessment of the ANOPP fan inlet, fan exhaust, jet, combustor, and turbine noise prediction methods is made using static engine component noise data from the CF6-80C2, E(3), and QCSEE turbofan engines. It is shown that the ANOPP prediction results are generally higher than the measured GE data, and that the inlet noise prediction method (Heidmann method) is the most significant source of this overprediction. Fan noise spectral comparisons show that improvements to the fan tone, broadband, and combination tone noise models are required to yield results that more closely simulate the GE data. Suggested changes that yield improved fan noise predictions but preserve the Heidmann model structure are identified and described. These changes are based on the sets of engine data mentioned, as well as some CFM56 engine data that was used to expand the combination tone noise database. It should be noted that the recommended changes are based on an analysis of engines that are limited to single stage fans with design tip relative Mach numbers greater than one.
Ramón, M; Martínez-Pastor, F
2018-04-23
Computer-aided sperm analysis (CASA) produces a wealth of data that is frequently ignored. The use of multiparametric statistical methods can help explore these datasets, unveiling the subpopulation structure of sperm samples. In this review we analyse the significance of the internal heterogeneity of sperm samples and its relevance. We also provide a brief description of the statistical tools used for extracting sperm subpopulations from the datasets, namely unsupervised clustering (with non-hierarchical, hierarchical and two-step methods) and the most advanced supervised methods, based on machine learning. The former has allowed exploration of subpopulation patterns in many species, whereas the latter offers further possibilities, especially considering functional studies and the practical use of subpopulation analysis. We also consider novel approaches, such as the use of geometric morphometrics or imaging flow cytometry. Finally, although the data provided by CASA systems yield valuable information on sperm samples when clustering analyses are applied, there are several caveats. Protocols for capturing and analysing motility or morphometry should be standardised and adapted to each experiment, and the algorithms should be open in order to allow comparison of results between laboratories. Moreover, we must be aware of new technology that could change the paradigm for studying sperm motility and morphology.
de Vent, Nathalie R.; Agelink van Rentergem, Joost A.; Schmand, Ben A.; Murre, Jaap M. J.; Huizenga, Hilde M.
2016-01-01
In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database, containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests the quantity, and range of these data surpasses that of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given. PMID:27812340
Microalgal drying and cell disruption--recent advances.
Show, Kuan-Yeow; Lee, Duu-Jong; Tay, Joo-Hwa; Lee, Tse-Min; Chang, Jo-Shu
2015-05-01
Production of intracellular metabolites or biofuels from algae involves various processing steps, and extensive work on laboratory- and pilot-scale algae cultivation, harvesting and processing has been reported. As algal drying and cell disruption are integral unit operations, this review examines recent advances in algal drying and disruption for nutrition or biofuel production. Challenges and prospects of the processing are also outlined. Engineering improvements that address the challenge of energy efficiency, together with cost-effective and rigorous techno-economic analyses that allow clearer comparison between different processing methods, are highlighted. Holistic life cycle assessments need to be conducted to assess the energy balance and the potential environmental impacts of algal processing. The review aims to provide useful information for future development of efficient and commercially viable algal food products and biofuels production. Copyright © 2014 Elsevier Ltd. All rights reserved.
de Vent, Nathalie R; Agelink van Rentergem, Joost A; Schmand, Ben A; Murre, Jaap M J; Huizenga, Hilde M
2016-01-01
In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database, containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests the quantity, and range of these data surpasses that of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given.
Flight experience with flight control redundancy management
NASA Technical Reports Server (NTRS)
Szalai, K. J.; Larson, R. R.; Glover, R. D.
1980-01-01
Flight experience with both current and advanced redundancy management schemes was gained in recent flight research programs using the F-8 digital fly-by-wire aircraft. The flight performance of fault detection, isolation, and reconfiguration (FDIR) methods for sensors, computers, and actuators is reviewed. Results of induced failures as well as of actual random failures are discussed. Deficiencies in modeling and implementation techniques are also discussed. The paper also presents a comparison of multisensor tracking in smooth air, in turbulence, during large maneuvers, and during maneuvers typical of those of large commercial transport aircraft. The results of flight tests of an advanced analytic redundancy management algorithm are compared with the performance of a contemporary algorithm in terms of time to detection, false alarms, and missed alarms. The performance of computer redundancy management in both iron bird and flight tests is also presented.
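The mid-value selection at the heart of many triplex sensor FDIR schemes can be sketched in a few lines. The toy monitor below is illustrative only, not the F-8 DFBW implementation; the `TriplexMonitor` class, its threshold, and its persistence count are assumptions chosen for demonstration:

```python
def mid_value_select(a, b, c):
    """Return the middle of three redundant sensor readings."""
    return sorted((a, b, c))[1]

class TriplexMonitor:
    """Toy triplex FDIR: compare each channel against the selected value
    and declare a channel failed after `persistence` consecutive
    out-of-tolerance samples; reconfigure to the average of survivors."""
    def __init__(self, threshold=1.0, persistence=3):
        self.threshold = threshold
        self.persistence = persistence
        self.miscompare_counts = [0, 0, 0]
        self.failed = [False, False, False]

    def step(self, readings):
        good = [r for r, f in zip(readings, self.failed) if not f]
        if len(good) == 3:
            selected = mid_value_select(*readings)   # all channels healthy
        else:
            selected = sum(good) / len(good)         # average after isolation
        for i, r in enumerate(readings):
            if self.failed[i]:
                continue
            if abs(r - selected) > self.threshold:
                self.miscompare_counts[i] += 1       # persistent miscompare?
                if self.miscompare_counts[i] >= self.persistence:
                    self.failed[i] = True            # isolate the channel
            else:
                self.miscompare_counts[i] = 0        # transient: reset
        return selected
```

The persistence counter is what trades time-to-detection against false alarms, the same trade-off the flight results above quantify.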
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
1992-01-01
This report is an attempt to clarify some of the concerns raised about the OMT method, specifically that OMT is weaker than the Booch method in a few key areas. This interim report specifically addresses the following issues: (1) is OMT object-oriented or only data-driven?; (2) can OMT be used as a front-end to implementation in C++?; (3) the inheritance concept in OMT is in contradiction with the 'pure and real' inheritance concept found in object-oriented (OO) design; (4) low support for software life-cycle issues, for project and risk management; (5) uselessness of functional modeling for the ROSE project; and (6) problems with event-driven and simulation systems. The conclusion of this report is that both Booch's method and Rumbaugh's method are good OO methods, each with strengths and weaknesses in different areas of the development process.
Structure and information in spatial segregation.
Chodrow, Philip S
2017-10-31
Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. Published under the PNAS license.
Randomized controlled trials and meta-analysis in medical education: what role do they play?
Cook, David A
2012-01-01
Education researchers seek to understand what works, for whom, in what circumstances. Unfortunately, educational environments are complex and research itself is highly context dependent. Faced with these challenges, some have argued that qualitative methods should supplant quantitative methods such as randomized controlled trials (RCTs) and meta-analysis. I disagree. Good qualitative and mixed-methods research is complementary to, rather than exclusive of, quantitative methods. The complexity and challenges we face should not beguile us into ignoring methods that provide strong evidence. What, then, is the proper role for RCTs and meta-analysis in medical education? First, the choice of study design depends on the research question. RCTs and meta-analysis are appropriate for many, but not all, study goals. They have compelling strengths but also numerous limitations. Second, strong methods will not compensate for a pointless question. RCTs do not advance the science when they make confounded comparisons or comparisons with no intervention. Third, clinical medicine now faces many of the same challenges we encounter in education. We can learn much from other fields about how to handle complexity in RCTs. Finally, no single study will definitively answer any research question. We need carefully planned, theory-building, programmatic research, reflecting a variety of paradigms and approaches, as we accumulate evidence to change the art and science of education.
A Comparison of Computational Aeroacoustic Prediction Methods for Transonic Rotor Noise
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Lyrintzis, Anastasios; Koutsavdis, Evangelos K.
1996-01-01
This paper compares two methods for predicting transonic rotor noise for helicopters in hover and forward flight. Both methods rely on a computational fluid dynamics (CFD) solution as input to predict the acoustic near and far fields. For this work, the same full-potential rotor code has been used to compute the CFD solution for both acoustic methods. The first method employs the acoustic analogy as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, including the quadrupole term. The second method uses a rotating Kirchhoff formulation. Computed results from both methods are compared with one another and with experimental data for both hover and advancing rotor cases. The results are quite good for all cases tested. The sensitivity of both methods to CFD grid resolution and to the choice of the integration surface/volume is investigated. The computational requirements of both methods are comparable; in both cases these requirements are much less than the requirements for the CFD solution.
Advanced composites in sailplane structures: Application and mechanical properties
NASA Technical Reports Server (NTRS)
Muser, D.
1979-01-01
In sailplanes, advanced composites mean the use of carbon and aramid fibers in an epoxy matrix. Weight savings were in the range of 8 to 18% in comparison with glass fiber structures. The laminates are produced by hand-layup techniques, and all material tests were done with laminates made by these techniques. The resulting values may be used for calculations of strength and stiffness, as well as for comparison of the materials to obtain a weight-optimum construction. Proposals for material-optimum construction are also mentioned.
Documenting helicopter operations from an energy standpoint
NASA Technical Reports Server (NTRS)
Davis, S. J.; Stepniewski, W. Z.
1974-01-01
Results are presented of a study of the relative and absolute energy consumption of helicopters, including limited comparisons with fixed-wing aircraft and selected surface transportation vehicles. Additional comparisons were made to determine the level of reduction in energy consumption expected from the application of advanced technologies to the helicopter design and sizing process. It was found that improvements in helicopter energy consumption characteristics can be accomplished through the utilization of advanced technology to reduce drag, structural weight, and powerplant fuel consumption.
Fuel conservation merits of advanced turboprop transport aircraft
NASA Technical Reports Server (NTRS)
Revell, J. D.; Tullis, R. H.
1977-01-01
The advantages of a propfan-powered aircraft for the commercial air transportation system were assessed by comparison with an equivalent turbofan transport. Comparisons were made on the basis of fuel utilization and operating costs, as well as aircraft weight and size. The advantages of the propfan aircraft in fuel utilization and operating costs were established by considering: (1) incorporation of propfan performance and acoustic data; (2) revised mission profiles (longer design range and a reduction in cruise speed); and (3) utilization of alternate and advanced technology engines.
Community detection in complex networks using proximate support vector clustering
NASA Astrophysics Data System (ADS)
Wang, Feifan; Zhang, Baihai; Chai, Senchun; Xia, Yuanqing
2018-03-01
Community structure, one of the most attention-attracting properties in complex networks, has been a cornerstone in advances of various scientific branches. A number of tools have been involved in recent studies concentrating on community detection algorithms. In this paper, we propose a support vector clustering method based on a proximity graph, owing to which the introduced algorithm surpasses the traditional support vector approach both in accuracy and complexity. Results of extensive experiments undertaken on computer-generated networks and real-world data sets illustrate competitive performance in comparison with other counterparts.
Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell
2011-01-01
Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. The validation test hardware provided a direct measurement of net heat input for comparison to predicted values. The predicted net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.
A Green's function method for heavy ion beam transport
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Wilson, J. W.; Schimmerling, W.; Shavers, M. R.; Miller, J.; Benton, E. V.; Frank, A. L.; Badavi, F. F.
1995-01-01
The use of Green's function has played a fundamental role in transport calculations for high-charge high-energy (HZE) ions. Two recent developments have greatly advanced the practical aspects of implementation of these methods. The first was the formulation of a closed-form solution as a multiple fragmentation perturbation series. The second was the effective summation of the closed-form solution through nonperturbative techniques. The nonperturbative methods have recently been extended to an inhomogeneous, two-layer transport medium to simulate the lead scattering foil present in the Lawrence Berkeley Laboratories (LBL) biomedical beam line used for cancer therapy. Such inhomogeneous codes are necessary for astronaut shielding in space. The transport codes utilize the Langley Research Center atomic and nuclear database. Transport code and database evaluation are performed by comparison with experiments performed at the LBL Bevalac facility using 670 A MeV 20Ne and 600 A MeV 56Fe ion beams. The comparison with a time-of-flight and delta E detector measurement for the 20Ne beam and with the plastic nuclear track detectors for 56Fe shows agreement up to 35%-40% in water and aluminium targets, respectively.
Characterizing the D2 statistic: word matches in biological sequences.
Forêt, Sylvain; Wilson, Susan R; Burden, Conrad J
2009-01-01
Word matches are often used in sequence comparison methods, either as a measure of sequence similarity or in the first search steps of algorithms such as BLAST or BLAT. The D2 statistic is the number of matches of words of k letters between two sequences. Recent advances have been made in the characterization of this statistic and in the approximation of its distribution. Here, these results are extended to the case of approximate word matches. We compute the exact value of the variance of the D2 statistic for the case of a uniform letter distribution, and introduce a method to provide accurate approximations of the variance in the remaining cases. This enables the distribution of D2 to be approximated for typical situations arising in biological research. We apply these results to the identification of cis-regulatory modules, and show that this method detects such sequences with a high accuracy. The ability to approximate the distribution of D2 for both exact and approximate word matches will enable the use of this statistic in a more precise manner for sequence comparison, database searches, and identification of transcription factor binding sites.
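For exact word matches, the D2 statistic described above reduces to an inner product of k-word count vectors, which makes it cheap to compute. A minimal sketch (function names are ours, not the authors'):

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count occurrences of each k-letter word in seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq_a, seq_b, k):
    """D2 statistic: the number of position pairs (i, j) at which the
    k-word starting at i in seq_a equals the k-word starting at j in
    seq_b.  Equivalent to the inner product of the k-word count vectors."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(n * cb[w] for w, n in ca.items())
```

For example, `d2("ACGT", "ACGT", 2)` is 3 (the three 2-words AC, CG, GT each match once); the approximate-match variant analyzed in the paper additionally counts near-identical words.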
Swiat, Maciej; Weigele, John; Hurst, Robert W; Kasner, Scott E; Pawlak, Mikolaj; Arkuszewski, Michal; Al-Okaili, Riyadh N; Swiercz, Miroslaw; Ustymowicz, Andrzej; Opala, Grzegorz; Melhem, Elias R; Krejza, Jaroslaw
2009-03-01
To prospectively compare the accuracies of transcranial color-coded duplex sonography (TCCS) and transcranial Doppler sonography (TCD) in the diagnosis of middle cerebral artery (MCA) vasospasm. A prospective, blinded, head-to-head comparison of TCD and TCCS methods using digital subtraction angiography (DSA) as the reference standard. Department of Radiology in a tertiary university health center in a metropolitan area. Eighty-one consecutive patients (mean age, 53.9 +/- 13.9 years; 48 women). The indication for DSA was subarachnoid hemorrhage in 71 patients (87.6%), stroke or transient ischemic attack in five patients (6.2%), and other reasons in five patients (6.2%). The MCA was graded as normal, narrowed <50%, and >50% using DSA. The accuracy of the ultrasound methods was estimated by the total area (Az) under the receiver operator characteristic curve. To compare the sensitivities of the ultrasound methods, McNemar's test was used with mean velocity thresholds of 120 cm/sec for the detection of less advanced, and 200 cm/sec for more advanced, MCA narrowing. Angiographic MCA narrowing
NASA Astrophysics Data System (ADS)
Ito, Shigenobu; Yukita, Kazuto; Goto, Yasuyuki; Ichiyanagi, Katsuhiro; Nakano, Hiroyuki
With the development of industry in recent years, dependence on electric energy has grown year by year, so a reliable electric power supply is needed. However, storing a huge amount of electric energy is very difficult, and the balance between demand and supply, which changes hour by hour, must be maintained. Consequently, to supply high-quality, highly dependable electric power economically and with high efficiency, the movement of electric power demand must be forecast carefully in advance, and supply and demand management plans should be based on that forecast. Load forecasting is therefore an important task in the demand management of electric power companies. Forecasting methods using fuzzy logic, neural networks, and regression models have been proposed to improve forecasting accuracy, and their accuracy is already high. But to invest in electric power more economically, a forecasting method with still higher accuracy is needed. In this paper, to improve on the accuracy of the former methods, a daily peak load forecasting method using the weather distribution of highest and lowest temperatures and comparison values from nearby dates is proposed.
Elbert, Yevgeniy; Burkom, Howard S
2009-11-20
This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
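The core of a Holt-Winters-based alerting algorithm can be sketched as one-step-ahead additive smoothing plus a control-chart rule on the forecast residuals. This is a minimal sketch under assumed parameter values; the authors' initialization, parameterization rules, and detection statistic differ in detail:

```python
def holt_winters_alerts(x, m, alpha=0.3, beta=0.1, gamma=0.3,
                        warmup=None, window=None, z_crit=3.0, sigma_floor=0.5):
    """Additive Holt-Winters one-step-ahead forecasts over series x with
    season length m, flagging indices whose residual departs from the
    recent residual window by more than z_crit standard deviations
    (a simple control-chart rule).  All defaults are illustrative."""
    warmup = warmup if warmup is not None else 4 * m
    window = window if window is not None else 2 * m
    # Initialize level, trend, and seasonals from the first two seasons.
    s1 = sum(x[:m]) / m
    s2 = sum(x[m:2 * m]) / m
    level, trend = s1, (s2 - s1) / m
    seasonal = [x[i] - s1 for i in range(m)]
    residuals, alerts = [], []
    for t in range(m, len(x)):
        forecast = level + trend + seasonal[t % m]
        resid = x[t] - forecast
        if t >= warmup and len(residuals) >= window:
            recent = residuals[-window:]
            mu = sum(recent) / len(recent)
            var = sum((r - mu) ** 2 for r in recent) / len(recent)
            sigma = max(var ** 0.5, sigma_floor)   # floor guards sigma ~ 0
            if abs(resid - mu) > z_crit * sigma:
                alerts.append(t)
        residuals.append(resid)
        # Standard additive Holt-Winters updates.
        last_level = level
        level = alpha * (x[t] - seasonal[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[t % m] = gamma * (x[t] - level) + (1 - gamma) * seasonal[t % m]
    return alerts
```

On a weekly-seasonal daily series, an injected outbreak-like spike produces a residual far outside the recent window and is flagged; the warmup and window lengths play the role of the initialization rules the paper quantifies.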
2017-04-01
The reporting of research in a manner that allows reproduction in subsequent investigations is important for scientific progress. Several details of the recent study by Patrizi et al., 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics', are absent from the published manuscript and make reproduction of findings impossible. As new and complex technologies with great promise for ergonomics develop, new but surmountable challenges for reporting investigations using these technologies in a reproducible manner arise. Practitioner Summary: As with traditional methods, scientific reporting of new and complex ergonomics technologies should be performed in a manner that allows reproduction in subsequent investigations and supports scientific advancement.
George, D.L.
2011-01-01
The simulation of advancing flood waves over rugged topography, by solving the shallow-water equations with well-balanced high-resolution finite volume methods and block-structured dynamic adaptive mesh refinement (AMR), is described and validated in this paper. The efficiency of block-structured AMR makes large-scale problems tractable, and allows the use of accurate and stable methods developed for solving general hyperbolic problems on quadrilateral grids. Features indicative of flooding in rugged terrain, such as advancing wet-dry fronts and non-stationary steady states due to balanced source terms from variable topography, present unique challenges and require modifications such as special Riemann solvers. A well-balanced Riemann solver for inundation and general (non-stationary) flow over topography is tested in this context. The difficulties of modeling floods in rugged terrain, and the rationale for and efficacy of using AMR and well-balanced methods, are presented. The algorithms are validated by simulating the Malpasset dam-break flood (France, 1959), which has served as a benchmark problem previously. Historical field data, laboratory model data and other numerical simulation results (computed on static fitted meshes) are shown for comparison. The methods are implemented in GEOCLAW, a subset of the open-source CLAWPACK software. All the software is freely available at. Published in 2010 by John Wiley & Sons, Ltd.
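The well-balancing idea above can be illustrated with a minimal sketch: a first-order, Audusse-style hydrostatic-reconstruction scheme for the 1-D shallow-water equations with a Rusanov flux. This is not GeoClaw's Riemann solver, only a toy of the same property: a lake at rest over variable topography stays at rest to machine precision.

```python
import math

G = 9.81  # gravitational acceleration

def rusanov_flux(hL, huL, hR, huR):
    """Rusanov (local Lax-Friedrichs) numerical flux for 1-D shallow water."""
    uL = huL / hL if hL > 0 else 0.0
    uR = huR / hR if hR > 0 else 0.0
    fL = (huL, huL * uL + 0.5 * G * hL * hL)
    fR = (huR, huR * uR + 0.5 * G * hR * hR)
    a = max(abs(uL) + math.sqrt(G * hL), abs(uR) + math.sqrt(G * hR))
    return (0.5 * (fL[0] + fR[0]) - 0.5 * a * (hR - hL),
            0.5 * (fL[1] + fR[1]) - 0.5 * a * (huR - huL))

def step_well_balanced(h, hu, b, dx, dt):
    """One explicit step of a well-balanced scheme: interface depths are
    rebuilt from the free surface (hydrostatic reconstruction) so the
    topography source term exactly balances the pressure flux at rest."""
    n = len(h)
    hn, hun = h[:], hu[:]
    for i in range(n):
        iL, iR = max(i - 1, 0), min(i + 1, n - 1)  # outflow boundaries
        def iface(j, k):
            # reconstruct depths on both sides of the j|k interface
            bstar = max(b[j], b[k])
            hj = max(0.0, h[j] + b[j] - bstar)
            hk = max(0.0, h[k] + b[k] - bstar)
            uj = hu[j] / h[j] if h[j] > 0 else 0.0
            uk = hu[k] / h[k] if h[k] > 0 else 0.0
            return hj, hj * uj, hk, hk * uk
        hl, hul, hr, hur = iface(iL, i)
        Fm = rusanov_flux(hl, hul, hr, hur)
        Fm = (Fm[0], Fm[1] + 0.5 * G * (h[i] ** 2 - hr ** 2))  # source correction
        hl, hul, hr, hur = iface(i, iR)
        Fp = rusanov_flux(hl, hul, hr, hur)
        Fp = (Fp[0], Fp[1] + 0.5 * G * (h[i] ** 2 - hl ** 2))  # source correction
        hn[i] = h[i] - dt / dx * (Fp[0] - Fm[0])
        hun[i] = hu[i] - dt / dx * (Fp[1] - Fm[1])
    return hn, hun
```

Without the reconstruction and source corrections, the non-stationary steady states described above would drift; with them, the flat free surface is an exact discrete equilibrium.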
NASA Astrophysics Data System (ADS)
Larsen, J. D.; Schaap, M. G.
2013-12-01
Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique was not developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We compare the simulated permeability from the differing segmentation algorithms to experimental findings.
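The final permeability step is simple once the simulation has converged. As a hedged sketch (assuming a body-force-driven creeping-flow setup in lattice units; the function names and setup are ours, not the authors' code):

```python
def permeability(mean_superficial_velocity, kinematic_viscosity, body_force):
    """Darcy's law for body-force-driven creeping flow in lattice units:
    q = (k / nu) * g  =>  k = nu * q / g,
    where q is the flow-direction velocity averaged over the whole domain
    (superficial velocity), nu the kinematic viscosity, and g the body
    force per unit mass driving the flow."""
    return kinematic_viscosity * mean_superficial_velocity / body_force

def to_physical(k_lattice, dx):
    """Permeability scales as length squared: convert a lattice-unit
    permeability to m^2 given the physical size dx (m) of one cell."""
    return k_lattice * dx * dx
```

Because each segmentation standard yields a different solid/pore geometry, the same simulation pipeline produces a different mean velocity, and hence a different permeability, per standard.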
Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko
2017-07-01
Identification of human semen is indispensable for the investigation of sexual assaults. Fluorescence staining methods using commercial kits, such as the series of SPERM HY-LITER™ kits, have been useful to detect human sperm via strong fluorescence. These kits have been examined from various forensic aspects. However, because of a lack of evaluation methods, these studies did not provide objective, or quantitative, descriptions of the results nor clear criteria for the decisions reached. In addition, the variety of validations was considerably limited. In this study, we conducted more advanced validations of SPERM HY-LITER™ Express using our established image analysis method. Use of this method enabled objective and specific identification of fluorescent sperm's spots and quantitative comparisons of the sperm detection performance under complex experimental conditions. For body fluid mixtures, we examined interference with the fluorescence staining from other body fluid components. Effects of sample decomposition were simulated in high humidity and high temperature conditions. Semen with quite low sperm concentrations, such as azoospermia and oligospermia samples, represented the most challenging cases in application of the kit. Finally, the tolerance of the kit against various acidic and basic environments was analyzed. The validations herein provide useful information for the practical applications of the SPERM HY-LITER™ Express kit, which were previously unobtainable. Moreover, the versatility of our image analysis method toward various complex cases was demonstrated.
Weigel, K A; VanRaden, P M; Norman, H D; Grosu, H
2017-12-01
In the early 1900s, breed society herdbooks had been established and milk-recording programs were in their infancy. Farmers wanted to improve the productivity of their cattle, but the foundations of population genetics, quantitative genetics, and animal breeding had not been laid. Early animal breeders struggled to identify genetically superior families using performance records that were influenced by local environmental conditions and herd-specific management practices. Daughter-dam comparisons were used for more than 30 yr and, although genetic progress was minimal, the attention given to performance recording, genetic theory, and statistical methods paid off in future years. Contemporary (herdmate) comparison methods allowed more accurate accounting for environmental factors and genetic progress began to accelerate when these methods were coupled with artificial insemination and progeny testing. Advances in computing facilitated the implementation of mixed linear models that used pedigree and performance data optimally and enabled accurate selection decisions. Sequencing of the bovine genome led to a revolution in dairy cattle breeding, and the pace of scientific discovery and genetic progress accelerated rapidly. Pedigree-based models have given way to whole-genome prediction, and Bayesian regression models and machine learning algorithms have joined mixed linear models in the toolbox of modern animal breeders. Future developments will likely include elucidation of the mechanisms of genetic inheritance and epigenetic modification in key biological pathways, and genomic data will be used with data from on-farm sensors to facilitate precision management on modern dairy farms. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Reid, Matthew W; Hannemann, Nathan P; York, Gerald E; Ritter, John L; Kini, Jonathan A; Lewis, Jeffrey D; Sherman, Paul M; Velez, Carmen S; Drennon, Ann Marie; Bolzenius, Jacob D; Tate, David F
2017-07-01
To compare volumetric results from NeuroQuant® and FreeSurfer in a service member setting. Since the advent of medical imaging, quantification of brain anatomy has been a major research and clinical effort. Rapid advancement of methods to automate quantification and to deploy this information into clinical practice has surfaced in recent years. NeuroQuant® is one such tool that has recently been used in clinical settings. Accurate volumetric data are useful in many clinical indications; therefore, it is important to assess the intermethod reliability and concurrent validity of similar volume quantifying tools. Volumetric data from 148 U.S. service members across three different experimental groups participating in a study of mild traumatic brain injury (mTBI) were examined. Groups included mTBI (n = 71), posttraumatic stress disorder (n = 22), or a noncranial orthopedic injury (n = 55). Correlation coefficients and nonparametric group mean comparisons were used to assess reliability and concurrent validity, respectively. Comparison of these methods across our entire sample demonstrates generally fair to excellent reliability as evidenced by large intraclass correlation coefficients (ICC = .4 to .99), but little concurrent validity as evidenced by significantly different Mann-Whitney U comparisons for 26 of 30 brain structures measured. While reliability between the two segmenting tools is fair to excellent, volumetric outcomes are statistically different between the two methods. As suggested by both developers, structure segmentation should be visually verified prior to clinical use and rigor should be used when interpreting results generated by either method. Copyright © 2017 by the American Society of Neuroimaging.
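The two statistics used above, a reliability correlation and a nonparametric group mean comparison, can be sketched with a stdlib-only reimplementation. This is illustrative, not the study's analysis code: it uses Pearson correlation as a simple consistency measure (the study used intraclass correlations) and assumes no tied values in the U test:

```python
import math

def pearson_r(x, y):
    """Pearson correlation as a simple consistency (reliability) measure."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def mann_whitney_u(x, y):
    """Mann-Whitney U with a normal-approximation two-sided p-value.
    Assumes no ties (ties would need a rank correction)."""
    allv = sorted(x + y)
    rank = {v: i + 1 for i, v in enumerate(allv)}
    n1, n2 = len(x), len(y)
    r1 = sum(rank[v] for v in x)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p
```

The pattern in the study, high correlation but significantly different distributions, arises whenever two methods rank subjects consistently while one is systematically offset from the other.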
VERA Core Simulator methodology for pressurized water reactor cycle depletion
Kochunas, Brendan; Collins, Benjamin; Stimpson, Shane; ...
2017-01-12
This paper describes the methodology developed and implemented in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) to perform high-fidelity, pressurized water reactor (PWR), multicycle, core physics calculations. Depletion of the core with pin-resolved power and nuclide detail is a significant advance in the state of the art for reactor analysis, providing the level of detail necessary to address the problems of the U.S. Department of Energy Nuclear Reactor Simulation Hub, the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS has three main components: the neutronics solver MPACT, the thermal-hydraulic (T-H) solver COBRA-TF (CTF), and the nuclide transmutation solver ORIGEN. This paper focuses on MPACT and provides an overview of the resonance self-shielding methods, macroscopic-cross-section calculation, two-dimensional/one-dimensional (2-D/1-D) transport, nuclide depletion, T-H feedback, and other supporting methods representing a minimal set of the capabilities needed to simulate high-fidelity models of a commercial nuclear reactor. Results are presented from the simulation of a model of the first cycle of Watts Bar Unit 1. The simulation is within 16 parts per million boron (ppmB) reactivity for all state points compared to cycle measurements, with an average reactivity bias of <5 ppmB for the entire cycle. Comparisons to cycle 1 flux map data are also provided, and the average 2-D root-mean-square (rms) error during cycle 1 is 1.07%. To demonstrate the multicycle capability, a state point at beginning of cycle (BOC) 2 was also simulated and compared to plant data. The comparison of the cycle 2 BOC state has a reactivity difference of +3 ppmB from measurement, and the 2-D rms of the comparison in the flux maps is 1.77%. Lastly, these results provide confidence in VERA-CS’s capability to perform high-fidelity calculations for practical PWR reactor problems.
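The flux-map figures of merit quoted above reduce to a simple statistic. As a hedged sketch (the abstract does not spell out the exact rms definition, so the percent relative-difference convention below is an assumption):

```python
import math

def percent_rms_difference(predicted, measured):
    """Percent root-mean-square relative difference between predicted and
    measured detector responses, one common convention for the '2-D rms'
    quoted in flux-map comparisons (the paper's definition may differ)."""
    rel = [(p - m) / m for p, m in zip(predicted, measured)]
    return 100.0 * math.sqrt(sum(r * r for r in rel) / len(rel))
```

Under this convention, a 1.07% rms means the pin-power map deviates from measurement by about one percent in a mean-square sense across all instrumented locations.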
Comparison of advanced engines for parabolic dish solar thermal power plants
NASA Technical Reports Server (NTRS)
Fujita, T.; Bowyer, J. M.; Gajanana, B. C.
1980-01-01
A paraboloidal dish solar thermal power plant produces electrical energy by a two-step conversion process. The collector subsystem is composed of a two-axis tracking paraboloidal concentrator and a cavity receiver. The concentrator focuses intercepted sunlight (direct, normal insolation) into a cavity receiver whose aperture encircles the focal point of the concentrator. At the internal wall of the receiver the electromagnetic radiation is converted to thermal energy. A heat engine/generator assembly then converts the thermal energy captured by the receiver to electricity. Developmental activity has been concentrated on small power modules which employ 11- to 12-meter diameter dishes to generate nominal power levels of approximately 20 kWe. A comparison of advanced heat engines for the dish power module is presented in terms of the performance potential of each engine with its requirements for advanced technology development. Three advanced engine possibilities are the Brayton (gas turbine), Brayton/Rankine combined cycle, and Stirling engines.
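The nominal module rating quoted above follows from a simple efficiency chain: aperture area × direct normal insolation × collector efficiency × engine/generator efficiency. A minimal sketch, in which the insolation and efficiency values are illustrative assumptions rather than figures from the report:

```python
# Rough power estimate for a parabolic dish solar-thermal module.
# The 11 m dish diameter matches the abstract; the insolation and
# efficiency values below are illustrative assumptions, not source data.
import math

def module_output_kwe(dish_diameter_m, insolation_w_m2,
                      collector_eff, engine_gen_eff):
    """Electrical output of one dish module in kWe."""
    aperture_area = math.pi * (dish_diameter_m / 2.0) ** 2
    thermal_w = insolation_w_m2 * aperture_area * collector_eff
    return thermal_w * engine_gen_eff / 1000.0

# Assumed: 1000 W/m^2 direct normal insolation, 85% collector
# (concentrator x receiver) efficiency, 25% engine/generator efficiency.
power = module_output_kwe(11.0, 1000.0, 0.85, 0.25)
```

With these assumed values an 11-meter dish lands near the nominal 20 kWe level cited in the abstract.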
Innovative Networking Concepts Tested on the Advanced Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Friedman, Daniel; Gupta, Sonjai; Zhang, Chuanguo; Ephremides, Anthony
1996-01-01
This paper describes a program of experiments conducted over the Advanced Communications Technology Satellite (ACTS) and the associated T1-VSAT (very small aperture terminal). The experiments were motivated by the commercial potential of low-cost receive-only satellite terminals that can operate in a hybrid network environment, and by the desire to demonstrate frame relay technology over satellite networks. The first experiment tested highly adaptive methods of satellite bandwidth allocation in an integrated voice-data service environment. The second involved a comparison of forward error correction (FEC) and automatic repeat request (ARQ) methods of error control for satellite communication, with emphasis on the advantage that a hybrid architecture provides, especially in the case of multicasts. Finally, the third experiment demonstrated hybrid access to databases and compared the performance of internetworking protocols for interconnecting local area networks (LANs) via satellite. A custom unit termed the frame relay access switch (FRACS) was developed by COMSAT Laboratories for these experiments; the preparation and conduct of these experiments involved a total of 20 people from the University of Maryland, the University of Colorado and COMSAT Laboratories, from late 1992 until 1995.
Simulation of Guided Wave Interaction with In-Plane Fiber Waviness
NASA Technical Reports Server (NTRS)
Leckey, Cara A. C.; Juarez, Peter D.
2016-01-01
Reducing the timeline for certification of composite materials and enabling the expanded use of advanced composite materials for aerospace applications are two primary goals of NASA's Advanced Composites Project (ACP). A key technical challenge area for accomplishing these goals is the development of rapid composite inspection methods with improved defect characterization capabilities. Ongoing work at NASA Langley is focused on expanding ultrasonic simulation capabilities for composite materials. Simulation tools can be used to guide the development of optimal inspection methods. Custom code based on the elastodynamic finite integration technique is currently being developed and implemented to study ultrasonic wave interaction with manufacturing defects, such as in-plane fiber waviness (marcelling). This paper describes details of validation comparisons performed to enable simulation of guided wave propagation in composites containing fiber waviness. Simulation results for guided wave interaction with in-plane fiber waviness are also discussed. The results show that the wavefield is affected by the presence of waviness both on the surface containing the fiber waviness and on the opposite surface.
A new learning paradigm: learning using privileged information.
Vapnik, Vladimir; Vashist, Akshay
2009-01-01
In the Afterword to the second edition of the book "Estimation of Dependences Based on Empirical Data" by V. Vapnik, an advanced learning paradigm called Learning Using Hidden Information (LUHI) was introduced. This Afterword also suggested an extension of the SVM method (the so-called SVM(gamma)+ method) to implement algorithms which address the LUHI paradigm (Vapnik, 1982-2006, Sections 2.4.2 and 2.5.3 of the Afterword). See also (Vapnik, Vashist, & Pavlovitch, 2008, 2009) for further development of the algorithms. In contrast to the existing machine learning paradigm, where a teacher does not play an important role, the advanced learning paradigm considers some elements of human teaching. In the new paradigm, along with examples, a teacher can provide students with hidden information that exists in explanations, comments, comparisons, and so on. This paper discusses details of the new paradigm and corresponding algorithms, introduces some new algorithms, considers several specific forms of privileged information, demonstrates the superiority of the new learning paradigm over the classical learning paradigm when solving practical problems, and discusses general questions related to the new ideas.
Simulation of guided wave interaction with in-plane fiber waviness
NASA Astrophysics Data System (ADS)
Leckey, Cara A. C.; Juarez, Peter D.
2017-02-01
Reducing the timeline for certification of composite materials and enabling the expanded use of advanced composite materials for aerospace applications are two primary goals of NASA's Advanced Composites Project (ACP). A key technical challenge area for accomplishing these goals is the development of rapid composite inspection methods with improved defect characterization capabilities. Ongoing work at NASA Langley is focused on expanding ultrasonic simulation capabilities for composite materials. Simulation tools can be used to guide the development of optimal inspection methods. Custom code based on the elastodynamic finite integration technique is currently being developed and implemented to study ultrasonic wave interaction with manufacturing defects, such as in-plane fiber waviness (marcelling). This paper describes details of validation comparisons performed to enable simulation of guided wave propagation in composites containing fiber waviness. Simulation results for guided wave interaction with in-plane fiber waviness are also discussed. The results show that the wavefield is affected by the presence of waviness both on the surface containing the fiber waviness and on the opposite surface.
Subtype Diagnosis of Primary Aldosteronism: Is Adrenal Vein Sampling Always Necessary?
Buffolo, Fabrizio; Monticone, Silvia; Williams, Tracy A.; Rossato, Denis; Burrello, Jacopo; Tetti, Martina; Veglio, Franco; Mulatero, Paolo
2017-01-01
Aldosterone producing adenoma and bilateral adrenal hyperplasia are the two most common subtypes of primary aldosteronism (PA) that require targeted and distinct therapeutic approaches: unilateral adrenalectomy or lifelong medical therapy with mineralocorticoid receptor antagonists. According to the 2016 Endocrine Society Guideline, adrenal venous sampling (AVS) is the gold standard test to distinguish between unilateral and bilateral aldosterone overproduction and therefore, to safely refer patients with PA to surgery. Despite significant advances in the optimization of the AVS procedure and the interpretation of hormonal data, a standardized protocol across centers is still lacking. Alternative methods are sought to either localize an aldosterone producing adenoma or to predict the presence of unilateral disease and thereby substantially reduce the number of patients with PA who proceed to AVS. In this review, we summarize the recent advances in subtyping PA for the diagnosis of unilateral and bilateral disease. We focus on the developments in the AVS procedure, the interpretation criteria, and comparisons of the performance of AVS with the alternative methods that are currently available. PMID:28420172
Comparison of Passive Microwave-Derived Early Melt Onset Records on Arctic Sea Ice
NASA Technical Reports Server (NTRS)
Bliss, Angela C.; Miller, Jeffrey A.; Meier, Walter N.
2017-01-01
Two long records of melt onset (MO) on Arctic sea ice from passive microwave brightness temperatures (Tbs) obtained by a series of satellite-borne instruments are compared. The Passive Microwave (PMW) method and the Advanced Horizontal Range Algorithm (AHRA) detect the increase in emissivity that occurs when liquid water develops around snow grains at the onset of early melting on sea ice. The timing of MO on Arctic sea ice influences the amount of solar radiation absorbed by the ice-ocean system throughout the melt season by reducing surface albedos in the early spring. This work presents a thorough comparison of these two methods for the time series of MO dates from 1979 through 2012. The methods are first compared using the published data as a baseline comparison of the publicly available data products. A second comparison is performed on adjusted MO dates we produced to remove known differences in inter-sensor calibration of Tbs and in the masking techniques used to develop the original MO date products. These adjustments result in a more consistent set of input Tbs for the algorithms. Tests of significance indicate that the trends in the time series of annual mean MO dates for the PMW and AHRA are statistically different for the majority of the Arctic Ocean, including the Laptev, East Siberian, Chukchi, Beaufort, and central Arctic regions, with mean differences as large as 38.3 days in the Barents Sea. Trend agreement improves for our more consistent MO dates for nearly all regions. Mean differences remain large, primarily due to differing sensitivities of in-algorithm thresholds and larger uncertainties in thin-ice regions.
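The detection principle described above, a jump in emissivity (and hence brightness temperature) when liquid water first appears, can be caricatured as a threshold test on a Tb time series. The sketch below is a toy single-channel detector with an assumed 10 K threshold; the actual PMW and AHRA algorithms use more elaborate multi-channel criteria:

```python
# Toy melt-onset detector: flag the first day-of-year on which the
# brightness temperature (Tb) rise over a winter baseline exceeds a
# threshold, mimicking the emissivity jump when liquid water appears.
# The 10 K threshold and single-channel test are illustrative assumptions.

def melt_onset_doy(doys, tbs, baseline_k, threshold_k=10.0):
    """Return the first day-of-year where Tb - baseline > threshold, else None."""
    for doy, tb in zip(doys, tbs):
        if tb - baseline_k > threshold_k:
            return doy
    return None

doys = [120, 121, 122, 123, 124]
tbs = [238.0, 239.5, 241.0, 252.0, 255.0]  # kelvin, synthetic series
onset = melt_onset_doy(doys, tbs, baseline_k=240.0)  # first Tb jump > 10 K
```

Inter-sensor calibration differences shift the Tb values themselves, which is why the adjusted, consistent input Tbs described above matter for trend comparisons.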
Panda, Debabrata; Manickam, Sivakumar
2017-05-01
Sonophotocatalysis (SPC) is considered to be one of the important wastewater treatment techniques and has therefore attracted the attention of researchers seeking to eliminate recalcitrant hazardous organic pollutants from the aqueous phase. In general, SPC refers to the integrated use of ultrasonic sound waves, ultraviolet radiation, and the addition of a semiconductor material which functions as a photocatalyst. Current research has brought numerous improvements to SPC-based treatment by opting for visible light irradiation, nanocomposite catalysts, and numerous catalyst supports for better stability and performance. This review provides a critical analysis of these recent advancements. The efficiency of SPC-based treatments has been analyzed against the individual methods, i.e. sonolysis, photocatalysis, sonophotolysis, sono-ozone, photo-Fenton and sono-Fenton. In addition, the essential parameters, such as solution temperature, concentrations of initial pollutant and catalyst, initial pH, dosages of Fenton's reagent and hydrogen peroxide (H2O2), ultrasonic power density, gas sparging, and the addition of radical scavengers, carbon tetrachloride, and methanol, are discussed with suggestions for the selection of optimum parameters. A higher synergistic pollutant removal rate has been reported for SPC treatment compared to the individual methods, and the implementation of doping materials and supports for the photocatalyst enhances the pollutant degradation rate of doped sonophotocatalysis (DSPC) under both visible and UV irradiation. Overall, SPC- and DSPC-based wastewater treatments are emerging as potential techniques, as they provide an effective solution for removing recalcitrant organic pollutants, and continuing research is expected to bring superior treatment efficiency using these advanced technologies.
Recent advancements in DSPC and the mechanisms behind the synergistic enhancement of the pollutant degradation rate are discussed with justifications, and possible future work on SPC-based treatment is suggested. This review will be useful for selecting an SPC-based method because of the direct comparisons it draws among published results, and its coverage of current advancements supports low-cost, large-scale wastewater treatment applications. Copyright © 2016 Elsevier B.V. All rights reserved.
GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections
NASA Astrophysics Data System (ADS)
Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian
2017-09-01
The inter-comparison of the reflective solar bands between instruments onboard a geostationary orbit satellite and onboard a low Earth orbit satellite is very helpful for assessing their calibration consistency. GOES-R was launched on November 19, 2016, and Himawari 8 was launched on October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R was renamed GOES-16 on November 29, 2016, after reaching geostationary orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for their product quality enhancement. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of stable and uniform calibration sites provides comparison at an appropriate reflectance level, accurate adjustment for band spectral coverage differences, reduction of the impact of pixel mismatching, and consistency of BRDF and atmospheric correction. The site in this work is a desert site in Australia (latitude 29.0° S; longitude 139.8° E). Due to the difference in solar and view angles, two corrections are applied to obtain comparable measurements. The first is the atmospheric scattering correction. The satellite sensor measurements are top-of-atmosphere reflectance; the scattering, especially Rayleigh scattering, should be removed so that the ground reflectance can be derived. Secondly, the angle differences magnify the BRDF effect, so the ground reflectance should be corrected to a common geometry to obtain comparable measurements.
The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum (6SV) model, and the BRDF correction is performed using a semi-empirical model. AHI band 1 (0.47 μm) shows good agreement with VIIRS band M3, with a difference of 0.15%. AHI band 5 (1.69 μm) shows the largest difference in comparison with VIIRS band M10.
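The two corrections can be summarized as inverting a simplified top-of-atmosphere reflectance model and then normalizing to a common viewing geometry. The scalar path reflectance, transmittance, and BRDF factors below are illustrative assumptions; the work itself uses the 6SV radiative transfer code and a semi-empirical BRDF model:

```python
# Simplified view of the two corrections applied before comparing sensors:
# (1) remove an atmospheric (Rayleigh) path-reflectance term, and
# (2) normalize BRDF effects to a common reference geometry.
# All numeric values here are illustrative assumptions.

def surface_reflectance(toa_refl, path_refl, transmittance):
    """Invert a simple TOA model: toa = path + t * surface."""
    return (toa_refl - path_refl) / transmittance

def to_reference_geometry(surf_refl, brdf_factor):
    """Scale reflectance from the observed geometry to a reference one."""
    return surf_refl * brdf_factor

# Two sensors viewing the same desert site with different geometries:
a = to_reference_geometry(surface_reflectance(0.32, 0.05, 0.90), 1.02)
b = to_reference_geometry(surface_reflectance(0.30, 0.03, 0.92), 0.98)
percent_diff = 100.0 * (a - b) / b
```

Only after both sensors' measurements are reduced to surface reflectance in a common geometry does a band-to-band percent difference become a meaningful calibration comparison.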
Roper, Fred W.
1974-01-01
This final report compares career characteristics of former trainees employed in medical libraries in 1971 with those of another group of professional medical librarians who did not enter medical librarianship from special training programs. Career characteristics include career advancement (position level, number of people supervised, salary level), professional utilization (tasks performed), and professional activity (association memberships and offices, number of journals read, continuing education activity). The comparison of characteristics for the two groups showed many similarities. A major difference appeared in the career advancement comparison. For the former trainees, economic advancement seems less dependent on upward movement in line positions. This suggests the possibility of two career tracks available to them. PMID:4462688
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Vijay; Denton, David; Sharma, Pradeep
The key objective for this project was to evaluate the potential to achieve substantial reductions in the production cost of H2-rich syngas via coal gasification with near-zero emissions, due to the cumulative and synergistic benefits realized when multiple advanced technologies are integrated into the overall conversion process. In this project, Aerojet Rocketdyne's (AR's) advanced gasification technology (currently being offered as R-GAS™) and RTI International's (RTI's) advanced warm syngas cleanup technologies were evaluated via a number of comparative techno-economic case studies. AR's advanced gasification technology consists of a dry solids pump and a compact gasifier system. Based on the unique design of this gasifier, it has been shown to reduce the capital cost of the gasification block by between 40 and 50%. At the start of this project, actual experimental work had been demonstrated through pilot plant systems for both the gasifier and dry solids pump. RTI's advanced warm syngas cleanup technologies consist primarily of RTI's Warm Gas Desulfurization Process (WDP) technology, which effectively allows decoupling of the sulfur and CO2 removal, allowing for more flexibility in the selection of the CO2 removal technology, plus associated advanced technologies for direct sulfur recovery and water gas shift (WGS). WDP has been demonstrated at pre-commercial scale using an activated amine carbon dioxide recovery process, which would not have been possible if a majority of the sulfur had not been removed from the syngas by WDP. This pre-commercial demonstration of RTI's advanced warm syngas cleanup system was conducted in parallel to the activities on this project. The technical data and cost information from this pre-commercial demonstration were extensively used in this project during the techno-economic analysis. With this project, both of RTI's advanced WGS technologies were investigated.
Because RTI's advanced fixed-bed WGS (AFWGS) process was successfully implemented in the WDP pre-commercial demonstration test mentioned above, this technology was used as part of RTI's advanced warm syngas technology package for the techno-economic analyses for this project. RTI's advanced transport-reactor-based WGS (ATWGS) process was still conceptual at the start of this project, but one of the tasks for this project was to evaluate the technical feasibility of this technology. In each of the three application-based comparison studies conducted as part of this project, the reference case was based on an existing Department of Energy National Energy Technology Laboratory (DOE/NETL) system study. Each of these reference cases used existing commercial technology, and each system resulted in >90% carbon capture. In the comparison studies for the use of the hydrogen-rich syngas generated in either an Integrated Gasification Combined Cycle (IGCC) or a Coal-to-Methanol (CTM) plant, the comparison cases consisted of the reference case, a case with the integration of each individual advanced technology (either AR or RTI), and finally a case with the integration of all the advanced technologies (AR and RTI combined). In the Coal-to-Liquids (CTL) comparison study, there were only three cases: a reference case, a case with just RTI's advanced syngas cleaning technology, and a case with AR's and RTI's advanced technologies. The results from these comparison studies showed that the integration of the advanced technologies did result in substantial benefits, and by far the greatest benefits were achieved for cases integrating all the advanced technologies. For the IGCC study, the fully integrated case resulted in a 1.4% net efficiency improvement, an 18% reduction in capital cost per kW of capacity, a 12% reduction in the operating cost per kWh, and a 75–79% reduction in sulfur emissions.
For the CTM case, the fully integrated plant resulted in a 22% reduction in capital cost, a 13% reduction in operating costs, a >99% net reduction in sulfur emissions, and a reduction of 13–15% in CO2 emissions. Because the capital cost represents over 60% of the methanol Required Selling Price (RSP), the significant reduction in the capital cost for the advanced technology case resulted in an 18% reduction in methanol RSP. For the CTL case, the fully integrated plant resulted in a 16% reduction in capital cost, which represented a 13% reduction in diesel RSP. Finally, the technical feasibility analysis of RTI's ATWGS process demonstrated that a fluid-bed catalyst with sufficient attrition resistance and WGS activity could be made and that the process achieved about a 24% reduction in capital cost compared to a conventional fixed-bed commercial process.
Advanced general aviation engine/airframe integration study
NASA Technical Reports Server (NTRS)
Zmroczek, L. A.
1982-01-01
A comparison of the in-airframe performance and efficiency of the advanced engine concepts is presented. The results indicate that the proposed advanced engines can significantly improve the performance and economy of general aviation airplanes. The engine found to be most promising is the highly advanced version of a rotary combustion (Wankel) engine. The low weight and fuel consumption of this engine, as well as its small size, make it suited for aircraft use.
A Method for Large Eddy Simulation of Acoustic Combustion Instabilities
NASA Astrophysics Data System (ADS)
Wall, Clifton; Pierce, Charles; Moin, Parviz
2002-11-01
A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Both of these characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the Combined Conserved Scalar/Level Set Flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results to the experiments of Besson et al. will be presented.
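The change from a Poisson to a Helmholtz pressure equation can be illustrated with a generic one-dimensional discretization. The sketch below is a toy Dirichlet problem, not the paper's scheme; the coefficient `lam` stands in for the compressibility term (of order 1/(c·Δt)²) that implicit time advancement introduces into the pressure operator:

```python
# 1-D sketch of the modified pressure equation: in a compressible
# pressure-correction scheme the Poisson operator gains a Helmholtz
# term, i.e. d2p/dx2 - lam*p = f. Uniform grid, p = 0 at both ends.
# The coefficients and boundary conditions are illustrative assumptions.
import numpy as np

def solve_helmholtz_1d(f, dx, lam):
    """Solve p'' - lam*p = f on interior points with zero Dirichlet ends."""
    n = len(f)
    main = np.full(n, -2.0 / dx**2 - lam)      # diagonal of the operator
    off = np.full(n - 1, 1.0 / dx**2)          # off-diagonals (2nd difference)
    a = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(a, f)

# Manufactured solution p = sin(pi x) on (0,1): p'' - lam*p = -(pi^2 + lam)*p
n, lam = 99, 4.0
x = np.linspace(0.0, 1.0, n + 2)[1:-1]         # interior grid points
dx = x[1] - x[0]
f = -(np.pi**2 + lam) * np.sin(np.pi * x)
p = solve_helmholtz_1d(f, dx, lam)
err = float(np.max(np.abs(p - np.sin(np.pi * x))))
```

The dense solve above is only for clarity; a production solver would exploit the tridiagonal (or sparse multidimensional) structure of the Helmholtz operator.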
Advanced low-floor vehicle (ALFV) specification research.
DOT National Transportation Integrated Search
2015-08-01
This report details the results of research on market comparison, operational cost efficiencies, and prototype tests conducted on a novel design for an Advanced Low Floor Vehicle (ALFV) flex-route transit bus. Section I describes how the need for ...
Laser resist screening for iP3500/3600 replacement for advanced reticle fabrication
NASA Astrophysics Data System (ADS)
Ota, Fumiko; Kobayashi, Hideo; Higuchi, Takao; Asakawa, Keishi
2001-01-01
This paper describes resist screening results for an iP3500/3600 replacement for advanced laser reticle fabrication, along with a proposal for optimizing resist coating thickness for the next generation. THMR-M100 (TOK) showed the best pattern profile, with sharp shoulders and almost no footing, and a newly developed resist, a joint work between HOYA and a resist maker, showed the best adhesion to chrome. Unfortunately, this screening found no candidate that exceeded iP3500 in linearity and iso-dense bias (IDB), both indispensable for advanced laser reticle fabrication. As regards coating thickness, we selected 307.5 nm as a candidate standard for the future, considering resist resolution performance such as linearity, γp(0-80) value and undercut, in conjunction with the risk of clear pinhole defects. For a more precise comparison of iso-dense bias performance, the examination method should be standardized because of the design-pattern dependence of IDB.
NASA Astrophysics Data System (ADS)
Krishnan, S.; Rawindran, H.; Sinnathambi, C. M.; Lim, J. W.
2017-06-01
Due to the scarcity of water, it has become a necessity to improve the quality of wastewater discharged into the environment. Conventional wastewater treatment can be a physical, chemical, and/or biological process, or in some cases a combination of these operations. The main purpose of wastewater treatment is to eliminate nutrients, solids, and organic compounds from effluents. Current wastewater treatment technologies are deemed ineffective in the complete removal of pollutants, particularly organic matter. In many cases, these organic compounds are resistant to conventional treatment methods, thus creating the necessity for tertiary treatment. Advanced oxidation processes (AOPs) constitute a promising treatment technology for the management of wastewater. AOPs are characterised by a common chemical feature: they utilize highly reactive hydroxyl radicals to achieve complete mineralization of the organic pollutants into carbon dioxide and water. This paper delineates the advanced oxidation processes currently used for the remediation of water and wastewater. It also provides a cost estimate for installing and running an AOP system, with the costs separated into three categories: capital, operational, and operating & maintenance.
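The three-category cost breakdown mentioned above can be sketched as a simple levelized-cost calculation. All figures and the straight-line amortization are illustrative assumptions, not values from the paper:

```python
# Sketch of a three-bucket AOP cost estimate: capital cost amortized over
# plant life, plus annual operating and maintenance costs, expressed per
# cubic meter of wastewater treated. All numbers are illustrative.

def aop_cost_per_m3(capital, operating_per_yr, om_per_yr,
                    life_yr, throughput_m3_per_yr):
    """Levelized treatment cost in currency units per m^3 treated."""
    annualized_capital = capital / life_yr   # straight-line, no discounting
    total_annual = annualized_capital + operating_per_yr + om_per_yr
    return total_annual / throughput_m3_per_yr

# Hypothetical plant: $2M capital, 20-year life, 500,000 m^3/yr throughput.
cost = aop_cost_per_m3(capital=2_000_000, operating_per_yr=150_000,
                       om_per_yr=50_000, life_yr=20,
                       throughput_m3_per_yr=500_000)
```

A real estimate would discount the capital charge and break operating cost into energy, reagent (e.g. H2O2), and catalyst replacement terms, but the bucket structure is the same.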
Advances in genome-wide RNAi cellular screens: a case study using the Drosophila JAK/STAT pathway
2012-01-01
Background Genome-scale RNA-interference (RNAi) screens are becoming ever more common gene discovery tools. However, whilst every screen identifies interacting genes, less attention has been given to how factors such as library design and post-screening bioinformatics may be affecting the data generated. Results Here we present a new genome-wide RNAi screen of the Drosophila JAK/STAT signalling pathway undertaken in the Sheffield RNAi Screening Facility (SRSF). This screen was carried out using a second-generation, computationally optimised dsRNA library and analysed using current methods and bioinformatic tools. To examine advances in RNAi screening technology, we compare this screen to a biologically very similar screen undertaken in 2005 with a first-generation library. Both screens used the same cell line, reporters and experimental design, with the SRSF screen identifying 42 putative regulators of JAK/STAT signalling, 22 of which were verified in a secondary screen and 16 of which were verified with an independent probe design. Following reanalysis of the original screen data, comparison of the two gene lists allows us to estimate false discovery rates in the SRSF data and to assess the off-target effects (OTEs) associated with both libraries. We discuss the differences and similarities between the resulting data sets and examine the relative improvements in gene discovery protocols. Conclusions Our work represents one of the first direct comparisons between first- and second-generation libraries and shows that modern library designs, together with methodological advances, have had a significant influence on genome-scale RNAi screens. PMID:23006893
NASA Technical Reports Server (NTRS)
Veitch, J.; Raymond, V.; Farr, B.; Farr, W.; Graff, P.; Vitale, S.; Aylott, B.; Blackburn, K.; Christensen, N.; Coughlin, M.
2015-01-01
The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star (BNS), a neutron star - black hole binary (NSBH) and a binary black hole (BBH), where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence (CBC) parameter space.
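The prior-draw calibration test described above (checking that Bayesian credible intervals have the advertised frequentist coverage when the true parameters are drawn from the prior) can be illustrated with a toy conjugate-Gaussian model. The model and numbers are illustrative assumptions, not LALInference itself:

```python
# Toy version of the prior-draw coverage test: draw true parameters from
# the prior, simulate one noisy datum, form the Bayesian 90% credible
# interval (analytic for a Gaussian likelihood with a Gaussian prior),
# and count how often the interval contains the truth. Illustrative only.
import random

random.seed(1)
prior_mu, prior_sigma, noise_sigma, n_trials = 0.0, 1.0, 0.5, 1000
z90 = 1.645  # approximate two-sided 90% normal quantile

hits = 0
for _ in range(n_trials):
    truth = random.gauss(prior_mu, prior_sigma)     # draw from the prior
    datum = random.gauss(truth, noise_sigma)        # simulate the observation
    # Conjugate Gaussian posterior for the unknown mean:
    post_var = 1.0 / (1.0 / prior_sigma**2 + 1.0 / noise_sigma**2)
    post_mean = post_var * (prior_mu / prior_sigma**2 + datum / noise_sigma**2)
    half_width = z90 * post_var**0.5
    hits += (post_mean - half_width) <= truth <= (post_mean + half_width)

coverage = hits / n_trials  # should be close to the nominal 0.90
```

When the priors are correct, the empirical coverage matches the credible level, which is the statistical statement the 100-signal study above verifies for the much higher-dimensional CBC parameter space.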
Advanced Launch Technology Life Cycle Analysis Using the Architectural Comparison Tool (ACT)
NASA Technical Reports Server (NTRS)
McCleskey, Carey M.
2015-01-01
Life cycle technology impact comparisons for nanolauncher technology concepts were performed using an Affordability Comparison Tool (ACT) prototype. Examined are cost drivers and whether technology investments can dramatically affect the life cycle characteristics. Primary among the selected applications was the prospect of improving nanolauncher systems. As a result, findings and conclusions are documented for ways of creating more productive and affordable nanolauncher systems; e.g., an Express Lane-Flex Lane concept is forwarded, and the beneficial effect of incorporating advanced integrated avionics is explored. Also, a Functional Systems Breakdown Structure (F-SBS) was developed to derive consistent definitions of the flight and ground systems for both system performance and life cycle analysis. Further, a comprehensive catalog of ground segment functions was created.
Shearer, Jane; McManners, Joseph
2009-07-01
Innovations in periradicular surgery for failed orthograde root canal treatment have been well documented, but we know of no prospective studies that have compared the success rates of conventional methods with these presumed advances. In this prospective randomised trial we compare the use of an ultrasonic retrotip with a microhead bur in the preparation of a retrograde cavity. Outcome was assessed clinically by estimation of pain, swelling, and sinus, and radiographically by looking at infill of bone and the retrograde root filling 2 weeks and 6 months postoperatively. Both methods used other surgical techniques, including microinstruments, to place the retrograde root filling. The success rate of the ultrasonic method (all 26 patients) was higher than that of the microhead method (19 of 21 patients). A larger study with longer follow-up is required to consolidate this evidence.
Islam, Asef; Oldham, Michael J; Wexler, Anthony S
2017-11-01
Mammalian lungs are comprised of large numbers of tracheobronchial airways that transition from the trachea to alveoli. Studies as wide ranging as pollutant deposition and lung development rely on accurate characterization of these airways. Advancements in CT imaging and the value of computational approaches in eliminating the burden of manual measurement are providing increased efficiency in obtaining this geometric data. In this study, we compare an automated method to a manual one for the first six generations of three Balb/c mouse lungs. We find good agreement between manual and automated methods and that much of the disagreement can be attributed to method precision. Using the automated method, we then provide anatomical data for the entire tracheobronchial airway tree of three Balb/c mice. Anat Rec, 300:2046-2057, 2017. © 2017 Wiley Periodicals, Inc.
Numerical methods for engine-airframe integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murthy, S.N.B.; Paynter, G.C.
1986-01-01
Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: the scientific computing environment for the 1980s, an overview of the prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel method to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology for supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.
Comparison of Different Methods of Grading a Level Turn Task on a Flight Simulator
NASA Technical Reports Server (NTRS)
Heath, Bruce E.; Crier, Tomyka
2003-01-01
With the advancements in the computing power of personal computers, PC-based flight simulators and trainers have opened new avenues in the training of airplane pilots. It may be desirable to have the flight simulator make a quantitative evaluation of the progress of a pilot's training, thereby reducing the physical requirement of the flight instructor, who must otherwise watch every flight. In an experiment, university students conducted six different flights, each consisting of two level turns. The flights were three minutes in duration. By evaluating videotapes, two certified flight instructors provided separate letter grades for each turn. These level turns were also evaluated using two other, computer-based grading methods. One method determined automated grades based on prescribed tolerances in bank angle, airspeed, and altitude. The other method used deviations in altitude and bank angle to compute a performance index and performance grades.
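The prescribed-tolerance grading idea described above can be sketched as follows; the target values, tolerances, and letter-grade cutoffs here are illustrative assumptions, not the study's actual numbers.

```python
# Hypothetical sketch of tolerance-based automated grading of a level turn.
# Each sample is (bank_deg, altitude_ft, airspeed_kt); targets/tolerances assumed.

def grade_level_turn(samples, target_bank=30.0, target_alt=3000.0,
                     target_ias=100.0, tol_bank=5.0, tol_alt=100.0, tol_ias=10.0):
    """Return a letter grade from the fraction of time samples that stay
    within all three prescribed tolerances simultaneously."""
    within = sum(
        1 for bank, alt, ias in samples
        if abs(bank - target_bank) <= tol_bank
        and abs(alt - target_alt) <= tol_alt
        and abs(ias - target_ias) <= tol_ias
    )
    frac = within / len(samples)
    if frac >= 0.9:
        return "A"
    if frac >= 0.75:
        return "B"
    if frac >= 0.6:
        return "C"
    return "D"
```

A deviation-based performance index, the second automated method mentioned, would instead accumulate the magnitudes of the altitude and bank-angle errors over the maneuver.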
NASA Technical Reports Server (NTRS)
Holcomb, L. B.; Degrey, S. P.
1973-01-01
This paper addresses the comparison of several candidate auxiliary-propulsion systems and system combinations for an advanced synchronous satellite. Economic selection techniques, evolved at the Jet Propulsion Laboratory, are used as a basis for system option comparisons. Electric auxiliary-propulsion types considered include pulsed plasma and ion bombardment, with hydrazine systems used as a state-of-the-art reference. Current as well as projected electric-propulsion system performance data are used, along with projected hydrazine system costs resulting from NASA standardization program projections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Revell, J.D.; Tullis, R.H.
1977-08-01
The advantages of a propfan-powered aircraft for the commercial air transportation system were assessed by comparison with an equivalent turbofan transport. Comparisons were made on the basis of fuel utilization and operating costs, as well as aircraft weight and size. The advantages of the propfan aircraft in fuel utilization and operating costs were established by considering: (1) incorporation of propfan performance and acoustic data; (2) revised mission profiles (longer design range and a reduction in cruise speed); and (3) utilization of alternate and advanced technology engines.
Orbit transfer vehicle engine study, phase A, extension 1: Volume 2: Study results
NASA Technical Reports Server (NTRS)
Mellish, J. A.
1981-01-01
Because of the advantage of the Advanced Expander Cycle Engine brought out in initial studies, further design optimization and comparative analyses were undertaken. The major results and conclusion derived are summarized. The primary areas covered are (1) thrust chamber geometry optimization, (2) expander cycle optimization, (3) alternate low thrust capability, (4) safety and reliability, (5) development risk comparison, and (6) cost comparisons. All of the results obtained were used to baseline the initial design concept for the OTV Advanced Expander Cycle Engine Point Design Study.
NASA Technical Reports Server (NTRS)
Smith, Marilyn J.; Lim, Joon W.; vanderWall, Berend G.; Baeder, James D.; Biedron, Robert T.; Boyd, D. Douglas, Jr.; Jayaraman, Buvana; Jung, Sung N.; Min, Byung-Young
2012-01-01
Over the past decade, there have been significant advancements in the accuracy of rotor aeroelastic simulations with the application of computational fluid dynamics methods coupled with computational structural dynamics codes (CFD/CSD). The HART II International Workshop database, which includes descent operating conditions with strong blade-vortex interactions (BVI), provides a unique opportunity to assess the ability of CFD/CSD to capture these physics. In addition to a baseline case with BVI, two additional cases with 3/rev higher harmonic blade root pitch control (HHC) are available for comparison. The collaboration during the workshop permits assessment of structured, unstructured, and hybrid overset CFD/CSD methods from across the globe on the dynamics, aerodynamics, and wake structure. Evaluation of the plethora of CFD/CSD methods indicates that the most important numerical variables associated with most accurately capturing BVI are a two-equation or detached eddy simulation (DES)-based turbulence model and a sufficiently small time step. An appropriate trade-off between grid fidelity and spatial accuracy schemes also appears to be pertinent for capturing BVI on the advancing rotor disk. Overall, the CFD/CSD methods generally fall within the same accuracy band; cost-effective hybrid Navier-Stokes/Lagrangian wake methods provide accuracies within 50% of the full CFD/CSD methods for most parameters of interest, except for those highly influenced by torsion. The importance of modeling the fuselage is observed, and other computational requirements are discussed.
Technological advances for improving adenoma detection rates: The changing face of colonoscopy.
Ishaq, Sauid; Siau, Keith; Harrison, Elizabeth; Tontini, Gian Eugenio; Hoffman, Arthur; Gross, Seth; Kiesslich, Ralf; Neumann, Helmut
2017-07-01
Worldwide, colorectal cancer is the third commonest cancer. Over 90% of cases follow an adenoma-to-cancer sequence over many years. Colonoscopy is the gold standard method for cancer screening and early adenoma detection. However, considerable variation exists between endoscopists' detection rates. This review considers the effects of different endoscopic techniques on adenoma detection. Two areas of technological interest were considered: (1) optical technologies and (2) mechanical technologies. Optical solutions, including FICE, NBI, i-SCAN, and high-definition colonoscopy, showed mixed results. In contrast, mechanical advances, such as cap-assisted colonoscopy, FUSE, EndoCuff, and G-EYE™, showed promise, with reported detection rates of up to 69%. However, before definitive recommendations can be made for their incorporation into daily practice, further studies and comparison trials are required. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
Recent Advances in Targeted and Untargeted Metabolomics by NMR and MS/NMR Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bingol, Kerem
Metabolomics has made significant progress on multiple fronts in the last 18 months. This minireview aims to give an overview of these advancements in light of their contribution to targeted and untargeted metabolomics. New computational approaches have emerged to overcome the manual absolute quantitation step for metabolites in 1D 1H NMR spectra, providing more consistency in inter-laboratory comparisons. Integration of 2D NMR metabolomics databases under a unified web server has allowed very accurate identification of the metabolites that have been catalogued in these databases. For the remaining uncatalogued and unknown metabolites, new cheminformatics approaches have been developed by combining NMR and mass spectrometry. These hybrid NMR/MS approaches have accelerated the identification of unknowns in untargeted studies, and they now allow profiling of an ever larger number of metabolites in application studies.
Advancements in zebrafish applications for 21st century toxicology.
Garcia, Gloria R; Noyes, Pamela D; Tanguay, Robert L
2016-05-01
The zebrafish model is the only available high-throughput vertebrate assessment system, and it is uniquely suited for studies of in vivo cell biology. A sequenced and annotated genome has revealed a large degree of evolutionary conservation in comparison to the human genome. Due to our shared evolutionary history, the anatomical and physiological features of fish are highly homologous to those of humans, which facilitates studies relevant to human health. In addition, zebrafish provide a unique vertebrate data stream that allows researchers to anchor hypotheses at the biochemical, genetic, and cellular levels to observations at the structural, functional, and behavioral level in a high-throughput format. In this review, we draw heavily from toxicological studies to highlight advances in zebrafish high-throughput systems. Breakthroughs in transgenic/reporter lines and in methods for genetic manipulation, such as the CRISPR-Cas9 system, are highlighted through reports from diverse disciplines. Copyright © 2016 Elsevier Inc. All rights reserved.
The Status of Women in US Academic Pharmacy
Plaza, Cecilia M.; Taylor, Danielle A.; Meyer, Susan M.
2014-01-01
Objective. To describe the status of women in pharmacy education with particular focus on a 10-year update of a previous study. Methods. Information was obtained from national databases, published reports, scholarly articles, and association websites. Comparisons were made between men and women regarding degree completion, rank, tenure status, leadership positions, research awards, salaries, and career advancement. Results. There have been modest gains in the number of women serving as department chairs and deans. Salary disparities were found between men and women at several ranks within pharmacy practice. Men were more apt to be tenured or in tenure-track positions and received 89.4% of the national achievement awards tracked since 1981. Conclusion. The problem cannot be simply attributed to the pipeline of those entering academia. Barriers to advancement differ between men and women. We recommend that individuals, institutions, and associations implement strategies to decrease barriers and reduce bias against women. PMID:25657365
Decision Trajectories in Dementia Care Networks: Decisions and Related Key Events.
Groen-van de Ven, Leontine; Smits, Carolien; Oldewarris, Karen; Span, Marijke; Jukema, Jan; Eefsting, Jan; Vernooij-Dassen, Myrra
2017-10-01
This prospective multiperspective study provides insight into the decision trajectories of people with dementia by studying the decisions made and related key events. This study includes three waves of interviews, conducted between July 2010 and July 2012, with 113 purposefully selected respondents (people with beginning to advanced stages of dementia and their informal and professional caregivers), completed in 12 months (285 interviews). Our multilayered qualitative analysis consists of content analysis, timeline methods, and constant comparison. Four decision themes emerged: managing daily life, arranging support, community living, and preparing for the future. Eight key events delineate the decision trajectories of people with dementia. Decisions and key events differ between people with dementia living alone and those living with a caregiver. Our study clarifies that decisions relate not only to the disease but also to living with dementia. Individual differences in decision content and sequence may affect shared decision-making and advance care planning.
Lamb wave propagation in a restricted geometry composite pi-joint specimen
NASA Astrophysics Data System (ADS)
Blackshire, James L.; Soni, Som
2012-05-01
The propagation of elastic waves in a material can involve a number of complex physical phenomena, resulting in both subtle and dramatic effects on detected signal content. In recent years, the use of advanced methods for characterizing and imaging elastic wave propagation and scattering processes has increased; for example, scanning laser vibrometry and advanced computational models have been used very effectively to identify propagating modes, scattering phenomena, and damage feature interactions. In the present effort, the propagation of Lamb waves within a narrow, constrained-geometry composite pi-joint structure is studied using 3D finite element models and scanning laser vibrometry measurements. The effects of varying sample thickness, complex joint curvatures, and restricted structure geometries are highlighted, and a direct comparison of computational and experimental results is provided for simulated and realistic-geometry composite pi-joint samples.
A Synthesis of Hybrid RANS/LES CFD Results for F-16XL Aircraft Aerodynamics
NASA Technical Reports Server (NTRS)
Luckring, James M.; Park, Michael A.; Hitzel, Stephan M.; Jirasek, Adam; Lofthouse, Andrew J.; Morton, Scott A.; McDaniel, David R.; Rizzi, Arthur M.
2015-01-01
A synthesis is presented of recent numerical predictions for the F-16XL aircraft flow fields and aerodynamics. The computational results were all performed with hybrid RANS/LES formulations, with an emphasis on unsteady flows and subsequent aerodynamics, and results from five computational methods are included. The work was focused on one particular low-speed, high angle-of-attack flight test condition, and comparisons against flight-test data are included. This work represents the third coordinated effort using the F-16XL aircraft, and a unique flight-test data set, to advance our knowledge of slender airframe aerodynamics as well as our capability for predicting these aerodynamics with advanced CFD formulations. The prior efforts were identified as Cranked Arrow Wing Aerodynamics Project International, with the acronyms CAWAPI and CAWAPI-2. All information in this paper is in the public domain.
Comparison of TOMS and AVHRR volcanic ash retrievals from the August 1992 eruption of Mt. Spurr
Krotkov, N.A.; Torres, O.; Seftor, C.; Krueger, A.J.; Kostinski, A.; Rose, William I.; Bluth, G.J.S.; Schneider, D.; Schaefer, S.J.
1999-01-01
On August 19, 1992, the Advanced Very High Resolution Radiometer (AVHRR) onboard NOAA-12 and NASA's Total Ozone Mapping Spectrometer (TOMS) onboard the Nimbus-7 satellite simultaneously detected and mapped the ash cloud from the eruption of Mt. Spurr, Alaska. The spatial extent and geometry of the cloud derived from the two datasets are in good agreement, and both the AVHRR split-window IR method (11-12 µm brightness temperature difference) and the TOMS UV Aerosol Index method (0.34-0.38 µm ultraviolet backscattering and absorption) give the same range of total cloud ash mass. Redundant methods for determining ash masses in drifting volcanic clouds offer many advantages for potential application to the mitigation of aircraft hazards.
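The AVHRR split-window test lends itself to a compact sketch: ash-bearing pixels tend to produce a negative 11-12 µm brightness temperature difference (BTD), while water/ice clouds produce a positive one. The -0.5 K threshold below is an illustrative assumption, not a value from the study.

```python
import numpy as np

def ash_mask(bt11, bt12, threshold=-0.5):
    """Flag pixels whose BTD = T(11 um) - T(12 um) falls below an
    assumed negative threshold, the basic split-window ash signature."""
    btd = np.asarray(bt11, dtype=float) - np.asarray(bt12, dtype=float)
    return btd < threshold
```

For example, `ash_mask([250.0, 260.0], [252.0, 258.0])` flags only the first pixel, whose BTD is -2 K.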
Measuring up: Advances in How We Assess Reading Ability
ERIC Educational Resources Information Center
Sabatini, John; Albro, Elizabeth; O'Reilly, Tenaha
2012-01-01
In recent decades, the science of reading acquisition, processes, and individual differences in general and special populations has been continuously advancing through interdisciplinary research in cognitive, psycholinguistic, developmental, genetic, neuroscience, cross-language studies, and experimental comparison studies of effective…
Advanced simulation model for IPM motor drive with considering phase voltage and stator inductance
NASA Astrophysics Data System (ADS)
Lee, Dong-Myung; Park, Hyun-Jong; Lee, Ju
2016-10-01
This paper proposes an advanced simulation model of the driving system for Interior Permanent Magnet (IPM) BrushLess Direct Current (BLDC) motors driven by the 120-degree conduction method (two-phase conduction method, TPCM), which is widely used for sensorless control of BLDC motors. BLDC motors can be classified as SPM (Surface-mounted Permanent Magnet) and IPM motors. The simulation model of a driving system with SPM motors is simple due to the constant stator inductance regardless of rotor position, and such models have been proposed in many studies. On the other hand, simulation models for IPM driving systems built with graphic-based simulation tools such as Matlab/Simulink have not been proposed. Simulation of IPM driving systems with TPCM is complex because the stator inductances of an IPM vary with rotor position, as the permanent magnets are embedded in the rotor. To develop sensorless schemes or improve control performance, development of control algorithms through simulation study is essential, and a simulation model that accurately reflects the characteristics of the IPM is required. Therefore, this paper presents an advanced simulation model of the IPM driving system that takes into account the unique characteristic of the IPM due to its position-dependent inductances. The validity of the proposed simulation model is confirmed by comparison to experimental and simulation results using an IPM with the TPCM control scheme.
CFD-based design load analysis of 5MW offshore wind turbine
NASA Astrophysics Data System (ADS)
Tran, T. T.; Ryu, G. J.; Kim, Y. H.; Kim, D. H.
2012-11-01
The structural and aerodynamic loads acting on the NREL 5MW reference wind turbine blade are calculated and analyzed based on advanced Computational Fluid Dynamics (CFD) and unsteady Blade Element Momentum (BEM) methods. A detailed examination of the six load components has been carried out (three force components and three moment components). Structural loads (gravity and inertia loads) and aerodynamic loads have been obtained by structural calculations in addition to the CFD or BEM analysis, respectively. In the CFD method, the Reynolds-Averaged Navier-Stokes approach was applied to solve the continuity equation of mass conservation and momentum balance so that the complex flow around the wind turbine could be modeled. A User Defined Function (UDF) code, written in the C programming language, which defines a transient velocity profile according to the Extreme Operating Gust condition, was compiled into the commercial FLUENT package. Furthermore, unsteady BEM with a 3D stall model was also adopted to investigate the load components on the wind turbine rotor. The present study introduces a comparison between advanced CFD and unsteady BEM for determining loads on the wind turbine rotor. Results indicate that there is good agreement between the two methods. Importantly, it is shown that the six load components on the wind turbine rotor are significantly affected under the Extreme Operating Gust (EOG) condition. Using advanced CFD and additional structural calculations, this study has succeeded in constructing an accurate numerical methodology to estimate the total load of a wind turbine, composed of aerodynamic and structural loads.
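The abstract does not reproduce the UDF, but the transient gust profile such a UDF typically encodes is the hub-height Extreme Operating Gust of the IEC 61400-1 standard; a minimal sketch, with the hub wind speed, gust amplitude, and duration values assumed for illustration:

```python
import math

def eog_velocity(t, v_hub=11.4, v_gust=4.0, T=10.5):
    """Hub-height wind speed (m/s) during an Extreme Operating Gust:
    a 'Mexican hat' dip-rise-dip shape over gust duration T (s),
    per the IEC 61400-1 EOG formula; parameter values are assumed."""
    if t < 0.0 or t > T:
        return v_hub
    return v_hub - 0.37 * v_gust * math.sin(3 * math.pi * t / T) \
        * (1 - math.cos(2 * math.pi * t / T))
```

At mid-gust (t = T/2) this profile peaks above the mean hub speed before returning to it, which is what drives the transient load spike reported under the EOG condition.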
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
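The simple score-weighted averaging step described above can be sketched as follows; the exponential misfit-to-weight mapping is an assumption, since the abstract does not state the exact weighting function used.

```python
import math

def weighted_ensemble_mean(values, misfits):
    """Average one ensemble output (e.g. equivalent sea-level rise per run),
    weighting each run by exp(-misfit) so that runs with low model-data
    misfit dominate. The exponential weighting is an illustrative choice."""
    weights = [math.exp(-m) for m in misfits]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total
```

With equal misfits this reduces to the plain ensemble mean; a run with a large misfit contributes almost nothing, which is the intended behavior of score weighting.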
NASA Astrophysics Data System (ADS)
Pandey, Preeti; Srivastava, Rakesh; Bandyopadhyay, Pradipta
2018-03-01
The relative performance of the MM-PBSA and MM-3D-RISM methods in estimating the binding free energy of protein-ligand complexes is investigated by applying them to three proteins (Dihydrofolate Reductase, Catechol-O-methyltransferase, and Stromelysin-1) differing in the number of metal ions they contain. Neither computational method could distinguish all the ligands based on their calculated binding free energies (as compared to experimental values). The difference between the two comes from both the polar and non-polar parts of solvation. For the charged-ligand case, MM-PBSA and MM-3D-RISM give qualitatively different results for the polar part of solvation.
In-flight thrust determination on a real-time basis
NASA Technical Reports Server (NTRS)
Ray, R. J.; Carpenter, T.; Sandlin, T.
1984-01-01
A real-time computer program was implemented on an F-15 jet fighter to monitor the in-flight performance of a digital electronic engine control (DEEC) equipped F-100 engine. The application of two gas generator methods to calculate in-flight thrust in real time is described. A comparison was made between the actual results and those predicted by an engine model simulation. The percent difference between the two methods was compared to the predicted uncertainty based on instrumentation and model uncertainty, and agreed closely with the results found during altitude facility testing. Data were obtained from acceleration runs at various altitudes at maximum power settings, with and without afterburner. Real-time in-flight thrust measurement was a major advancement for flight test productivity and was accomplished with no loss in accuracy over previous post-flight methods.
Melanins and melanogenesis: methods, standards, protocols.
d'Ischia, Marco; Wakamatsu, Kazumasa; Napolitano, Alessandra; Briganti, Stefania; Garcia-Borron, José-Carlos; Kovacs, Daniela; Meredith, Paul; Pezzella, Alessandro; Picardo, Mauro; Sarna, Tadeusz; Simon, John D; Ito, Shosuke
2013-09-01
Despite considerable advances in the past decade, melanin research still suffers from the lack of universally accepted and shared nomenclature, methodologies, and structural models. This paper stems from the joint efforts of chemists, biochemists, physicists, biologists, and physicians with recognized and consolidated expertise in the field of melanins and melanogenesis, who critically reviewed and experimentally revisited methods, standards, and protocols to provide for the first time a consensus set of recommended procedures to be adopted and shared by researchers involved in pigment cell research. The aim of the paper was to define an unprecedented frame of reference built on cutting-edge knowledge and state-of-the-art methodology, to enable reliable comparison of results among laboratories and new progress in the field based on standardized methods and shared information. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Magnetic Field Suppression of Flow in Semiconductor Melt
NASA Technical Reports Server (NTRS)
Fedoseyev, A. I.; Kansa, E. J.; Marin, C.; Volz, M. P.; Ostrogorsky, A. G.
2000-01-01
One of the most promising approaches for the reduction of convection during the crystal growth of conductive melts (semiconductor crystals) is the application of magnetic fields. Current technology allows experimentation with very intense static fields (up to 80 kGauss), for which nearly convection-free results are expected from simple scaling analysis in stabilized systems (vertical Bridgman method with axial magnetic field). However, controversial experimental results have been obtained. Computational methods are, therefore, a fundamental tool for understanding the phenomena occurring during the solidification of semiconductor materials. Moreover, effects such as the bending of the isomagnetic lines, different aspect ratios, and misalignments between the directions of the gravity and magnetic field vectors cannot be analyzed with analytical methods. The earliest numerical results showed controversial conclusions and are not able to explain the experimental results. Although the generated flows are extremely low, the computational task is complicated because of the thin boundary layers; that is one of the reasons for the discrepancy in the results that numerical studies have reported. Modeling of these magnetically damped crystal growth experiments requires advanced numerical methods. We used, for comparison, three different approaches to obtain the solution of the thermal convection flow problem: (1) a spectral method in a spectral superelement implementation, (2) a finite element method with regularization for boundary layers, and (3) the multiquadric method, a novel method with global radial basis functions that is proven to have exponential convergence. The results obtained by these three methods are presented for a wide range of Rayleigh and Hartmann numbers. Comparison and discussion of accuracy, efficiency, reliability, and agreement with experimental results are presented as well.
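As a sketch of the multiquadric idea named in approach (3), here is a minimal 1-D global radial-basis-function interpolant; the shape parameter c and the 1-D setting are illustrative simplifications, not the paper's 3-D convection solver.

```python
import numpy as np

def mq_interpolate(x_nodes, f_nodes, x_eval, c=1.0):
    """Interpolate scattered 1-D data with multiquadric basis functions
    phi(r) = sqrt(r**2 + c**2); the shape parameter c is assumed.
    Solves the dense global system A @ coeffs = f, then evaluates."""
    x_nodes = np.asarray(x_nodes, dtype=float)
    A = np.sqrt((x_nodes[:, None] - x_nodes[None, :]) ** 2 + c * c)
    coeffs = np.linalg.solve(A, np.asarray(f_nodes, dtype=float))
    B = np.sqrt((np.asarray(x_eval, dtype=float)[:, None]
                 - x_nodes[None, :]) ** 2 + c * c)
    return B @ coeffs
```

The key design point is that every basis function is global (nonzero everywhere), which is what gives the method its exponential convergence at the cost of a dense linear system.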
GPU-Q-J, a fast method for calculating root mean square deviation (RMSD) after optimal superposition
2011-01-01
Background: Calculation of the root mean square deviation (RMSD) between the atomic coordinates of two optimally superposed structures is a basic component of structural comparison techniques. We describe a quaternion-based method, GPU-Q-J, that is stable with single-precision calculations and suitable for graphics processing units (GPUs). The application was implemented on an ATI 4770 graphics card in C/C++ and Brook+ in Linux, where it was 260 to 760 times faster than existing unoptimized CPU methods. Source code is available from the Compbio website http://software.compbio.washington.edu/misc/downloads/st_gpu_fit/ or from the author LHH. Findings: The Nutritious Rice for the World Project (NRW) on World Community Grid predicted, de novo, the structures of over 62,000 small proteins and protein domains, returning a total of 10 billion candidate structures. Clustering ensembles of structures on this scale requires calculation of large similarity matrices consisting of RMSDs between each pair of structures in the set. As a real-world test, we calculated the matrices for 6 different ensembles from NRW. The GPU method was 260 times faster than the fastest existing CPU-based method and over 500 times faster than the method that had been used previously. Conclusions: GPU-Q-J is a significant advance over previous CPU methods. It relieves a major bottleneck in the clustering of large numbers of structures for NRW. It also has applications in structure comparison methods that involve multiple superposition and RMSD determination steps, particularly when such methods are applied on a proteome- and genome-wide scale. PMID:21453553
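A CPU sketch of the quaternion eigenvalue approach underlying methods like GPU-Q-J (the Horn-style 4x4 key matrix; this shows the general technique, not the paper's GPU implementation):

```python
import numpy as np

def quaternion_rmsd(a, b):
    """Minimum RMSD between two N x 3 coordinate sets after optimal
    superposition, from the largest eigenvalue of Horn's 4x4 key matrix:
    RMSD = sqrt((Ga + Gb - 2*lambda_max) / N), no explicit rotation needed."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a - a.mean(axis=0)              # center both structures
    b = b - b.mean(axis=0)
    ga = (a * a).sum()                  # inner self-products
    gb = (b * b).sum()
    m = a.T @ b                         # 3x3 correlation matrix
    sxx, sxy, sxz = m[0]
    syx, syy, syz = m[1]
    szx, szy, szz = m[2]
    k = np.array([
        [sxx + syy + szz, syz - szy,        szx - sxz,        sxy - syx],
        [syz - szy,       sxx - syy - szz,  sxy + syx,        szx + sxz],
        [szx - sxz,       sxy + syx,       -sxx + syy - szz,  syz + szy],
        [sxy - syx,       szx + sxz,        syz + szy,       -sxx - syy + szz],
    ])
    lam = np.linalg.eigvalsh(k)[-1]     # largest eigenvalue
    return float(np.sqrt(max(ga + gb - 2.0 * lam, 0.0) / len(a)))
```

Because only the largest eigenvalue is needed, the rotation matrix itself is never formed, which is what makes this formulation attractive for massively parallel all-vs-all RMSD matrices.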
Venkatasubramanian, R; Das, U M; Bhatnagar, S
2010-01-01
Sterilization is the best method to counter the threat of microorganisms. The purpose of sterilization in the field of health care is to prevent the spread of infectious diseases. In dentistry, it primarily relates to processing reusable instruments to prevent cross-infection. The aim of this study was to investigate the efficacy of 4 methods of sterilizing endodontic instruments: autoclaving, carbon dioxide laser sterilization, chemical sterilization (with glutaraldehyde), and glass-bead sterilization. Endodontic files were contaminated with Bacillus stearothermophilus, sterilized by the 4 different methods, and then checked for sterility by incubation in test tubes containing thioglycollate medium. The study showed that the files sterilized by autoclave and laser were completely sterile. Those sterilized by glass bead were 90% sterile, and those with glutaraldehyde were 80% sterile. The study concluded that the autoclave or laser can be used as a method of sterilization in clinical practice, and in advanced clinics the laser can also be used as a chairside method of sterilization.
A comparison of two instructional methods for drawing Lewis Structures
NASA Astrophysics Data System (ADS)
Terhune, Kari
Two instructional methods for teaching Lewis structures were compared: the Direct Octet Rule Method (DORM) and the Commonly Accepted Method (CAM). The DORM gives the number of bonds and the number of nonbonding electrons immediately, while the CAM involves moving electron pairs from nonbonding to bonding positions, if necessary. The research question was as follows: Will high school chemistry students draw more accurate Lewis structures using the DORM or the CAM? Students in Regular Chemistry 1 (N = 23), Honors Chemistry 1 (N = 51), and Chemistry 2 (N = 15) at an urban high school were the study participants. An identical pretest and posttest was given before and after instruction. Students received two days of instruction with either the DORM (N = 45), the treatment method, or the CAM (N = 44), the control. After the posttest, 15 students were interviewed using a semistructured interview process. The pretest/posttest consisted of 23 numerical-response questions and 2 to 6 free-response questions that were graded using a rubric. A two-way ANOVA showed a significant interaction effect between the groups and the methods, F(1, 70) = 10.960, p = 0.001. Post hoc comparisons using the Bonferroni pairwise comparison showed that Regular Chemistry 1 students demonstrated larger gain scores when they had been taught the CAM (mean difference = 3.275, SE = 1.324, p < 0.05), while Honors Chemistry 1 students demonstrated larger gain scores after learning the DORM (mean difference = 1.931, SE = 0.848, p < 0.05). The DORM requires five mathematical operations, while the CAM requires only one. Honors Chemistry 1 students performed better with the DORM, perhaps due to better math skills, enhanced working memory, and better metacognitive skills. Regular Chemistry 1 students performed better with the CAM, perhaps because it is more visual.
Teachers may want to use the CAM or a direct-pairing method to introduce the topic and use the DORM in advanced classes when a correct structure is needed quickly.
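The post hoc step described above can be sketched in miniature. The following is an illustrative Python sketch of a Bonferroni-corrected pairwise comparison of gain scores; the group labels and scores are hypothetical, not the study's data, and a Welch t-test stands in for the study's exact post hoc procedure.

```python
# Illustrative sketch (not the study's data): pairwise comparison of gain
# scores between two instruction methods, judged against a Bonferroni-adjusted
# significance threshold. All values below are hypothetical.
from scipy import stats

def bonferroni_pairwise(gains_a, gains_b, n_comparisons, alpha=0.05):
    """Welch two-sample t-test judged against a Bonferroni-adjusted threshold."""
    t, p = stats.ttest_ind(gains_a, gains_b, equal_var=False)
    return t, p, p < alpha / n_comparisons  # reject only if p clears the corrected bar

# Hypothetical gain scores for one course level
cam_gains  = [5, 4, 6, 3, 5, 4, 6, 5]   # taught the Commonly Accepted Method
dorm_gains = [2, 1, 3, 2, 1, 2, 3, 2]   # taught the Direct Octet Rule Method

t, p, significant = bonferroni_pairwise(cam_gains, dorm_gains, n_comparisons=2)
```

Dividing alpha by the number of comparisons is what keeps the familywise error rate at the nominal level when several pairwise tests are run.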
Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie
2015-01-01
Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of the individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which increased to r(2) = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation.
Implementation of this technology to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.
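The islet equivalent (IEQ) unit that both counting methods report can be sketched as follows. Normalizing each islet's volume to that of a standard 150 µm diameter islet is the established convention in islet transplantation; the diameters below are hypothetical, chosen only to show the arithmetic.

```python
# Hedged sketch of the IEQ convention: each islet's volume is normalized to
# that of a 150 um reference islet, so an islet of diameter d contributes
# (d/150)**3 IEQ. Diameters below are hypothetical example values.
def islet_equivalents(diameters_um, reference_um=150.0):
    """Convert measured islet diameters to IEQ (volume-normalized count)."""
    return sum((d / reference_um) ** 3 for d in diameters_um)

sample = [150.0, 75.0, 300.0]      # one reference-size, one small, one large islet
ieq = islet_equivalents(sample)    # 1 + 0.125 + 8 = 9.125 IEQ from only 3 islets
```

The cubic weighting is why automated detection of very small islets (10-50 µm), as reported above, changes total islet number far more than it changes IEQ.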
Kitagawa, Moeko; Haji, Seiji; Amagai, Teruyoshi
2017-10-01
In recent years, the number of patients with cancer has increased. These patients are prone to sarcopenia as a result of the decrease in muscle mass and muscle weakness that occur in cancer cachexia. Amino Index Cancer Screening is carried out to evaluate cancer cachexia risk by examining amino acid concentrations and analyzing amino acid balance. We conducted a retrospective chart review of consecutive patients with unresectable advanced gastrointestinal cancer (stage IV) receiving chemotherapy treatment (December 2012-September 2015) in an outpatient or in-hospital setting at our institution (N = 46). Data included patient characteristics, psoas muscle area per computed tomography, and biochemical blood test and serum amino acid profiles. Method 1: comparison of biomarkers between 2 groups: psoas muscle index change rate (ΔPMI) decrease vs increase. Method 2.1: correlation between ΔPMI and biomarkers. Method 2.2: multiple regression of ΔPMI and biomarkers. The EAA/TAA ratio (essential amino acids/total amino acids) in the decrease group was significantly higher than that in the increase group. Among all parameters, serum C-reactive protein (CRP), leucine, and isoleucine were negatively related to ΔPMI (correlation coefficients = -0.604, -0.540, -0.518; P = .004, .011, .016, respectively). On multiple regression analysis, serum CRP was strongly related to ΔPMI (r2 = 0.452, β = -0.672, P = .001). Higher serum EAA/TAA ratio and CRP were associated with depletion of psoas muscle area, which led to a diagnosis of sarcopenia, in patients with advanced gastrointestinal cancers. These parameters at baseline could be predictors of cancer cachexia.
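The regression step in the abstract above can be illustrated with an ordinary least-squares fit. The data points here are synthetic, not the study's measurements; they only demonstrate the mechanics of relating a biomarker such as CRP to a muscle-index change rate.

```python
# Illustrative least-squares sketch of the kind of fit reported (ΔPMI against
# serum CRP). The arrays are synthetic example data, not the study's values.
import numpy as np

def fit_line(x, y):
    """Return slope, intercept and r^2 of an ordinary least-squares line."""
    A = np.vstack([x, np.ones_like(x)]).T
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

crp  = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0])     # hypothetical CRP values (mg/dL)
dpmi = np.array([2.0, 1.0, 0.5, -1.0, -3.0, -7.0])  # hypothetical muscle-index change (%)
slope, intercept, r2 = fit_line(crp, dpmi)           # negative slope: higher CRP, more loss
```

A negative fitted slope with a high r² is the synthetic analogue of the reported negative CRP-ΔPMI relationship.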
Complex Molecules in the Laboratory - a Comparison of Chirped Pulse and Emission Spectroscopy
NASA Astrophysics Data System (ADS)
Hermanns, Marius; Wehres, Nadine; Maßen, Jakob; Schlemmer, Stephan
2017-06-01
Detecting molecules of astrophysical interest in the interstellar medium relies strongly on precise spectroscopic data from the laboratory. In recent years, the advancement of the chirped-pulse technique has added many more options to choose from. The Cologne emission spectrometer offers an additional path to molecular spectroscopy: it records broadband spectra instantaneously with calibrated intensities. Here we present a comparison of both methods. The Cologne chirped-pulse spectrometer and the Cologne emission spectrometer both cover the frequency range of 75-110 GHz, consistent with the ALMA Band 3 receivers. Highly sensitive heterodyne receivers with very low noise temperature amplifiers are used, with a typical bandwidth of 2.5 GHz in a single sideband. Additionally, the chirped-pulse spectrometer contains a 200 mW high-power amplifier for the excitation of molecules. Room-temperature spectra of methyl cyanide are shown, and key features such as measurement time, sensitivity, limitations, and commonalities are compared with respect to the identification of complex molecules of astrophysical importance. In addition, future developments for both setups are discussed.
NASA Astrophysics Data System (ADS)
Adiri, Zakaria; El Harti, Abderrazak; Jellouli, Amine; Lhissou, Rachid; Maacha, Lhou; Azmi, Mohamed; Zouhair, Mohamed; Bachaoui, El Mostafa
2017-12-01
Lineament mapping occupies an important place in several fields, including geology, hydrogeology, and topography. With the help of remote sensing techniques, lineaments can be better identified owing to strong advances in the data and methods used, which have made it possible to go beyond the usual classical procedures and achieve more precise results. The aim of this work is to compare ASTER, Landsat-8, and Sentinel-1 data in automatic lineament extraction. In addition to image data, the approach includes the use of a pre-existing geological map, a Digital Elevation Model (DEM), and ground truth. Through a fully automatic approach combining an edge detection algorithm and a line-linking algorithm, we found the optimal parameters for automatic lineament extraction in the study area. The comparison and validation of the obtained results showed that the Sentinel-1 data are the most effective at restituting lineaments, indicating the superior performance of radar data over optical data in this kind of study.
Lubin, Arnaud; Geerinckx, Suzy; Bajic, Steve; Cabooter, Deirdre; Augustijns, Patrick; Cuyckens, Filip; Vreeken, Rob J
2016-04-01
Eicosanoids, including prostaglandins and thromboxanes, are lipid mediators synthesized from polyunsaturated fatty acids. They play an important role in cell signaling and are often reported as inflammatory markers. LC-MS/MS is the technique of choice for the analysis of these compounds, often in combination with advanced sample preparation techniques. Here we report a head-to-head comparison between an electrospray ionization source (ESI) and a new atmospheric pressure ionization source (UniSpray). The performance of both interfaces was evaluated in various matrices such as human plasma, pig colon, and mouse colon. The UniSpray source increases method sensitivity by up to a factor of 5. Equivalent or better linearity and repeatability across the various matrices, as well as an increase in signal intensity, were observed in comparison to ESI. Copyright © 2016 Elsevier B.V. All rights reserved.
Solar wind flow past Venus - Theory and comparisons
NASA Technical Reports Server (NTRS)
Spreiter, J. R.; Stahara, S. S.
1980-01-01
Advanced computational procedures are applied to an improved model of solar wind flow past Venus to calculate the locations of the ionopause and bow wave and the properties of the flowing ionosheath plasma in the intervening region. The theoretical method is based on a single-fluid, steady, dissipationless, magneto-hydrodynamic continuum model and is appropriate for the calculation of axisymmetric supersonic, super-Alfvenic solar wind flow past a nonmagnetic planet possessing a sufficiently dense ionosphere to stand off the flowing plasma above the subsolar point and elsewhere. Determination of time histories of plasma and magnetic field properties along an arbitrary spacecraft trajectory and provision for an arbitrary oncoming direction of the interplanetary solar wind have been incorporated in the model. An outline is provided of the underlying theory and computational procedures, and sample comparisons of the results are presented with observations from the Pioneer Venus orbiter.
Mass univariate analysis of event-related brain potentials/fields I: a critical tutorial review.
Groppe, David M; Urbach, Thomas P; Kutas, Marta
2011-12-01
Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori windows. Advances in computing power and statistics have produced an alternative: mass univariate analyses consisting of thousands of statistical tests and powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data and four methods for multiple comparison correction: strong control of the familywise error rate (FWER) via permutation tests, weak control of FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation. Copyright © 2011 Society for Psychophysiological Research.
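The first of the four corrections listed above can be sketched compactly. The following is a minimal, hedged illustration of strong FWER control via a max-statistic sign-flip permutation test on synthetic paired data (two conditions, several "time points", a true effect injected at one point); it is not the reviewed MATLAB software.

```python
# Minimal sketch of strong FWER control with a max-statistic permutation test.
# Synthetic data: paired conditions at 10 time points; a real effect exists
# only at point 3. Nothing here comes from actual ERP/ERF recordings.
import numpy as np

rng = np.random.default_rng(0)

def max_stat_permutation(cond_a, cond_b, n_perm=2000):
    """Per-point paired t statistics and FWER-corrected permutation p-values."""
    diff = cond_a - cond_b                             # subjects x points
    n = diff.shape[0]
    t_obs = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n))
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))   # flip each subject's sign
        d = diff * signs
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))
        max_null[i] = np.abs(t).max()                  # keep the max across points
    p_corr = (np.abs(t_obs)[None, :] <= max_null[:, None]).mean(0)
    return t_obs, p_corr

subjects, points = 20, 10
a = rng.normal(0, 1, (subjects, points))
b = rng.normal(0, 1, (subjects, points))
a[:, 3] += 2.5                                         # inject an effect at point 3
t_obs, p_corr = max_stat_permutation(a, b)
```

Comparing each observed statistic against the permutation distribution of the maximum across all points is what gives strong control of the familywise error rate.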
[Comparison of port needle with safety device between Huber Plus (HP) and Poly PERF Safe (PPS)].
Shimono, Chigusa; Tanaka, Atsuko; Fujita, Ai; Ishimoto, Miki; Oura, Shoji; Yamaue, Hiroki; Sato, Morio
2010-05-01
An embedded port is frequently used for outpatients with advanced cancer receiving central venous chemotherapy or hepatic arterial chemoinfusion. A port needle with a safety device in an ambulatory treatment center is indispensable for medical employees, patients, and their families to reduce the risk of needle puncture accidents and to prevent iatrogenic infection. Port needles with safety systems have already been introduced in our chemotherapy center. There are two types of port needle with safety device: Huber Plus (HP, Medicon Co., Ltd.) and POLY PERF Safe (PPS, Pyolax Device Co., Ltd.). The feasibility of HP and PPS was compared by both medical employees and patients and their families using an inquiry score method. HP was highly regarded for its stability and fixation, and PPS for its ease of needle puncture and extraction. PPS was found to be preferable to HP based on the overall evaluation.
Diagnostic methods for CW laser damage testing
NASA Astrophysics Data System (ADS)
Stewart, Alan F.; Shah, Rashmi S.
2004-06-01
High-performance optical coatings are an enabling technology for many applications: navigation systems, telecom, fusion, advanced measurement systems of many types, as well as directed energy weapons. The results of recent testing of superior optical coatings conducted at high flux levels are presented. The diagnostics used in this type of nondestructive testing and the analysis of the data demonstrate the evolution of test methodology. Comparison of performance data under load with the predictions of thermal and optical models shows excellent agreement. These tests serve to anchor the models and validate the performance of the materials and coatings.
Esthetic Prosthetic Restorations: Reliability and Effects on Antagonist Dentition
Daou, Elie E.
2015-01-01
Recent advances in ceramics have greatly improved the functional and esthetic properties of restorative materials. New materials offer an esthetic and functional oral rehabilitation; however, their impact on opposing teeth is not well documented. Peer-reviewed articles published through December 2014 were identified through PubMed (Medline and Elsevier). Several scientific methods exist for measuring the wear process of natural dentition, which facilitates comparison of the complex results. This paper presents an overview of newly used prosthetic materials and their implications for antagonist teeth or prostheses, with particular emphasis on the behavior of zirconia restorations. PMID:26962376
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
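The Lagrangian particle idea at the heart of the transported PDF methods reviewed above can be given a toy illustration. The sketch below evolves an ensemble of notional particles under the IEM (interaction by exchange with the mean) mixing model, one of the standard micromixing closures in this field; the constants and initial distribution are illustrative, not taken from any cited simulation.

```python
# Toy sketch of the notional-particle view behind transported PDF methods:
# each particle carries a composition variable phi, relaxed toward the
# ensemble mean by the IEM mixing model. Constants here are illustrative.
import numpy as np

def iem_step(phi, c_phi, omega, dt):
    """One IEM mixing step: d(phi)/dt = -0.5 * C_phi * omega * (phi - <phi>)."""
    return phi - 0.5 * c_phi * omega * dt * (phi - phi.mean())

rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 1.0, 10000)   # particle compositions (e.g. mixture fraction)
var0 = phi.var()
for _ in range(100):                 # integrate to t = 1 with dt = 0.01
    phi = iem_step(phi, c_phi=2.0, omega=1.0, dt=0.01)
# IEM preserves the ensemble mean while the variance decays toward
# exp(-C_phi * omega * t) of its initial value (up to time-step error).
```

Because mixing acts on each particle individually, nonlinear chemical source terms would appear in closed form on these particles, which is the closure advantage the review emphasizes.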
Phylogenetic Tools for Generalized HIV-1 Epidemics: Findings from the PANGEA-HIV Methods Comparison.
Ratmann, Oliver; Hodcroft, Emma B; Pickles, Michael; Cori, Anne; Hall, Matthew; Lycett, Samantha; Colijn, Caroline; Dearlove, Bethany; Didelot, Xavier; Frost, Simon; Hossain, A S Md Mukarram; Joy, Jeffrey B; Kendall, Michelle; Kühnert, Denise; Leventhal, Gabriel E; Liang, Richard; Plazzotta, Giacomo; Poon, Art F Y; Rasmussen, David A; Stadler, Tanja; Volz, Erik; Weis, Caroline; Leigh Brown, Andrew J; Fraser, Christophe
2017-01-01
Viral phylogenetic methods contribute to understanding how HIV spreads in populations, and thereby help guide the design of prevention interventions. So far, most analyses have been applied to well-sampled concentrated HIV-1 epidemics in wealthy countries. To direct the use of phylogenetic tools to where the impact of HIV-1 is greatest, the Phylogenetics And Networks for Generalized HIV Epidemics in Africa (PANGEA-HIV) consortium generates full-genome viral sequences from across sub-Saharan Africa. Analyzing these data presents new challenges, since epidemics are principally driven by heterosexual transmission and a smaller fraction of cases is sampled. Here, we show that viral phylogenetic tools can be adapted and used to estimate epidemiological quantities of central importance to HIV-1 prevention in sub-Saharan Africa. We used a community-wide methods comparison exercise on simulated data, where participants were blinded to the true dynamics they were inferring. Two distinct simulations captured generalized HIV-1 epidemics, before and after a large community-level intervention that reduced infection levels. Five research groups participated. Structured coalescent modeling approaches were most successful: phylogenetic estimates of HIV-1 incidence, incidence reductions, and the proportion of transmissions from individuals in their first 3 months of infection correlated with the true values (Pearson correlation > 90%), with small bias. However, on some simulations, true values were markedly outside reported confidence or credibility intervals. The blinded comparison revealed current limits and strengths in using HIV phylogenetics in challenging settings, provided benchmarks for future methods' development, and supports using the latest generation of phylogenetic tools to advance HIV surveillance and prevention. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
NASA Technical Reports Server (NTRS)
Morris, Shelby J., Jr.; Geiselhart, Karl A.; Coen, Peter G.
1989-01-01
The performance of an advanced technology conceptual turbojet optimized for a high-speed civil aircraft is presented. This information represents an estimate of performance of a Mach 3 Brayton (gas turbine) cycle engine optimized for minimum fuel burned at supersonic cruise. This conceptual engine had no noise or environmental constraints imposed upon it. The purpose of this data is to define an upper boundary on the propulsion performance for a conceptual commercial Mach 3 transport design. A comparison is presented demonstrating the impact of the technology proposed for this conceptual engine on the weight and other characteristics of a proposed high-speed civil transport. This comparison indicates that the advanced technology turbojet described could reduce the gross weight of a hypothetical Mach 3 high-speed civil transport design from about 714,000 pounds to about 545,000 pounds. The aircraft with the baseline engine and the aircraft with the advanced technology engine are described.
Overview of Recent Radiation Transport Code Comparisons for Space Applications
NASA Astrophysics Data System (ADS)
Townsend, Lawrence
Recent advances in radiation transport code development for space applications have resulted in various comparisons of code predictions for a variety of scenarios and codes. Comparisons among both Monte Carlo and deterministic codes have been made and published by various groups and collaborations, including comparisons involving, but not limited to, HZETRN, HETC-HEDS, FLUKA, GEANT, PHITS, and MCNPX. In this work, an overview of recent code prediction inter-comparisons, including comparisons to available experimental data, is presented and discussed, with emphasis on the areas of agreement and disagreement among the various code predictions and published data.
Comparison of alternate fuels for aircraft
NASA Technical Reports Server (NTRS)
Witcofski, R. D.
1979-01-01
A comparison of candidate alternate fuels for aircraft is presented. The fuels discussed include liquid hydrogen, liquid methane, and synthetic aviation kerosene. Each fuel is evaluated from the standpoint of production, transmission, airport storage and distribution facilities, and use in aircraft. Technology deficient areas for cryogenic fuels, which should be advanced prior to the introduction of the fuels into the aviation industry, are identified, as are the cost and energy penalties associated with not achieving those advances. Environmental emissions and safety aspects of fuel selection are discussed. A detailed description of the various fuel production and liquefaction processes and their efficiencies and economics is given.
NASA Astrophysics Data System (ADS)
Sobina, E.; Zimathis, A.; Prinz, C.; Emmerling, F.; Unger, W.; de Santis Neves, R.; Galhardo, C. E.; De Robertis, E.; Wang, H.; Mizuno, K.; Kurokawa, A.
2016-01-01
CCQM key comparison K-136, Measurement of porosity properties (specific adsorption, BET specific surface area, specific pore volume and pore diameter) of nanoporous Al2O3, has been performed by the Surface Analysis Working Group (SAWG) of the Consultative Committee for Amount of Substance (CCQM). The objective of this key comparison is to compare the equivalency of the National Metrology Institutes (NMIs) and Designated Institutes (DIs) for the measurement of specific adsorption, BET specific surface area, specific pore volume and pore diameter of nanoporous substances (sorbents, catalytic agents, cross-linkers, zeolites, etc.) used in advanced technology. In this key comparison, a commercial sorbent (aluminum oxide) was supplied as the sample. Five NMIs participated. All participants used a gas adsorption method, here nitrogen adsorption at 77.3 K, for analysis according to the international standards ISO 15901-2 and ISO 9277. The degrees of equivalence and uncertainties for specific adsorption, BET specific surface area, specific pore volume and pore diameter were established. Main text: To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
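The BET evaluation behind the reported surface areas can be sketched from its linearized form, p/(v(p0-p)) = 1/(vm·c) + ((c-1)/(vm·c))·(p/p0), fitted over the usual 0.05-0.30 relative-pressure window. The isotherm below is synthetic, generated from the BET equation itself, so the fit recovers the assumed monolayer capacity exactly; the nitrogen cross-section value is the commonly used 0.162 nm².

```python
# Hedged sketch of a BET surface-area evaluation from a nitrogen isotherm.
# The isotherm is synthetic (generated from BET theory with vm = 50, c = 100).
import numpy as np

N_A, SIGMA_N2, V_MOLAR = 6.022e23, 0.162e-18, 22414.0  # /mol, m^2, cm^3(STP)/mol

def bet_surface_area(p_rel, v_ads):
    """Fit the linearized BET equation; return (vm, c, S_BET in m^2/g)."""
    y = p_rel / (v_ads * (1.0 - p_rel))        # p/(v(p0-p)), p0 normalized to 1
    slope, intercept = np.polyfit(p_rel, y, 1)
    vm = 1.0 / (slope + intercept)             # monolayer capacity, cm^3(STP)/g
    c = slope / intercept + 1.0                # BET constant
    return vm, c, vm * N_A * SIGMA_N2 / V_MOLAR

p = np.linspace(0.05, 0.30, 6)                 # relative pressures in the BET window
v = 50.0 * 100.0 * p / ((1.0 - p) * (1.0 + (100.0 - 1.0) * p))
vm, c, s_bet = bet_surface_area(p, v)          # recovers vm ~ 50, S_BET ~ 218 m^2/g
```

Real evaluations per ISO 9277 add checks (positive intercept, consistency of the fitted window) that this sketch omits.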
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure, and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
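The core idea of sequential verification, comparing a new code version's calculations against the previous version's and flagging any drift, can be illustrated in a few lines. This is a notional sketch, not RELAP5-3D's actual tooling; the variable names and tolerance are hypothetical.

```python
# Minimal illustration of the sequential-verification idea: compare outputs
# of two consecutive code versions field by field and flag drift beyond a
# relative tolerance. Variable names and values are hypothetical.
def sequential_verify(prev_run, new_run, rel_tol=1e-12):
    """Return the list of variables whose values changed between versions."""
    drifted = []
    for key, prev_val in prev_run.items():
        new_val = new_run[key]
        scale = max(abs(prev_val), abs(new_val), 1.0)   # guard against zero values
        if abs(new_val - prev_val) > rel_tol * scale:
            drifted.append(key)
    return drifted

# Hypothetical plant-state variables from two consecutive code versions
v1 = {"pressure_Pa": 15.5e6, "void_fraction": 0.031, "clad_temp_K": 612.4}
v2 = {"pressure_Pa": 15.5e6, "void_fraction": 0.031, "clad_temp_K": 612.4000001}
changed = sequential_verify(v1, v2)   # only the clad temperature drifted
```

A very tight tolerance is the point: any drift larger than round-off between consecutive versions signals an unintended change that needs explanation.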
Methodology or method? A critical review of qualitative case study reports.
Hyett, Nerida; Kenny, Amanda; Dickson-Swift, Virginia
2014-01-01
Despite ongoing debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n = 12), social sciences and anthropology (n = 7), or methods (n = 15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and whether study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method; case of something particular and case selection; contextually bound case study; researcher and case interactions and triangulation; and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners.
Asif, Muhammad Khan; Nambiar, Phrabhakaran; Mani, Shani Ann; Ibrahim, Norliza Binti; Khan, Iqra Muhammad; Sukumaran, Prema
2018-02-01
The methods of dental age estimation and identification of unknown deceased individuals are evolving with the introduction of advanced innovative imaging technologies in forensic investigations. However, assessing small structures like root canal volumes can be challenging even with highly advanced technology. The aim of the study was to investigate which of two methods of volumetric analysis of maxillary central incisors displayed the stronger correlation between chronological age and pulp/tooth volume ratio for Malaysian adults. Volumetric analysis of the pulp cavity/tooth ratio was employed in Method 1, and the pulp chamber/crown ratio (up to the cemento-enamel junction) was analysed in Method 2. The images were acquired with CBCT scans and enhanced with the Mimics software. The scans belonged to 56 males and 54 females whose ages ranged from 16 to 65 years. Pearson correlation and regression analysis indicated that both volumetric methods showed strong correlation between chronological age and pulp/tooth volume ratio. However, Method 2 gave a higher coefficient of determination (R2 = 0.78) than Method 1 (R2 = 0.64). Moreover, manipulation in Method 2 was less time-consuming and revealed higher inter-examiner reliability (0.982), as no manual intervention during the 'multiple slice editing phase' of the software was required. In conclusion, this study showed that volumetric analysis of the pulp cavity/tooth ratio is a valuable gender-independent technique, and the Method 2 regression equation should be recommended for dental age estimation. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
A Comparison of Methods for Assessing Space Suit Joint Ranges of Motion
NASA Technical Reports Server (NTRS)
Aitchison, Lindsay T.
2012-01-01
Through the Advanced Exploration Systems (AES) Program, NASA is attempting to use the vast collection of space suit mobility data from 50 years' worth of space suit testing to build predictive analysis tools to aid in early architecture decisions for future missions and exploration programs. However, the design engineers must first understand if and how data generated by different methodologies can be compared directly and used in an essentially interchangeable manner. To address this question, the isolated joint range of motion data from two different test series were compared. Both data sets were generated from participants wearing the Mark III Space Suit Technology Demonstrator (MK-III), the Waist Entry I-suit (WEI), and minimal clothing. Additionally, the two tests shared a common test subject, which allowed for within-subject comparisons of the methods and greatly reduced the number of variables in play. The tests varied in their methodologies: the Space Suit Comparative Technologies Evaluation used 2-D photogrammetry to analyze isolated ranges of motion, while the Constellation space suit benchmarking and requirements development used 3-D motion capture to evaluate both isolated and functional joint ranges of motion. The isolated data from both test series were compared graphically, as percent differences, and by simple statistical analysis. The results indicated that while the methods generate results that are statistically the same (significance level p = 0.01), the differences are significant enough in the practical sense to make direct comparisons ill-advised. The concluding recommendations propose directions for how to bridge the data gaps and address future mobility data collection to allow for backward compatibility.
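The percent-difference comparison mentioned above is simple enough to sketch directly. The joint names and angles below are hypothetical, chosen only to show the computation; the symmetric form (difference over the mean of the two values) is one common convention, not necessarily the study's exact formula.

```python
# Toy sketch of a percent-difference comparison between two measurement
# methods (e.g. 2-D photogrammetry vs 3-D motion capture). Joint names and
# angles are hypothetical example values in degrees.
def percent_difference(a, b):
    """Symmetric percent difference between two range-of-motion measurements."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

photogrammetry = {"shoulder_flexion": 118.0, "knee_flexion": 95.0}
motion_capture = {"shoulder_flexion": 126.0, "knee_flexion": 101.0}
diffs = {joint: percent_difference(photogrammetry[joint], motion_capture[joint])
         for joint in photogrammetry}
```

Differences of several percent per joint can be statistically indistinguishable yet still matter for suit design margins, which is the practical caveat the abstract draws.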
ERIC Educational Resources Information Center
Manpower Administration (DOL), Washington, DC. Job Corps.
An advanced General Education Program has been designed to prepare an individual with the information, concepts, and general knowledge required to successfully pass the American Council on Education's High School General Education Development (GED) Test. The Advanced General Education Program provides comprehensive self-instruction in each of the…
Jeong, Jae Yoon; Kim, Tae Yeob; Sohn, Joo Hyun; Kim, Yongsoo; Jeong, Woo Kyoung; Oh, Young-Ha; Yoo, Kyo-Sang
2014-01-01
AIM: To evaluate the correlation between liver stiffness measurement (LSM) by real-time shear wave elastography (SWE) and liver fibrosis stage, and the accuracy of LSM for predicting significant and advanced fibrosis, in comparison with serum markers. METHODS: We consecutively analyzed 70 patients with various chronic liver diseases. Liver fibrosis was staged from F0 to F4 according to the Batts and Ludwig scoring system. Significant and advanced fibrosis were defined as stage F ≥ 2 and F ≥ 3, respectively. The accuracy of prediction for fibrosis was analyzed using receiver operating characteristic curves. RESULTS: Of the 70 patients, 15 belonged to stage F0-F1, 20 to F2, 13 to F3, and 22 to F4. LSM increased with progression of fibrosis stage (F0-F1: 6.77 ± 1.72, F2: 9.98 ± 3.99, F3: 15.80 ± 7.73, and F4: 22.09 ± 10.09, P < 0.001). Diagnostic accuracies of LSM for prediction of F ≥ 2 and F ≥ 3 were 0.915 (95%CI: 0.824-0.968, P < 0.001) and 0.913 (95%CI: 0.821-0.967, P < 0.001), respectively. The cut-off values of LSM for prediction of F ≥ 2 and F ≥ 3 were 8.6 kPa, with 78.2% sensitivity and 93.3% specificity, and 10.46 kPa, with 88.6% sensitivity and 80.0% specificity, respectively. However, there were no significant differences in diagnostic accuracy between LSM and serum hyaluronic acid and type IV collagen. CONCLUSION: SWE showed a significant correlation with the severity of liver fibrosis and was useful and accurate for predicting significant and advanced fibrosis, comparable with serum markers. PMID:25320528
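The ROC machinery used above can be sketched from first principles: AUC as the probability that a diseased case scores above a non-diseased one, and a cutoff chosen by Youden's index (sensitivity + specificity - 1). The stiffness values below are hypothetical, not the study's measurements, and are deliberately well separated.

```python
# Hedged sketch of ROC analysis: rank-based AUC and a Youden-index cutoff.
# The kPa values are hypothetical example data, not the study's measurements.
def roc_auc(neg, pos):
    """Probability a positive case scores above a negative one (ties count 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_cutoff(neg, pos):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best = None
    for t in sorted(set(neg + pos)):
        sens = sum(p >= t for p in pos) / len(pos)
        spec = sum(n < t for n in neg) / len(neg)
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    return best

no_fibrosis = [5.1, 6.0, 6.8, 7.2, 7.9]       # hypothetical F0-F1 stiffness, kPa
fibrosis    = [8.9, 10.5, 14.8, 16.2, 22.0]   # hypothetical F >= 2 stiffness, kPa
auc = roc_auc(no_fibrosis, fibrosis)          # 1.0 here: groups fully separated
j, cutoff, sens, spec = best_cutoff(no_fibrosis, fibrosis)
```

With real, overlapping distributions the AUC falls below 1 and the chosen cutoff trades sensitivity against specificity, exactly the 8.6 kPa vs 10.46 kPa trade-off reported above.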
Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile
NASA Astrophysics Data System (ADS)
Hoľko, Michal; Stacho, Jakub
2014-12-01
The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves, as well as a comparison of the load distribution over the pile's length. The numerical analyses were executed using two FEM-based software packages, Ansys and Plaxis, which differ in how they create numerical models, model the interface between the pile and soil, and apply constitutive material models. The analyses were prepared in the form of a parametric study in which the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both packages permit the modelling of pile foundations. Plaxis offers advanced material models as well as the modelling of the effects of groundwater and overconsolidation. The load-settlement curve calculated using Plaxis matches the results of a static load test to better than 95% accuracy. In comparison, the load-settlement curve calculated using Ansys yields only an approximate estimate, but Ansys allows large structural systems to be modelled together with their foundation system.
Acevedo, Orlando; Jorgensen, William L
2010-01-19
Application of combined quantum and molecular mechanical (QM/MM) methods focuses on predicting activation barriers and the structures of stationary points for organic and enzymatic reactions. Characterization of the factors that stabilize transition structures in solution and in enzyme active sites provides a basis for design and optimization of catalysts. Continued technological advances allowed for expansion from prototypical cases to mechanistic studies featuring detailed enzyme and condensed-phase environments with full integration of the QM calculations and configurational sampling. This required improved algorithms featuring fast QM methods, advances in computing changes in free energies including free-energy perturbation (FEP) calculations, and enhanced configurational sampling. In particular, the present Account highlights development of the PDDG/PM3 semi-empirical QM method, computation of multi-dimensional potentials of mean force (PMF), incorporation of on-the-fly QM in Monte Carlo (MC) simulations, and a polynomial quadrature method for efficient modeling of proton-transfer reactions. The utility of this QM/MM/MC/FEP methodology is illustrated for a variety of organic reactions including substitution, decarboxylation, elimination, and pericyclic reactions. A comparison to experimental kinetic results on medium effects has verified the accuracy of the QM/MM approach in the full range of solvents from hydrocarbons to water to ionic liquids. Corresponding results from ab initio and density functional theory (DFT) methods with continuum-based treatments of solvation reveal deficiencies, particularly for protic solvents. 
Also summarized in this Account are three specific QM/MM applications to biomolecular systems: (1) a recent study that clarified the mechanism for the reaction of 2-pyrone derivatives catalyzed by macrophomate synthase as a tandem Michael-aldol sequence rather than a Diels-Alder reaction, (2) elucidation of the mechanism of action of fatty acid amide hydrolase (FAAH), an unusual Ser-Ser-Lys proteolytic enzyme, and (3) the construction of enzymes for Kemp elimination of 5-nitrobenzisoxazole that highlights the utility of QM/MM in the design of artificial enzymes.
NASA Astrophysics Data System (ADS)
Xiong, S.; Muller, J.-P.; Carretero, R. C.
2017-09-01
Subsurface layers are preserved in the polar regions of Mars, representing a record of past climate changes. Orbital radar instruments, such as the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) onboard ESA's Mars Express (MEX) and the SHAllow RADar (SHARAD) onboard the Mars Reconnaissance Orbiter (MRO), transmit radar signals toward Mars and receive a set of return signals from these subsurface regions. Layering is a prominent subsurface feature that has been revealed by both MARSIS and SHARAD radargrams over both polar regions of Mars. Automatic extraction of these subsurface layers is becoming increasingly important now that more than ten years of data have been archived. In this study, we investigate two different methods for extracting subsurface layers from SHARAD data and compare the results against manually delineated layers to validate which method is better suited for automatic extraction.
Zimmermann, Johannes; Wright, Aidan G C
2017-01-01
The interpersonal circumplex is a well-established structural model that organizes interpersonal functioning within the two-dimensional space marked by dominance and affiliation. The structural summary method (SSM) was developed to evaluate the interpersonal nature of other constructs and measures outside the interpersonal circumplex. To date, this method has been primarily descriptive, providing no way to draw inferences when comparing SSM parameters across constructs or groups. We describe a newly developed resampling-based method for deriving confidence intervals, which allows for SSM parameter comparisons. In a series of five studies, we evaluated the accuracy of the approach across a wide range of possible sample sizes and parameter values, and demonstrated its utility for posing theoretical questions on the interpersonal nature of relevant constructs (e.g., personality disorders) using real-world data. As a result, the SSM is strengthened for its intended purpose of construct evaluation and theory building. © The Author(s) 2015.
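The resampling idea behind such confidence intervals can be sketched with a generic percentile bootstrap. This is not the authors' SSM implementation, and the parameter values below are invented for illustration:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=7):
    """Percentile-bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical parameter values (e.g., an amplitude-like SSM parameter
# estimated on ten samples); purely illustrative numbers.
sample = [0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44, 0.58, 0.49, 0.53]
lo, hi = bootstrap_ci(sample, mean)
print(round(lo, 3), round(hi, 3))  # a 95% interval bracketing the sample mean
```

Two groups' parameters can then be compared by checking whether their intervals (or the interval of the difference) exclude overlap, which is the inferential step the SSM previously lacked.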
Optimized gene editing technology for Drosophila melanogaster using germ line-specific Cas9.
Ren, Xingjie; Sun, Jin; Housden, Benjamin E; Hu, Yanhui; Roesel, Charles; Lin, Shuailiang; Liu, Lu-Ping; Yang, Zhihao; Mao, Decai; Sun, Lingzhu; Wu, Qujie; Ji, Jun-Yuan; Xi, Jianzhong; Mohr, Stephanie E; Xu, Jiang; Perrimon, Norbert; Ni, Jian-Quan
2013-11-19
The ability to engineer genomes in a specific, systematic, and cost-effective way is critical for functional genomic studies. Recent advances using the CRISPR-associated single-guide RNA system (Cas9/sgRNA) illustrate the potential of this simple system for genome engineering in a number of organisms. Here we report an effective and inexpensive method for genome DNA editing in Drosophila melanogaster whereby plasmid DNAs encoding short sgRNAs under the control of the U6b promoter are injected into transgenic flies in which Cas9 is specifically expressed in the germ line via the nanos promoter. We evaluate the off-targets associated with the method and establish a Web-based resource, along with a searchable, genome-wide database of predicted sgRNAs appropriate for genome engineering in flies. Finally, we discuss the advantages of our method in comparison with other recently published approaches.
Derrac, Joaquín; Triguero, Isaac; Garcia, Salvador; Herrera, Francisco
2012-10-01
Cooperative coevolution is a successful branch of evolutionary computation that allows us to define partitions of the domain of a given problem, or to integrate several related techniques into one, through the use of evolutionary algorithms. It can be applied to the development of advanced classification methods that integrate several machine learning techniques into a single proposal. A novel approach integrating instance selection, instance weighting, and feature weighting into the framework of a coevolutionary model is presented in this paper. We compare it with a wide range of evolutionary and nonevolutionary related methods in order to show the benefits of employing coevolution to apply these techniques simultaneously. The results obtained, contrasted through nonparametric statistical tests, show that our proposal outperforms the other methods in the comparison, making it a suitable tool for enhancing the nearest neighbor classifier.
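To make the role of feature weighting in the nearest neighbor rule concrete: a minimal sketch with hand-picked toy weights and data (the paper evolves such weights with a coevolutionary algorithm; nothing below is from the paper itself):

```python
def weighted_nn_classify(query, train, weights):
    """1-NN with a feature-weighted squared Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b))
    best = min(train, key=lambda pair: dist(query, pair[0]))
    return best[1]

# Toy data: the second feature is noise; weighting it near zero makes
# classification depend only on the informative first feature.
train = [([1.0, 9.0], "A"), ([1.2, 0.5], "A"),
         ([5.0, 9.2], "B"), ([5.3, 0.4], "B")]
print(weighted_nn_classify([1.1, 9.1], train, weights=[1.0, 0.0]))  # A
print(weighted_nn_classify([4.9, 0.6], train, weights=[1.0, 0.0]))  # B
```

Instance selection and instance weighting extend the same idea to the training set itself: dropping or down-weighting noisy prototypes instead of noisy features.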
van den Berg, Yvonne H M; Gommans, Rob
2017-09-01
New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the questionnaire and asking respondents to fill it in on computers. In this chapter the advantages and challenges of computer-based assessments are discussed. In addition, a list of practical recommendations and considerations is provided to inform researchers on how computer-based methods can be applied to their own research. Although the focus is on the collection of peer nomination data in particular, many of the requirements, considerations, and implications are also relevant for those who consider the use of other sociometric assessment methods (e.g., paired comparisons, peer ratings, peer rankings) or computer-based assessments in general. © 2017 Wiley Periodicals, Inc.
Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels
Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V.; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R.
2018-01-01
Background: Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for classifying squamous epithelium cervical intraepithelial neoplasia (CIN) into normal, CIN1, CIN2, and CIN3 grades. Methods: In this study, a deep learning (DL)-based nuclei segmentation approach is investigated that gathers localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. Results: The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. Conclusions: The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods. PMID:29619277
Anastasiadi, Maria; Mohareb, Fady; Redfern, Sally P; Berry, Mark; Simmonds, Monique S J; Terry, Leon A
2017-07-05
The present study represents the first major attempt to characterize the biochemical profile in different tissues of a large selection of apple cultivars sourced from the United Kingdom's National Fruit Collection comprising dessert, ornamental, cider, and culinary apples. Furthermore, advanced machine learning methods were applied with the objective to identify whether the phenolic and sugar composition of an apple cultivar could be used as a biomarker fingerprint to differentiate between heritage and mainstream commercial cultivars as well as govern the separation among primary usage groups and harvest season. A prediction accuracy of >90% was achieved with the random forest method for all three models. The results highlighted the extraordinary phytochemical potency and unique profile of some heritage, cider, and ornamental apple cultivars, especially in comparison to more mainstream apple cultivars. Therefore, these findings could guide future cultivar selection on the basis of health-promoting phytochemical content.
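The random forest idea used above (bootstrap resampling plus majority voting over many weak trees) can be sketched minimally with bagged one-dimensional decision stumps. The feature and labels below are invented for illustration (think of a single phenolic-content index); this is not the study's data or pipeline:

```python
import random

def train_stump(data):
    """Pick the single threshold split (on a 1-D feature) with best accuracy."""
    best = None
    for t in sorted(x for x, _ in data):
        for low, high in (("heritage", "commercial"), ("commercial", "heritage")):
            # For binary labels, a prediction is correct exactly when
            # (label == low) matches (x <= t).
            acc = sum((lab == low) == (x <= t) for x, lab in data) / len(data)
            if best is None or acc > best[0]:
                best = (acc, t, low, high)
    return best[1:]                      # (threshold, label_below, label_above)

def stump_predict(stump, x):
    t, low, high = stump
    return low if x <= t else high

def bagged_stumps(data, n_trees=25, seed=3):
    """Train each stump on a bootstrap resample of the data (bagging)."""
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def forest_predict(forest, x):
    votes = [stump_predict(s, x) for s in forest]
    return max(set(votes), key=votes.count)   # majority vote

# Invented 1-D feature values per cultivar, labeled by usage group.
data = [(0.9, "commercial"), (1.1, "commercial"), (1.3, "commercial"),
        (2.6, "heritage"), (2.9, "heritage"), (3.2, "heritage")]
forest = bagged_stumps(data)
print(forest_predict(forest, 0.8), forest_predict(forest, 3.0))
```

A real random forest also subsamples features at each split and grows full trees, which is what lets it exploit a whole phenolic-and-sugar profile rather than one index.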
Moradi, Sara; Fazlali, Alireza; Hamedi, Hamid
Background: Hydro-distillation (HD) is a traditional technique used in most industrial companies. Microwave-assisted hydro-distillation (MAHD) is an advanced HD technique that uses a microwave oven in the extraction process. Methods: In this research, MAHD of essential oils from the aerial parts (leaves) of rosemary (Rosmarinus officinalis L.) was studied, and the results were compared with those of conventional HD in terms of extraction time, extraction efficiency, chemical composition, quality of the essential oils, and cost of the operation. Results: Microwave hydro-distillation was superior in terms of saving energy and extraction time (30 min, compared to 90 min in HD). Chromatography was used for quantitative analysis of the essential oil composition. The quality of the essential oil improved in the MAHD method due to a 17% increase in oxygenated compounds. Conclusion: Consequently, microwave hydro-distillation can be used as a substitute for traditional hydro-distillation. PMID:29296263
Design of advanced ultrasonic transducers for welding devices.
Parrini, L
2001-11-01
A new high-frequency ultrasonic transducer has been conceived, designed, prototyped, and tested. In the design phase, an advanced approach was used and established. The method is based on an initial design estimate obtained with finite element method (FEM) simulations. The simulated ultrasonic transducers and resonators are then built and characterized experimentally through laser interferometry and electrical resonance spectra. The comparison of simulation results with experimental data allows the parameters of the FEM models to be adjusted and optimized. The resulting FEM simulations exhibit a remarkably high predictive potential and allow full control of the vibration behavior of the transducer. The new transducer is mounted on a wire bonder with a flange whose special geometry was calculated by means of FEM simulations. This flange allows the transducer to be attached to the wire bonder not only at longitudinal nodes but also at radial nodes of the ultrasonic field excited in the horn. This leads to a total decoupling of the transducer from the wire bonder, which had not been achieved before. The new approach to mounting ultrasonic transducers on a welding device is of major importance, not only for wire bonding but for all high-power ultrasound applications, and has been patented.
A time-accurate finite volume method valid at all flow velocities
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1993-01-01
A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic, and hypersonic flows) is presented. The numerical method is based on a finite volume formulation that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. A comparison of three generally accepted time-advancing schemes, i.e., the Simplified Marker-and-Cell (SMAC), Pressure-Implicit Splitting of Operators (PISO), and Iterative Time-Advancing (ITA) schemes, is made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and exhibits an undesirable strong dependence on the time-step size. The degraded numerical results obtained using the PISO are attributed to its second corrector step, which causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method incorporating the ITA is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil.
The calculated streaklines are in very good agreement with the experimentally obtained smoke picture. The calculated turbulent viscosity contours show that the transition from the laminar to the turbulent state, and the relaminarization, occur over a wide region in space as well as in time. The ensemble-averaged velocity profiles are also in good agreement with the measured data, and this agreement indicates that the numerical method, together with the multiple-time-scale turbulence equations, successfully predicts the unsteady transitional turbulence field. The chemical reactions of hydrogen in the vitiated supersonic airstream are described using 9 chemical species and 48 reaction steps. Fast chemistry cannot be used to describe the fine details (such as the instability) of chemically reacting flows, while reduced chemical kinetics cannot be used confidently because of the uncertainty contained in the reaction mechanisms; however, the use of detailed finite-rate chemistry may make it difficult to obtain a fully converged solution because of the coupling between the large number of flow, turbulence, and chemical equations. The numerical results obtained in the present study are in good agreement with the measured data. The good agreement is attributed to the numerical method, which can yield strongly converged results for the reacting flow, and to the use of the multiple-time-scale turbulence equations, which can accurately describe the mixing of the fuel and the oxidant.
Atmospheric Pressure Photoionization Tandem Mass Spectrometry of Androgens in Prostate Cancer
Lih, Fred Bjørn; Titus, Mark A.; Mohler, James L.; Tomer, Kenneth B.
2010-01-01
Androgen deprivation therapy is the most common treatment option for advanced prostate cancer. Almost all prostate cancers recur during androgen deprivation therapy, and new evidence suggests that androgen receptor activation persists despite castrate levels of circulating androgens. Quantitation of tissue levels of androgens is critical to understanding the mechanism of recurrence of prostate cancer during androgen deprivation therapy. A liquid chromatography atmospheric pressure photoionization tandem mass spectrometric method was developed for quantitation of tissue levels of androgens. Quantitation of the saturated keto-steroids dihydrotestosterone and 5-α-androstanedione required detection of a novel parent ion, [M + 15]+. The nature of this parent ion was explored and the method applied to prostate tissue and cell culture with comparison to results achieved using electrospray ionization. PMID:20560527
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparisons of computational and wind tunnel data and the enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted-residual finite-element method but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.
Techniques of EMG signal analysis: detection, processing, classification and applications
Hussain, M.S.; Mohd-Yasin, F.
2006-01-01
Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis, providing efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers with a good understanding of the EMG signal and its analysis procedures, knowledge that will help them develop more powerful, flexible, and efficient applications. PMID:16799694
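Typical time-domain features used in EMG analysis surveys of this kind (mean absolute value, RMS amplitude, zero-crossing count) can be sketched as follows; the signal here is a synthetic sine stand-in, not a real EMG recording:

```python
import math

def mav(signal):
    """Mean absolute value of the signal."""
    return sum(abs(s) for s in signal) / len(signal)

def rms(signal):
    """Root-mean-square amplitude."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def zero_crossings(signal):
    """Number of sign changes between consecutive samples."""
    return sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)

# Synthetic stand-in for an EMG trace: a 50 Hz sine sampled at 1 kHz
# for 0.1 s, with a half-sample phase offset so no sample is exactly zero.
fs, f = 1000, 50
signal = [math.sin(2 * math.pi * f * (n + 0.5) / fs) for n in range(100)]

print(round(mav(signal), 3))   # close to 2/pi ~ 0.637 for a unit sine
print(round(rms(signal), 3))   # close to 1/sqrt(2) ~ 0.707
print(zero_crossings(signal))  # 9 sign changes across these 5 cycles
```

In practice such features are computed over short sliding windows and fed to a classifier, e.g. for grasp recognition; real EMG also requires a noise threshold on the zero-crossing test.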
USDA-ARS?s Scientific Manuscript database
Two advanced backcross populations were developed between Bengal, a popular southern US tropical japonica rice (Oryza sativa L.) cultivar, and two different Oryza nivara accessions (IRGC100898; IRGC104705) to identify quantitative trait loci (QTLs) related to sheath blight (SB) disease resistance. ...
USDA-ARS?s Scientific Manuscript database
Recent advances in genome analysis and biochemical pathway mapping have advanced our understanding of how biological systems have evolved over time. Protein and DNA marker comparisons suggest that several of these systems are ancient in origin yet highly conserved in today's evolved species. ...
The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...
Mean curvature model for a quasi-static advancing meniscus: a drop tower test
NASA Astrophysics Data System (ADS)
Chen, Yongkang; Tavan, Noel; Weislogel, Mark
A critical geometric wetting condition resulting in a significant shift of a capillary fluid from one region of a container to another was recently demonstrated during experiments performed aboard the International Space Station (the Capillary Flow Experiments, Vane Gap test units, bulk shift phenomena). Such phenomena are of interest for advanced methods of controlling large quantities of liquids aboard spacecraft. The dynamics of the flows are well understood, but analytical models remain qualitative without the correct capillary pressure driving force for the shifting bulk fluid, in which one large interface (meniscus) advances while another recedes. To determine this pressure, an investigation of the mean curvature of the advancing meniscus is presented, inspired by earlier studies of receding bulk menisci in non-circular cylindrical containers. The approach is permissible only in the quasi-static limit. It will be shown that the mean curvature of the advancing bulk meniscus is related to that of the receding bulk meniscus, both of which are highly sensitive to container geometry and wetting conditions. The two meniscus curvatures are identical for any control parameter at the critical value identified by the Concus-Finn analysis, but they differ when the control parameter is below its critical value. Experiments along these lines are well suited to drop towers, and comparisons with analytical predictions implementing the mean curvature model are presented. The validation opens a pathway to the analysis of such flows in containers of great geometric complexity.
Lyon, Maureen E; Garvie, Patricia A; Briggs, Linda; He, Jianping; Malow, Robert; D’Angelo, Lawrence J; McCarter, Robert
2010-01-01
Purpose: To determine the safety of engaging HIV-positive (HIV+) adolescents in a Family Centered Advance Care (FACE) planning intervention. Patients and methods: We conducted a two-armed, randomized controlled clinical trial in two hospital-based outpatient clinics from 2006-2008 with HIV+ adolescents and their surrogates (n = 76). Three 60-90 minute sessions were conducted weekly. FACE intervention groups received the Lyon FCACP Survey©, the Respecting Choices® interview, and completion of The Five Wishes©. The Healthy Living Control (HLC) group received: Developmental History, Healthy Tips, and Future Planning (vocational, school or vocational rehabilitation). Three-month post-intervention outcomes were: completion of an advance directive (Five Wishes©); psychological adjustment (Beck Depression and Anxiety Inventories); quality of life (PedsQL™); and HIV symptoms (General Health Self-Assessment). Results: Adolescents had a mean age of 16 years; 40% were male; 92% were African-American; 68% had perinatally acquired HIV, and 29% had an AIDS diagnosis. FACE participants completed advance directives more often than controls, using a time-matched comparison (P < 0.001). Neither anxiety nor depression increased at clinically or statistically significant levels post-intervention. FACE adolescents maintained their quality of life. FACE families perceived their adolescents as worsening in their school (P = 0.018) and emotional (P = 0.029) quality of life at 3 months, compared with controls. Conclusions: Participating in advance care planning did not unduly distress HIV+ adolescents. PMID:22096382
Assessment of the hybrid propagation model, Volume 2: Comparison with the Integrated Noise Model
DOT National Transportation Integrated Search
2012-08-31
This is the second of two volumes of the report on the Hybrid Propagation Model (HPM), an advanced prediction model for aviation noise propagation. This volume presents comparisons of the HPM and the Integrated Noise Model (INM) for conditions of une...
A Comparison of Atmospheric Quantities Determined from Advanced WVR and Weather Analysis Data
NASA Astrophysics Data System (ADS)
Morabito, D.; Wu, L.; Slobin, S.
2017-05-01
Lower frequency bands used for deep space communications (e.g., 2.3 GHz and 8.4 GHz) are oversubscribed. Thus, NASA has become interested in using higher frequency bands (e.g., 26 GHz and 32 GHz) for telemetry, making use of the available wider bandwidth. However, these bands are more susceptible to atmospheric degradation. Currently, flight projects tend to be conservative in preparing their communications links, using worst-case assumptions that result in nonoptimal data return. We previously explored the use of weather forecasting over different weather condition scenarios to determine more optimal values of atmospheric attenuation and atmospheric noise temperature for use in telecommunications link design. In this article, we present the results of a comparison of meteorological parameters (columnar water vapor and liquid water content) estimated from multifrequency Advanced Water Vapor Radiometer (AWVR) data with those estimated from weather analysis tools (FNL). We find that for the Deep Space Network's Goldstone and Madrid tracking sites, the statistics from the two methods are in reasonable agreement. We can then use the statistics of these quantities based on FNL runs to estimate statistics of atmospheric signal degradation for tracking sites that do not possess multiyear WVR data sets, such as those of the NASA Near-Earth Network (NEN). The resulting statistics of atmospheric attenuation and atmospheric noise temperature increase can then be used in link budget calculations.
A Method for Large Eddy Simulation of Acoustic Combustion Instabilities
NASA Astrophysics Data System (ADS)
Wall, Clifton; Moin, Parviz
2003-11-01
A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low-Mach-number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. A comparison of LES results to the reacting experiments of Besson et al. will be presented.
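The benefit of implicit time advancement noted above can be illustrated on a standard model problem (1-D diffusion with Dirichlet ends; this is a generic stability demonstration, not the paper's Helmholtz scheme): an explicit step diverges once the step size exceeds its stability limit, while a backward-Euler step remains bounded for any step size.

```python
def step_explicit(u, r):
    """Forward-Euler step of u_t = u_xx (zero Dirichlet ends), r = dt/dx^2."""
    return [0.0] + [u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
                    for i in range(1, len(u) - 1)] + [0.0]

def step_implicit(u, r):
    """Backward-Euler step: solve (I - r*L) u_new = u with the Thomas algorithm."""
    n = len(u) - 2                       # number of interior unknowns
    a, b, c = -r, 1 + 2*r, -r            # constant tridiagonal coefficients
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, u[1] / b
    for i in range(1, n):                # forward elimination
        m = b - a * cp[i-1]
        cp[i] = c / m
        dp[i] = (u[i+1] - a * dp[i-1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = dp[i] - cp[i] * x[i+1]
    return [0.0] + x + [0.0]

# Initial spike; r = 2.0 is far above the explicit stability limit r <= 0.5.
u0 = [0.0]*10 + [1.0] + [0.0]*10
r = 2.0
ue = ui = u0
for _ in range(50):
    ue = step_explicit(ue, r)
    ui = step_implicit(ui, r)
print(max(abs(v) for v in ue) > 1e6)   # True: explicit scheme diverges
print(max(abs(v) for v in ui) < 1.0)   # True: implicit solution stays bounded
```

The same trade-off motivates the Helmholtz formulation: the implicit step costs a linear solve per step but removes the acoustic time-step restriction entirely.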
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwantes, Jon M.
Founded in 1996 upon the initiative of the "Group of 8" governments (G8), the Nuclear Forensics International Technical Working Group (ITWG) is an ad hoc organization of official nuclear forensics practitioners (scientists, law enforcement, and regulators) that can be called upon to provide technical assistance to the global community in the event of a seizure of nuclear or radiological materials. The ITWG is supported by and affiliated with nearly 40 countries and international partner organizations, including the International Atomic Energy Agency (IAEA), EURATOM, INTERPOL, EUROPOL, and the United Nations Interregional Crime and Justice Research Institute (UNICRI) (Figure 1). Besides providing a network of nuclear forensics laboratories that are able to assist the global community during a nuclear smuggling event, the ITWG is also committed to the advancement of the science of nuclear forensic analysis, largely through participation in periodic tabletop and Collaborative Materials Exercises (CMXs). Exercise scenarios use "real world" samples with realistic forensic investigation time constraints and reporting requirements. These exercises are designed to promote best practices in the field and to test, evaluate, and improve new technical capabilities, methods, and techniques in order to advance the science of nuclear forensics. Past efforts to advance nuclear forensic science have also included scenarios that asked laboratories to adapt conventional forensics methods (e.g., DNA, fingerprints, tool marks, and document comparisons) for collecting and preserving evidence commingled with radioactive materials.
NASA Astrophysics Data System (ADS)
Xu, Yong; Dong, Wen-Cai
2013-08-01
A frequency-domain analysis method based on the three-dimensional translating-pulsating (3DTP) source Green function is developed to investigate wave loads and free motions of two ships advancing on parallel courses in waves. Two experiments were carried out to measure, respectively, the wave loads and the free motions of a pair of side-by-side ship models advancing at an identical speed in head regular waves. For comparison, each model was also tested alone. Predictions obtained with the present solution are in favorable agreement with the model tests and are more accurate than those of the traditional method based on the three-dimensional pulsating (3DP) source Green function. Numerical resonances and peak shifts appear in the 3DP predictions, resulting from wave energy trapped in the gap between the two ships and the extremely inhomogeneous wave load distribution on each hull. They are eliminated by the 3DTP method, in which the forward speed affects the free surface and most of the wave energy can escape from the gap. Both the experiments and the present predictions show that hydrodynamic interaction effects on wave loads and free motions are significant. The present solver may serve as a validated tool to predict the wave loads and motions of two vessels during replenishment at sea, and may help to evaluate the effects of hydrodynamic interaction on ship safety during replenishment operations.
Nondestructive Evaluation of Advanced Materials with X-ray Phase Mapping
NASA Technical Reports Server (NTRS)
Hu, Zhengwei
2005-01-01
X-ray radiation has been widely used for imaging applications since Röntgen first discovered X-rays over a century ago. Its large penetration depth makes it ideal for the nondestructive visualization of the internal structure and/or defects of materials, unobtainable otherwise. Currently used nondestructive evaluation (NDE) tools, X-ray radiography and tomography, are absorption-based, and work well in heavy-element materials where density or composition variations due to internal structure or defects are high enough to produce appreciable absorption contrast. However, in many cases where materials are lightweight and/or composites that have similar mass absorption coefficients, the conventional absorption-based X-ray methods for NDE become less useful. Indeed, the lightweight and ultra-high-strength requirements for the most advanced materials used or developed for current flight missions and future space exploration pose a great challenge to the standard NDE tools in that the absorption contrast arising from the internal structure of these materials is often too weak to be resolved. In this presentation, a solution to the problem, the use of phase information of X-rays for phase-contrast X-ray imaging, will be discussed, along with a comparison between the absorption-based and phase-contrast imaging methods. The latest results on phase-contrast X-ray imaging of lightweight Space Shuttle foam in 2D and 3D will be presented, demonstrating new opportunities to solve the challenging issues encountered in advanced materials development and processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
2017-09-03
Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles in the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model can perform very well for a wide range of flow problems.
Econometric comparisons of liquid rocket engines for dual-fuel advanced earth-to-orbit shuttles
NASA Technical Reports Server (NTRS)
Martin, J. A.
1978-01-01
Econometric analyses of advanced Earth-to-orbit vehicles indicate that there are economic benefits from development of new vehicles beyond the space shuttle as traffic increases. Vehicle studies indicate the advantage of the dual-fuel propulsion in single-stage vehicles. This paper shows the economic effect of incorporating dual-fuel propulsion in advanced vehicles. Several dual-fuel propulsion systems are compared to a baseline hydrogen and oxygen system.
A Comparison of Satellite Conjunction Analysis Screening Tools
2011-09-01
visualization tool. Version 13.1.4 for Linux was tested. The SOAP conjunction analysis function does not have the capacity to perform the large...was examined by SOAP to confirm the conjunction. STK Advanced CAT STK Advanced CAT (Conjunction Analysis Tools) is an add-on module for the STK ...run with each tool. When attempting to perform the seven day all vs all analysis with STK Advanced CAT, the program consistently crashed during report
Control of Disturbing Loads in Residential and Commercial Buildings via Geometric Algebra
2013-01-01
Many definitions have been formulated to represent nonactive power for distorted voltages and currents in electronic and electrical systems. Unfortunately, no single universally suitable representation has been accepted as a prototype for this power component. This paper defines a nonactive power multivector from the most advanced multivectorial power theory based on geometric algebra (GA). The new concept can be of particular importance for harmonic load compensation, identification, and metering, among other applications. Likewise, this paper is concerned with a pioneering method for the compensation of disturbing loads. In this way, we propose a multivectorial relative quality index δ~ associated with the power multivector. It can be assumed as a new index for power quality evaluation, harmonic source detection, and power factor improvement in residential and commercial buildings. The proposed method consists of a single-point strategy based on a comparison among different relative quality index multivectors, which may be measured at the different loads on the same metering point. The comparison can give information, with magnitude, direction, and sense, on the presence of disturbing loads. A numerical example is used to illustrate the capabilities of the suggested approach. PMID:24260017
Survival and mortality among users and non-users of hydroxyurea with sickle cell disease
de Araujo, Olinda Maria Rodrigues; Ivo, Maria Lúcia; Ferreira, Marcos Antonio; Pontes, Elenir Rose Jardim Cury; Bispo, Ieda Maria Gonçalves Pacce; de Oliveira, Eveny Cristine Luna
2015-01-01
OBJECTIVE: to estimate survival, mortality, and cause of death among users and non-users of hydroxyurea with sickle cell disease. METHOD: cohort study with retrospective data collection, from 1980 to 2010, of patients receiving inpatient treatment in two Brazilian public hospitals. The survival probability was determined using the Kaplan-Meier estimator and survival calculations (SPSS version 10.0), with comparison between survival curves using the log-rank method. The level of significance was p=0.05. RESULTS: of 63 patients, 87% had sickle cell anemia, with 39 using hydroxyurea, a mean time of use of the drug of 20.0±10.0 years, and a mean dose of 17.37±5.4 to 20.94±7.2 mg/kg/day, raising the fetal hemoglobin. In the comparison between those using hydroxyurea and those not, the survival curve was greater among the users (p=0.014). A total of 10 deaths occurred, at a mean age of 28.1 years old, with acute respiratory failure as the main cause. CONCLUSION: the survival curve is greater among the users of hydroxyurea. The results indicate the importance of nurses incorporating the therapeutic advances of hydroxyurea into their care actions. PMID:25806633
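The Kaplan-Meier product-limit estimator used in the abstract above is a standard survival-analysis tool; a minimal sketch of the calculation, using invented follow-up data rather than the study's patient records, is:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival curve (minimal sketch).

    times:  follow-up time for each subject
    events: 1 if the event (e.g. death) occurred at that time, 0 if censored
    Returns a list of (event time, survival probability) pairs; at each
    distinct event time t, S is multiplied by (1 - deaths / at-risk).
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        leaving = sum(1 for tt, e in data if tt == t)  # deaths + censored at t
        if deaths > 0:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= leaving
        i += leaving
    return curve

# Hypothetical cohort: events at t=1, 3, 4; censoring at t=2 and 5.
print(kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0]))
```

For the five hypothetical subjects, the curve drops to 0.8 after the first event, then to 0.8 × 2/3 and 0.8 × 2/3 × 1/2 as the at-risk set shrinks past the censored subject. A real analysis (as in the study, which used SPSS) would add confidence intervals and a log-rank comparison between groups.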
Khan, Haseeb Ahmad
2005-01-28
Due to their versatile diagnostic and prognostic fidelity, molecular signatures or fingerprints are anticipated as the most powerful tools for cancer management in the near future. Notwithstanding the experimental advancements in microarray technology, methods for analyzing either whole arrays or gene signatures have not been firmly established. Recently, an algorithm, ArraySolver, was reported by Khan for two-group comparison of microarray gene expression data using the two-tailed Wilcoxon signed-rank test. Most molecular signatures are composed of two sets of genes (hybrid signatures) wherein up-regulation of one set and down-regulation of the other set collectively define the purpose of a gene signature. Since the direction of a selected gene's expression (positive or negative) with respect to a particular disease condition is known, application of one-tailed statistics could be a more relevant choice. A novel method, ArrayVigil, is described for comparing hybrid signatures using the segregated-one-tailed (SOT) Wilcoxon signed-rank test, and the results are compared with integrated-two-tailed (ITT) procedures (SPSS and ArraySolver). ArrayVigil resulted in lower P values than those obtained from ITT statistics when comparing real data from four signatures.
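The one-tailed versus two-tailed distinction at the heart of the abstract above can be made concrete with a small exact signed-rank computation. This is a generic sketch, not the ArrayVigil or ArraySolver implementation; the differences are invented, and ties and zero differences are ignored for simplicity:

```python
from itertools import combinations

def exact_wilcoxon_signed_rank(diffs, alternative="two-sided"):
    """Exact Wilcoxon signed-rank test for small samples (sketch).

    Assumes no zero differences and no ties in |diff|.
    alternative: "two-sided", "greater" (median diff > 0), or "less".
    Returns (W+, p-value), where W+ is the sum of ranks of positive diffs.
    """
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r                      # rank 1 = smallest |diff|
    w_plus = sum(ranks[i] for i in range(n) if diffs[i] > 0)
    # Under H0, each rank is positive or negative with probability 1/2,
    # so every subset of {1..n} is an equally likely value set for W+.
    all_ranks = range(1, n + 1)
    sums = [sum(c) for k in range(n + 1) for c in combinations(all_ranks, k)]
    total = len(sums)                     # 2**n equally likely outcomes
    p_greater = sum(1 for s in sums if s >= w_plus) / total
    p_less = sum(1 for s in sums if s <= w_plus) / total
    if alternative == "greater":
        return w_plus, p_greater
    if alternative == "less":
        return w_plus, p_less
    return w_plus, min(1.0, 2 * min(p_greater, p_less))

diffs = [1.1, 2.3, 0.4, 1.8, -0.7, 2.9]   # invented expression differences
print(exact_wilcoxon_signed_rank(diffs, "greater"))
print(exact_wilcoxon_signed_rank(diffs, "two-sided"))
```

For these made-up data the one-tailed p-value is 3/64 ≈ 0.047 while the two-tailed value is 6/64 ≈ 0.094, illustrating why a directional test can reach significance where the two-tailed version does not when the direction of regulation is known in advance.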
Control of disturbing loads in residential and commercial buildings via geometric algebra.
Castilla, Manuel-V
2013-01-01
Many definitions have been formulated to represent nonactive power for distorted voltages and currents in electronic and electrical systems. Unfortunately, no single universally suitable representation has been accepted as a prototype for this power component. This paper defines a nonactive power multivector from the most advanced multivectorial power theory based on geometric algebra (GA). The new concept can be of particular importance for harmonic load compensation, identification, and metering, among other applications. Likewise, this paper is concerned with a pioneering method for the compensation of disturbing loads. In this way, we propose a multivectorial relative quality index δ(~) associated with the power multivector. It can be assumed as a new index for power quality evaluation, harmonic source detection, and power factor improvement in residential and commercial buildings. The proposed method consists of a single-point strategy based on a comparison among different relative quality index multivectors, which may be measured at the different loads on the same metering point. The comparison can give information, with magnitude, direction, and sense, on the presence of disturbing loads. A numerical example is used to illustrate the capabilities of the suggested approach.
Ito, Atsuo; Sogo, Yu; Yamazaki, Atsushi; Aizawa, Mamoru; Osaka, Akiyoshi; Hayakawa, Satoshi; Kikuchi, Masanori; Yamashita, Kimihiro; Tanaka, Yumi; Tadokoro, Mika; de Sena, Lídia Ágata; Buchanan, Fraser; Ohgushi, Hajime; Bohner, Marc
2015-10-01
A potential standard method for measuring the relative dissolution rate to estimate the resorbability of calcium-phosphate-based ceramics is proposed. Tricalcium phosphate (TCP), magnesium-substituted TCP (MgTCP) and zinc-substituted TCP (ZnTCP) were dissolved in a buffer solution free of calcium and phosphate ions at pH 4.0, 5.5 or 7.3 at nine research centers. Relative values of the initial dissolution rate (relative dissolution rates) were in good agreement among the centers. The relative dissolution rate coincided with the relative volume of resorption pits of ZnTCP in vitro. The relative dissolution rate coincided with the relative resorbed volume in vivo in the case of comparison between microporous MgTCPs with different Mg contents and similar porosity. However, the relative dissolution rate was in poor agreement with the relative resorbed volume in vivo in the case of comparison between microporous TCP and MgTCP due to the superimposition of the Mg-mediated decrease in TCP solubility on the Mg-mediated increase in the amount of resorption. An unambiguous conclusion could not be made as to whether the relative dissolution rate is predictive of the relative resorbed volume in vivo in the case of comparison between TCPs with different porosity. The relative dissolution rate may be useful for predicting the relative amount of resorption for calcium-phosphate-based ceramics having different solubility under the condition that the differences in the materials compared have little impact on the resorption process such as the number and activity of resorbing cells. The evaluation and subsequent optimization of the resorbability of calcium phosphate are crucial in the use of resorbable calcium phosphates. 
Although the resorbability of calcium phosphates has usually been evaluated in vivo, establishment of a standard in vitro method that can predict in vivo resorption is beneficial for accelerating the development and commercialization of new resorbable calcium phosphate materials, as well as for reducing the use of animals. However, only a few studies have proposed such an in vitro method in which a direct comparison between in vitro and in vivo resorption was carried out. We propose here an in vitro method based on measuring the dissolution rate. The efficacy and limitations of the method were evaluated by international round-robin tests as well as by comparison with in vivo resorption studies, for future standardization. This study was carried out as one of the Versailles Projects on Advanced Materials and Standards (VAMAS). Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Doshi, Urmi; Hamelberg, Donald
2015-05-01
Accelerated molecular dynamics (aMD) has been proven to be a powerful biasing method for enhanced sampling of biomolecular conformations on general-purpose computational platforms. Biologically important long timescale events that are beyond the reach of standard molecular dynamics can be accessed without losing the detailed atomistic description of the system in aMD. Over other biasing methods, aMD offers the advantages of tuning the level of acceleration to access the desired timescale without any advance knowledge of the reaction coordinate. Recent advances in the implementation of aMD and its applications to small peptides and biological macromolecules are reviewed here along with a brief account of all the aMD variants introduced in the last decade. In comparison to the original implementation of aMD, the recent variant in which all the rotatable dihedral angles are accelerated (RaMD) exhibits faster convergence rates and significant improvement in statistical accuracy of retrieved thermodynamic properties. RaMD in conjunction with accelerating diffusive degrees of freedom, i.e. dual boosting, has been rigorously tested for the most difficult conformational sampling problem, protein folding. It has been shown that RaMD with dual boosting is capable of efficiently sampling multiple folding and unfolding events in small fast folding proteins. RaMD with the dual boost approach opens exciting possibilities for sampling multiple timescales in biomolecules. While equilibrium properties can be recovered satisfactorily from aMD-based methods, directly obtaining dynamics and kinetic rates for larger systems presents a future challenge. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dawson, A.; Trachsel, M.; Goring, S. J.; Paciorek, C. J.; McLachlan, J. S.; Jackson, S. T.; Williams, J. W.
2017-12-01
Pollen records have been extensively used to reconstruct past changes in vegetation and to study the underlying processes. However, developing the statistical techniques needed to accurately represent both data and process uncertainties is a formidable challenge. Recent advances in paleoecoinformatics (e.g. the Neotoma Paleoecology Database and the European Pollen Database), Bayesian age-depth models, process-based pollen-vegetation models (PVMs), and Bayesian hierarchical modeling have pushed paleovegetation reconstructions forward to a point where multiple sources of uncertainty can be incorporated into reconstructions, which in turn enables new hypotheses to be posed and more rigorous integration of paleovegetation data with earth system models and terrestrial ecosystem models. Several kinds of PVMs have been developed, notably LOVE/REVEALS, STEPPS, and classical transfer functions such as the modern analog technique. LOVE/REVEALS has been adopted as the standard method for the LandCover6k effort to develop quantitative reconstructions of land cover for the Holocene, while STEPPS has been developed recently as part of the PalEON project and applied to reconstruct, with uncertainty, shifts in forest composition in New England and the upper Midwest during the late Holocene. Each PVM has different assumptions and structure and uses different input data, but few comparisons among approaches yet exist. Here, we present new reconstructions of land cover change in northern North America during the Holocene based on LOVE/REVEALS and data drawn from the Neotoma database, and compare STEPPS-based reconstructions to those from LOVE/REVEALS. These parallel developments provide an opportunity to compare and contrast models, and to begin to generate continental-scale reconstructions, with explicit uncertainties, that can provide a base for interdisciplinary research within the biogeosciences.
We show how STEPPS provides an important benchmark for past land-cover reconstruction, and how the LandCover6k effort in North America advances our understanding of the past by allowing cross-continent comparisons using standardized methods and quantifying the impact of humans in the early Anthropocene.
Numerical comparisons of ground motion predictions with kinematic rupture modeling
NASA Astrophysics Data System (ADS)
Yuan, Y. O.; Zurek, B.; Liu, F.; deMartin, B.; Lacasse, M. D.
2017-12-01
Recent advances in large-scale wave simulators allow for the computation of seismograms at unprecedented levels of detail and for areas sufficiently large to be relevant to small regional studies. In some instances, detailed information of the mechanical properties of the subsurface has been obtained from seismic exploration surveys, well data, and core analysis. Using kinematic rupture modeling, this information can be used with a wave propagation simulator to predict the ground motion that would result from an assumed fault rupture. The purpose of this work is to explore the limits of wave propagation simulators for modeling ground motion in different settings, and in particular, to explore the numerical accuracy of different methods in the presence of features that are challenging to simulate such as topography, low-velocity surface layers, and shallow sources. In the main part of this work, we use a variety of synthetic three-dimensional models and compare the relative costs and benefits of different numerical discretization methods in computing the seismograms of realistic-size models. The finite-difference method, the discontinuous-Galerkin method, and the spectral-element method are compared for a range of synthetic models having different levels of complexity such as topography, large subsurface features, low-velocity surface layers, and the location and characteristics of fault ruptures represented as an array of seismic sources. While some previous studies have already demonstrated that unstructured-mesh methods can sometimes tackle complex problems (Moczo et al.), we investigate the trade-off between unstructured-mesh methods and regular-grid methods for a broad range of models and source configurations. Finally, for comparison, our direct simulation results are briefly contrasted with those predicted by a few phenomenological ground-motion prediction equations, and a workflow for accurately predicting ground motion is proposed.
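Of the discretizations compared above, the finite-difference method is the simplest to sketch. The following toy example solves the 1-D wave equation with a second-order leapfrog scheme, far simpler than the paper's 3-D ground-motion models; grid size, initial condition, and the choice of Courant number 1 (where this scheme is exact at the grid points) are all illustrative:

```python
import math

def wave_fd(nx=100, t_final=1.0, c=1.0):
    """Leapfrog finite-difference solver for u_tt = c^2 u_xx on [0, 1]
    with fixed ends, initial shape sin(pi x), and zero initial velocity.
    Run at Courant number C = c*dt/dx = 1, where the standard scheme
    reproduces the exact standing wave sin(pi x) cos(pi c t) at grid points.
    """
    dx = 1.0 / nx
    dt = dx / c                                   # Courant number exactly 1
    steps = round(t_final / dt)
    x = [i * dx for i in range(nx + 1)]
    u_prev = [math.sin(math.pi * xi) for xi in x]  # u at t = 0
    u = [0.0] * (nx + 1)                           # first step, d'Alembert form
    for i in range(1, nx):
        u[i] = 0.5 * (u_prev[i - 1] + u_prev[i + 1])
    for _ in range(steps - 1):
        u_next = [0.0] * (nx + 1)                  # fixed ends stay at 0
        for i in range(1, nx):
            # leapfrog update with C = 1: u_next = u_right + u_left - u_old
            u_next[i] = u[i + 1] + u[i - 1] - u_prev[i]
        u_prev, u = u, u_next
    return x, u, steps * dt
```

At t = 1 with c = 1 the exact solution is -sin(pi x), and the computed grid values match it to rounding error; away from C = 1 the scheme is only second-order accurate, which is the kind of discretization trade-off the comparison study examines.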
PWSCC Assessment by Using Extended Finite Element Method
NASA Astrophysics Data System (ADS)
Lee, Sung-Jun; Lee, Sang-Hwan; Chang, Yoon-Suk
2015-12-01
The head penetration nozzle of the control rod driving mechanism (CRDM) is known to be susceptible to primary water stress corrosion cracking (PWSCC) due to the welding-induced residual stress. In particular, the J-groove dissimilar metal weld regions have received much attention in previous studies. However, even though several advanced techniques such as the weight function and finite element alternating methods have been introduced to predict the occurrence of PWSCC, there are still difficulties with respect to applicability and efficiency. In this study, the extended finite element method (XFEM), which allows convenient crack element modeling by enriching degrees of freedom (DOF) with special displacement functions, was employed to evaluate the structural integrity of the CRDM head penetration nozzle. The resulting stress intensity factors of surface cracks were verified, to confirm the reliability of the proposed method, through comparison with those suggested in the American Society of Mechanical Engineers (ASME) code. The detailed results from the FE analyses are fully discussed in the manuscript.
Yu, Hua-Gen
2015-01-28
We report a rigorous full dimensional quantum dynamics algorithm, the multi-layer Lanczos method, for computing vibrational energies and dipole transition intensities of polyatomic molecules without any dynamics approximation. The multi-layer Lanczos method is developed by using a few advanced techniques including the guided spectral transform Lanczos method, multi-layer Lanczos iteration approach, recursive residue generation method, and dipole-wavefunction contraction. The quantum molecular Hamiltonian at the total angular momentum J = 0 is represented in a set of orthogonal polyspherical coordinates so that the large amplitude motions of vibrations are naturally described. In particular, the algorithm is general and problem-independent. An application is illustrated by calculating the infrared vibrational dipole transition spectrum of CH₄ based on the ab initio T8 potential energy surface of Schwenke and Partridge and the low-order truncated ab initio dipole moment surfaces of Yurchenko and co-workers. A comparison with experiments is made. The algorithm is also applicable for Raman polarizability active spectra.
Split Space-Marching Finite-Volume Method for Chemically Reacting Supersonic Flow
NASA Technical Reports Server (NTRS)
Rizzi, Arthur W.; Bailey, Harry E.
1976-01-01
A space-marching finite-volume method employing a nonorthogonal coordinate system and using a split differencing scheme for calculating steady supersonic flow over aerodynamic shapes is presented. It is a second-order-accurate mixed explicit-implicit procedure that solves the inviscid adiabatic and nondiffusive equations for chemically reacting flow in integral conservation-law form. The relationship between the finite-volume and differential forms of the equations is examined and the relative merits of each discussed. The method admits initial Cauchy data situated on any arbitrary surface and integrates them forward along a general curvilinear coordinate, distorting and deforming the surface as it advances. The chemical kinetics term is split from the convective terms which are themselves dimensionally split, thereby freeing the fluid operators from the restricted step size imposed by the chemical reactions and increasing the computational efficiency. The accuracy of this splitting technique is analyzed, a sufficient stability criterion is established, a representative flow computation is discussed, and some comparisons are made with another method.
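The operator-splitting idea described above, separating the stiff chemical-kinetics term from the convective terms so each can use its own natural step size, can be sketched in one dimension. The linear advection-decay model below and all of its parameters are illustrative stand-ins, not the paper's reacting-flow equations: the advection substep uses an explicit upwind update (limited by the CFL condition), while the "reaction" substep is integrated exactly, free of that restriction:

```python
import math

def split_advect_react(u, c, k, dx, dt, steps):
    """Lie (Godunov) operator splitting for u_t + c u_x = -k u on a
    periodic 1-D grid (illustrative sketch). Each step applies an
    explicit upwind advection substep, then an exact decay substep."""
    cfl = c * dt / dx                 # advection stability requires cfl <= 1
    decay = math.exp(-k * dt)         # reaction substep integrated exactly
    n = len(u)
    for _ in range(steps):
        # upwind differencing for c > 0; u[i - 1] wraps periodically at i = 0
        u = [u[i] - cfl * (u[i] - u[i - 1]) for i in range(n)]
        u = [decay * ui for ui in u]
    return u
```

Because the upwind substep conserves the total of u on a periodic grid and the decay substep is exact, the total mass after time T equals the initial total times exp(-k T) regardless of the advection error, one easy way to check a splitting implementation.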
Ohmic Heating: An Emerging Concept in Organic Synthesis.
Silva, Vera L M; Santos, Luis M N B F; Silva, Artur M S
2017-06-12
Ohmic heating, also known as direct Joule heating, is an advanced thermal processing method, mainly used in the food industry to rapidly increase the temperature for either cooking or sterilization purposes. Its use in organic synthesis, in the heating of chemical reactors, is an emerging method that shows great potential and whose development has started recently. This Concept article focuses on the use of ohmic heating as a new tool for organic synthesis. It presents the fundamentals of ohmic heating and makes a qualitative and quantitative comparison with other common heating methods. A brief description of the ohmic reactor prototype in operation is presented, as well as recent examples of its use in organic synthesis at laboratory scale, thus showing the current state of the research. The advantages and limitations of this heating method, as well as its main current applications, are also discussed. Finally, the prospects and potential implications of ohmic heating for future research in chemical synthesis are proposed. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
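The Joule-heating arithmetic behind the concept is simple: power dissipated in a resistive load is P = V²/R, and the resulting temperature rise follows from the heat capacity of the sample. A back-of-the-envelope sketch, assuming an ideal resistive load with no heat losses and using invented values (this is not a model of the article's reactor):

```python
def ohmic_temperature_rise(voltage, resistance, seconds, mass_kg,
                           specific_heat=4186.0):
    """Ideal Joule-heating estimate: all electrical energy heats the sample.

    P = V^2 / R [W]; delta_T = P * t / (m * c) [K].
    Default specific heat is that of water, 4186 J/(kg K).
    """
    power = voltage ** 2 / resistance          # watts dissipated in the load
    energy = power * seconds                   # joules delivered over time t
    return energy / (mass_kg * specific_heat)  # temperature rise in kelvin
```

For example, 100 V across a 10 Ω load dissipates 1 kW, so 60 s of heating delivers 60 kJ and would raise 1 kg of water by roughly 14 K in this idealized picture; real reactors lose heat and have temperature-dependent conductivity, which is part of what makes quantitative comparison with other heating methods interesting.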
Design of an advanced bundle divertor for the Demonstration Tokamak Hybrid Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, T.F.; Lee, A.Y.; Ruck, G.W.
1979-01-25
The conclusion of this work is that a bundle divertor, using an improved method of designing the magnetic field configuration, is feasible for the Demonstration Tokamak Hybrid Reactor (DTHR) investigated by Westinghouse. The most significant achievement of this design is the reduction in current density (1 kA/cm²) in the divertor coils in comparison to the overall averaged current densities per tesla of field to be nulled for DITE (25 kA/cm²) and for ISX-B (11 kA/cm²). Therefore, superconducting magnets can be built into the tight space available with a sound mechanical structure.
Computational Design of Functional Ca-S-H and Oxide-Doped Alloy Systems
NASA Astrophysics Data System (ADS)
Yang, Shizhong; Chilla, Lokeshwar; Yang, Yan; Li, Kuo; Wicker, Scott; Zhao, Guang-Lin; Khosravi, Ebrahim; Bai, Shuju; Zhang, Boliang; Guo, Shengmin
Computer-aided functional materials design accelerates the discovery of novel materials. This presentation will cover our recent research advances in property prediction for the Ca-S-H system and in property simulation and experimental validation for oxide-doped high-entropy alloys. Several recently developed computational materials design methods were applied to predict the physical and chemical properties of the two systems. A comparison of simulation results with the corresponding experimental data will be presented. This research is partially supported by NSF CIMM project (OIA-15410795 and the Louisiana BoR), NSF HBCU Supplement climate change and ecosystem sustainability subproject 3, and LONI high performance computing time allocation loni mat bio7.
Interior noise considerations for advanced high-speed turboprop aircraft
NASA Technical Reports Server (NTRS)
Mixson, J. S.; Farassat, F.; Leatherwood, J. D.; Prydz, R.; Revell, J. D.
1982-01-01
This paper describes recent research on noise generated by high-speed propellers, on noise transmission through acoustically treated aircraft sidewalls, and on subjective response to simulated turboprop noise. The propeller noise discussion focuses on theoretical prediction methods for complex blade shapes designed for low noise in Mach 0.8 flight and on comparisons with experimental test results. Noise transmission experiments using a 168 cm diameter aircraft fuselage model and scaled heavy double-wall treatments indicate that the treatments perform well and that the predictions are usually conservative. Studies of subjective comfort response in an anechoic environment are described for noise signatures having combinations of broadband and propeller-type tone components.
NASA Astrophysics Data System (ADS)
Penetrante, B. M.
1993-08-01
The physics and chemistry of non-thermal plasma processing for post-combustion NO(x) control in internal combustion engines are discussed. A comparison of electron beam and electrical discharge processing is made regarding their power consumption, radical production, NO(x) removal mechanisms, and by-product formation. Pollution control applications present a good opportunity for transferring pulsed power techniques to the commercial sector. However, unless advances are made to drastically reduce the price and power consumption of electron beam sources and pulsed power systems, these plasma techniques will not become commercially competitive with conventional thermal or surface-catalytic methods.
Nabavi, Sheida
2016-08-15
With advances in technologies, huge amounts of multiple types of high-throughput genomics data are available. These data have tremendous potential to identify new and clinically valuable biomarkers to guide the diagnosis, assessment of prognosis, and treatment of complex diseases, such as cancer. Integrating, analyzing, and interpreting big and noisy genomics data to obtain biologically meaningful results, however, remains highly challenging. Mining genomics datasets by utilizing advanced computational methods can help to address these issues. To facilitate the identification of a short list of biologically meaningful genes as candidate drivers of anti-cancer drug resistance from an enormous amount of heterogeneous data, we employed statistical machine-learning techniques and integrated genomics datasets. We developed a computational method that integrates gene expression, somatic mutation, and copy number aberration data of sensitive and resistant tumors. In this method, an integrative method based on module network analysis is applied to identify potential driver genes. This is followed by cross-validation and a comparison of the results of the sensitive and resistant groups to obtain the final list of candidate biomarkers. We applied this method to the ovarian cancer data from The Cancer Genome Atlas. The final result contains biologically relevant genes, such as COL11A1, which has been reported as a cisplatin resistance biomarker for epithelial ovarian carcinoma in several recent studies. The described method yields a short list of aberrant genes that also control the expression of their co-regulated genes. The results suggest that this unbiased, data-driven computational method can identify biologically relevant candidate biomarkers. It can be utilized in a wide range of applications that compare two conditions with highly heterogeneous datasets.
New method of 2-dimensional metrology using mask contouring
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka
2008-10-01
We have developed a new method of accurately profiling and measuring a mask shape by utilizing a mask CD-SEM. The method is intended to realize the high accuracy, stability, and reproducibility of the mask CD-SEM by adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with a conventional image processing method for contour profiling, this edge detection method can create profiles with much higher accuracy, comparable with that of CD-SEM for semiconductor device CD measurement. By utilizing high-precision contour profiles, the method realizes two-dimensional metrology for refined patterns that had been difficult to measure conventionally. In this report, we introduce the algorithm in general, the experimental results, and the application in practice. As shrinkage of the design rule for semiconductor devices has advanced further, aggressive OPC (Optical Proximity Correction) is indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge in mask making cost have become big concerns to device manufacturers. That is to say, demands on quality are becoming stringent because of the enormous growth in data quantity with the increase of refined patterns in photomask manufacture. As a result, massive numbers of simulated errors occur in mask inspection, which lengthens the mask production and inspection period, increases cost, and extends delivery time. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with this problem, we propose two-dimensional metrology for refined patterns as the best method for a DFM solution.
Rewriting in Advanced Composition.
ERIC Educational Resources Information Center
Stone, William B.
A college English instructor made an informal comparison of rewriting habits of students in a freshman composition course and two advanced composition courses. Notes kept on student rewriting focused on this central question: given peer and instructor response to their papers and a choice as to what and how to rewrite, what will students decide to…
A Comparison of Adolescents' Friendship Networks by Advanced Coursework Participation Status
ERIC Educational Resources Information Center
Barber, Carolyn; Wasson, Jillian Woodford
2015-01-01
Friendships serve as a source of support and as a context for developing social competence. Although advanced coursework may provide a unique context for the development of friendships, more research is needed to explore exactly what differences exist. Using the National Longitudinal Study of Adolescent Health and the Adolescent Health and…
USDA-ARS?s Scientific Manuscript database
Formosan subterranean termites, Coptotermes formosanus, are an important worldwide pest. Molecular gene expression is an important tool for understanding the physiology of organisms. The recent advancement of molecular tools for Coptotermes formosanus is leading to advancement of the understanding ...
A Comparison of State Advance Directive Documents
ERIC Educational Resources Information Center
Gunter-Hunt, Gail; Mahoney, Jane E.; Sieger, Carol E.
2002-01-01
Purpose: Advance directive (AD) documents are based on state-specific statutes and vary in terms of content. These differences can create confusion and inconsistencies resulting in a possible failure to honor the health care wishes of people who execute health care documents for one state and receive health care in another state. The purpose of…
Immune Checkpoint Inhibition in Hepatocellular Carcinoma: Basics and Ongoing Clinical Trials.
Kudo, Masatoshi
2017-01-01
Clinical trials of antibodies targeting the immune checkpoint inhibitors programmed cell death 1 (PD-1), programmed cell death ligand 1 (PD-L1), or cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) for the treatment of advanced hepatocellular carcinoma (HCC) are ongoing. Expansion cohorts of a phase I/II trial of the anti-PD-1 antibody nivolumab in advanced HCC showed favorable results. Two phase III studies are currently ongoing: a comparison of nivolumab and sorafenib in the first-line setting for advanced HCC, and a comparison of the anti-PD-1 antibody pembrolizumab and a placebo in the second-line setting for patients with advanced HCC who progressed on sorafenib therapy. The combination of anti-PD-1/PD-L1 and anti-CTLA-4 antibodies is being evaluated in other phase I/II trials, and the results suggest that an anti-PD-1 antibody combined with locoregional therapy or other molecular targeted agents is an effective treatment strategy for HCC. Immune checkpoint inhibitors may therefore open new doors to the treatment of HCC. © 2017 S. Karger AG, Basel.
Evaluation of double photon coincidence Compton imaging method with GEANT4 simulation
NASA Astrophysics Data System (ADS)
Yoshihara, Yuri; Shimazoe, Kenji; Mizumachi, Yuki; Takahashi, Hiroyuki
2017-11-01
Compton imaging has been used for various applications including astronomical observations, radioactive waste management, and biomedical imaging. The positions of radioisotopes are determined from the intersections of multiple cone traces accumulated over a large number of events, which reduces the signal-to-noise ratio (SNR) of the images. We have developed an advanced Compton imaging method, the double photon coincidence Compton imaging method, which localizes radioisotopes with high SNR by using information from the Compton scattering interactions caused by two gamma rays emitted at the same time. The targeted radioisotopes of this imaging method are nuclides that emit several gamma rays simultaneously, such as 60Co, 134Cs, and 111In. Since their locations are determined from the intersections of two Compton cones, most of the spurious cone traces disappear in three-dimensional space, which enhances the SNR and angular resolution. In this paper, the double photon coincidence Compton imaging method is compared with the single photon Compton imaging method using GEANT4 Monte Carlo simulation.
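The coincidence idea above can be sketched numerically: back-project each Compton cone onto a voxel grid and keep only the voxels lying on both cones. Everything below (the grid, apex positions, cone axes, and the 5° tolerance) is invented for illustration; in a real system the cone opening angle comes from the measured energies via the Compton formula, as in `cone_angle`.

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy, keV

def cone_angle(e_deposited, e_total):
    """Compton scattering angle (rad) from deposited and total gamma energy (keV)."""
    cos_t = 1.0 - ME_C2 * (1.0 / (e_total - e_deposited) - 1.0 / e_total)
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

def backproject(grid, apex, axis, theta, tol=np.deg2rad(5.0)):
    """1.0 for voxels whose direction from the cone apex lies within tol of theta."""
    v = grid - apex                                   # apex -> voxel vectors
    v = v / np.linalg.norm(v, axis=-1, keepdims=True)
    a = axis / np.linalg.norm(axis)
    ang = np.arccos(np.clip(v @ a, -1.0, 1.0))
    return (np.abs(ang - theta) < tol).astype(float)

# Coarse voxel grid of candidate source positions (arbitrary units).
xs = np.linspace(-10.0, 10.0, 21)
grid = np.array([[x, y, z] for x in xs for y in xs for z in xs])

# Hypothetical source and two coincident-gamma scatter events; for
# illustration each cone's opening angle is set to the true source angle.
src = np.array([0.0, 0.0, 0.0])
apex1, axis1 = np.array([20.0, 0.0, 0.0]), np.array([-1.0, 0.2, 0.0])
apex2, axis2 = np.array([0.0, 20.0, 0.0]), np.array([0.1, -1.0, 0.0])

def angle_to(apex, axis, point):
    d = point - apex
    return np.arccos(d @ axis / (np.linalg.norm(d) * np.linalg.norm(axis)))

single = backproject(grid, apex1, axis1, angle_to(apex1, axis1, src))
image = single * backproject(grid, apex2, axis2, angle_to(apex2, axis2, src))
```

Multiplying the two back-projections retains only voxels consistent with both cones, which is why the coincidence image is sparser than a single-cone image and the SNR improves.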
The taxonomy statistic uncovers novel clinical patterns in a population of ischemic stroke patients.
Tukiendorf, Andrzej; Kaźmierski, Radosław; Michalak, Sławomir
2013-01-01
In this paper, we describe a simple taxonomic approach for clinical data mining elaborated by Marczewski and Steinhaus (M-S), whose performance equals that of the advanced statistical methodology known as the expectation-maximization (E-M) algorithm. We tested the two methods on a cohort of ischemic stroke patients, and their comparison revealed strong agreement: direct agreement between the M-S and E-M classifications reached 83%, while Cohen's coefficient of agreement was κ = 0.766 (P < 0.0001). The statistical analysis conducted and the outcomes obtained in this paper reveal novel clinical patterns in ischemic stroke patients. The aim of the study was to evaluate the clinical usefulness of the Marczewski-Steinhaus taxonomic approach as a tool for detecting novel patterns in data on ischemic stroke patients and predicting disease outcome. Using rough characteristics of patients, namely age, National Institutes of Health Stroke Scale (NIHSS) score, and diabetes mellitus (DM) status, four fairly frequent types of stroke patients are recognized that cannot be identified by means of routine clinical methods. From the taxonomic outcomes, a strong correlation is established between health status at the moment of admission to the emergency department (ED) and the subsequent recovery of patients. Moreover, popularization and simplification of the ideas of advanced mathematicians may provide an unconventional explorative platform for clinical problems.
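The agreement statistic quoted above can be reproduced in a few lines. A minimal sketch of Cohen's kappa for a square confusion matrix of paired classifications; the 2×2 cross-classification below is invented and is not the study's data:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of paired labels."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    p_o = np.trace(c) / n                          # observed agreement
    p_e = (c.sum(axis=0) @ c.sum(axis=1)) / n**2   # agreement expected by chance
    return (p_o - p_e) / (1.0 - p_e)

# Invented 2x2 cross-classification of patients by two clustering methods.
kappa = cohens_kappa([[20, 5], [10, 15]])
```

Kappa corrects the raw percent agreement (83% in the study) for the agreement expected if the two classifications were independent, which is why it is the more conservative figure.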
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
NASA Technical Reports Server (NTRS)
Trolinger, J. D. (Editor); Moore, W. W.
1977-01-01
These papers deal with recent research, developments, and applications in laser and electrooptics technology, particularly with regard to atmospheric effects in imaging and propagation, laser instrumentation and measurements, and particle measurement. Specific topics include advanced imaging techniques, image resolution through atmospheric turbulence over the ocean, an efficient method for calculating transmittance profiles, a comparison of a corner-cube reflector and a plane mirror in folded-path and direct transmission through atmospheric turbulence, line-spread instrumentation for propagation measurements, scaling laws for thermal fluctuations in the layer adjacent to ocean waves, particle sizing by laser photography, and an optical Fourier transform analysis of satellite cloud imagery. Other papers discuss a subnanosecond photomultiplier tube for laser application, holography of solid propellant combustion, diagnostics of turbulence by holography, a camera for in situ photography of cloud particles from a hail research aircraft, and field testing of a long-path laser transmissometer designed for atmospheric visibility measurements.
Recent advances in studies on milk oligosaccharides of cows and other domestic farm animals.
Urashima, Tadasu; Taufik, Epi; Fukuda, Kenji; Asakuma, Sadaki
2013-01-01
Human mature milk and colostrum contain 12-13 g/L and 22-24 g/L of milk oligosaccharides, respectively, and the structures of at least 115 human milk oligosaccharides (HMOs) have been characterized to date. By way of comparison, bovine colostrum collected immediately post partum contains only around 1 g/L of oligosaccharides, and this concentration rapidly decreases after 48 h. It was recently recognized that HMOs have several biological functions, and this study area has become very active, as illustrated by a recent symposium, but it appears that advances in studies on the milk oligosaccharides of domestic farm animals, including cows, have been rather slow compared with those on HMOs. Nevertheless, studies on bovine milk oligosaccharides (BMOs) have progressed recently, especially in regard to structural characterization, with the development of methods termed glycomics. This review is concerned with recent progress in studies on the milk oligosaccharides of domestic farm animals, especially of BMOs and bovine glycoproteins, and it discusses the possibility of industrial utilization in the near future.
NASA Technical Reports Server (NTRS)
Aljabri, Abdullah S.
1988-01-01
High speed subsonic transports powered by advanced propellers provide significant fuel savings compared to turbofan powered transports. Unfortunately, however, propfans must operate in aircraft-induced nonuniform flow fields, which can lead to high blade cyclic stresses, vibration and noise. To optimize the design and installation of these advanced propellers, therefore, detailed knowledge of the complex flow field is required. As part of the NASA Propfan Test Assessment (PTA) program, a 1/9 scale semispan model of the Gulfstream II propfan test-bed aircraft was tested in the NASA-Lewis 8 x 6 supersonic wind tunnel to obtain propeller flow field data. Detailed radial and azimuthal surveys were made to obtain the total pressure in the flow and the three components of velocity. Data were acquired for Mach numbers ranging from 0.6 to 0.85. Analytical predictions were also made using a subsonic panel method, QUADPAN. Comparison of wind-tunnel measurements and analytical predictions shows good agreement throughout the Mach range.
Capodaglio, Andrea G; Bojanowska-Czajka, Anna; Trojanowicz, Marek
2018-04-18
Carbamazepine and diclofenac are two examples of drugs with widespread geographical and environmental media proliferation that are poorly removed by traditional wastewater treatment processes. Advanced oxidation processes (AOPs) have been proposed as alternative methods to remove these compounds in solution. AOPs are based on a wide class of powerful technologies, including UV radiation, ozone, hydrogen peroxide, the Fenton process, catalytic wet peroxide oxidation, heterogeneous photocatalysis, electrochemical oxidation and their combinations, sonolysis, and microwaves, applicable to both water and wastewater. These processes rely on the production of oxidizing radicals (•OH and others) in solution to decompose the pollutants present. Water radiolysis-based processes, which are an alternative to the former, involve the use of concentrated energy (beams of accelerated electrons or γ-rays) to split water molecules, generating strong oxidants and reductants (radicals) at the same time. In this paper, the degradation of carbamazepine and diclofenac by means of all these processes is discussed and compared. Energy and byproduct generation issues are also addressed.
Harnessing Whole Genome Sequencing in Medical Mycology.
Cuomo, Christina A
2017-01-01
Comparative genome sequencing studies of human fungal pathogens enable identification of genes and variants associated with virulence and drug resistance. This review describes current approaches, resources, and advances in applying whole genome sequencing to study clinically important fungal pathogens. Genomes for some important fungal pathogens were only recently assembled, revealing gene family expansions in many species and extreme gene loss in one obligate species. The scale and scope of species sequenced is rapidly expanding, leveraging technological advances to assemble and annotate genomes with higher precision. By using iteratively improved reference assemblies or those generated de novo for new species, recent studies have compared the sequence of isolates representing populations or clinical cohorts. Whole genome approaches provide the resolution necessary for comparison of closely related isolates, for example, in the analysis of outbreaks or sampled across time within a single host. Genomic analysis of fungal pathogens has enabled both basic research and diagnostic studies. The increased scale of sequencing can be applied across populations, and new metagenomic methods allow direct analysis of complex samples.
Two-photon reduction: a cost-effective method for fabrication of functional metallic nanostructures
NASA Astrophysics Data System (ADS)
Tabrizi, Sahar; Cao, YaoYu; Lin, Han; Jia, BaoHua
2017-03-01
Metallic nanostructures have underpinned plasmonic-based advanced photonic devices in a broad range of research fields over the last decade including physics, engineering, material science and bioscience. The key to realizing functional plasmonic resonances that can manipulate light at the optical frequencies relies on the creation of conductive metallic structures at the nanoscale with low structural defects. Currently, most plasmonic nanostructures are fabricated either by electron beam lithography (EBL) or by focused ion beam (FIB) milling, which are expensive, complicated and time-consuming. In comparison, the direct laser writing (DLW) technique has demonstrated its high spatial resolution and cost-effectiveness in three-dimensional fabrication of micro/nanostructures. Furthermore, the recent breakthroughs in superresolution nanofabrication and parallel writing have significantly advanced the fabrication resolution and throughput of the DLW method and made it one of the promising future nanofabrication technologies with low-cost and scalability. In this review, we provide a comprehensive summary of the state-of-the-art DLW fabrication technology for nanometer scale metallic structures. The fabrication mechanisms, different material choices, fabrication capability, including resolution, conductivity and structure surface smoothness, as well as the characterization methods and achievable devices for different applications are presented. In particular, the development trends of the field and the perspectives for future opportunities and challenges are provided at the end of the review. It has been demonstrated that the quality of the metallic structures fabricated using the DLW method is excellent compared with other methods providing a new and enabling platform for functional nanophotonic device fabrication.
Measurement of fracture toughness by nanoindentation methods: Recent advances and future challenges
Sebastiani, Marco; Johanns, K. E.; Herbert, Erik G.; ...
2015-04-30
In this study, we describe recent advances and developments for the measurement of fracture toughness at small scales by the use of nanoindentation-based methods including techniques based on micro-cantilever beam bending and micro-pillar splitting. A critical comparison of the techniques is made by testing a selected group of bulk and thin film materials. For pillar splitting, cohesive zone finite element simulations are used to validate a simple relationship between the critical load at failure, the pillar radius, and the fracture toughness for a range of material properties and coating/substrate combinations. The minimum pillar diameter required for nucleation and growth of a crack during indentation is also estimated. An analysis of pillar splitting for a film on a dissimilar substrate material shows that the critical load for splitting is relatively insensitive to the substrate compliance for a large range of material properties. Experimental results from a selected group of materials show good agreement between single cantilever and pillar splitting methods, while a discrepancy of ~25% is found between the pillar splitting technique and double-cantilever testing. It is concluded that both the micro-cantilever and pillar splitting techniques are valuable methods for micro-scale assessment of fracture toughness of brittle ceramics, provided the underlying assumptions can be validated. Although the pillar splitting method has some advantages because of the simplicity of sample preparation and testing, it is not applicable to most metals because their higher toughness prevents splitting, and in this case, micro-cantilever bend testing is preferred.
A comparison of theoretical and experimental pressure distributions for two advanced fighter wings
NASA Technical Reports Server (NTRS)
Haney, H. P.; Hicks, R. M.
1981-01-01
A comparison was made between experimental pressure distributions measured during testing of the Vought A-7 fighter and the theoretical predictions of four transonic potential flow codes. One isolated-wing code and three wing-body codes were used for the comparison. All comparisons are for transonic Mach numbers and include both attached and separated flows. In general, the wing-body codes gave better agreement with the experiment than did the isolated-wing code but, because of the greater complexity of the geometry, were found to be considerably more expensive and less reliable.
International Comparisons of Teachers' Salaries: An Exploratory Study. Survey Report.
ERIC Educational Resources Information Center
Barro, Steven M.; Suter, Larry
This paper, the final product of the study "International Comparison of Teachers' Salaries," reports on an exploratory effort to compare the salaries of elementary and secondary school teachers in the United States with those in other economically advanced countries. Data were obtained from Canada, Denmark, the Federal Republic of Germany, France, Italy, Japan,…
Energy conversion alternatives study
NASA Technical Reports Server (NTRS)
Shure, L. T.
1979-01-01
A comparison of coal-based energy systems is given. The study identifies and compares various advanced energy conversion systems using coal or coal-derived fuels for baseload electric power generation. The Energy Conversion Alternatives Study (ECAS) reports provide government, industry, and the general public with a technically consistent basis for comparing the system options of interest for fossil-fired electric-utility application.
A Comparison of Graduate and Professional Students: Their Daily Stressors.
ERIC Educational Resources Information Center
Smith, M. Shelton; And Others
The stressful effects of advanced academic training were examined in a comparison of six graduate and professional programs at Vanderbilt University. The focus was on the nonacademic, daily stressors and negative mood states of 152 students in medicine, business, divinity, graduate department of religion, and two graduate psychology departments.…
Brecker, Stephen; Mealing, Stuart; Padhiar, Amie; Eaton, James; Sculpher, Mark; Busca, Rachele; Bosmans, Johan; Gerckens, Ulrich J; Wenaweser, Peter; Tamburino, Corrado; Bleiziffer, Sabine; Piazza, Nicolo; Moat, Neil; Linke, Axel
2014-01-01
Objective To use patient-level data from the ADVANCE study to evaluate the cost-effectiveness of transcatheter aortic valve implantation (TAVI) compared to medical management (MM) in patients with severe aortic stenosis from the perspective of the UK NHS. Methods A published decision-analytic model was adapted to include information on TAVI from the ADVANCE study. Patient-level data informed the choice as well as the form of mathematical functions that were used to model all-cause mortality, health-related quality of life and hospitalisations. TAVI-related resource use protocols were based on the ADVANCE study. MM was modelled on publicly available information from the PARTNER-B study. The outcome measures were incremental cost-effectiveness ratios (ICERs) estimated at a range of time horizons with benefits expressed as quality-adjusted life-years (QALY). Extensive sensitivity/subgroup analyses were undertaken to explore the impact of uncertainty in key clinical areas. Results Using a 5-year time horizon, the ICER for the comparison of all ADVANCE to all PARTNER-B patients was £13 943 per QALY gained. For the subset of ADVANCE patients classified as high risk (Logistic EuroSCORE >20%), the ICER was £17 718 per QALY gained. The ICER was below £30 000 per QALY gained in all sensitivity analyses relating to choice of MM data source and alternative modelling approaches for key parameters. When the time horizon was extended to 10 years, all ICERs generated in all analyses were below £20 000 per QALY gained. Conclusion TAVI is highly likely to be a cost-effective treatment for patients with severe aortic stenosis. PMID:25349700
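The headline figure in such analyses is a simple ratio. A minimal sketch of the ICER calculation follows; the cost and QALY inputs are invented illustrative numbers, not the ADVANCE or PARTNER-B results:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical 5-year comparison of a new intervention vs. standard care
# (costs in GBP, benefits in QALYs); values are purely illustrative.
example = icer(cost_new=45_000, qaly_new=2.8, cost_old=24_100, qaly_old=1.3)
```

An intervention is conventionally judged cost-effective when this ratio falls below a willingness-to-pay threshold, such as the £20 000-£30 000 per QALY range referenced in the abstract.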
Zhu, Ning; Gong, Yi; He, Jian; Xia, Jingwen
2013-01-01
Purpose Methylenetetrahydrofolate reductase (MTHFR) has been implicated in lung cancer risk and response to platinum-based chemotherapy in advanced non-small cell lung cancer (NSCLC). However, the results are controversial. We performed meta-analysis to investigate the effect of MTHFR C677T polymorphism on lung cancer risk and response to platinum-based chemotherapy in advanced NSCLC. Materials and Methods The databases of PubMed, Ovid, Wanfang and Chinese Biomedicine were searched for eligible studies. Nineteen studies on MTHFR C677T polymorphism and lung cancer risk and three articles on C677T polymorphism and response to platinum-based chemotherapy in advanced NSCLC, were identified. Results The results indicated that the allelic contrast, homozygous contrast and recessive model of the MTHFR C677T polymorphism were associated significantly with increased lung cancer risk. In the subgroup analysis, the C677T polymorphism was significantly correlated with an increased risk of NSCLC, with the exception of the recessive model. The dominant model and the variant T allele showed a significant association with lung cancer susceptibility of ever smokers. Male TT homozygote carriers had a higher susceptibility, but the allelic contrast and homozygote model had a protective effect in females. No relationship was observed for SCLC in any comparison model. In addition, MTHFR 677TT homozygote carriers had a better response to platinum-based chemotherapy in advanced NSCLC in the recessive model. Conclusion The MTHFR C677T polymorphism might be a genetic marker for lung cancer risk or response to platinum-based chemotherapy in advanced NSCLC. However, our results require further verification. PMID:24142642
Evaluating research for disruptive innovation in the space sector
NASA Astrophysics Data System (ADS)
Summerer, L.
2012-12-01
Many governmental space activities need to be planned with a time horizon that extends beyond the comfort zone of reliable technology development assessments and predictions. In an environment of accelerating technological change, a methodological approach to addressing non-core technology trends and potentially disruptive, game-changing developments not yet linked to the space sector is increasingly important to complement efforts in core technology R&D planning. Various models and organisational setups aimed at fulfilling this purpose are in existence. These include, with varying levels of relevance to space, the National Aeronautics and Space Administration (NASA) Institute for Advanced Concepts (NIAC, operational from 1998 to 2007 and recently re-established), the Defense Advanced Research Projects Agency of the US Department of Defense, the Massachusetts Institute of Technology (MIT) Media Lab, the early versions of Starlab, the Lockheed Skunk Works and the European Space Agency's Advanced Concepts Team. Some of these organisations have been reviewed and assessed individually, though systematic comparisons of their methods, approaches and results have not been published. This may be due in part to the relatively sparse scientific literature on organisational parameters for enabling disruptive innovation as well as to the lack of commonly agreed indicators for the evaluation of their performance. Furthermore, innovation support systems in the space sector are organised differently than in traditional, open competitive markets, which serve as the basis for most scholarly literature on the organisation of innovation. The present paper is intended to advance and stimulate discussion on the organisation of disruptive innovation mechanisms specifically for the space sector.
It uses the examples of the NASA Institute for Advanced Concepts and the ESA Advanced Concepts Team, analyses their respective approaches and compares their results, leading to the proposal of measures for the analysis and eventual evaluation of research for disruptive innovation in the space sector.
Kunimatsu-Sanuki, Shiho; Iwase, Aiko; Araie, Makoto; Aoki, Yuki; Hara, Takeshi; Fukuchi, Takeo; Udagawa, Sachiko; Ohkubo, Shinji; Sugiyama, Kazuhisa; Matsumoto, Chota; Nakazawa, Toru; Yamaguchi, Takuhiro; Ono, Hiroshi
2017-01-01
Background/aims To assess the role of specific visual subfields in collisions with oncoming cars during simulated driving in patients with advanced glaucoma. Methods Normal subjects and patients with glaucoma with mean deviation <–12 dB in both eyes (Humphrey Field Analyzer 24-2 SITA-S program) used a driving simulator (DS; Honda Motor, Tokyo). Two scenarios in which oncoming cars turned right, crossing the driver's path, were chosen. We compared the binocular integrated visual field (IVF) in the patients who were involved in collisions and those who were not. We performed a multivariate logistic regression analysis; the dependent parameter was collision involvement, and the independent parameters were age, visual acuity and mean sensitivity of the IVF subfields. Results The study included 43 normal subjects and 100 patients with advanced glaucoma. Five of the 100 patients with advanced glaucoma experienced simulator sickness during the main test and were thus excluded; in total, 95 patients with advanced glaucoma and 43 normal subjects completed the main DS test. The patients with advanced glaucoma had significantly more collisions than the normal subjects in one or both DS scenarios (p<0.001). The patients with advanced glaucoma who were involved in collisions were older (p=0.050), had worse visual acuity in the better eye (p<0.001) and had lower mean IVF sensitivity in the inferior hemifield, at both 0°–12° and 13°–24°, in comparison with those who were not involved in collisions (p=0.012 and p=0.034, respectively). A logistic regression analysis revealed that collision involvement was significantly associated with decreased inferior IVF mean sensitivity from 13° to 24° (p=0.041), in addition to older age and lower visual acuity (p=0.018 and p<0.001). Conclusions Our data suggest that the inferior hemifield was associated with the incidence of motor vehicle collisions with oncoming cars in patients with advanced glaucoma. PMID:28400370
Prieve, Kurt; Rice, Amanda; Raynor, Peter C
2017-08-01
The aims of this study were to evaluate sound levels produced by compressed air guns in research and development (R&D) environments, replace conventional air gun models with advanced noise-reducing air nozzles, and measure changes in sound levels to assess the effectiveness of the advanced nozzles as engineering controls for noise. Ten different R&D manufacturing areas that used compressed air guns were identified and included in the study. A-weighted sound level and Z-weighted octave band measurements were taken simultaneously using a single instrument. In each area, three sets of measurements, each lasting for 20 sec, were taken 1 m away and perpendicular to the air stream of the conventional air gun while a worker simulated typical air gun work use. Two different advanced noise-reducing air nozzles were then installed. Sound level and octave band data were collected for each of these nozzles using the same methods as for the original air guns. Both of the advanced nozzles provided sound level reductions of about 7 dBA, on average. The highest noise reductions measured were 17.2 dBA for one model and 17.7 dBA for the other. In two areas, the advanced nozzles yielded no sound level reduction, or they produced small increases in sound level. The octave band data showed strong similarities in sound level among all air gun nozzles within the 10-1,000 Hz frequency range. However, the advanced air nozzles generally had lower noise contributions in the 1,000-20,000 Hz range. The observed decreases at these higher frequencies caused the overall sound level reductions that were measured. Installing new advanced noise-reducing air nozzles can provide large sound level reductions in comparison to existing conventional nozzles, which has direct benefit for hearing conservation efforts.
Butail, Sachit; Salerno, Philip; Bollt, Erik M; Porfiri, Maurizio
2015-12-01
Traditional approaches for the analysis of collective behavior entail digitizing the position of each individual, followed by evaluation of pertinent group observables, such as cohesion and polarization. Machine learning may enable considerable advancements in this area by affording the classification of these observables directly from images. While such methods have been successfully implemented in the classification of individual behavior, their potential in the study of collective behavior is largely untested. In this paper, we compare three methods for the analysis of collective behavior: simple tracking (ST) without resolving occlusions, machine learning with real data (MLR), and machine learning with synthetic data (MLS). These methods are evaluated on videos recorded from an experiment studying the effect of ambient light on the shoaling tendency of Giant danios. In particular, we compute the average nearest-neighbor distance (ANND) and polarization using the three methods and compare the values with manually verified ground-truth data. To further assess possible dependence on the sampling rate used to compute ANND, the comparison is also performed at a low frame rate. Results show that while ST is the most accurate at the higher frame rate for both ANND and polarization, at the low frame rate there is no significant difference in ANND accuracy between the three methods. In terms of computational speed, MLR and MLS take significantly less time to process an image, with MLS better addressing constraints related to the generation of training data. Finally, all methods are able to successfully detect a significant difference in ANND as the ambient light intensity is varied, irrespective of the direction of intensity change.
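The two group observables compared in that study have standard definitions and can be sketched directly from digitized positions and headings. The four-individual configuration used here is invented for illustration:

```python
import numpy as np

def annd(positions):
    """Average nearest-neighbor distance over an (N, 2) array of positions."""
    p = np.asarray(positions, dtype=float)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude each individual's self-distance
    return d.min(axis=1).mean()

def polarization(headings):
    """Length of the mean unit heading vector: 1 = fully aligned, 0 = disordered."""
    h = np.asarray(headings, dtype=float)
    return np.hypot(np.cos(h).mean(), np.sin(h).mean())

# Invented example: four individuals on a unit square, all heading the same way.
group_annd = annd([[0, 0], [1, 0], [0, 1], [1, 1]])
group_pol = polarization([0.2, 0.2, 0.2, 0.2])
```

These are the quantities a tracking pipeline evaluates per frame; a machine-learning classifier, by contrast, estimates coarse bins of ANND and polarization directly from the image without per-individual digitization.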
Richardson-Harman, Nicola; Lackman-Smith, Carol; Fletcher, Patricia S.; Anton, Peter A.; Bremer, James W.; Dezzutti, Charlene S.; Elliott, Julie; Grivel, Jean-Charles; Guenthner, Patricia; Gupta, Phalguni; Jones, Maureen; Lurain, Nell S.; Margolis, Leonid B.; Mohan, Swarna; Ratner, Deena; Reichelderfer, Patricia; Roberts, Paula; Shattock, Robin J.; Cummins, James E.
2009-01-01
Microbicide candidates with promising in vitro activity are often advanced for evaluations using human primary tissue explants relevant to the in vivo mucosal transmission of human immunodeficiency virus type 1 (HIV-1), such as tonsil, cervical, or rectal tissue. To compare virus growth or the anti-HIV-1 efficacies of candidate microbicides in tissue explants, a novel soft-endpoint method was evaluated to provide a single, objective measurement of virus growth. The applicability of the soft endpoint is shown across several different ex vivo tissue types, with the method performed in different laboratories, and for a candidate microbicide (PRO 2000). The soft-endpoint method was compared to several other endpoint methods, including (i) the growth of virus on specific days after infection, (ii) the area under the virus growth curve, and (iii) the slope of the virus growth curve. Virus growth at the assay soft endpoint was compared between laboratories, methods, and experimental conditions, using nonparametric statistical analyses. Intra-assay variability determinations using the coefficient of variation demonstrated higher variability for virus growth in rectal explants. Significant virus inhibition by PRO 2000 and significant differences in the growth of certain primary HIV-1 isolates were observed by the majority of laboratories. These studies indicate that different laboratories can provide consistent measurements of anti-HIV-1 microbicide efficacy when (i) the soft endpoint or another standardized endpoint is used, (ii) drugs and/or virus reagents are centrally sourced, and (iii) the same explant tissue type and method are used. Application of the soft-endpoint method reduces the inherent variability in comparisons of preclinical assays used for microbicide development. PMID:19726602
NASA Technical Reports Server (NTRS)
Mccurdy, David A.
1989-01-01
A laboratory experiment was conducted to compare the annoyance of flyover noise from advanced turboprop aircraft having different propeller configurations with the annoyance of conventional turboprop and jet aircraft flyover noise. It was found that advanced turboprops with single-rotating propellers were, on average, slightly less annoying than the other aircraft. Fundamental frequency and tone-to-broadband noise ratio affected annoyance response to advanced turboprops but the effects varied with propeller configuration and noise metric. The addition of duration corrections and corrections for tones above 500 Hz to the noise measurement procedures improved prediction ability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, Aron; Sengupta, Manajit; Andreas, Afshin
Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR).
These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.
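The way a responsivity difference of the size reported above propagates into measured irradiance can be shown with a one-line relation: a thermopile radiometer reports a voltage V, and irradiance is recovered as E = V / R, where R is the calibration responsivity. The numbers below are hypothetical, chosen only to mimic a +2% responsivity offset.

```python
# Illustrative only (not data from the study): how a calibration
# responsivity difference propagates into measured irradiance.
# E = V / R, with R in uV per W/m^2 for a thermopile pyranometer.

def irradiance(v_uv, responsivity_uv_per_wm2):
    return v_uv / responsivity_uv_per_wm2

v = 8000.0        # hypothetical thermopile signal, uV
r_outdoor = 8.00  # hypothetical outdoor (BORCAL-style) responsivity
r_indoor = 8.16   # hypothetical indoor value, 2% higher

e_outdoor = irradiance(v, r_outdoor)  # 1000.0 W/m^2
e_indoor = irradiance(v, r_indoor)
print(round(100 * (e_outdoor - e_indoor) / e_outdoor, 2))  # 1.96
```

A responsivity bias therefore maps almost one-to-one (with opposite sign) into an irradiance bias, which is why the 1% to 2% calibration spread matters for resource assessment.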
Advanced statistical methods for improved data analysis of NASA astrophysics missions
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.
1992-01-01
The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.
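The abstract mentions a comparison of six different linear least-squares fits. A minimal sketch of why the choice of fit matters, using just two of them (ordinary regression of Y on X and the inverse regression of X on Y, re-expressed in the same plane) on toy data:

```python
# Hedged sketch (not the grant's code): two linear least-squares fits of
# the same scattered data give different slopes, which is why astronomers
# must choose the fit to match the scientific question.

def slopes(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    ols_yx = sxy / sxx   # slope of Y regressed on X
    ols_xy = syy / sxy   # X-on-Y slope, re-expressed in the Y-vs-X plane
    return ols_yx, ols_xy

b1, b2 = slopes([1, 2, 3, 4], [2, 3, 5, 6])
print(round(b1, 3), round(b2, 3))  # 1.4 1.429 -- the two fits disagree
```

The gap between the two slopes grows as the scatter grows, and bisector or orthogonal fits (also compared in the cited work) fall between these two extremes.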
NASA Technical Reports Server (NTRS)
Zoby, E. V.
1981-01-01
An engineering method has been developed for computing the windward-symmetry plane convective heat-transfer rates on Shuttle-like vehicles at large angles of attack. The engineering code includes an approximate inviscid flowfield technique, laminar and turbulent heating-rate expressions, an approximation to account for the variable-entropy effects on the surface heating and the concept of an equivalent axisymmetric body to model the windward-ray flowfields of Shuttle-like vehicles at angles of attack from 25 to 45 degrees. The engineering method is validated by comparing computed heating results with corresponding experimental data measured on Shuttle and advanced transportation models over a wide range of flow conditions and angles of attack from 25 to 40 degrees and also with results of existing prediction techniques. The comparisons are in good agreement.
Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels.
Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R
2018-01-01
Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for squamous epithelium cervical intraepithelial neoplasia (CIN) classification into normal, CIN1, CIN2, and CIN3 grades. In this study, a deep learning (DL)-based nuclei segmentation approach is investigated based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods.
Legood, Rosa; Pitt, Catherine
2016-01-01
Abstract There are marked differences in methods used for undertaking economic evaluations across low‐income, middle‐income, and high‐income countries. We outline the most apparent dissimilarities and reflect on their underlying reasons. We randomly sampled 50 studies from each of three country income groups from a comprehensive database of 2844 economic evaluations published between January 2012 and May 2014. Data were extracted on ten methodological areas: (i) availability of guidelines; (ii) research questions; (iii) perspective; (iv) cost data collection methods; (v) cost data analysis; (vi) outcome measures; (vii) modelling techniques; (viii) cost‐effectiveness thresholds; (ix) uncertainty analysis; and (x) applicability. Comparisons were made across income groups and odds ratios calculated. Contextual heterogeneity rightly drives some of the differences identified. Other differences appear less warranted and may be attributed to variation in government health sector capacity, in health economics research capacity and in expectations of funders, journals and peer reviewers. By highlighting these differences, we seek to start a debate about the underlying reasons why they have occurred and to what extent the differences are conducive for methodological advancements. We suggest a number of specific areas in which researchers working in countries of differing environments could learn from one another. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd. PMID:26775571
Jameson, K; Averley, P A; Shackley, P; Steele, J
2007-09-22
To compare the cost-effectiveness of dental sedation techniques used in the treatment of children, focusing on hospital-based dental general anaesthetic (DGA) and advanced conscious sedation in a controlled primary care environment. Data on fees, costs and treatment pathways were obtained from a primary care clinic specialising in advanced sedation techniques. For the hospital-based DGA cohort, data were gathered from hospital trusts in the same area. Comparison was via an average cost per child treated and subsequent sensitivity analysis. Analysing records spanning one year, the average cost per child treated via advanced conscious sedation was £245.47. As some treatments fail (3.5% of cases attempted), and the technique is not deemed suitable for all patients (4-5%), DGA is still required and has been factored into this cost. DGA has an average cost per case treated of £359.91, 46.6% more expensive than advanced conscious sedation. These cost savings were robust to plausible variation in all parameters. The costs of advanced conscious sedation techniques, applied in a controlled primary care environment, are substantially lower than the equivalent costs of hospital-based DGA, informing the debate about the optimum way of managing this patient group.
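The headline percentage follows directly from the two average costs quoted in the abstract:

```python
# Arithmetic behind the comparison above: average cost per child treated,
# figures taken from the abstract (in pounds sterling).
sedation = 245.47   # advanced conscious sedation, primary care
dga = 359.91        # hospital-based dental general anaesthetic
extra = 100 * (dga - sedation) / sedation
print(round(extra, 1))  # 46.6 -- DGA is ~46.6% more expensive
```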
NASA Astrophysics Data System (ADS)
Feygelman, V.; Nelms, B.
2013-06-01
As IMRT technology continues to evolve, so do the dosimetric QA methods. A historical review of those is presented, starting with longstanding techniques such as film and ion chamber in a phantom and progressing towards 3D and 4D dose reconstruction in the patient. Regarding patient-specific QA, we envision that the currently prevalent limited comparison of dose distributions in the phantom by γ-analysis will be eventually replaced by clinically meaningful patient dose analyses with improved sensitivity and specificity. In a larger sense, we envision a future of QA built upon lessons from the rich history of "quality" as a science and philosophy. This future will aim to improve quality (and ultimately reduce cost) via advanced commissioning processes that succeed in detecting and rooting out systematic errors upstream of patient treatment, thus reducing our reliance on, and the resource burden associated with, per-beam/per-plan inspection.
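The γ-analysis mentioned above can be sketched in one dimension, assuming the usual definition: for each reference point, γ is the minimum over evaluated points of sqrt((Δdose/ΔD)² + (Δdist/Δd)²), and a point passes when γ ≤ 1. The profiles and the 3%/3 mm criteria below are illustrative.

```python
# Minimal 1D sketch of gamma-index dose comparison (not a clinical tool).
# dd = dose-difference criterion (fraction of normalized dose),
# dta = distance-to-agreement criterion (mm), spacing = grid step (mm).

def gamma_1d(ref, eval_, dd=0.03, dta=3.0, spacing=1.0):
    gammas = []
    for i, r in enumerate(ref):
        best = min(
            ((abs(e - r) / dd) ** 2
             + ((j - i) * spacing / dta) ** 2) ** 0.5
            for j, e in enumerate(eval_))
        gammas.append(best)
    return gammas

ref = [0.50, 0.80, 1.00, 0.80, 0.50]  # normalized reference profile
ev = [0.50, 0.81, 1.01, 0.79, 0.50]   # slightly perturbed measurement
passing = sum(g <= 1.0 for g in gamma_1d(ref, ev)) / len(ref)
print(passing)  # 1.0 -- all points pass a 3%/3 mm criterion
```

A high γ pass rate like this can coexist with clinically relevant errors, which is the limitation the authors expect patient-dose-based QA to address.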
A critical review of principal traffic noise models: Strategies and implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garg, Naveen, E-mail: ngarg@mail.nplindia.ernet.in; Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042; Maji, Sagar
2014-04-01
The paper presents an exhaustive comparison of principal traffic noise models adopted in recent years in developed nations. The comparison is drawn on the basis of technical attributes including source modelling and sound propagation algorithms. Although the characterization of source in terms of rolling and propulsion noise in conjunction with advanced numerical methods for sound propagation has significantly reduced the uncertainty in traffic noise predictions, the approach followed is quite complex and requires specialized mathematical skills for predictions, which is sometimes quite cumbersome for town planners. Also, it is sometimes difficult to follow the best approach when a variety of solutions have been proposed. This paper critically reviews all these aspects pertaining to the recent models developed and adapted in some countries and also discusses the strategies followed and implications of these models. - Highlights: • Principal traffic noise models developed are reviewed. • Sound propagation algorithms used in traffic noise models are compared. • Implications of models are discussed.
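The source characterization mentioned above splits vehicle emission into rolling and propulsion noise; the two contributions combine energetically using the standard decibel summation L = 10·log10(10^(L1/10) + 10^(L2/10)). The levels below are hypothetical, not taken from any of the reviewed models.

```python
# Energetic (decibel) summation of rolling and propulsion noise levels,
# as used when a source model splits vehicle emission into components.
# The example levels are illustrative.
import math

def combine_db(*levels):
    return 10 * math.log10(sum(10 ** (lv / 10) for lv in levels))

rolling, propulsion = 75.0, 70.0  # hypothetical dB(A) at a reference distance
print(round(combine_db(rolling, propulsion), 1))  # 76.2
```

Note that the louder component dominates: a source 5 dB below the other adds only about 1.2 dB to the total, which is why rolling noise controls predictions at higher speeds.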
Numerical simulation of a full-loop circulating fluidized bed under different operating conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Yupeng; Musser, Jordan M.; Li, Tingwen
Both experimental and computational studies of the fluidization of high-density polyethylene (HDPE) particles in a small-scale full-loop circulating fluidized bed are conducted. Experimental measurements of pressure drop are taken at different locations along the bed. The solids circulation rate is measured with an advanced Particle Image Velocimetry (PIV) technique. The bed height of the quasi-static region in the standpipe is also measured. Comparative numerical simulations are performed with a Computational Fluid Dynamics solver utilizing a Discrete Element Method (CFD-DEM). This paper reports a detailed and direct comparison between CFD-DEM results and experimental data for realistic gas-solid fluidization in a full-loop circulating fluidized bed system. The comparison reveals good agreement with respect to system component pressure drop and inventory height in the standpipe. In addition, the effect of different drag laws applied within the CFD simulation is examined and compared with experimental results.
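The drag-law sensitivity noted above can be illustrated with two common correlations used in CFD-DEM codes: Stokes drag, Cd = 24/Re, and the Schiller-Naumann correction, Cd = (24/Re)·(1 + 0.15·Re^0.687). The abstract does not say which laws were compared; these two are standard examples.

```python
# Two standard single-particle drag coefficient correlations, shown to
# diverge as the particle Reynolds number grows. Illustrative only; the
# paper's specific drag laws are not named in the abstract.

def cd_stokes(re):
    return 24.0 / re

def cd_schiller_naumann(re):
    return 24.0 / re * (1.0 + 0.15 * re ** 0.687)

for re in (0.1, 10.0, 100.0):
    print(re, round(cd_stokes(re), 3), round(cd_schiller_naumann(re), 3))
```

At Re = 0.1 the two agree within a few percent, but at Re = 100 Stokes drag underestimates Cd severalfold, which is one reason predicted pressure drops and inventory heights depend on the chosen law.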
NASA Astrophysics Data System (ADS)
Rückwardt, M.; Göpfert, A.; Correns, M.; Schellhorn, M.; Linß, G.
2010-07-01
Coordinate measuring machines are high-precision all-rounders for three-dimensional measurement. Their range of parameters and options for additional hardware is correspondingly broad, so operating them demands considerable expert knowledge from the user and, in most cases, substantial advance information about the measuring object. In this paper a coordinate measuring machine and a specialized measuring machine are compared using the example of measuring eyeglass frames. For this three-dimensional measuring challenge the focus is divided between metrological and economic aspects. First, a fully automated method for tactile measurement of this free-form shape is presented. Second, the metrological characteristics of a coordinate measuring machine and a tracer for eyeglass frames are compared; the result favours the coordinate measuring machine, which is not surprising in these respects. Finally, the two machines are compared with respect to economic aspects.
ERIC Educational Resources Information Center
Kahle, Jane Butler
The use of an advanced organizer (a generalizable, encompassing concept) prior to an individualized instructional sequence in a self-paced, audiotutorial learning format was accompanied by gains in individual unit achievement and in retention by disadvantaged biology students. Although behavioral objectives generally were shown to make no…
ERIC Educational Resources Information Center
Koch, Bevan; Slate, John R.; Moore, George W.
2016-01-01
We compared the performance of Hispanic students from California, Texas, and Arizona on the two Advanced Placement (AP) English exams (i.e., English Language and Composition and English Literature and Composition) using archival data from the College Board from 1997 through 2012. Pearson chi-square tests yielded statistically significant…
An Overview and Empirical Comparison of Distance Metric Learning Methods.
Moutafis, Panagiotis; Leng, Mengjun; Kakadiaris, Ioannis A
2016-02-16
In this paper, we first offer an overview of advances in the field of distance metric learning. Then, we empirically compare selected methods using a common experimental protocol. The number of distance metric learning algorithms proposed keeps growing due to their effectiveness and wide application. However, existing surveys are either outdated or they focus only on a few methods. As a result, there is an increasing need to summarize the obtained knowledge in a concise, yet informative manner. Moreover, existing surveys do not conduct comprehensive experimental comparisons. On the other hand, individual distance metric learning papers compare the performance of the proposed approach with only a few related methods and under different settings. This highlights the need for an experimental evaluation using a common and challenging protocol. To this end, we conduct face verification experiments, as this task poses significant challenges due to varying conditions during data acquisition. In addition, face verification is a natural application for distance metric learning because the encountered challenge is to define a distance function that: 1) accurately expresses the notion of similarity for verification; 2) is robust to noisy data; 3) generalizes well to unseen subjects; and 4) scales well with the dimensionality and number of training samples. In particular, we utilize well-tested features to assess the performance of selected methods following the experimental protocol of the state-of-the-art Labeled Faces in the Wild database. A summary of the results is presented along with a discussion of the insights obtained and lessons learned by employing the corresponding algorithms.
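The core object most of the surveyed methods learn is a Mahalanobis-type distance d(x, y) = sqrt((x−y)ᵀ M (x−y)) with M = LᵀL positive semidefinite, so learning M is equivalent to learning a linear embedding L. The matrix below is a toy example, not a metric learned by any surveyed method.

```python
# Illustrative sketch of a Mahalanobis-type learned distance.
# With M = L^T L, d(x, y) is simply the Euclidean norm of L @ (x - y).
# The matrix L here is a hand-picked toy example, not a learned metric.

def metric_distance(x, y, L):
    d = [xi - yi for xi, yi in zip(x, y)]
    Ld = [sum(row[j] * d[j] for j in range(len(d))) for row in L]
    return sum(v * v for v in Ld) ** 0.5

L = [[2.0, 0.0],   # toy linear map: stretch the first feature,
     [0.0, 0.5]]   # shrink the second (down-weighting a noisy feature)

print(metric_distance([1.0, 0.0], [0.0, 0.0], L))  # 2.0: stretched axis
print(metric_distance([0.0, 1.0], [0.0, 0.0], L))  # 0.5: shrunk axis
```

In face verification, L is trained so that same-identity pairs fall below a distance threshold and different-identity pairs fall above it.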
Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging
Patel, Tapan P.; Man, Karen; Firestein, Bonnie L.; Meaney, David F.
2017-01-01
Background: Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s–1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. New method: Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. Results: We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. Comparison with existing method(s): We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. Conclusions: We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. PMID:25629800
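Template-based transient detection of the kind favored over peak-based methods can be sketched by sliding a transient template along the fluorescence trace and flagging windows whose normalized correlation exceeds a threshold. The template, trace, and threshold below are illustrative, not FluoroSNNAP's actual implementation.

```python
# Sketch of template-based calcium transient detection (not the package's
# code): flag window positions whose normalized correlation with a
# fast-rise / slow-decay template exceeds a threshold.

def norm_corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def detect(trace, template, threshold=0.95):
    w = len(template)
    return [i for i in range(len(trace) - w + 1)
            if norm_corr(trace[i:i + w], template) >= threshold]

template = [0.0, 1.0, 0.6, 0.3, 0.1]   # fast rise, slow decay (dF/F shape)
trace = [0.0, 0.0, 0.1, 1.1, 0.7, 0.4, 0.2, 0.1, 0.0]
print(detect(trace, template))  # [2] -- transient onset at index 2
```

Because the match is on shape rather than amplitude, this approach tolerates baseline drift and low-amplitude events better than a simple peak threshold.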
Karathanasis, Nestoras; Tsamardinos, Ioannis
2016-01-01
Background: The advance of omics technologies has made it possible to measure several data modalities on a system of interest. In this work, we illustrate how the Non-Parametric Combination methodology, namely NPC, can be used for simultaneously assessing the association of different molecular quantities with an outcome of interest. We argue that NPC methods have several potential applications in integrating heterogeneous omics technologies, as for example identifying genes whose methylation and transcriptional levels are jointly deregulated, or finding proteins whose abundance shows the same trends as the expression of their encoding genes. Results: We implemented the NPC methodology within “omicsNPC”, an R function specifically tailored to the characteristics of omics data. We compare omicsNPC against a range of alternative methods on simulated as well as on real data. Comparisons on simulated data point out that omicsNPC produces unbiased/calibrated p-values and performs equally or significantly better than the other methods included in the study; furthermore, the analysis of real data shows that omicsNPC (a) exhibits higher statistical power than other methods, (b) is easily applicable in a number of different scenarios, and (c) produces results with improved biological interpretability. Conclusions: The omicsNPC function behaves competitively in all comparisons conducted in this study. Taking into account that the method (i) requires minimal assumptions, (ii) can be used on different study designs and (iii) captures the dependences among heterogeneous data modalities, omicsNPC provides a flexible and statistically powerful solution for the integrative analysis of different omics data. PMID:27812137
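The NPC idea can be sketched in a few lines, assuming Fisher's combining function: per-modality p-values are merged into one statistic, and applying the same combination to permutation p-values yields a joint permutation p-value. The toy numbers below are illustrative, not the omicsNPC implementation.

```python
# Minimal sketch of Non-Parametric Combination (NPC) with Fisher's
# combining function. Observed and permutation p-values are toy data.
import math
import random

def fisher_stat(pvals):
    return -2.0 * sum(math.log(p) for p in pvals)

def npc_pvalue(observed_pvals, permutation_pvals):
    """permutation_pvals: list of p-value tuples, one per permutation."""
    t_obs = fisher_stat(observed_pvals)
    t_perm = [fisher_stat(pv) for pv in permutation_pvals]
    # fraction of permutations at least as extreme as the observed data
    return sum(t >= t_obs for t in t_perm) / len(t_perm)

random.seed(0)
# stand-in for permutation p-values from two modalities (e.g. methylation
# and expression); under the null these are roughly uniform
perms = [(random.random(), random.random()) for _ in range(999)]
print(npc_pvalue((0.01, 0.03), perms) < 0.05)  # True: jointly significant
```

Because the permutation distribution is built from the data, the combination automatically respects dependence between modalities measured on the same samples, which is the property the abstract highlights.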
Advanced propulsion for LEO-Moon transport. 3: Transportation model. M.S. Thesis - California Univ.
NASA Technical Reports Server (NTRS)
Henley, Mark W.
1992-01-01
A simplified computational model of low Earth orbit-Moon transportation system has been developed to provide insight into the benefits of new transportation technologies. A reference transportation infrastructure, based upon near-term technology developments, is used as a departure point for assessing other, more advanced alternatives. Comparison of the benefits of technology application, measured in terms of a mass payback ratio, suggests that several of the advanced technology alternatives could substantially improve the efficiency of low Earth orbit-Moon transportation.
Innovative experimental particle physics through technological advances: Past, present and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Harry W.K.; /Fermilab
This mini-course gives an introduction to the techniques used in experimental particle physics, with an emphasis on the impact of technological advances. The basic detector types and particle accelerator facilities are briefly covered, with examples of their use and with comparisons. The mini-course ends with what can be expected in the near future from current technology advances. It is intended for graduate students and postdocs, and serves as an introduction to experimental techniques for theorists.
Advanced general aviation comparative engine/airframe integration study
NASA Technical Reports Server (NTRS)
Huggins, G. L.; Ellis, D. R.
1981-01-01
The NASA Advanced General Aviation Comparative Engine/Airframe Integration Study was initiated to help determine which of four promising concepts for new general aviation engines for the 1990s should be considered for further research funding. The engine concepts included rotary, diesel, spark ignition, and turboprop powerplants; a conventional state-of-the-art piston engine was used as a baseline for the comparison. Computer simulations of the performance of single and twin engine pressurized aircraft designs were used to determine how the various characteristics of each engine interacted in the design process. Comparisons were made of how each engine performed relative to the others when integrated into an airframe and required to fly a transportation mission.
Aasvang, E K; Werner, M U; Kehlet, H
2014-09-01
Deep pain complaints are more frequent than cutaneous in post-surgical patients, and a prevalent finding in quantitative sensory testing studies. However, the preferred assessment method - pressure algometry - is indirect and tissue unspecific, hindering advances in treatment and preventive strategies. Thus, there is a need for development of methods with direct stimulation of suspected hyperalgesic tissues to identify the peripheral origin of nociceptive input. We compared the reliability of an ultrasound-guided needle stimulation protocol of electrical detection and pain thresholds to pressure algometry, by performing identical test-retest sequences 10 days apart, in deep tissues in the groin region. Electrical stimulation was performed by five up-and-down staircase series of single impulses of 0.04 ms duration, starting from 0 mA in increments of 0.2 mA until a threshold was reached and descending until sensation was lost. Method reliability was assessed by Bland-Altman plots, descriptive statistics, coefficients of variance and intraclass correlation coefficients. The electrical stimulation method was comparable to pressure algometry regarding 10 days test-retest repeatability, but with superior same-day reliability for electrical stimulation (P < 0.05). Between-subject variance rather than within-subject variance was the main source for test variation. There were no systematic differences in electrical thresholds across tissues and locations (P > 0.05). The presented tissue-specific direct deep tissue electrical stimulation technique has equal or superior reliability compared with the indirect tissue-unspecific stimulation by pressure algometry. This method may facilitate advances in mechanism based preventive and treatment strategies in acute and chronic post-surgical pain states. © 2014 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
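The up-and-down staircase described above can be sketched as follows: current starts at 0 mA and rises in 0.2 mA steps until the subject reports detection, then descends until sensation is lost, with the threshold estimated from the reversal points. The `responds` callback and the safety limit are hypothetical; the study's exact estimator is not specified in the abstract.

```python
# Sketch of a single up-and-down staircase series for an electrical
# detection threshold. The 'responds' callback stands in for the subject's
# report and is hypothetical.

def staircase(responds, step=0.2, start=0.0, limit=10.0):
    current, reversals = start, []
    while not responds(current):              # ascend to first detection
        current += step
        if current > limit:
            return None                       # no response within safe limit
    reversals.append(current)
    while responds(current) and current > 0:  # descend until sensation lost
        current -= step
    reversals.append(current + step)          # last current still felt
    return sum(reversals) / len(reversals)

# toy subject with a fixed 1.0 mA detection threshold
threshold = staircase(lambda mA: mA >= 1.0)
print(round(threshold, 1))  # 1.0
```

In the study, five such series were run per site and tissue; averaging across series is what the reliability statistics (coefficients of variance, intraclass correlation) were computed on.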
Manolis, E; Holford, N; Cheung, SYA; Friberg, LE; Ogungbenro, K; Posch, M; Yates, JWT; Berry, S; Thomas, N; Corriol‐Rohou, S; Bornkamp, B; Bretz, F; Hooker, AC; Van der Graaf, PH; Standing, JF; Hay, J; Cole, S; Gigante, V; Karlsson, K; Dumortier, T; Benda, N; Serone, F; Das, S; Brochot, A; Ehmann, F; Hemmings, R; Rusten, I Skottheim
2017-01-01
Inadequate dose selection for confirmatory trials is currently still one of the most challenging issues in drug development, as illustrated by high rates of late‐stage attritions in clinical development and postmarketing commitments required by regulatory institutions. In an effort to shift the current paradigm in dose and regimen selection and highlight the availability and usefulness of well‐established and regulatory‐acceptable methods, the European Medicines Agency (EMA) in collaboration with the European Federation of Pharmaceutical Industries Association (EFPIA) hosted a multistakeholder workshop on dose finding (London 4–5 December 2014). Some methodologies that could constitute a toolkit for drug developers and regulators were presented. These methods are described in the present report: they include five advanced methods for data analysis (empirical regression models, pharmacometrics models, quantitative systems pharmacology models, MCP‐Mod, and model averaging) and three methods for study design optimization (Fisher information matrix (FIM)‐based methods, clinical trial simulations, and adaptive studies). Pairwise comparisons were also discussed during the workshop; however, mostly for historical reasons. This paper discusses the added value and limitations of these methods as well as challenges for their implementation. Some applications in different therapeutic areas are also summarized, in line with the discussions at the workshop. There was agreement at the workshop on the fact that selection of dose for phase III is an estimation problem and should not be addressed via hypothesis testing. Dose selection for phase III trials should be informed by well‐designed dose‐finding studies; however, the specific choice of method(s) will depend on several aspects and it is not possible to recommend a generalized decision tree. 
There are many valuable methods available, the methods are not mutually exclusive, and they should be used in conjunction to ensure a scientifically rigorous understanding of the dosing rationale. PMID:28722322
Engineering Design Test 4 YAH-64 Advanced Attack Helicopter
1980-01-01
representative absorber installed data. However, this data was used for comparison with the absorber removed vibration levels. Additionally, a comparison of the… Figure 5 presents a comparison of performance measured during EDT-4 and EDT-2. Dimensional data for each flight are presented in figures 6 through 9… pilot could easily compensate for the lack of yaw rate damping. 28. The YAH-64 requires a high brake pedal pressure…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu
2014-01-15
According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers.
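How a grid of spot analyses can yield all three quantities at once can be sketched with a simple nearest-centroid assignment standing in for the paper's cluster analysis: each spot is assigned to a phase, phase abundance is the fraction of spots per cluster, and bulk chemistry is the average over all spots. The compositions below are hypothetical (CaO, SiO₂) pairs in weight percent, not measured data.

```python
# Illustrative sketch (not the paper's statistical pipeline): classify
# microprobe grid spots by nearest phase centroid, then derive phase
# abundance and bulk chemistry from the same grid.

def nearest(spot, centroids):
    return min(centroids, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(spot, centroids[name])))

centroids = {"alite": (71.0, 25.0), "belite": (63.0, 32.0)}   # hypothetical
spots = [(70.5, 25.5), (71.2, 24.8), (63.4, 31.5), (70.8, 25.1)]

labels = [nearest(s, centroids) for s in spots]
abundance = {p: labels.count(p) / len(labels) for p in centroids}
bulk = tuple(round(sum(c) / len(spots), 2) for c in zip(*spots))

print(abundance)  # {'alite': 0.75, 'belite': 0.25}
print(bulk)       # grid-average CaO and SiO2 (the bulk chemistry estimate)
```

Per-phase composition follows from averaging within each cluster, giving the three complementary quantities the abstract describes from one experiment.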
Methodology or method? A critical review of qualitative case study reports
Hyett, Nerida; Kenny, Amanda; Dickson-Swift, Virginia
2014-01-01
Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n=12), social sciences and anthropology (n=7), or methods (n=15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and if study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method, case of something particular and case selection, contextually bound case study, researcher and case interactions and triangulation, and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners. PMID:24809980
Agnihotri, Samira; Sundeep, P. V. D. S.; Seelamantula, Chandra Sekhar; Balakrishnan, Rohini
2014-01-01
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis in combination with various distance metrics to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls and we achieved a maximum accuracy of ninety five per cent only when we combined the results of both the methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude - filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential. PMID:24603717
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Missouri. Descriptive statistics are presented for Missouri and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Michigan. Descriptive statistics are presented for Michigan and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Iowa. Descriptive statistics are presented for Iowa and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population characteristics,…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Kansas. Descriptive statistics are presented for Kansas and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Ohio. Descriptive statistics are presented for Ohio and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population characteristics,…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Minnesota. Descriptive statistics are presented for Minnesota and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Nebraska. Descriptive statistics are presented for Nebraska and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Wisconsin. Descriptive statistics are presented for Wisconsin and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Indiana. Descriptive statistics are presented for Indiana and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in Illinois. Descriptive statistics are presented for Illinois and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
A Comparison of Advanced Placement Scores for Hispanic Students from California, Texas, and Arizona
ERIC Educational Resources Information Center
Koch, Bevan M.
2012-01-01
Purpose: One purpose of this study was to analyze the overall AP exam performance of Hispanic students of Mexican origin from California, Texas, and Arizona. A second purpose was to conduct a comparison of Hispanic student exam scores from California, Texas, and Arizona on mathematics and English exams. Specifically, the performance of Hispanic…
Multi-dimensional free-electron laser simulation codes: a comparison study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biedron, S. G.; Chae, Y. C.; Dejus, R. J.
A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.
Multi-Dimensional Free-Electron Laser Simulation Codes: A Comparison Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuhn, Heinz-Dieter
A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.
The APS SASE FEL: modeling and code comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biedron, S. G.
A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.
Lin, Michael F.; Deoras, Ameya N.; Rasmussen, Matthew D.; Kellis, Manolis
2008-01-01
Comparative genomics of multiple related species is a powerful methodology for the discovery of functional genomic elements, and its power should increase with the number of species compared. Here, we use 12 Drosophila genomes to study the power of comparative genomics metrics to distinguish between protein-coding and non-coding regions. First, we study the relative power of different comparative metrics and their relationship to single-species metrics. We find that even relatively simple multi-species metrics robustly outperform advanced single-species metrics, especially for shorter exons (≤240 nt), which are common in animal genomes. Moreover, the two capture largely independent features of protein-coding genes, with different sensitivity/specificity trade-offs, such that their combinations lead to even greater discriminatory power. In addition, we study how discovery power scales with the number and phylogenetic distance of the genomes compared. We find that species at a broad range of distances are comparably effective informants for pairwise comparative gene identification, but that these are surpassed by multi-species comparisons at similar evolutionary divergence. In particular, while pairwise discovery power plateaued at larger distances and never outperformed the most advanced single-species metrics, multi-species comparisons continued to benefit even from the most distant species with no apparent saturation. Last, we find that genes in functional categories typically considered fast-evolving can nonetheless be recovered at very high rates using comparative methods. Our results have implications for comparative genomics analyses in any species, including the human. PMID:18421375
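The frame-preservation signal exploited by such coding-region metrics can be illustrated with a toy score on a pairwise alignment. This is a hedged sketch inspired by, but not identical to, published reading-frame-conservation metrics: indels inside real protein-coding exons tend to have lengths divisible by three.

```python
import re

def frame_conservation(aligned_seq):
    """Fraction of gap runs whose length is a multiple of 3 in one row
    of a pairwise alignment. Coding regions, where indels tend to
    preserve the reading frame, score near 1.0."""
    gaps = [len(g) for g in re.findall(r"-+", aligned_seq)]
    if not gaps:
        return 1.0  # no indels: trivially frame-preserving
    return sum(1 for g in gaps if g % 3 == 0) / len(gaps)
```

A multi-species version would average such scores over all informant genomes in the alignment.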
Framework for more standardized evaluation of crater detection algorithms
NASA Astrophysics Data System (ADS)
Salamuniccar, G.; Loncaric, S.
Crater detection algorithms (CDAs) have applications ranging from approximating the age of a planetary surface and autonomous landing on planets and asteroids to advanced statistical analyses (ASR 33, 2281-2287). The simplest evaluation of CDAs is visual comparison of detected craters with topography. More advanced evaluations include comparison with crater catalogue(s) and cumulative size-frequency distribution(s), as well as use of Receiver Operating Characteristics (ROC). However, in order for evaluation results from different papers to be comparable, more standardized evaluation of CDAs is required. As a first step, a catalogue of 17,582 craters was assembled which can be used as ground truth (GT) in future evaluations of CDAs (37th LPS, 1137). Each crater from this catalogue is aligned with MOLA topography and confirmed by three independent sources: (1) the catalogue from N. G. Barlow et al.; (2) the catalogue from J. F. Rodionova et al.; and (3) a revised version of the catalogue used in previous work (34th LPS, 1403). As a second step, a method for estimating false detections by CDAs is proposed, which in combination with the known GT and other available analyses can improve evaluation of CDAs (37th LPS, 1138). While these two steps are important, there are other requirements as well, e.g. usability of the framework and flexibility for possible improvements of the data and methodology used. For CDAs that cannot use MOLA data as input, visual images were generated using different projections and shadowing. Tools are also provided for analysis.
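The matching of detections against a ground-truth catalogue, from which true-positive and false-detection rates follow, might be sketched as below. The position and radius tolerances are illustrative assumptions, not the framework's actual matching criteria.

```python
import numpy as np

def evaluate_cda(detections, ground_truth, pos_tol=0.5, r_tol=0.25):
    """Match detected craters (x, y, r) to a ground-truth catalogue.
    A detection matches a GT crater if the centre offset is within
    pos_tol * r_gt and the radii agree within r_tol (relative).
    Each GT crater may be claimed at most once.
    Returns (true-positive rate, false-detection rate)."""
    claimed, tp = set(), 0
    for x, y, r in detections:
        for i, (xg, yg, rg) in enumerate(ground_truth):
            if i in claimed:
                continue
            if np.hypot(x - xg, y - yg) <= pos_tol * rg and abs(r - rg) <= r_tol * rg:
                claimed.add(i)
                tp += 1
                break
    tpr = tp / len(ground_truth) if ground_truth else 0.0
    fdr = (len(detections) - tp) / len(detections) if detections else 0.0
    return tpr, fdr
```

Sweeping a CDA's detection threshold and plotting these two rates yields the ROC-style curves the framework envisions.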
A novel approach to enhance the accuracy of vibration control of Frames
NASA Astrophysics Data System (ADS)
Toloue, Iraj; Shahir Liew, Mohd; Harahap, I. S. H.; Lee, H. E.
2018-03-01
All structures built within known seismically active regions are typically designed to endure earthquake forces. Despite advances in earthquake-resistant structures, hindsight shows that no structure is entirely immune to earthquake damage. Active vibration control systems, unlike traditional methods that enlarge beams and columns, are highly effective countermeasures for reducing the effects of earthquake loading on a structure. They require fast computation of nonlinear structural analysis in near real-time, which has historically demanded advanced programming hosted on powerful computers. This research aims to develop a new approach for active vibration control of frames that is applicable over both elastic and plastic material behavior. In this study, the Force Analogy Method (FAM), which is based on Hooke's law, is further extended using the Timoshenko element, which accounts for shear deformations, to increase the reliability and accuracy of the controller. The proposed algorithm is applied to a 2D portal frame equipped with a linear actuator designed around a full-state Linear Quadratic Regulator (LQR). For comparison purposes, the portal frame is analysed with both Euler-Bernoulli and Timoshenko elements. The results clearly demonstrate the superiority of the Timoshenko element over Euler-Bernoulli for application in nonlinear analysis.
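A full-state LQR gain of the kind described can be sketched for a discretized single-storey shear frame. All structural parameters below are illustrative, and the paper's FAM/Timoshenko formulation is not reproduced; this only shows the regulator design step.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati
    difference equation (converges for stabilizable/detectable systems).
    Control law: u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy single-storey frame: states [displacement, velocity], actuator
# force as the input; mass, damping, stiffness, and step are assumptions.
m, c, k, dt = 1.0, 0.1, 10.0, 0.01
A = np.eye(2) + dt * np.array([[0.0, 1.0], [-k / m, -c / m]])
B = dt * np.array([[0.0], [1.0 / m]])
K = dlqr(A, B, Q=np.diag([100.0, 1.0]), R=np.array([[0.01]]))
```

The closed-loop system x(t+1) = (A - B K) x(t) is then guaranteed stable, which is the property the actuator controller relies on.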
Lawson, Peter R.; Poyneer, Lisa; Barrett, Harrison; Frazin, Richard; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gładysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jérôme; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Pearson, Iain; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry
2015-01-01
The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012. PMID:26347393
Verification of Minimum Detectable Activity for Radiological Threat Source Search
NASA Astrophysics Data System (ADS)
Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn
2015-10-01
The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.
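The spectral-comparison-ratio idea can be sketched as follows. This is a deliberately simplified score using Poisson counting variances; the actual NSCRAD algorithm combines the pairwise terms with a full covariance treatment over calibrated regions of interest.

```python
import numpy as np

def scr_anomaly(spectrum_rois, background_rois):
    """Simplified spectral-comparison-ratio anomaly score. For each pair
    of regions of interest (i, j), the comparison ratio
        c_ij = x_i - (b_i / b_j) * x_j
    is zero in expectation when the measured spectrum has the same shape
    as the background, regardless of overall count rate. Pairwise terms
    are combined into a chi-square-like score."""
    x = np.asarray(spectrum_rois, dtype=float)
    b = np.asarray(background_rois, dtype=float)
    score, n = 0.0, 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r = b[i] / b[j]
            c = x[i] - r * x[j]
            var = x[i] + r * r * x[j] + 1e-12  # Poisson counting variance
            score += c * c / var
            n += 1
    return score / n
```

A newly measured spectrum that merely scales the background shape scores near zero, while an injected source that distorts one region stands out.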
NASA Technical Reports Server (NTRS)
Lawson, Peter R.; Frazin, Richard; Barrett, Harrison; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gladysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jerome;
2012-01-01
The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We provide a formal comparison of techniques through a blind data challenge and evaluate performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.
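The ROC evaluation proposed here can be illustrated with an empirical ROC/AUC computation over detection statistics gathered under the two hypotheses (planet present, planet absent). This is a minimal sketch of the metric, not the blind data challenge protocol itself.

```python
import numpy as np

def roc_curve(scores_h1, scores_h0):
    """Empirical ROC: sweep a threshold over all observed detection
    statistics and record (false-positive rate, true-positive rate)."""
    thresholds = np.sort(np.concatenate([scores_h0, scores_h1]))[::-1]
    fpr = [(scores_h0 >= t).mean() for t in thresholds]
    tpr = [(scores_h1 >= t).mean() for t in thresholds]
    return np.array([0.0] + fpr + [1.0]), np.array([0.0] + tpr + [1.0])

def auc(fpr, tpr):
    """Area under the ROC curve by the trapezoid rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
```

An ideal-observer analysis would compare the AUC (or the LROC analogue, which also scores localization) achieved by each post-processing technique on the same data.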
Lawson, Peter R; Poyneer, Lisa; Barrett, Harrison; Frazin, Richard; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gładysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jérôme; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Pearson, Iain; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry
2012-07-01
The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.
NASA Astrophysics Data System (ADS)
Lawson, Peter R.; Poyneer, Lisa; Barrett, Harrison; Frazin, Richard; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gładysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jérôme; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Pearson, Iain; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry
2012-07-01
The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.
Maupin, Molly A.; Senay, Gabriel B.; Kenny, Joan F.; Savoca, Mark E.
2012-01-01
Recent advances in remote-sensing technology and Simplified Surface Energy Balance (SSEB) methods can provide accurate and repeatable estimates of evapotranspiration (ET) when used with satellite observations of irrigated lands. Estimates of ET are generally considered equivalent to consumptive use (CU) because they represent the part of applied irrigation water that is evaporated, transpired, or otherwise not available for immediate reuse. The U.S. Geological Survey compared ET estimates from SSEB methods to CU data collected for 1995 using indirect methods as part of the National Water Use Information Program (NWUIP). Ten-year (2000-2009) average ET estimates from SSEB methods were derived using Moderate Resolution Imaging Spectroradiometer (MODIS) 1-kilometer satellite land surface temperature and gridded weather datasets from the Global Data Assimilation System (GDAS). County-level CU estimates for 1995 were assembled and referenced to 1-kilometer grid cells to synchronize with the SSEB ET estimates. Both datasets were seasonally and spatially weighted to represent the irrigation season (June-September) and those lands that were identified in the county as irrigated. A strong relation (R2 greater than 0.7) was determined between NWUIP CU and SSEB ET data. Regionally, the relation is stronger in arid western states than in humid eastern states, and positive and negative biases are both present at state-level comparisons. SSEB ET estimates can play a major role in monitoring and updating county-based CU estimates by providing a quick and cost-effective method to detect major year-to-year changes at county levels, as well as providing a means to disaggregate county-based ET estimates to sub-county levels. More research is needed to identify the causes for differences in state-based relations.
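The SSEB scaling at the heart of this comparison is commonly written as ETf = (T_hot - T_s) / (T_hot - T_cold). A minimal sketch follows; the reference temperatures and reference ET are illustrative inputs, not MODIS/GDAS values.

```python
import numpy as np

def sseb_et(ts, t_hot, t_cold, et_reference):
    """Simplified Surface Energy Balance: the ET fraction of a pixel is
    scaled linearly between a 'hot' (dry, bare) and a 'cold'
    (well-watered) reference land-surface temperature, clipped to
    [0, 1], then multiplied by reference ET (e.g. mm/day)."""
    etf = np.clip((t_hot - ts) / (t_hot - t_cold), 0.0, 1.0)
    return etf * et_reference
```

Applied per 1-kilometer grid cell and summed over the June-September season, such estimates are what the study compared against the county-level NWUIP consumptive-use data.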
Das, Prasenjit; Gahlot, Gaurav P S; Mehta, Ritu; Makharia, Archita; Verma, Anil K; Sreenivas, Vishnubhatla; Panda, Subrat K; Ahuja, Vineet; Gupta, Siddhartha Datta; Makharia, Govind K
2016-11-01
Severity of villous atrophy in celiac disease (CeD) is the cumulative effect of enterocyte loss and cell regeneration. Gluten-free diet has been shown to benefit even in patients having a positive anti-tissue transglutaminase (tTG) antibody titre and mild enteropathy. We explored the balance between mucosal apoptotic enterocyte loss and cell regeneration in mild and advanced enteropathies. Duodenal biopsies from patients with mild enteropathy (Marsh grade 0 and 1) (n=26), advanced enteropathy (Marsh grade ≥2) (n=41) and control biopsies (n=12) were subjected to immunohistochemical staining for end-apoptotic markers (M30, H2AX); markers of cell death (perforin, annexin V); and cell proliferation (Ki67). Composite H-scores based on the intensity and distribution of markers were compared. End-apoptotic markers and marker of cell death (perforin) were significantly up-regulated in both mild and advanced enteropathies, in comparison to controls; without any difference between mild and advanced enteropathies. Ki67 labelling index was significantly higher in crypts of mild enteropathy, in comparison to controls, suggesting maintained regenerative activity in the former. Even in patients with mild enteropathy, the rate of apoptosis is similar to those with advanced enteropathy. These findings suggest the necessity of reviewing the existing practice of not treating patients with mild enteropathy. Copyright © 2016 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
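Composite H-scores of the kind compared here are conventionally computed as an intensity-weighted sum of cell percentages; a minimal sketch, assuming the usual 0-3 staining-intensity convention:

```python
def h_score(fractions):
    """Immunohistochemistry H-score: sum over staining intensity levels
    (0 = none ... 3 = strong) of intensity x percent of cells staining
    at that level. Range 0-300."""
    return sum(intensity * pct for intensity, pct in fractions.items())
```

Comparing such scores for apoptotic versus proliferation markers between biopsy groups is the quantitative step the study describes.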
NASA Solar Array Demonstrates Commercial Potential
NASA Technical Reports Server (NTRS)
Creech, Gray
2006-01-01
A state-of-the-art solar-panel array demonstration site at NASA's Dryden Flight Research Center provides a unique opportunity for studying the latest in high-efficiency solar photovoltaic cells. This five-kilowatt solar-array site (see Figure 1) is a technology-transfer and commercialization success for NASA. Among the solar cells at this site are cells of a type that was developed in Dryden Flight Research Center's Environmental Research Aircraft and Sensor Technology (ERAST) program for use in NASA's Helios solar-powered airplane. This cell type, now denoted as A-300, has since been transferred to SunPower Corporation of Sunnyvale, California, enabling mass production of the cells for the commercial market. High efficiency separates these advanced cells from typical previously available commercial solar cells: whereas typical commercial cells are 12 to 15 percent efficient at converting sunlight to electricity, these advanced cells exhibit efficiencies approaching 23 percent. The increase in efficiency is due largely to the routing of electrical connections behind the cells (see Figure 2). This approach to increasing efficiency originated as a solution to the problem of maximizing the use of the limited space available atop the wing of the Helios airplane. In retrospect, the solar cells in use at this site could be used on Helios, but the best cells otherwise commercially available could not, because of their lower efficiencies. Historically, solar cells have been fabricated by methods that are common in the semiconductor industry. One of these methods includes the use of photolithography to define the rear electrical-contact features: diffusions, contact openings, and fingers. SunPower uses these methods to produce the advanced cells. To reduce fabrication costs, SunPower continues to explore new methods to define the rear electrical-contact features.
The equipment at the demonstration site includes two fixed-angle solar arrays and one single-axis Sun-tracking array. One of the fixed arrays contains typical less-efficient commercial solar cells and is being used as a baseline for comparison of the other fixed array, which contains the advanced cells. The Sun-tracking array tilts to follow the Sun, using an advanced, real-time tracking device rather than customary pre-programmed mechanisms. Part of the purpose served by the demonstration is to enable determination of any potential advantage of a tracking array over a fixed array. The arrays are monitored remotely on a computer that displays pertinent information regarding the functioning of the arrays.
ERIC Educational Resources Information Center
Day, Sandra K.
2012-01-01
This study compared selected college/career readiness outcomes for students attending an urban high school who voluntarily participated in an academic support program, Advancement Via Individual Determination (AVID), to demographically similar/same school peers who completed the traditional academic program (TAP) of study. Grade point average,…
Advanced Bode Plot Techniques for Ultrasonic Transducers
NASA Astrophysics Data System (ADS)
DeAngelis, D. A.; Schulze, G. W.
The Bode plot, displayed as either impedance or admittance versus frequency, is the most basic test used by ultrasonic transducer designers. With simplicity and ease of use, Bode plots are ideal for baseline comparisons such as spacing of parasitic modes or impedance, but quite often the subtleties that manifest as poor process control are hard to interpret or are nonexistent. In-process testing of transducers is time consuming for quantifying statistical aberrations, and assessments made indirectly via the workpiece are difficult. This research investigates the use of advanced Bode plot techniques to compare ultrasonic transducers with known "good" and known "bad" process performance, with the goal of a priori process assessment. These advanced techniques expand from the basic constant-voltage frequency sweep to include constant-current and constant-velocity sweeps interrogated locally on the transducer or tool; they also include up and down directional frequency sweeps to quantify hysteresis effects like jumping and dropping phenomena. The investigation focuses solely on the common PZT8 piezoelectric material used with welding transducers for semiconductor wire bonding. Several metrics are investigated, such as impedance, displacement/current gain, velocity/current gain, displacement/voltage gain and velocity/voltage gain. The experimental and theoretical research methods include Bode plots, admittance loops, laser vibrometry and coupled-field finite element analysis.
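The admittance-versus-frequency curves underlying such Bode plots are often modeled with the Butterworth-Van Dyke equivalent circuit for a piezoelectric transducer: a static capacitance C0 in parallel with a motional R-L-C branch. A minimal sketch; the component values are illustrative, not measured PZT8 parameters.

```python
import numpy as np

def bvd_admittance(freq_hz, C0, R1, L1, C1):
    """Butterworth-Van Dyke admittance: Y = j*w*C0 + 1/Z_motional,
    with Z_motional = R1 + j*w*L1 + 1/(j*w*C1). |Y| peaks near the
    series (motional) resonance f_s = 1 / (2*pi*sqrt(L1*C1))."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    z_motional = R1 + 1j * w * L1 + 1.0 / (1j * w * C1)
    return 1j * w * C0 + 1.0 / z_motional

# Sweep around the expected series resonance (~39.8 kHz for these values)
f = np.linspace(30e3, 50e3, 2001)
Y = bvd_admittance(f, C0=5e-9, R1=20.0, L1=0.1, C1=1.6e-10)
```

Plotting |Y| (or 1/|Y| for impedance) over such a sweep reproduces the basic Bode plot; the constant-current and constant-velocity variants described above interrogate the same resonance under different drive conditions.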
Pei, Yuchen; Qi, Zhiyuan; Li, Xinle; ...
2017-02-21
Hollow carbon nanostructures are emerging as advanced electrocatalysts for the oxygen reduction reaction (ORR) due to the effective usage of active sites and the reduced dependence on expensive noble metals. Conventional preparation of these hollow structures is achieved through templates (e.g. SiO2, CdS, and Ni3C), which serve to retain the void interiors during carbonization, leading to an essential template-removal procedure using hazardous chemical etchants. Herein, we demonstrate the direct carbonization of unique hollow zeolitic imidazolate frameworks (ZIFs) for the synthesis of hollow carbon polyhedrons (HCPs) with well-defined morphologies. The hollow ZIF particles behave bi-functionally as a carbon source and a morphology directing agent. This method evidences the strong morphology inheritance from the hollow ZIFs during the carbonization, advancing the significant simplicity and environmental friendliness of this synthesis strategy. The as-prepared HCPs show a uniform polyhedral morphology and large void interiors, which enable their superior ORR activity. Iron can be doped into the HCPs (Fe/HCPs), providing the Fe/HCPs with enhanced ORR properties (E1/2 = 0.850 V) in comparison with those of HCPs. As a result, we highlight the efficient structural engineering to transform ZIFs into advanced carbon nanostructures accomplishing morphological control and high electrocatalytic activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Yuchen; Qi, Zhiyuan; Li, Xinle
Hollow carbon nanostructures are emerging as advanced electrocatalysts for the oxygen reduction reaction (ORR) due to the effective usage of active sites and the reduced dependence on expensive noble metals. Conventional preparation of these hollow structures is achieved through templates (e.g. SiO2, CdS, and Ni3C), which serve to retain the void interiors during carbonization, leading to an essential template-removal procedure using hazardous chemical etchants. Herein, we demonstrate the direct carbonization of unique hollow zeolitic imidazolate frameworks (ZIFs) for the synthesis of hollow carbon polyhedrons (HCPs) with well-defined morphologies. The hollow ZIF particles behave bi-functionally as a carbon source and a morphology directing agent. This method evidences the strong morphology inheritance from the hollow ZIFs during the carbonization, advancing the significant simplicity and environmental friendliness of this synthesis strategy. The as-prepared HCPs show a uniform polyhedral morphology and large void interiors, which enable their superior ORR activity. Iron can be doped into the HCPs (Fe/HCPs), providing the Fe/HCPs with enhanced ORR properties (E1/2 = 0.850 V) in comparison with those of HCPs. As a result, we highlight the efficient structural engineering to transform ZIFs into advanced carbon nanostructures accomplishing morphological control and high electrocatalytic activity.
Shu, Lisa L; Mazar, Nina; Gino, Francesca; Ariely, Dan; Bazerman, Max H
2012-09-18
Many written forms required by businesses and governments rely on honest reporting. Proof of honest intent is typically provided through a signature at the end of, e.g., tax returns or insurance policy forms. Still, people sometimes cheat to advance their financial self-interests, at great cost to society. We test an easy-to-implement method to discourage dishonesty: signing at the beginning rather than at the end of a self-report, thereby reversing the order of the current practice. Using laboratory and field experiments, we find that signing before, rather than after, the opportunity to cheat makes ethics salient when they are needed most and significantly reduces dishonesty.
Advanced ETC/LSS computerized analytical models, CO2 concentration. Volume 1: Summary document
NASA Technical Reports Server (NTRS)
Taylor, B. N.; Loscutoff, A. V.
1972-01-01
Computer simulations have been prepared for the concepts of CO2 concentration which have the potential for maintaining a CO2 partial pressure of 3.0 mmHg, or less, in a spacecraft environment. The simulations were performed using the G-189A Generalized Environmental Control computer program. In preparing the simulations, new subroutines to model the principal functional components for each concept were prepared and integrated into the existing program. Sample problems were run to demonstrate the methods of simulation and performance characteristics of the individual concepts. Comparison runs for each concept can be made for parametric values of cabin pressure, crew size, cabin air dry and wet bulb temperatures, and mission duration.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
Panelli, Simona; Damiani, Giuseppe; Espen, Luca; Micheli, Gioacchino; Sgaramella, Vittorio
2006-05-10
The development of methods for the analysis and comparison of the nucleic acids contained in single cells is an ambitious and challenging goal that may provide useful insights in many physiopathological processes. We review here some of the published protocols for the amplification of whole genomes (WGA). We focus on the reaction known as Multiple Displacement Amplification (MDA), which probably represents the most reliable and efficient WGA protocol developed to date. We discuss some recent advances and applications, as well as some modifications to the reaction, which should improve its use and enlarge its range of applicability possibly to degraded genomes, and also to RNA via complementary DNA.
Advanced stability analysis for laminar flow control
NASA Technical Reports Server (NTRS)
Orszag, S. A.
1981-01-01
Five classes of problems are addressed: (1) the extension of the SALLY stability analysis code to the full eighth order compressible stability equations for three dimensional boundary layer; (2) a comparison of methods for prediction of transition using SALLY for incompressible flows; (3) a study of instability and transition in rotating disk flows in which the effects of Coriolis forces and streamline curvature are included; (4) a new linear three dimensional instability mechanism that predicts Reynolds numbers for transition to turbulence in planar shear flows in good agreement with experiment; and (5) a study of the stability of finite amplitude disturbances in axisymmetric pipe flow showing the stability of this flow to all nonlinear axisymmetric disturbances.
Kerney, Ryan; Wassersug, Richard; Hall, Brian K
2010-01-01
This study examines the skeletons of giant non-metamorphosing (GNM) Xenopus laevis tadpoles, which arrest their development indefinitely before metamorphosis, and grow to excessively large sizes in the absence of detectable thyroid glands. Cartilage growth is isometric; however, chondrocyte size is smaller in GNM tadpoles than in controls. Most cartilages stain weakly with alcian blue, and several cartilages are calcified (unlike controls). However, cartilages subjacent to periosteum-derived bone retain strong affinities for alcian blue, indicating a role for periosteum-derived bone in the retention of glycosaminoglycans during protracted larval growth. Bone formation in the head, limb, and axial skeletons is advanced in comparison with stage-matched controls, but arrests at various mid-metamorphic states. Both dermal and periosteum-derived bones grow to disproportionately large sizes in comparison to controls. Additionally, mature monocuspid teeth form in several GNM tadpoles. Advances in skeletal development are attributable to the old ages and large sizes of these tadpoles, and reveal unexpected developmental potentials of the pre-metamorphic skeleton. PMID:20402828
Contactless physiological signals extraction based on skin color magnification
NASA Astrophysics Data System (ADS)
Suh, Kun Ha; Lee, Eui Chul
2017-11-01
Although the human visual system is not sufficiently sensitive to perceive blood circulation, blood flow driven by cardiac activity makes slight changes on human skin surfaces. With advances in imaging technology, it has become possible to capture these changes with digital cameras. However, it is difficult to obtain clear physiological signals from such changes because they are subtle and are obscured by noise factors such as motion artifacts and camera sensing disturbances. We propose a method for extracting physiological signals with improved quality from skin-colored videos recorded with a remote RGB camera. The results showed that our skin color magnification method reveals the hidden physiological components remarkably well in the time-series signal. A Korea Food and Drug Administration-approved heart rate monitor was used to verify that the resulting signal was synchronized with the actual cardiac pulse, and comparisons of signal peaks showed correlation coefficients of almost 1.0. In particular, our method can serve as an effective preprocessing step before applying additional post-filtering techniques to improve accuracy in image-based physiological signal extraction.
Analysis and Design of Rotors at Ultra-Low Reynolds Numbers
NASA Technical Reports Server (NTRS)
Kunz, Peter J.; Strawn, Roger C.
2003-01-01
Design tools have been developed for ultra-low Reynolds number rotors, combining enhanced actuator-ring / blade-element theory with airfoil section data based on two-dimensional Navier-Stokes calculations. This performance prediction method is coupled with an optimizer for both design and analysis applications. Performance predictions from these tools have been compared with three-dimensional Navier-Stokes analyses and experimental data for a 2.5 cm diameter rotor with chord Reynolds numbers below 10,000. Comparisons among the analyses and experimental data show reasonable agreement in both the global thrust and power required, but the spanwise distributions of these quantities exhibit significant deviations. The study also reveals that three-dimensional and rotational effects significantly change local airfoil section performance. The magnitude of this issue, unique to this operating regime, may limit the applicability of blade-element type methods for detailed rotor design at ultra-low Reynolds numbers, but these methods are still useful for evaluating concept feasibility and rapidly generating initial designs for further analysis and optimization using more advanced tools.
Near-field hazard assessment of March 11, 2011 Japan Tsunami sources inferred from different methods
Wei, Y.; Titov, V.V.; Newman, A.; Hayes, G.; Tang, L.; Chamberlin, C.
2011-01-01
The tsunami source is the origin of the subsequent transoceanic water waves and thus the most critical component in modern tsunami forecast methodology. Although impractical to quantify directly, a tsunami source can be estimated by different methods based on a variety of measurements provided by deep-ocean tsunameters, seismometers, GPS, and other advanced instruments, some in real time and some after the event. Here we assess these different sources of the devastating March 11, 2011 Japan tsunami by model-data comparison for generation, propagation, and inundation in the near field of Japan. This study provides a comparative analysis to further understand the advantages and shortcomings of different methods that may potentially be used in real-time warning and forecasting of tsunami hazards, especially in the near field. The model study also highlights the critical role of deep-ocean tsunami measurements for high-quality tsunami forecasts; their combination with land GPS measurements may lead to better understanding of both earthquake mechanisms and the tsunami generation process. © 2011 MTS.
Research on Modeling of Propeller in a Turboprop Engine
NASA Astrophysics Data System (ADS)
Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong
2015-05-01
In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time and precision performance of propeller models based on strip theory and on lifting surface theory. The emphasis in modeling by strip theory is focused on three points: First, FLUENT is adopted to calculate the lift and drag coefficients of the propeller. Next, a method to calculate the induced velocity which occurs in the ground rig test is presented. Finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory. This approximate method reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the model based on strip theory, which has advantages in both real-time performance and accuracy, can meet the requirement.
Dendrometer bands made easy: Using modified cable ties to measure incremental growth of trees1
Anemaet, Evelyn R.; Middleton, Beth A.
2013-01-01
• Premise of the study: Dendrometer bands are a useful way to make sequential repeated measurements of tree growth, but traditional dendrometer bands can be expensive, time-consuming, and difficult to construct in the field. An alternative to the traditional method of band construction is to adapt commercially available materials. This paper describes how to construct and install dendrometer bands using smooth-edged, stainless steel, cable tie banding and attachable rollerball heads. • Methods and Results: As a performance comparison, both traditional and cable tie dendrometer bands were installed on baldcypress trees at the National Wetlands Research Center in Lafayette, Louisiana, by both an experienced and a novice worker. Band installation times were recorded, and growth of the trees as estimated by the two band types was measured after approximately one year, demonstrating the equivalence of the two methods. • Conclusions: This efficient approach to dendrometer band construction can help advance the knowledge of long-term tree growth in ecological studies. PMID:25202589
NASA Technical Reports Server (NTRS)
Sprowls, D. O.; Bucci, R. J.; Ponchel, B. M.; Brazill, R. L.; Bretz, P. E.
1984-01-01
A technique is demonstrated for accelerated stress corrosion testing of high strength aluminum alloys. The method offers better precision and shorter exposure times than traditional pass/fail procedures. The approach uses data from tension tests performed on replicate groups of smooth specimens after various lengths of exposure to static stress. The breaking strength measures degradation in the test specimen load carrying ability due to the environmental attack. Analysis of breaking load data by extreme value statistics enables the calculation of survival probabilities and a statistically defined threshold stress applicable to the specific test conditions. A fracture mechanics model is given which quantifies depth of attack in the stress corroded specimen by an effective flaw size calculated from the breaking stress and the material strength and fracture toughness properties. Comparisons are made with experimental results from three tempers of 7075 alloy plate tested by the breaking load method and by traditional tests of statically loaded smooth tension bars and conventional precracked specimens.
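The extreme-value analysis described above can be sketched with a simple order-statistics estimate of survival probability. This is a generic median-rank (Benard) approximation applied to hypothetical breaking loads, not the paper's actual procedure or data:

```python
def survival_probabilities(breaking_loads):
    """Median-rank (Benard) estimate of the probability that a specimen's
    breaking load exceeds each observed value, from replicate tension tests.

    The i-th lowest of n loads is assigned failure probability
    F_i ~ (i - 0.3) / (n + 0.4); survival is 1 - F_i.
    """
    n = len(breaking_loads)
    ranked = sorted(breaking_loads)
    return [(load, 1.0 - (i + 1 - 0.3) / (n + 0.4))
            for i, load in enumerate(ranked)]

# Hypothetical breaking loads (MPa) from one exposure group:
sp = survival_probabilities([310.0, 295.0, 305.0, 300.0])
```

The lowest observed load gets the highest estimated survival probability, so the sorted list traces out an empirical survival curve that could then be fit with an extreme-value (e.g. Weibull) distribution.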
Step-Climbing Power Wheelchairs: A Literature Review
Sundaram, S. Andrea; Wang, Hongwu; Ding, Dan
2017-01-01
Background: Power wheelchairs capable of overcoming environmental barriers, such as uneven terrain, curbs, or stairs, have been under development for more than a decade. Method: We conducted a systematic review of the scientific and engineering literature to identify these devices, and we provide brief descriptions of the mechanism and method of operation for each. We also present data comparing their capabilities in terms of step climbing and standard wheelchair functions. Results: We found that all the devices presented allow for traversal of obstacles that cannot be negotiated with traditional power wheelchairs; however, the slow speeds and small wheel diameters of some designs make them only moderately effective at efficient transport over level ground, and the size and configuration of others limit maneuverability in tight spaces. Conclusion: We propose that safety and performance test methods more comprehensive than the International Organization for Standardization (ISO) testing protocols be developed for measuring the capabilities of advanced wheelchairs with step-climbing and other environment-negotiating features to allow comparison of their clinical effectiveness. PMID:29339886
A comparison of two methods for measuring vessel length in woody plants.
Pan, Ruihua; Geng, Jing; Cai, Jing; Tyree, Melvin T
2015-12-01
Vessel lengths are important to plant hydraulic studies, but are not often reported because of the time required to obtain measurements. This paper compares the fast dynamic method (air injection method) with the slower but traditional static method (rubber injection method). Our hypothesis was that the dynamic method should yield a larger mean vessel length than the static method. Vessel length was measured by both methods in current-year stems of Acer, Populus, Vitis and Quercus, representing short- to long-vessel species. The hypothesis was verified. The reason for the consistently larger values of vessel length is that the dynamic method measures air flow rates in cut-open vessels. The Hagen-Poiseuille law predicts that the air flow rate should depend on the product of the number of cut-open vessels times the fourth power of vessel diameter. An argument is advanced that the dynamic method is more appropriate because it measures the length of the vessels that contribute most to hydraulic flow. If all vessels had the same vessel length distribution regardless of diameter, then both methods should yield the same average length. This supports the hypothesis that large-diameter vessels might be longer than small-diameter vessels in most species. © 2015 John Wiley & Sons Ltd.
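The fourth-power scaling invoked above explains why the dynamic method weights wide vessels so heavily. A minimal illustration of the Q ∝ n·d⁴ relationship, using hypothetical vessel counts and diameters rather than the paper's data:

```python
def relative_flow(vessel_classes):
    """vessel_classes: (count, diameter) pairs for each diameter class.
    Returns each class's fraction of total flow under the
    Hagen-Poiseuille scaling Q proportional to n * d**4."""
    weights = [n * d ** 4 for n, d in vessel_classes]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical stem: many narrow vessels vs. a few wide ones (diameters in um).
fractions = relative_flow([(100, 20.0), (5, 60.0)])
```

Even though narrow vessels outnumber wide ones 20 to 1 here, the handful of wide vessels carries roughly 80% of the flow, which is why the dynamic method's length estimate is dominated by the longest, widest conduits.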
Vaughan, Ian P.; Ramirez Saldivar, Diana A.; Nathan, Senthilvel K. S. S.; Goossens, Benoit
2017-01-01
The development of GPS tags for tracking wildlife has revolutionised the study of home ranges, habitat use and behaviour. Concomitantly, there have been rapid developments in methods for estimating habitat use from GPS data. In combination, these changes can cause challenges in choosing the best methods for estimating home ranges. In primatology, this issue has received little attention, as there have been few GPS collar-based studies to date. However, as advancing technology is making collaring studies more feasible, there is a need for the analysis to advance alongside the technology. Here, using a high quality GPS collaring data set from 10 proboscis monkeys (Nasalis larvatus), we aimed to: 1) compare home range estimates from the most commonly used method in primatology, the grid-cell method, with three recent methods designed for large and/or temporally correlated GPS data sets; 2) evaluate how well these methods identify known physical barriers (e.g. rivers); and 3) test the robustness of the different methods to data containing either less frequent or random losses of GPS fixes. Biased random bridges had the best overall performance, combining a high level of agreement between the raw data and estimated utilisation distribution with a relatively low sensitivity to reduced fix frequency or loss of data. It estimated the home range of proboscis monkeys to be 24–165 ha (mean 80.89 ha). The grid-cell method and approaches based on local convex hulls had some advantages including simplicity and excellent barrier identification, respectively, but lower overall performance. With the most suitable model, or combination of models, it is possible to understand more fully the patterns, causes, and potential consequences that disturbances could have on an animal, and accordingly be used to assist in the management and restoration of degraded landscapes. PMID:28362872
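The grid-cell method named above is valued for its simplicity: overlay a grid on the fixes and sum the area of occupied cells. A minimal sketch, with hypothetical coordinates and cell size (the study does not specify these parameters here):

```python
def grid_cell_home_range(fixes, cell_size_m):
    """Grid-cell home-range estimate: area (in hectares) of the set of
    grid cells containing at least one GPS fix.

    fixes: iterable of (x, y) positions in metres (projected coordinates).
    cell_size_m: side length of a square grid cell in metres.
    """
    cells = {(int(x // cell_size_m), int(y // cell_size_m)) for x, y in fixes}
    return len(cells) * cell_size_m ** 2 / 10_000.0  # m^2 -> ha

# Hypothetical fixes: two fall in one 10 m cell, one in a neighbouring cell.
area_ha = grid_cell_home_range([(5.0, 5.0), (15.0, 5.0), (6.0, 4.0)], 10.0)
```

The estimate depends strongly on cell size and ignores temporal correlation between fixes, which is exactly the limitation that motivates the kernel- and bridge-based alternatives the study compares.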
Strom, Suzanne L; Anderson, Craig L; Yang, Luanna; Canales, Cecilia; Amin, Alpesh; Lotfipour, Shahram; McCoy, C Eric; Osborn, Megan Boysen; Langdorf, Mark I
2015-11-01
Traditional Advanced Cardiac Life Support (ACLS) courses are evaluated using written multiple-choice tests. High-fidelity simulation is a widely used adjunct to didactic content, and has been used in many specialties as a training resource as well as an evaluative tool. There are no data to our knowledge that compare simulation examination scores with written test scores for ACLS courses. To compare and correlate a novel high-fidelity simulation-based evaluation with traditional written testing for senior medical students in an ACLS course. We performed a prospective cohort study to determine the correlation between simulation-based evaluation and traditional written testing in a medical school simulation center. Students were tested on a standard acute coronary syndrome/ventricular fibrillation cardiac arrest scenario. Our primary outcome measure was correlation of exam results for 19 volunteer fourth-year medical students after a 32-hour ACLS-based Resuscitation Boot Camp course. Our secondary outcome was comparison of simulation-based vs. written outcome scores. The composite average score on the written evaluation was substantially higher (93.6%) than the simulation performance score (81.3%, absolute difference 12.3%, 95% CI [10.6-14.0%], p<0.00005). We found a statistically significant moderate correlation between simulation scenario test performance and traditional written testing (Pearson r=0.48, p=0.04), validating the new evaluation method. Simulation-based ACLS evaluation methods correlate with traditional written testing and demonstrate resuscitation knowledge and skills. Simulation may be a more discriminating and challenging testing method, as students scored higher on written evaluation methods compared to simulation.
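The moderate correlation reported above is a standard Pearson r between paired scores. A self-contained sketch of the computation, using hypothetical paired scores rather than the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical written-exam and simulation scores (percent) for illustration:
written = [90.0, 94.0, 88.0, 96.0, 92.0]
simulation = [78.0, 85.0, 74.0, 88.0, 80.0]
r = pearson_r(written, simulation)
```

With n = 19 students, an observed r of 0.48 corresponds to the p = 0.04 reported, i.e. a moderate but statistically significant association between the two testing formats.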
Käser, T; Pasternak, J A; Hamonic, G; Rieder, M; Lai, K; Delgado-Ortega, M; Gerdts, V; Meurens, F
2016-05-01
Chlamydiaceae is a family of intracellular bacteria causing a range of diverse pathological outcomes. The most devastating human diseases are ocular infections with C. trachomatis leading to blindness and genital infections causing pelvic inflammatory disease with long-term sequelae including infertility and chronic pelvic pain. In order to enable the comparison of experiments between laboratories investigating host-chlamydia interactions, the infectious titer has to be determined. Titer determination of chlamydia is most commonly performed via microscopy of host cells infected with a serial dilution of chlamydia. However, other methods including fluorescent ELISpot (Fluorospot) and DNA Chip Scanning Technology have also been proposed to enumerate chlamydia-infected cells. For viruses, flow cytometry has been suggested as a superior alternative to standard titration methods. In this study we compared the use of flow cytometry with microscopy and Fluorospot for the titration of C. suis as a representative of other intracellular bacteria. Titer determination via Fluorospot was unreliable, while titration via microscopy led to a linear read-out range of 16 - 64 dilutions and moderate reproducibility with acceptable standard deviations within and between investigators. In contrast, flow cytometry had a vast linear read-out range of 1,024 dilutions and the lowest standard deviations given basic training in these methods. In addition, flow cytometry was faster and material costs were lower compared to microscopy. Flow cytometry offers a fast, cheap, precise, and reproducible alternative for the titration of intracellular bacteria like C. suis. © 2016 International Society for Advancement of Cytometry.
A possible explanation for foreland thrust propagation
NASA Astrophysics Data System (ADS)
Panian, John; Pilant, Walter
1990-06-01
A common feature of thin-skinned fold and thrust belts is the sequential nature of foreland-directed thrust systems. As a rule, younger thrusts develop in the footwalls of older thrusts, the whole sequence propagating towards the foreland in the transport direction. As each new younger thrust develops, the entire sequence is thickened, particularly in the frontal region. The compressive toe region can be likened to an advancing wave: as the mountainous thrust belt advances, down-surface-slope stresses drive thrusts ahead of it, much as a wave carries a surfboard rider. In an attempt to investigate the stresses in the frontal regions of thrust sheets, a numerical method has been devised from the algorithm given by McTigue and Mei [1981]. The algorithm yields a quickly computed approximate solution of the gravity- and tectonic-induced stresses of a two-dimensional homogeneous elastic half-space with an arbitrarily shaped free surface of small slope. A comparison of the numerical method with analytical examples shows excellent agreement. The numerical method was devised because it greatly facilitates the stress calculations and frees one from using the restrictive, simple topographic profiles necessary to obtain an analytical solution. The numerical version of the McTigue and Mei algorithm shows that there is a region of increased maximum resolved shear stress, τ, directly beneath the toe of the overthrust sheet. Utilizing the Mohr-Coulomb failure criterion, predicted fault lines are computed. It is shown that they flatten and become horizontal in some portions of this zone of increased τ. Thrust sheets are known to advance upon weak decollement zones. If there is a coincidence of increased τ, a weak rock layer, and a potential fault line parallel to this weak layer, we have in place all the elements necessary to initiate a new thrusting event. That is, this combination acts as a nucleating center for a new thrusting event.
Therefore, thrusts develop in sequence towards the foreland as a consequence of the stress concentrating abilities of the toe of the thrust sheet. The gravity- and tectonic-induced stresses due to the surface topography (usually ignored in previous analyses) of an advancing thrust sheet play a key role in the nature of shallow foreland thrust propagation.
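The Mohr-Coulomb prediction used in this analysis has a compact closed form: conjugate failure planes lie at ±(45° − φ/2) from the maximum compressive stress direction, where φ is the internal friction angle. A minimal sketch of that relation (the friction angle below is a hypothetical value, not one from the study):

```python
def coulomb_fault_angles(friction_angle_deg):
    """Conjugate Mohr-Coulomb failure-plane orientations, in degrees,
    measured from the maximum compressive stress (sigma_1) direction:
    theta = +/-(45 - phi/2)."""
    theta = 45.0 - friction_angle_deg / 2.0
    return theta, -theta

# A typical crustal friction angle of ~30 degrees gives planes at +/-30 degrees
# from sigma_1; where sigma_1 rotates toward horizontal beneath the toe,
# one of the conjugate planes flattens toward a thrust-parallel orientation.
angles = coulomb_fault_angles(30.0)
```

This is why the computed fault lines flatten in the zone of increased τ: the local stress rotation beneath the toe brings one conjugate plane into near-parallelism with weak, subhorizontal decollement layers.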
Compact binary merger rates: Comparison with LIGO/Virgo upper limits
Belczynski, Krzysztof; Repetto, Serena; Holz, Daniel E.; ...
2016-03-03
Here, we compare evolutionary predictions of double compact object merger rate densities with initial and forthcoming LIGO/Virgo upper limits. We find that: (i) Due to the cosmological reach of advanced detectors, current conversion methods of population synthesis predictions into merger rate densities are insufficient. (ii) Our optimistic models are a factor of 18 below the initial LIGO/Virgo upper limits for BH–BH systems, indicating that a modest increase in observational sensitivity (by a factor of ~2.5) may bring the first detections or first gravitational wave constraints on binary evolution. (iii) Stellar-origin massive BH–BH mergers should dominate event rates in advanced LIGO/Virgo and can be detected out to redshift z ≃ 2 with templates including inspiral, merger, and ringdown. Normal stars (< 150 M⊙) can produce such mergers with total redshifted mass up to M_tot,z ≃ 400 M⊙. (iv) High black hole (BH) natal kicks can severely limit the formation of massive BH–BH systems (both in isolated binary and in dynamical dense cluster evolution), and thus would eliminate detection of these systems even at full advanced LIGO/Virgo sensitivity. We find that low and high BH natal kicks are allowed by current observational electromagnetic constraints. (v) The majority of our models yield detections of all types of mergers (NS–NS, BH–NS, BH–BH) with advanced detectors. Numerous massive BH–BH merger detections will indicate small (if any) natal kicks for massive BHs.
Public Opinions Regarding Advanced Dental Hygiene Practitioners in a High-Need State.
Walsh, Sarah E; Chubinski, Jennifer; Sallee, Toby; Rademacher, Eric W
2016-10-01
Purpose: The new Advanced Dental Hygiene Practitioner (ADHP) profession is expected to increase access to oral health care for the general population, particularly in rural and underserved areas. In order for this strategy to be successful, the public must feel comfortable with the care provided by ADHPs and seek out their services, yet consumer receptivity has been overlooked in the literature. The current study explores comfort with ADHPs for one high-need state: Kentucky. Methods: Consumer receptivity to the ADHP was assessed using a large, random sample telephone survey. As a point of comparison, respondents were first asked about their comfort with care provided by two other advanced practice clinicians already licensed in the state: advanced practice registered nurses (APRN) and physician assistants (PA). Results: After hearing a brief description of the profession, nearly 3 in 4 Kentucky adults said they would be somewhat (35.4%) or very (38.2%) comfortable seeing an ADHP for routine dental care. The total proportion of Kentucky adults who were comfortable seeking care from an ADHP (73.6%) was slightly less than the proportion indicating comfort seeing an APRN (79.7%) or PA (81.3%). Conclusion: Overall, this study demonstrates that adults are receptive to new models of care delivery and report high levels of comfort with ADHPs. Consumer concerns are unlikely to be a barrier to expanded licensure for dental hygienists in high-need areas like Kentucky. Copyright © 2016 The American Dental Hygienists’ Association.
Yao, Shuyang; Qian, Kun; Wang, Ruotian; Li, Yuanbo; Zhang, Yi
2015-06-01
This study compared the efficacy and safety of icotinib with standard second-line chemotherapy (single-agent docetaxel or pemetrexed) in previously treated advanced non-small cell lung cancer (NSCLC). Thirty-two consecutive patients treated with icotinib and 33 consecutive patients treated with standard second-line chemotherapy in Xuanwu Hospital from January 2012 to July 2013 were enrolled in our retrospective research. The Response Evaluation Criteria in Solid Tumors were used to evaluate the tumor responses, and progression-free survival (PFS) was evaluated by the Kaplan-Meier method. Icotinib was comparable with standard second-line chemotherapy for advanced NSCLC in terms of overall response rate (ORR) (28.1% vs 18.2%, P=0.341), disease control rate (DCR) (43.8% vs 45.5%, P=0.890), and PFS (4.3 months vs 3.8 months, P=0.506). In the icotinib group, the ORR of epidermal growth factor receptor (EGFR) mutant patients was significantly higher than that of patients with EGFR unknown or wild type (P=0.017). In multivariate analysis, age, gender, histology, and the first-line treatment response were independent prognostic factors for the PFS of the icotinib group. The incidence of adverse events was significantly lower in the icotinib group than in the chemotherapy group (P=0.001). Compared with standard second-line chemotherapy, icotinib is active in the second-line treatment of advanced NSCLC patients, especially those with unknown EGFR status, with an acceptable adverse event profile.
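The Kaplan-Meier PFS estimate used above can be computed directly from follow-up times and event indicators via the product-limit formula. A minimal sketch with hypothetical data, not the trial's records:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times: follow-up durations (e.g. months to progression or censoring).
    events: 1 = progression observed, 0 = censored at that time.
    Returns [(t, S(t))] evaluated at each time with an observed event.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, curve = 1.0, []
    for t in sorted({tt for tt, _ in data}):
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in data if tt == t and e == 0)  # censored at t
        if d:
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        at_risk -= d + c
    return curve

# Hypothetical PFS data (months), with one censored patient:
curve = kaplan_meier([2, 3, 4, 4, 6], [1, 0, 1, 1, 1])
```

Median PFS is then read off as the first time at which the estimated survival drops to 0.5 or below, which is how figures like the 4.3 vs 3.8 month comparison are typically derived.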
Evidence Gaps in the Use of Spinal Cord Stimulation for Treating Chronic Spine Conditions.
Provenzano, David A; Amirdelfan, Kasra; Kapural, Leonardo; Sitzman, B Todd
2017-07-15
A review of literature. The aim of this study was to define and explore the current evidence gaps in the use of spinal cord stimulation (SCS) for treating chronic spine conditions. Although over the last 40 years SCS therapy has undergone significant technological advancements, evidence gaps still exist. A literature review was conducted to define current evidence gaps for the use of SCS. Areas of focus included 1) treatment of cervical spine conditions, 2) treatment of lumbar spine conditions, 3) technological advancement and device selection, 4) appropriate patient selection, 5) the ability to curb pharmacological treatment, and 6) methods to prolong efficacy over time. New SCS strategies using advanced waveforms are explored. The efficacy, safety, and cost-effectiveness of traditional SCS for chronic pain conditions are well-established. Evidence gaps do exist. Recently, advancement in waveforms and programming parameters have allowed for paresthesia-reduced/free stimulation that in specific clinical areas may improve clinical outcomes. New waveforms such as 10-kHz high-frequency have resulted in an improvement in back coverage. To date, clinical efficacy data are more prevalent for the treatment of painful conditions originating from the lumbar spine in comparison to the cervical spine. Evidence gaps still exist that require appropriate study designs with long-term follow-up to better define and improve the use of this therapy for the treatment of chronic spine pain in both the cervical and lumbar regions.
NASA Astrophysics Data System (ADS)
Kurnia, H.; Noerhadi, N. A. I.
2017-08-01
Three-dimensional digital study models were introduced following advances in digital technology. This study was carried out to assess the reliability of digital study models scanned by a newly assembled laser scanning device. The aim of this study was to compare the digital study models with conventional models. Twelve sets of dental impressions were taken from patients with mild-to-moderate crowding. The impressions were taken twice, one with alginate and the other with polyvinylsiloxane. The alginate impressions were made into conventional models, and the polyvinylsiloxane impressions were scanned to produce digital models. The mesiodistal tooth width and Little's irregularity index (LII) were measured manually with digital calipers on the conventional models and digitally on the digital study models. Bolton analysis was performed on each set of study models. Each method was carried out twice to check for intra-observer variability. The reproducibility (comparison of the methods) was assessed using independent-sample t-tests. The mesiodistal tooth width between conventional and digital models did not significantly differ (p > 0.05). Independent-sample t-tests did not identify statistically significant differences for the Bolton analysis and LII (p = 0.603 for Bolton and p = 0.894 for LII). The measurements of the digital study models are as accurate as those of the conventional models.
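The independent-sample t-test used for the method comparison above reduces to a pooled-variance statistic when equal variances are assumed. A sketch with hypothetical mesiodistal widths, not the study's measurements:

```python
from math import sqrt
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance two-sample t statistic (equal variances assumed),
    for comparing measurements from two independent groups."""
    na, nb = len(a), len(b)
    # Pooled sample variance across both groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical mesiodistal widths (mm) measured on each model type:
conventional = [8.1, 7.9, 8.3, 8.0]
digital = [8.0, 8.0, 8.2, 8.1]
t_stat = independent_t(conventional, digital)
```

A t statistic near zero, compared against the t distribution with n_a + n_b − 2 degrees of freedom, yields the large p-values (0.603 and 0.894) that support equivalence of the two model types.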
Review of Large Spacecraft Deployable Membrane Antenna Structures
NASA Astrophysics Data System (ADS)
Liu, Zhi-Quan; Qiu, Hui; Li, Xiao; Yang, Shu-Li
2017-11-01
The demand for large antennas in future space missions has increasingly stimulated the development of deployable membrane antenna structures owing to their light weight and small stowage volume. However, there is little literature providing a comprehensive review and comparison of different membrane antenna structures. Space-borne membrane antenna structures are mainly classified as either parabolic or planar membrane antenna structures. For parabolic membrane antenna structures, there are five deploying and forming methods, including inflation, inflation-rigidization, elastic ribs driven, Shape Memory Polymer (SMP)-inflation, and electrostatic forming. The development and detailed comparison of these five methods are presented. Then, properties of membrane materials (including polyester film and polyimide film) for parabolic membrane antennas are compared. Additionally, for planar membrane antenna structures, frame shapes have changed from circular to rectangular, and different tensioning systems have emerged successively, including single Miura-Natori, double, and multi-layer tensioning systems. Recent advances in structural configurations, tensioning system design, and dynamic analysis for planar membrane antenna structures are investigated. Finally, future trends for large space membrane antenna structures are pointed out and technical problems are proposed, including design and analysis of membrane structures, materials and processes, membrane packing, surface accuracy stability, and test and verification technology. Through a review of large deployable membrane antenna structures, guidance for space membrane-antenna research and applications is provided.
NASA Technical Reports Server (NTRS)
O'Donnell, Patricia M. (Editor)
1990-01-01
Attention is given to topics of advanced concepts, hydrogen-oxygen fuel cells and electrolyzers, nickel electrodes, and advanced rechargeable batteries. Papers are presented on human exploration mission studies, advanced rechargeable sodium batteries with novel cathodes, advanced double-layer capacitors, recent advances in solid-polymer electrolyte fuel cell technology with low platinum loading electrodes, electrocatalysts for oxygen electrodes in fuel cells and water electrolyzers for space applications, and the corrosion testing of candidates for the alkaline fuel cell cathode. Other papers are on a structural comparison of nickel electrodes and precursor phases, the application of electrochemical impedance spectroscopy for characterizing the degradation of Ni(OH)2/NiOOH electrodes, advances in lightweight nickel electrode technology, multimission nickel-hydrogen battery cell for the 1990s, a sodium-sulfur battery flight experiment definition study, and advances in ambient-temperature secondary lithium cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menlove, Howard Olsen; Henzlova, Daniela
This informal report presents the measurement data and information to document the performance of the advanced Precision Data Technology, Inc. (PDT) sealed cell boron-10 plate neutron detector that makes use of the advanced coating materials and procedures. In 2015, PDT changed the boron coating materials and application procedures to significantly increase the efficiency of their basic corrugated plate detector performance. A prototype sealed cell unit was supplied to LANL for testing and comparison with prior detector cells. LANL also had reference detector slabs from the original neutron collar (UNCL) and the new Antech UNCL with the removable 3He tubes. The comparison data are presented in this report.
An advanced approach for computer modeling and prototyping of the human tooth.
Chang, Kuang-Hua; Magdum, Sheetalkumar; Khera, Satish C; Goel, Vijay K
2003-05-01
This paper presents a systematic and practical method for constructing accurate computer and physical models that can be employed for the study of human tooth mechanics. The proposed method starts with a histological section preparation of a human tooth. By tracing outlines of the tooth on the sections, discrete points are obtained and are employed to construct B-spline curves that represent the exterior contours and dentino-enamel junction (DEJ) of the tooth using a least squares curve fitting technique. The surface skinning technique is then employed to quilt the B-spline curves to create a smooth boundary and DEJ of the tooth using B-spline surfaces. These surfaces are respectively imported into SolidWorks via its application programming interface to create solid models. The solid models are then imported into Pro/MECHANICA Structure for finite element analysis (FEA). The major advantage of the proposed method is that it first generates smooth solid models, instead of finite element models in discretized form. As a result, a more advanced p-FEA can be employed for structural analysis, which usually provides superior results to traditional h-FEA. In addition, the solid model constructed is smooth and can be fabricated at various scales using solid freeform fabrication technology. This method is especially useful in supporting bioengineering applications, where the shape of the object is usually complicated. A human maxillary second molar is presented to illustrate and demonstrate the proposed method. Note that both the solid and p-FEA models of the molar are presented. However, comparison between p- and h-FEA models is beyond the scope of the paper.
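The least-squares B-spline fit to traced contour points can be sketched with SciPy; the noisy circle below is an assumed stand-in for a digitized tooth outline, not the paper's data:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical digitized points along one traced contour (a noisy circle
# stands in for the section outline). splprep performs a smoothing
# least-squares B-spline fit; splev evaluates the fitted curve.
theta = np.linspace(0, 2 * np.pi, 50)
rng = np.random.default_rng(0)
x = np.cos(theta) + rng.normal(0, 0.01, theta.size)
y = np.sin(theta) + rng.normal(0, 0.01, theta.size)

tck, u = splprep([x, y], s=0.01)             # parametric cubic B-spline fit
xs, ys = splev(np.linspace(0, 1, 200), tck)  # smooth resampled boundary
```

The resampled curve smooths the digitizing noise while staying on the underlying contour; in the paper's workflow such curves would then be skinned into B-spline surfaces.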
NASA Technical Reports Server (NTRS)
James, G. H.; Imbrie, P. K.; Hill, P. S.; Allen, D. H.; Haisler, W. E.
1988-01-01
Four current viscoplastic models are compared experimentally for Inconel 718 at 593 C. This material system responds with apparent negative strain rate sensitivity, undergoes cyclic work softening, and is susceptible to low cycle fatigue. A series of tests were performed to create a data base from which to evaluate material constants. A method to evaluate the constants is developed which draws on common assumptions for this type of material, recent advances by other researchers, and iterative techniques. A complex history test, not used in calculating the constants, is then used to compare the predictive capabilities of the models. The combination of exponentially based inelastic strain rate equations and dynamic recovery is shown to model this material system with the greatest success. The method of constant calculation developed was successfully applied to the complex material response encountered. Backstress measuring tests were found to be invaluable and to warrant further development.
Primate comparative neuroscience using magnetic resonance imaging: promises and challenges
Mars, Rogier B.; Neubert, Franz-Xaver; Verhagen, Lennart; Sallet, Jérôme; Miller, Karla L.; Dunbar, Robin I. M.; Barton, Robert A.
2014-01-01
Primate comparative anatomy is an established field that has made rich and substantial contributions to neuroscience. However, the labor-intensive techniques employed mean that comparisons are often based on a small number of species, which limits the conclusions that can be drawn. In this review we explore how new developments in magnetic resonance imaging have the potential to apply comparative neuroscience to a much wider range of species, allowing it to realize an even greater potential. We discuss (1) new advances in the types of data that can be acquired, (2) novel methods for extracting meaningful measures from such data that can be compared between species, and (3) methods to analyse these measures within a phylogenetic framework. Together these developments will allow researchers to characterize the relationship between different brains, the ecological niche they occupy, and the behavior they produce in more detail than ever before. PMID:25339857
Kusumoto, Dai; Lachmann, Mark; Kunihiro, Takeshi; Yuasa, Shinsuke; Kishino, Yoshikazu; Kimura, Mai; Katsuki, Toshiomi; Itoh, Shogo; Seki, Tomohisa; Fukuda, Keiichi
2018-06-05
Deep learning technology is rapidly advancing and is now used to solve complex problems. Here, we used deep learning in convolutional neural networks to establish an automated method to identify endothelial cells derived from induced pluripotent stem cells (iPSCs), without the need for immunostaining or lineage tracing. Networks were trained to predict whether phase-contrast images contain endothelial cells based on morphology only. Predictions were validated by comparison to immunofluorescence staining for CD31, a marker of endothelial cells. Method parameters were then automatically and iteratively optimized to increase prediction accuracy. We found that prediction accuracy was correlated with network depth and pixel size of images to be analyzed. Finally, K-fold cross-validation confirmed that optimized convolutional neural networks can identify endothelial cells with high performance, based only on morphology. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
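The K-fold cross-validation step can be sketched as follows. This is a minimal illustration of the validation logic only: a nearest-centroid classifier on synthetic 2-D points stands in for the trained convolutional network, and all data are invented:

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

def cross_validate(X, y, k=5):
    """K-fold CV with a nearest-centroid classifier standing in for the CNN."""
    folds = k_fold_indices(len(y), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Fit: one centroid per class from the training split.
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        # Predict: assign each held-out sample to the nearest centroid.
        preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                 for x in X[test]]
        accs.append(np.mean(preds == y[test]))
    return float(np.mean(accs))

# Two well-separated synthetic classes give near-perfect CV accuracy.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (40, 2)), rng.normal(5, 0.5, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
acc = cross_validate(X, y, k=5)
```

Averaging accuracy over held-out folds, as in the paper, gives a performance estimate that does not reuse training images for evaluation.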
On the application of subcell resolution to conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Chang, Shih-Hung
1989-01-01
LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling reacting flow problems and discovered that, for the very stiff case, most current finite difference methods developed for non-reacting flows produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used and time evolutions are done by advancing along the characteristics. Numerical experiments using this scheme show excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
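Strang's time-splitting idea can be sketched on a simpler model problem; the block below assumes a linear decay term standing in for the stiff reaction source and first-order upwind advection (not the ENO/SRCD scheme itself):

```python
import numpy as np

# Strang splitting for u_t + a u_x = -k u: half step of the source (exact),
# full upwind advection step, half step of the source.
a, k = 1.0, 2.0
nx, L = 200, 1.0
dx = L / nx
dt = 0.4 * dx / a                     # CFL-stable time step
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200 * (x - 0.3) ** 2)     # smooth initial pulse

t = 0.0
for _ in range(100):
    u *= np.exp(-k * dt / 2)                  # source, half step (exact)
    u -= a * dt / dx * (u - np.roll(u, 1))    # upwind advection, periodic
    u *= np.exp(-k * dt / 2)                  # source, half step
    t += dt

# Exact solution: the pulse advects by a*t and decays by exp(-k*t).
d = (x - 0.3 - a * t + L / 2) % L - L / 2     # periodic (wrapped) distance
exact = np.exp(-200 * d ** 2) * np.exp(-k * t)
err = np.abs(u - exact).max()
```

The remaining error here comes from the diffusive upwind step; the splitting of advection and source is what ENO/SRCD builds on, with a far sharper treatment of discontinuities.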
A chiral diamine: practical implications of a three-stereoisomer cocrystallization.
Dolinar, Brian S; Samedov, Kerim; Maloney, Andrew G P; West, Robert; Khrustalev, Victor N; Guzei, Ilia A
2018-01-01
A brief comparison of seven straightforward methods for molecular crystal-volume estimation revealed that their precisions are comparable. A chiral diamine, N2,N3-bis[2,6-bis(propan-2-yl)phenyl]butane-2,3-diamine, C28H44N2, has been used to illustrate the application of the methods. Three stereoisomers of the diamine cocrystallize in the centrosymmetric space group P21/c with Z' = 1.5. The molecules occupying general positions are RR and SS, whereas that residing on an inversion center is meso. This is one of only ten examples of three stereoisomers with two asymmetric atoms cocrystallizing together reported to the Cambridge Structural Database (CSD). The conformations of the SS/RR and meso molecules differ considerably and lead to statistically significantly different C(asymmetric)-C(asymmetric) bond lengths in the diastereomers. An advanced Python script-based CSD searching technique for chiral compounds is presented.
Dendrometer bands made easy: using modified cable ties to measure incremental growth of trees
Anemaet, Evelyn R.; Middleton, Beth A.
2013-01-01
Dendrometer bands are a useful way to make sequential repeated measurements of tree growth, but traditional dendrometer bands can be expensive, time consuming, and difficult to construct in the field. An alternative to the traditional method of band construction is to adapt commercially available materials. This paper describes how to construct and install dendrometer bands using smooth-edged, stainless steel, cable tie banding and attachable rollerball heads. As a performance comparison, both traditional and cable tie dendrometer bands were installed on baldcypress trees at the National Wetlands Research Center in Lafayette, Louisiana, by both an experienced and a novice worker. Band installation times were recorded, and growth of the trees as estimated by the two band types was measured after approximately one year, demonstrating equivalence of the two methods. This efficient approach to dendrometer band construction can help advance the knowledge of long-term tree growth in ecological studies.
Moradi, Sara; Fazlali, Alireza; Hamedi, Hamid
2018-01-01
The hydro-distillation (HD) method is a traditional technique used in most industrial companies. Microwave-assisted hydro-distillation (MAHD) is an advanced HD technique utilizing a microwave oven in the extraction process. In this research, MAHD of essential oils from the aerial parts (leaves) of rosemary (Rosmarinus officinalis L.) was studied and the results were compared with those of conventional HD in terms of extraction time, extraction efficiency, chemical composition, quality of the essential oils, and cost of the operation. Microwave hydro-distillation was superior in terms of saving energy and extraction time (30 min, compared to 90 min in HD). Chromatography was used for quantitative analysis of the essential oil composition. The quality of the essential oil improved in the MAHD method owing to an increase of 17% in oxygenated compounds. Consequently, microwave hydro-distillation can be used as a substitute for traditional hydro-distillation.
This work presents the results of an international interlaboratory comparison on ex situ passive sampling in sediments. The main objectives were to map the state of the science in passively sampling sediments, identify sources of variability, provide recommendations and practical...
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in North Dakota. Descriptive statistics are presented for North Dakota and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
ERIC Educational Resources Information Center
Midwestern Higher Education Compact, 2014
2014-01-01
This report portrays various performance indicators that are intended to facilitate an assessment of the postsecondary education system in South Dakota. Descriptive statistics are presented for South Dakota and five other comparison states as well as the nation. Comparison states were selected according to the degree of similarity of population…
Belsey, Natalie A; Cant, David J H; Minelli, Caterina; Araujo, Joyce R; Bock, Bernd; Brüner, Philipp; Castner, David G; Ceccone, Giacomo; Counsell, Jonathan D P; Dietrich, Paul M; Engelhard, Mark H; Fearn, Sarah; Galhardo, Carlos E; Kalbe, Henryk; Won Kim, Jeong; Lartundo-Rojas, Luis; Luftman, Henry S; Nunney, Tim S; Pseiner, Johannes; Smith, Emily F; Spampinato, Valentina; Sturm, Jacobus M; Thomas, Andrew G; Treacy, Jon P W; Veith, Lothar; Wagstaffe, Michael; Wang, Hai; Wang, Meiling; Wang, Yung-Chen; Werner, Wolfgang; Yang, Li; Shard, Alexander G
2016-10-27
We report the results of a VAMAS (Versailles Project on Advanced Materials and Standards) inter-laboratory study on the measurement of the shell thickness and chemistry of nanoparticle coatings. Peptide-coated gold particles were supplied to laboratories in two forms: a colloidal suspension in pure water, and particles dried onto a silicon wafer. Participants prepared and analyzed these samples using either X-ray photoelectron spectroscopy (XPS) or low energy ion scattering (LEIS). Careful data analysis revealed some significant sources of discrepancy, particularly for XPS. Degradation during transportation, storage or sample preparation resulted in a variability in thickness of 53 %. The calculation method chosen by XPS participants contributed a variability of 67 %. However, variability of 12 % was achieved for the samples deposited using a single method and by choosing photoelectron peaks that were not adversely affected by instrumental transmission effects. The study identified a need for more consistency in instrumental transmission functions and relative sensitivity factors, since this contributed a variability of 33 %. The results from the LEIS participants were more consistent, with variability of less than 10 % in thickness, mostly due to a common method of data analysis. The calculation was performed using a model developed for uniform, flat films, and some participants employed a correction factor to account for the sample geometry, which appears warranted based upon a simulation of LEIS data from one of the participants and comparison to the XPS results.
Advanced Elemental and Isotopic Characterization of Atmospheric Aerosols
NASA Astrophysics Data System (ADS)
Shafer, M. M.; Schauer, J. J.; Park, J.
2001-12-01
Recent sampling and analytical developments advanced by the project team enable the detailed elemental and isotopic fingerprinting of extremely small masses of atmospheric aerosols. Historically, this type of characterization was rarely achieved due to limitations in analytical sensitivity and a lack of awareness concerning the potential for contamination. However, with the introduction of 3rd and 4th generation ICP-MS instrumentation and the application of state-of-the-art "clean techniques", quantitative analysis of over 40 elements in sub-milligram samples can be realized. When coupled with an efficient and validated solubilization method, ICP-MS approaches provide distinct advantages in comparison with traditional methods: greatly enhanced detection limits, improved accuracy, and isotope resolution capability, to name a few. Importantly, the ICP-MS approach can readily be integrated with techniques which enable phase differentiation and chemical speciation information to be acquired. For example, selective chemical leaching can provide data on the association of metals with major phase-components, and the oxidation state of certain metals. Critical information on metal-ligand stability can be obtained when electrochemical techniques, such as adsorptive cathodic stripping voltammetry (ACSV), are applied to these same extracts. Our research group is applying these techniques in a broad range of research projects to better understand the sources and distribution of trace metals in particulate matter in the atmosphere. Using examples from our research, including recent Pb and Sr isotope ratio work on Asian aerosols, we will illustrate the capabilities and applications of these new methods.
Life prediction technologies for aeronautical propulsion systems
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.
1990-01-01
Fatigue and fracture problems continue to occur in aeronautical gas turbine engines. Components whose useful life is limited by these failure modes include turbine hot-section blades, vanes, and disks. Safety considerations dictate that catastrophic failures be avoided, while economic considerations dictate that noncatastrophic failures occur as infrequently as possible. Design therefore involves a tradeoff between engine performance and durability. LeRC has contributed to the aeropropulsion industry in the area of life prediction technology for over 30 years, developing creep and fatigue life prediction methodologies for hot-section materials. At the present time, emphasis is being placed on the development of methods capable of handling both thermal and mechanical fatigue under severe environments. Recent accomplishments include the development of more accurate creep-fatigue life prediction methods such as the total strain version of LeRC's strain-range partitioning (SRP) and the HOST-developed cyclic damage accumulation (CDA) model. Another example is the development of a more accurate cumulative fatigue damage rule, the double damage curve approach (DDCA), which provides greatly improved accuracy in comparison with the usual cumulative fatigue design rules. Accomplishments in the area of high-temperature fatigue crack growth may also be mentioned. Finally, we are looking to the future and are beginning research on the advanced methods which will be required for the development of advanced materials and propulsion systems over the next 10-20 years.
Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.
Cheng, Ching-Hsue; Liu, Wei-Xiang
2018-05-28
Population aging has become a worldwide phenomenon, which causes many serious problems. The medical issues related to degenerative brain disease have gradually become a concern. Magnetic resonance imaging is one of the most advanced methods for medical imaging and is especially suitable for brain scans. From the literature, although automatic segmentation methods are less laborious and time-consuming, they are restricted to several specific types of images. In addition, hybrid segmentation techniques improve on the shortcomings of single segmentation methods. Therefore, this study proposed a hybrid segmentation combined with a rough set classifier and wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image-processing method to enhance the accuracy of brain disease classification. In the first stage, this study used the proposed hybrid segmentation algorithms to segment the brain ROI (region of interest). In the second stage, wavelet packets were used to conduct the image decomposition and calculate the feature values. In the final stage, the rough set classifier was utilized to identify the degenerative brain disease. In verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and compare it with the TV-seg (total variation segmentation) algorithm, the Discrete Cosine Transform, and the listed classifiers. Overall, the results indicated that the proposed method outperforms the listed methods.
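The wavelet-packet feature step can be illustrated at its simplest: one level of a Haar decomposition, computed directly (an assumed toy signal, not the paper's implementation):

```python
import numpy as np

def haar_step(signal):
    """One Haar level: split a signal into approximation (low-pass)
    and detail (high-pass) halves; an orthonormal transform."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(sig)

# Sub-band energies are typical wavelet-packet feature values for a classifier.
energy = {"approx": float(np.sum(a**2)), "detail": float(np.sum(d**2))}
total = energy["approx"] + energy["detail"]   # equals the signal energy
```

Recursing on both sub-bands (not just the approximation) gives the full wavelet-packet tree; the per-node energies are the kind of feature values fed to the rough set classifier.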
The Need for a Shear Stress Calibration Standard
NASA Technical Reports Server (NTRS)
Scott, Michael A.
2004-01-01
By surveying current research on various micro-electromechanical systems (MEMS) shear stress sensor development efforts, we illustrate the wide variety of methods used to test and characterize these sensors. The differing test methods make comparison of results difficult in some cases, and comparison is further complicated by the different formats used in reporting results. The difficulty of making these comparisons clearly illustrates the need for standardized testing and reporting methodologies, and indicates that a national or international standard for the calibration of MEMS shear stress sensors should be developed. As a first step toward such a standard, two types of devices are compared and contrasted. The first is a laminar flow channel, considered in two versions: one built with standard manufacturing techniques and one with advanced precision manufacturing techniques. The second is a new concept for creating a known shear stress: a rotating wheel with the sensor mounted tangentially to, and in close proximity to, the rim. The shear stress generated by the flow at the sensor position is simply tau = mu*r*omega/h, where mu is the viscosity of the ambient gas, r the wheel radius, omega the angular velocity of the wheel, and h the width of the gap between the wheel rim and the sensor. Additionally, issues related to the development of a standard for shear stress calibration are identified and discussed.
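Evaluating the wheel formula with representative values (assumed for illustration, not taken from the paper):

```python
# Shear stress at the sensor for the rotating-wheel calibrator:
# tau = mu * r * omega / h, a Couette-type flow in the narrow gap.
mu = 1.81e-5      # dynamic viscosity of air, Pa*s (assumed ambient gas)
r = 0.15          # wheel radius, m (assumed)
omega = 500.0     # angular velocity, rad/s (assumed)
h = 0.5e-3        # gap between wheel rim and sensor, m (assumed)

tau = mu * r * omega / h   # wall shear stress, Pa
```

With these values the device generates a few pascals of shear stress, and tau can be swept continuously by varying omega, which is what makes the wheel attractive as a calibration source.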
Biological Embedding: Evaluation and Analysis of an Emerging Concept for Nursing Scholarship
Nist, Marliese Dion
2016-01-01
Aim The purpose of this paper is to report the analysis of the concept of biological embedding. Background Research that incorporates a life course perspective is becoming increasingly prominent in the health sciences. Biological embedding is a central concept in life course theory and may be important for nursing theories to enhance our understanding of health states in individuals and populations. Before the concept of biological embedding can be used in nursing theory and research, an analysis of the concept is required to advance it toward full maturity. Design Concept analysis. Data Sources PubMed, CINAHL and PsycINFO were searched for publications using the term ‘biological embedding’ or ‘biological programming’ and published through 2015. Methods An evaluation of the concept was first conducted to determine the concept’s level of maturity and was followed by a concept comparison, using the methods for concept evaluation and comparison described by Morse. Results A consistent definition of biological embedding – the process by which early life experience alters biological processes to affect adult health outcomes – was found throughout the literature. The concept has been used in several theories that describe the mechanisms through which biological embedding might occur and highlight its role in the development of health trajectories. Biological embedding is a partially mature concept, requiring concept comparison with an overlapping concept – biological programming – to more clearly establish the boundaries of biological embedding. Conclusions Biological embedding has significant potential for theory development and application in multiple academic disciplines, including nursing. PMID:27682606
Thiele, Maja; Madsen, Bjørn Stæhr; Hansen, Janne Fuglsang; Detlefsen, Sönke; Antonsen, Steen; Krag, Aleksander
2018-04-01
Alcohol is the leading cause of cirrhosis and liver-related mortality, but we lack serum markers to detect compensated disease. We compared the accuracy of the Enhanced Liver Fibrosis test (ELF), the FibroTest, liver stiffness measurements (made by transient elastography and 2-dimensional shear-wave elastography), and 6 indirect marker tests in detection of advanced liver fibrosis (Kleiner stage ≥F3). We performed a prospective study of 10 liver fibrosis markers (patented and not), all performed on the same day. Patients were recruited from primary centers (municipal alcohol rehabilitation, n = 128; 6% with advanced fibrosis) and secondary health care centers (hospital outpatient clinics, n = 161; 36% with advanced fibrosis) in the Region of Southern Denmark from 2013 through 2016. Biopsy-verified fibrosis stage was used as the reference standard. The primary aim was to validate ELF in detection of advanced fibrosis in patients with alcoholic liver disease recruited from primary and secondary health care centers, using the literature-based cutoff value of 10.5. Secondary aims were to assess the diagnostic accuracy of ELF for significant fibrosis and cirrhosis and to determine whether combinations of fibrosis markers increase diagnostic yield. The ELF identified patients with advanced liver fibrosis with an area under the receiver operating characteristic curve (AUROC) of 0.92 (95% confidence interval 0.89-0.96); findings did not differ significantly between patients from primary vs secondary care (P = .917). ELF more accurately identified patients with advanced liver fibrosis than indirect marker tests, but ELF and FibroTest had comparable diagnostic accuracies (AUROC of FibroTest, 0.90) (P = .209 for comparison with ELF). 
Results from the ELF and FibroTest did not differ significantly from those of liver stiffness measurement in intention-to-diagnose analyses (AUROC for transient elastography, 0.90), but did differ in the per-protocol analysis (AUROC for transient elastography, 0.97) (P = .521 and .004 for comparison with ELF). Adding a serum marker to transient elastography analysis did not increase accuracy. For patients in primary care, ELF values below 10.5 and FibroTest values below 0.58 had negative predictive values for advanced liver fibrosis of 98% and 94%, respectively. In a prospective, direct comparison of tests, ELF and FibroTest identified advanced liver fibrosis in alcoholic patients from primary and secondary care with high diagnostic accuracy (AUROC values of 0.90 or higher using biopsy as reference). Advanced fibrosis can be ruled out in primary health care patients based on an ELF value below 10.5 or a FibroTest value below 0.58. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie
2017-09-01
The volatile fatty acid (VFA) concentration is considered one of the most sensitive process performance indicators in the anaerobic digestion (AD) process. However, the accurate determination of VFA concentration in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of ion and solid interfering subsystems in titrated samples on the accuracy of results was discussed. The total solid content in titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a high linear correlation was established between the total solids content and the VFA measurement difference between the traditional Nordmann equation and gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment of chicken manure anaerobic digestion with various organic loading rates. The good agreement between the results obtained by this method and the GC results strongly supported the potential application of this method to VFA monitoring. Copyright © 2017. Published by Elsevier Ltd.
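The solids-dependent correction implied by that linear correlation can be sketched as follows; all calibration values are invented for illustration:

```python
import numpy as np

# Hypothetical calibration data: total solids content (%) vs. the difference
# between Nordmann-titration VFA and the GC reference (g/L).
ts = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
vfa_diff = np.array([0.11, 0.19, 0.32, 0.41, 0.48, 0.62])

# Linear fit: titration bias grows roughly linearly with solids content.
slope, intercept = np.polyfit(ts, vfa_diff, 1)
r = np.corrcoef(ts, vfa_diff)[0, 1]   # strength of the linear correlation

def corrected_vfa(titrated, ts_content):
    """Subtract the solids-dependent bias estimated by the fit."""
    return titrated - (slope * ts_content + intercept)
```

Given a new titration result and a total solids measurement, `corrected_vfa` brings the simple titration closer to the GC reference, which is the spirit of the paper's simplified method.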
Stacul, Stefano; Squeglia, Nunziante
2018-02-15
A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil by a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction by increasing the stiffness of shallow portions of soil and modeled using the Modified Kovacs model; pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile groups analyses. The proposed BEM method saves computational effort compared to more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing results from data from full scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data on a laterally loaded fixed-head pile group composed by reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ.
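The hyperbolic modulus reduction curve used for the soil can be sketched directly; G0 and the reference strain below are assumed values, not the paper's parameters:

```python
import numpy as np

def secant_modulus(gamma, G0=50e6, gamma_ref=1e-3):
    """Hyperbolic modulus reduction: G = G0 / (1 + gamma / gamma_ref).
    The secant shear modulus degrades as shear strain grows."""
    return G0 / (1.0 + gamma / gamma_ref)

strains = np.array([1e-5, 1e-4, 1e-3, 1e-2])
G = secant_modulus(strains)
ratio = G / 50e6   # normalized modulus-reduction curve G/G0
```

At the reference strain the modulus has dropped to half its small-strain value; in the BEM model this strain-dependent stiffness is what produces the non-linear soil response around the piles.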
Tooth-size discrepancy: A comparison between manual and digital methods
Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge
2014-01-01
Introduction Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and to compare these measurements with those obtained from plaster models. Material and Methods To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models acquired with a 3Shape R700T scanner were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. Results Data were statistically assessed using the Friedman test, and no statistically significant differences were found between the methods (P > 0.05); only the linear digital method showed a slight, non-significant deviation. Conclusions Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529
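The Friedman test comparison can be sketched as follows; the discrepancy values are invented, with small method offsets rotated across cases so that no method is systematically biased:

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical arch-length discrepancies (mm) for 8 cases under four methods
# (two manual, two digital). The small offsets rotate across cases, so each
# method gets every within-case rank equally often.
base = np.array([-3.1, -1.2, -2.5, -0.8, -4.0, -2.2, -1.7, -3.4])
offsets = np.array([0.01, 0.02, 0.03, 0.04])
shifts = np.array([np.roll(offsets, i % 4) for i in range(8)])
data = base[:, None] + shifts          # rows: cases, columns: methods

# Friedman test on the four related samples (one column per method).
stat, p = friedmanchisquare(*data.T)
methods_agree = p > 0.05
```

Because the rank sums are balanced by construction, the test statistic is zero and the methods are judged equivalent, mirroring the study's finding.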
New machine-learning algorithms for prediction of Parkinson's disease
NASA Astrophysics Data System (ADS)
Mandal, Indrajit; Sairam, N.
2014-03-01
This article presents a robust inference system that enhances the accuracy of Parkinson's disease (PD) diagnosis, aiming to reduce delayed and misdiagnosed cases. New machine-learning methods are proposed, and performance comparisons are based on specificity, sensitivity, accuracy and other measurable parameters. The robust methods applied to PD diagnosis include sparse multinomial logistic regression, a rotation forest ensemble with support vector machines and principal components analysis, artificial neural networks, and boosting methods. A new ensemble method, comprising a Bayesian network optimised by a Tabu search algorithm as classifier and Haar wavelets as projection filter, is used for relevant feature selection and ranking. The highest accuracy, obtained by linear logistic regression and sparse multinomial logistic regression, is 100%, with sensitivity and specificity of 0.983 and 0.996, respectively. All the experiments are conducted at 95% and 99% confidence levels and the results are established with corrected t-tests. This work shows a high degree of advancement in software reliability and quality of the computer-aided diagnosis system and experimentally shows best results with supportive statistical inference.
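The sensitivity, specificity and accuracy figures used to compare the classifiers are all derived from a binary confusion matrix. A self-contained sketch (with invented labels, not the study's classifiers or data):

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy for a binary classifier (1 = PD, 0 = healthy)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # fraction of true cases detected
    specificity = tn / (tn + fp)   # fraction of healthy subjects cleared
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_metrics([1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 1])
print(sens, spec, acc)  # 0.6666666666666666 0.8 0.75
```
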
Metagenome assembly through clustering of next-generation sequencing data using protein sequences.
Sim, Mikang; Kim, Jaebum
2015-02-01
The study of environmental microbial communities, called metagenomics, has gained a lot of attention because of the recent advances in next-generation sequencing (NGS) technologies. Microbes play a critical role in changing their environments, and the nature of their effects can be understood by investigating metagenomes. However, the complexity of metagenomes, which combine multiple microbes at differing species abundances, makes metagenome assembly challenging. In this paper, we developed a new metagenome assembly method by utilizing protein sequences, in addition to the NGS read sequences. Our method (i) builds read clusters by using mapping information against available protein sequences, and (ii) creates contig sequences by finding consensus sequences through probabilistic choices from the read clusters. By using simulated NGS read sequences from real microbial genome sequences, we evaluated our method in comparison with four existing assembly programs. We found that our method could generate relatively long and accurate metagenome assemblies, indicating that the idea of using protein sequences as a guide for the assembly is promising. Copyright © 2015 Elsevier B.V. All rights reserved.
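Step (ii) above builds a contig as a consensus over a read cluster. The sketch below uses deterministic majority voting per column as a simplification of the paper's probabilistic choice; the aligned, equal-length reads are an illustrative assumption.

```python
from collections import Counter

def consensus_sequence(reads):
    """Majority-vote consensus over a cluster of equal-length aligned reads.

    Each column's consensus base is the most frequent base in that column;
    '-' marks a gap and is ignored when voting.
    """
    length = len(reads[0])
    consensus = []
    for i in range(length):
        column = [r[i] for r in reads if r[i] != '-']
        consensus.append(Counter(column).most_common(1)[0][0])
    return ''.join(consensus)

print(consensus_sequence(["ACGT", "ACGA", "ACGT", "-CGT"]))  # ACGT
```

A probabilistic variant would instead sample each column's base in proportion to its observed frequency, which is closer to what the abstract describes.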
The heritability of the functional connectome is robust to common nonlinear registration methods
NASA Astrophysics Data System (ADS)
Hafzalla, George W.; Prasad, Gautam; Baboyan, Vatche G.; Faskowitz, Joshua; Jahanshad, Neda; McMahon, Katie L.; de Zubicaray, Greig I.; Wright, Margaret J.; Braskie, Meredith N.; Thompson, Paul M.
2016-03-01
Nonlinear registration algorithms are routinely used in brain imaging, to align data for inter-subject and group comparisons, and for voxelwise statistical analyses. To understand how the choice of registration method affects maps of functional brain connectivity in a sample of 611 twins, we evaluated three popular nonlinear registration methods: Advanced Normalization Tools (ANTs), Automatic Registration Toolbox (ART), and FMRIB's Nonlinear Image Registration Tool (FNIRT). Using both structural and functional MRI, we used each of the three methods to align the MNI152 brain template, and 80 regions of interest (ROIs), to each subject's T1-weighted (T1w) anatomical image. We then transformed each subject's ROIs onto the associated resting state functional MRI (rs-fMRI) scans and computed a connectivity network or functional connectome for each subject. Given the different degrees of genetic similarity between pairs of monozygotic (MZ) and same-sex dizygotic (DZ) twins, we used structural equation modeling to estimate the additive genetic influences on the elements of the functional networks, or their heritability. The functional connectome and derived statistics were relatively robust to nonlinear registration effects.
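The study estimates heritability with structural equation (ACE) modeling. As a back-of-envelope illustration only, Falconer's classical formula gives a rough additive-heritability estimate from the two twin correlations; the correlation values below are invented:

```python
def falconer_heritability(r_mz, r_dz):
    """Rough additive-heritability estimate from twin correlations.

    h^2 = 2 * (r_MZ - r_DZ): MZ twins share ~100% of genes, DZ twins ~50%,
    so the excess MZ correlation, doubled, approximates the genetic share.
    This is a shortcut, not the structural-equation modeling used in the study.
    """
    return 2.0 * (r_mz - r_dz)

# MZ pairs correlate 0.60 on a connectome edge weight, DZ pairs 0.35:
print(falconer_heritability(0.60, 0.35))  # 0.5
```

Comparing such estimates edge by edge across registration methods is one way to check how robust the heritability maps are.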
ERIC Educational Resources Information Center
Buras, Kristen L.
2015-01-01
It is not uncommon to reference dire conditions in the South to make the nation appear more racially equitable and economically advanced by comparison. In this essay, I argue that the meanings and complexities surrounding commonplace disparagement of the South are not only troubling, but serve to advance the forms of race and class power…
Career Potential Among ROTC Enrollees: A Comparison of 1972 and 1973 Survey Results. Interim Report.
ERIC Educational Resources Information Center
Fisher, Allan H., Jr.; And Others
Research into the career intentions of Army, Navy and Air Force ROTC cadets showed that a majority were willing to stay and continue into the advanced program, even without financial aid. The proportion for Army enrollees was much lower than for Navy or Air Force enrollees. Almost half of all advanced cadets were undecided about staying on active…
s-Block Elements. Independent Learning Project for Advanced Chemistry (ILPAC). Unit I1.
ERIC Educational Resources Information Center
Inner London Education Authority (England).
This unit is one of 10 first year units produced by the Independent Learning Project for Advanced Chemistry (ILPAC). The unit, which consists of two sections and an appendix, focuses on the elements and compounds of Groups I and II (the s-block) of the periodic table. The groups are treated concurrently to note comparisons between groups and to…
NASA Astrophysics Data System (ADS)
Sivasubramaniam, Kiruba
This thesis makes advances in three dimensional finite element analysis of electrical machines and the quantification of their parameters and performance. The principal objectives of the thesis are: (1) the development of a stable and accurate method of nonlinear three-dimensional field computation and application to electrical machinery and devices; and (2) improvement in the accuracy of determination of performance parameters, particularly forces and torque computed from finite elements. Contributions are made in two general areas: a more efficient formulation for three dimensional finite element analysis which saves time and improves accuracy, and new post-processing techniques to calculate flux density values from a given finite element solution. A novel three-dimensional magnetostatic solution based on a modified scalar potential method is implemented. This method has significant advantages over the traditional total scalar, reduced scalar or vector potential methods. The new method is applied to a 3D geometry of an iron core inductor and a permanent magnet motor. The results obtained are compared with those obtained from traditional methods, in terms of accuracy and speed of computation. A technique which has been observed to improve force computation in two dimensional analysis using a local solution of Laplace's equation in the airgap of machines is investigated and a similar method is implemented in the three dimensional analysis of electromagnetic devices. A new integral formulation to improve force calculation from a smoother flux-density profile is also explored and implemented. Comparisons are made and conclusions drawn as to how much improvement is obtained and at what cost. This thesis also demonstrates the use of finite element analysis to analyze torque ripples due to rotor eccentricity in permanent magnet BLDC motors.
A new method for analyzing torque harmonics based on data obtained from a time stepping finite element analysis of the machine is explored and implemented.
Exploration of Advanced Probabilistic and Stochastic Design Methods
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.
2003-01-01
The primary objective of the three year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in the area of design decision-making, probabilistic modeling, and optimization. The specific focus of the three year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant has had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on the comparison and contrasting between three methods researched. Specifically, these three are the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, as contained in Task B, was focused upon exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regards to identification of research needs in the baseline method through implementation exercises. The end result of Task B was the documentation of the evolution of the method with time and a technology transfer to the sponsor regarding the method, such that an initial capability for execution could be obtained by the sponsor. Specifically, the results of year 3 efforts were the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step.
For both research tasks, sample files and tutorials are attached in electronic form with the enclosed CD.
Evaluation of Three Field-Based Methods for Quantifying Soil Carbon
Izaurralde, Roberto C.; Rice, Charles W.; Wielopolski, Lucian; Ebinger, Michael H.; Reeves, James B.; Thomson, Allison M.; Francis, Barry; Mitra, Sudeep; Rappaport, Aaron G.; Etchevers, Jorge D.; Sayre, Kenneth D.; Govaerts, Bram; McCarty, Gregory W.
2013-01-01
Three advanced technologies to measure soil carbon (C) density (g C m−2) are deployed in the field and the results compared against those obtained by the dry combustion (DC) method. The advanced methods are: a) Laser Induced Breakdown Spectroscopy (LIBS), b) Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), and c) Inelastic Neutron Scattering (INS). The measurements and soil samples were acquired at Beltsville, MD, USA and at Centro International para el Mejoramiento del Maíz y el Trigo (CIMMYT) at El Batán, Mexico. At Beltsville, soil samples were extracted at three depth intervals (0–5, 5–15, and 15–30 cm) and processed for analysis in the field with the LIBS and DRIFTS instruments. The INS instrument determined soil C density to a depth of 30 cm via scanning and stationary measurements. Subsequently, soil core samples were analyzed in the laboratory for soil bulk density (kg m−3), C concentration (g kg−1) by DC, and results reported as soil C density (kg m−2). Results from each technique were derived independently and contributed to a blind test against results from the reference (DC) method. A similar procedure was employed at CIMMYT in Mexico, but only with the LIBS and DRIFTS instruments. Following conversion to common units, we found that the LIBS, DRIFTS, and INS results can be compared directly with those obtained by the DC method. The first two methods and the standard DC require soil sampling and need soil bulk density information to convert soil C concentrations to soil C densities while the INS method does not require soil sampling. We conclude that, in comparison with the DC method, the three instruments (a) showed acceptable performances although further work is needed to improve calibration techniques and (b) demonstrated their portability and their capacity to perform under field conditions. PMID:23383225
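The conversion from a measured C concentration to an areal C density, which the abstract notes requires bulk density information, is a simple unit calculation. The numbers below are illustrative, not the study's data:

```python
def soil_c_density(conc_g_per_kg, bulk_density_kg_m3, depth_m):
    """Convert a carbon concentration to an areal carbon density.

    (g C / kg soil) * (kg soil / m^3) * m = g C / m^2; divide by 1000
    to report kg C per m^2.
    """
    return conc_g_per_kg * bulk_density_kg_m3 * depth_m / 1000.0

# 15 g C/kg at a bulk density of 1300 kg/m^3 over the 0-30 cm layer:
print(soil_c_density(15.0, 1300.0, 0.30))  # 5.85 kg C per m^2
```

Summing this over the sampled depth intervals (each with its own bulk density) gives the profile total that INS estimates directly without sampling.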
NASA Technical Reports Server (NTRS)
Dittmar, James H.
1989-01-01
The noise of advanced high speed propeller models measured in the NASA 8- by 6-foot wind tunnel has been compared with model propeller noise measured in another tunnel and with full-scale propeller noise measured in flight. Good agreement was obtained for the noise of a model counterrotation propeller tested in the 8- by 6-foot wind tunnel and in the acoustically treated test section of the Boeing Transonic Wind Tunnel. This good agreement indicates the relative validity of taking cruise noise data on a plate in the 8- by 6-foot wind tunnel compared with the free-field method in the Boeing tunnel. Good agreement was also obtained for both single rotation and counter-rotation model noise comparisons with full-scale propeller noise in flight. The good scale model to full-scale comparisons indicate both the validity of the 8- by 6-foot wind tunnel data and the ability to scale to full size. Boundary layer refraction on the plate provides a limitation to the measurement of forward arc noise in the 8- by 6-foot wind tunnel at the higher harmonics of the blade passing tone. The use of a validated boundary layer refraction model to adjust the data could remove this limitation.
Comparison between two non-contact techniques for art digitalization
NASA Astrophysics Data System (ADS)
Bianconi, F.; Catalucci, S.; Filippucci, M.; Marsili, R.; Moretti, M.; Rossi, G.; Speranzini, E.
2017-08-01
Many measurement techniques have been proposed for the “digitalization of objects”: structured light 3D scanner, laser scanner, high resolution camera, depth cam, thermal-cam, … Since the adoption of the European Agenda for Culture in 2007, heritage has been a priority for the Council’s work plans for culture, and cooperation at European level has advanced through the Open Method of Coordination. Political interest at EU level has steadily grown, and cultural and heritage stakeholders recently highlighted, in the Declaration on a New Narrative for Europe, that “Europe as a political body needs to recognize the value of Cultural Heritage”. Photomodelling is an innovative and extremely economical technique related to the conservation of Cultural Heritage, which leads to the creation of three-dimensional models starting from simple photographs. The aim of the research is to understand the full potential offered by this new technique and dedicated software, analysing the reliability of each instrument, with particular attention to freeware ones. An analytical comparison between photomodelling and structured light 3D scanner provides a first measure of the reliability of the instruments, tested in the survey of several Umbrian heritage artefacts. The comparison between tests and reference models is explained using different algorithms and criteria: spatial, volumetric and superficial.
NASA Technical Reports Server (NTRS)
VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.
2000-01-01
The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because these kinds of measurements are rare in the detail necessary to be useful in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.
An Expert System for Classifying Stars on the MK Spectral Classification System
NASA Astrophysics Data System (ADS)
Corbally, Christopher J.; Gray, R. O.
2013-01-01
We will describe an expert computer system designed to classify stellar spectra on the MK Spectral Classification system employing methods similar to those of humans who make direct comparison with the MK classification standards. Like an expert human classifier, MKCLASS first comes up with a rough spectral type, and then refines that type by direct comparison with MK standards drawn from a standards library using spectral criteria appropriate to the spectral class. Certain common spectral-type peculiarities can also be detected by the program. The program is also capable of identifying WD spectra and carbon stars and giving appropriate (but currently approximate) spectral types on the relevant systems. We will show comparisons between spectral types (including luminosity types) performed by MKCLASS and humans. The program currently is capable of competent classifications in the violet-green region, but plans are underway to extend the spectral criteria into the red and near-infrared regions. Two standard libraries with resolutions of 1.8 and 3.6Å are now available, but a higher-resolution standard library, using the new spectrograph on the Vatican Advanced Technology Telescope, is currently under preparation. Once that library is available, MKCLASS and the spectral libraries will be made available to the astronomical community.
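MKCLASS refines a rough spectral type by direct comparison with library standards. A toy nearest-standard matcher conveys the idea; the spectra, labels and the summed-squared-difference metric below are illustrative assumptions, not MKCLASS's actual classification criteria:

```python
def classify_by_standards(spectrum, standards):
    """Assign the MK type of the library standard closest to the spectrum.

    `standards` maps a spectral-type label to a standard spectrum sampled
    on the same wavelength grid; distance is a summed squared flux difference.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(standards, key=lambda label: dist(spectrum, standards[label]))

# Three toy "standards" with made-up normalized fluxes at three wavelengths:
standards = {"A0 V": [1.0, 0.8, 0.6], "G2 V": [0.5, 0.9, 1.0], "M2 III": [0.2, 0.5, 1.0]}
print(classify_by_standards([0.55, 0.85, 1.0], standards))  # G2 V
```

The real program additionally restricts the comparison to spectral criteria appropriate to the rough class, and handles luminosity types and peculiarities.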
Assessing FRET using spectral techniques.
Leavesley, Silas J; Britain, Andrea L; Cichon, Lauren K; Nikolaev, Viacheslav O; Rich, Thomas C
2013-10-01
Förster resonance energy transfer (FRET) techniques have proven invaluable for probing the complex nature of protein-protein interactions, protein folding, and intracellular signaling events. These techniques have traditionally been implemented with the use of one or more fluorescence band-pass filters, either as fluorescence microscopy filter cubes, or as dichroic mirrors and band-pass filters in flow cytometry. In addition, new approaches for measuring FRET, such as fluorescence lifetime and acceptor photobleaching, have been developed. Hyperspectral techniques for imaging and flow cytometry have also been shown to be promising for performing FRET measurements. In this study, we have compared traditional (filter-based) FRET approaches to three spectral-based approaches: the ratio of acceptor-to-donor peak emission, linear spectral unmixing, and linear spectral unmixing with a correction for direct acceptor excitation. All methods are estimates of FRET efficiency, except for one-filter set and three-filter set FRET indices, which are included for consistency with prior literature. In the first part of this study, spectrofluorimetric data were collected from a CFP-Epac-YFP FRET probe that has been used for intracellular cAMP measurements. All comparisons were performed using the same spectrofluorimetric datasets as input data, to provide a relevant comparison. Linear spectral unmixing resulted in measurements with the lowest coefficient of variation (0.10) as well as accurate fits using the Hill equation. FRET efficiency methods produced coefficients of variation of less than 0.20, while FRET indices produced coefficients of variation greater than 8.00. These results demonstrate that spectral FRET measurements provide improved response over standard, filter-based measurements. Using spectral approaches, single-cell measurements were conducted through hyperspectral confocal microscopy, linear unmixing, and cell segmentation with quantitative image analysis.
Results from these studies confirmed that spectral imaging is effective for measuring subcellular, time-dependent FRET dynamics and that additional fluorescent signals can be readily separated from FRET signals, enabling multilabel studies of molecular interactions. Copyright © 2013 International Society for Advancement of Cytometry.
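Linear spectral unmixing, the best-performing method above, amounts to a least-squares solve of the measured spectrum against reference donor and acceptor spectra. The spectra and mixing fractions below are invented shapes for illustration, not real CFP/YFP emission data:

```python
import numpy as np

# Reference emission spectra of the donor (CFP) and acceptor (YFP),
# sampled on a common wavelength grid (illustrative values).
donor = np.array([1.0, 0.8, 0.4, 0.1])
acceptor = np.array([0.1, 0.5, 0.9, 1.0])
A = np.column_stack([donor, acceptor])

# A "measured" mixed spectrum: 30% donor + 70% acceptor.
mixed = 0.3 * donor + 0.7 * acceptor

# Linear unmixing = least-squares solve for the abundance of each fluorophore.
abundances, *_ = np.linalg.lstsq(A, mixed, rcond=None)
print(abundances)  # close to [0.3, 0.7]

# One simple FRET readout is the acceptor's share of the unmixed signal.
apparent_fret = abundances[1] / abundances.sum()
```

Real pipelines typically add a nonnegativity constraint on the abundances and a correction for direct acceptor excitation, as the abstract's third spectral method does.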
NASA Astrophysics Data System (ADS)
Piao, Lin; Fu, Zuntao
2016-11-01
Cross-correlation between pairs of variables has a multi-time-scale character, and it can be totally different on different time scales (changing, for example, from positive correlation to negative), e.g., the associations between mean air temperature and relative humidity over regions to the east of Taihang mountain in China. Therefore, correctly unveiling these correlations on different time scales is of great importance, since we generally do not know in advance whether the correlation varies with scale. Here, we compare two methods, Detrended Cross-Correlation Analysis (DCCA) and Pearson correlation, in quantifying scale-dependent correlations directly on raw observed records and on artificially generated sequences with known cross-correlation features. Studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, but the Pearson method cannot; 2) the correlation features from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to it. All these features indicate that DCCA-related methods have clear advantages in correctly quantifying scale-dependent correlations, which result from different physical processes.
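A minimal sketch of the DCCA cross-correlation coefficient at a single scale is shown below, under the usual assumptions of non-overlapping windows and linear detrending; the exact variant and windowing choices of the study are not reproduced here.

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    """Detrended cross-correlation coefficient at one time scale.

    Profiles (cumulative sums of the mean-removed series) are split into
    non-overlapping windows of length `scale`, each window is linearly
    detrended, and the covariance of the residuals is normalized by the
    two detrended fluctuation functions.
    """
    x_prof = np.cumsum(x - np.mean(x))
    y_prof = np.cumsum(y - np.mean(y))
    n_win = len(x) // scale
    t = np.arange(scale)
    cov = fx = fy = 0.0
    for w in range(n_win):
        seg = slice(w * scale, (w + 1) * scale)
        rx = x_prof[seg] - np.polyval(np.polyfit(t, x_prof[seg], 1), t)
        ry = y_prof[seg] - np.polyval(np.polyfit(t, y_prof[seg], 1), t)
        cov += np.mean(rx * ry)
        fx += np.mean(rx * rx)
        fy += np.mean(ry * ry)
    return cov / np.sqrt(fx * fy)

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
print(dcca_coefficient(x, x, scale=20))  # ~1.0: a series is perfectly correlated with itself
```

Sweeping `scale` over a range of window sizes is what exposes correlations that flip sign between fast and slow time scales.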
Peavey, Erin; Vander Wyst, Kiley B
2017-10-01
This article provides a critical examination and comparison of the conceptual meaning and underlying assumptions of the concepts evidence-based design (EBD) and research-informed design (RID) in order to facilitate practical use and theoretical development. In recent years, EBD has experienced broad adoption, yet it has been simultaneously critiqued for rigidity and misapplication. Many practitioners are gravitating to the term RID to describe their method of integrating knowledge into the design process. However, the term RID lacks a clear definition, and the blurring of terms has the potential to weaken advances made integrating research into practice. Concept analysis methods from Walker and Avant were used to define the concepts for comparison. Conceptual definitions, process descriptions, examples (i.e., model cases), and methods of evaluation are offered for EBD and RID. Although EBD and RID share similarities in meaning, the two terms are distinct. When comparing evidence based (EB) and research informed, EB is a broad base of information types (evidence) that are narrowly applied (based), while the latter references a narrow slice of information (research) that is broadly applied (informed) to create an end product of design. Much of the confusion between the use of the concepts EBD and RID arises out of differing perspectives between the way practitioners and academics understand the underlying terms. The authors hope this article serves to generate thoughtful dialogue, which is essential to the development of a discipline, and look forward to the contribution of the readership.
NASA Astrophysics Data System (ADS)
Alexandroni, Guy; Zimmerman Moreno, Gali; Sochen, Nir; Greenspan, Hayit
2016-03-01
Recent advances in Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) of white matter in conjunction with improved tractography produce impressive reconstructions of White Matter (WM) pathways. These pathways (fiber sets) often contain hundreds of thousands of fibers, or more. In order to make fiber based analysis more practical, the fiber set needs to be preprocessed to eliminate redundancies and to keep only essential representative fibers. In this paper we demonstrate and compare two distinctive frameworks for selecting this reduced set of fibers. The first framework entails pre-clustering the fibers using k-means, followed by Hierarchical Clustering and replacing each cluster with one representative. For the second clustering stage seven distance metrics were evaluated. The second framework is based on an efficient geometric approximation paradigm named coresets. Coresets present a new approach to optimization and have seen great success, especially in tasks requiring large computation time and/or memory. We propose a modified version of the coresets algorithm, Density Coreset. It is used for extracting the main fibers from dense datasets, leaving a small set that represents the main structures and connectivity of the brain. A novel approach, based on a 3D indicator structure, is used for comparing the frameworks. This comparison was applied to High Angular Resolution Diffusion Imaging (HARDI) scans of 4 healthy individuals. We show that, among the clustering-based methods, cosine distance gives the best performance. In comparing the clustering schemes with coresets, the Density Coreset method achieves the best performance.
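"Replacing each cluster with one representative" is often done by picking the medoid fiber. The sketch below assumes fibers resampled to the same number of 3-D points and a mean point-to-point Euclidean distance; the paper's own representative-selection criterion and its seven evaluated metrics are not reproduced here.

```python
import numpy as np

def cluster_representative(fibers):
    """Pick the medoid fiber: the one minimizing its summed distance to the rest.

    Each fiber is an (n_points, 3) array of resampled 3-D coordinates;
    distance between two fibers is the mean point-to-point Euclidean gap.
    """
    def fiber_dist(a, b):
        return np.mean(np.linalg.norm(a - b, axis=1))
    n = len(fibers)
    costs = [sum(fiber_dist(fibers[i], fibers[j]) for j in range(n) if j != i)
             for i in range(n)]
    return int(np.argmin(costs))

# Three toy parallel "fibers"; the middle one is closest to both others.
bundle = [np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]]),
          np.array([[0.0, 1, 0], [1, 1, 0], [2, 1, 0]]),
          np.array([[0.0, 5, 0], [1, 5, 0], [2, 5, 0]])]
print(cluster_representative(bundle))  # 1
```

Swapping `fiber_dist` for a cosine distance on the flattened coordinate vectors would mirror the metric the paper found to perform best among the clustering-based methods.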
Lovestead, Tara M; Burger, Jessica L; Schneider, Nico; Bruno, Thomas J
2016-12-15
Commercial and military aviation is faced with challenges that include high fuel costs, undesirable emissions, and supply chain insecurity that result from the reliance on petroleum-based feedstocks. The development of alternative gas turbine fuels from renewable resources will likely be part of addressing these issues. The United States has established a target for one billion gallons of renewable fuels to enter the supply chain by 2018. These alternative fuels will have to be very similar in properties, chemistry, and composition to existing fuels. To further this goal, the National Jet Fuel Combustion Program (a collaboration of multiple U.S. agencies under the auspices of the Federal Aviation Administration, FAA) is coordinating measurements on three reference gas turbine fuels to be used as a basis of comparison. These fuels are reference fuels with certain properties that are at the limits of experience. These fuels include a low viscosity, low flash point, high hydrogen content "best case" JP-8 (POSF 10264) fuel, a relatively high viscosity, high flash point, low hydrogen content "worst case" JP-5 (POSF 10259) fuel, and a Jet-A (POSF 10325) fuel with relatively average properties. A comprehensive speciation of these fuels is provided in this paper by use of high-resolution gas chromatography/quadrupole time-of-flight mass spectrometry (GC/QToF-MS), which affords unprecedented resolution and exact molecular formula capabilities. The volatility information as derived from the measurement of the advanced distillation curve temperatures, Tk and Th, provides an approximation of the vapor-liquid equilibrium, and examination of the composition channels provides detailed insight into thermochemical data.
A comprehensive understanding of the compositional and thermophysical data of gas turbine fuels is required not only for comparison but also for modeling of such complex mixtures, which will, in turn, aid in the development of new fuels with the goals of diversified feedstocks, decreased pollution, and increased efficiency.
Guo, Ming; Cao, Yunsong; Yang, Jingzhe; Zhang, Jingfeng
2016-10-01
The purpose of this study was to conduct a network meta-analysis to assess drug resistance among the Food and Drug Administration-approved drugs for advanced renal cell carcinoma. Database searches were conducted to identify randomized controlled trials reporting results for eligible treatments. After searching PubMed, MEDLINE, EMBASE, and ISI Web of Science, 22 studies (n = 7854 patients) were included for the comparison of drug resistance in the present meta-analysis. Overall, the mean 6-month progression-free survival rates were 65.4%, 49.3%, 60.6%, 70.3%, 62.6%, 41.6%, 38.2%, 66.1%, 43.1%, and 17.9% for sunitinib, sorafenib, pazopanib, axitinib, bevacizumab plus interferon (IFN)-a, everolimus, temsirolimus, temsirolimus plus bevacizumab, IFN-a, and placebo, respectively. In the indirect comparison, two combined therapies (bevacizumab plus IFN-a and temsirolimus plus bevacizumab) and sunitinib were the least prone to drug resistance. The risk ratio of sunitinib therapy was 3.64 (95% confidence interval [CI] [3.12, 4.25]), the risk ratio of temsirolimus plus bevacizumab therapy was 3.68 (95% CI [3.14, 4.33]), and the risk ratio of bevacizumab plus IFN-a therapy was 3.49 (95% CI [2.99, 4.06]). Our results support that combination of targeted therapies might be a novel strategy against advanced renal cell carcinomas.
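The risk ratios with 95% confidence intervals reported above are, for a single two-arm comparison, computed on the log scale with a Wald interval. The sketch below shows that basic calculation with invented counts; the paper's pooled estimates additionally combine direct and indirect evidence across the trial network.

```python
import math

def risk_ratio_ci(events_trt, n_trt, events_ctl, n_ctl, z=1.96):
    """Risk ratio with a 95% Wald confidence interval on the log scale."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    se = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# 30/100 progression-free at 6 months on treatment vs 10/100 on placebo:
rr, lo, hi = risk_ratio_ci(30, 100, 10, 100)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 3.0 1.55 5.8
```
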
NASA Technical Reports Server (NTRS)
Ardema, Mark D.
1995-01-01
This report summarizes the work entitled 'Advances in Hypersonic Vehicle Synthesis with Application to Studies of Advanced Thermal Protection Systems.' The effort was in two areas: (1) development of advanced methods of trajectory and propulsion system optimization; and (2) development of advanced methods of structural weight estimation. The majority of the effort was spent in the trajectory area.
Fluorescent kapakahines serve as non-toxic probes for live cell Golgi imaging.
Rocha, Danilo D; Espejo, Vinson R; Rainier, Jon D; La Clair, James J; Costa-Lotufo, Letícia V
2015-09-01
There is an ongoing need for fluorescent probes that specifically target select organelles within mammalian cells. This study describes the development of probes for the selective labeling of the Golgi apparatus and offers applications for live cell and fixed cell imaging. The kapakahines, characterized by a common C(3)-N(1') dimeric tryptophan linkage, comprise a unique family of bioactive marine depsipeptide natural products. We describe the uptake and subcellular localization of fluorescently labeled analogs of kapakahine E. Using confocal microscopy, we identify a rapid and selective localization within the Golgi apparatus. Comparison with commercial Golgi stains indicates a unique localization pattern, which differs from currently available materials, thereby offering a new tool to monitor the Golgi in live cells without toxic side effects. This study identifies a fluorescent analog of kapakahine E that is rapidly taken up by cells and localizes within the Golgi apparatus. The advance of microscopic methods relies on the parallel discovery of next-generation molecular probes; this study describes the development of a stable and viable probe for staining the Golgi apparatus. Copyright © 2015 Elsevier Inc. All rights reserved.
Dual-Fuel Propulsion in Single-Stage Advanced Manned Launch System Vehicle
NASA Technical Reports Server (NTRS)
Lepsch, Roger A., Jr.; Stanley, Douglas O.; Unal, Resit
1995-01-01
As part of the United States Advanced Manned Launch System study to determine a follow-on, or complement, to the Space Shuttle, a reusable single-stage-to-orbit concept utilizing dual-fuel rocket propulsion has been examined. Several dual-fuel propulsion concepts were investigated. These include: a separate-engine concept combining Russian RD-170 kerosene-fueled engines with Space Shuttle main engine-derivative engines; the kerosene- and hydrogen-fueled Russian RD-701 engine; and a dual-fuel, dual-expander engine. Analysis to determine vehicle weight and size characteristics was performed using conceptual-level design techniques. A response-surface methodology for multidisciplinary design was utilized to optimize the dual-fuel vehicles with respect to several important propulsion-system and vehicle design parameters, in order to achieve minimum empty weight. The tools and methods employed in the analysis process are also summarized. In comparison with a reference hydrogen-fueled single-stage vehicle, results showed that the dual-fuel vehicles were from 10 to 30% lower in empty weight for the same payload capability, with the dual-expander engine types showing the greatest potential.
Comparison of various advanced oxidation processes for the degradation of 4-chloro-2 nitrophenol.
Saritha, P; Aparna, C; Himabindu, V; Anjaneyulu, Y
2007-11-19
In the present study an attempt is made to efficiently degrade USEPA-listed 4-chloro-2-nitrophenol (4C-2-NP), widely present in bulk drug and pesticide wastes, using various advanced oxidation processes (AOPs). A comparative assessment of the AOPs (UV, H(2)O(2), UV/H(2)O(2), Fenton, UV/Fenton and UV/TiO(2)) was attempted after initial optimization studies, viz., varying pH, peroxide concentration, iron concentration, and TiO(2) loading. The degradation of the study compound was estimated using chemical oxygen demand (COD) reduction and compound reduction using spectrophotometric methods, and further validated with high performance liquid chromatography (HPLC). The degradation trends followed the order: UV/Fenton > UV/TiO(2) > UV/H(2)O(2) > Fenton > H(2)O(2) > UV. It can be inferred from the studies that UV/Fenton was the most effective in partial mineralization of 4C-2-NP; however, lower costs were obtained with H(2)O(2). Kinetic constants were evaluated using first-order equations to determine the rate constant K.
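The first-order treatment mentioned above corresponds to ln(C0/C) = Kt. A minimal sketch of estimating K by least squares through the origin, using synthetic concentrations (the paper's measured data are not reproduced here):

```python
import math

def first_order_k(times, concs):
    """Estimate the first-order rate constant K from concentration decay.

    Fits ln(C0/C) = K*t by least squares through the origin:
    K = sum(t_i * y_i) / sum(t_i^2), with y_i = ln(C0 / C_i).
    """
    c0 = concs[0]
    y = [math.log(c0 / c) for c in concs]
    return sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)

# Synthetic decay with a true K of ln(2) per unit time:
times = [0.0, 1.0, 2.0]
concs = [100.0, 50.0, 25.0]
k = first_order_k(times, concs)
```

With real COD or HPLC data the points scatter around the line, and the fitted slope is the reported K.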
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
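Test (i), comparing the actual number of earthquakes to the number predicted, is commonly evaluated with Poisson tail probabilities under the forecast rate. The paper prescribes the tests conceptually, not this code, so the following is only a hedged sketch:

```python
import math

def poisson_number_test(n_observed, n_predicted):
    """Two one-sided tail probabilities for a Poisson 'number test':

    P(N >= n_observed) and P(N <= n_observed) when the forecast
    predicts n_predicted events on average. Small values of either
    tail indicate too many or too few observed events, respectively.
    """
    def cdf(k, lam):
        return sum(math.exp(-lam) * lam**i / math.factorial(i)
                   for i in range(k + 1))

    p_ge = 1.0 - cdf(n_observed - 1, n_predicted) if n_observed > 0 else 1.0
    p_le = cdf(n_observed, n_predicted)
    return p_ge, p_le
```

For example, a forecast of 1.0 events with zero observed gives P(N <= 0) = e^-1, which would not reject the forecast.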
Comparison of AOPs for the removal of natural organic matter: performance and economic assessment.
Murray, C A; Parsons, S A
2004-01-01
Control of disinfection by-products during water treatment is primarily achieved by reducing the levels of organic precursor species prior to chlorination. Many waters contain natural organic matter (NOM) at levels up to 15 mg L(-1); it is therefore necessary to have a range of control methods to support conventional coagulation. Advanced oxidation processes are one such option, and in this paper the Fenton and photo-Fenton processes, along with photocatalysis, are assessed for their NOM removal potential. The performance of each process is shown to be dependent on pH and chemical dose as well as the initial NOM concentration. Under optimum conditions the processes achieved greater than 90% removal of DOC and UV254 absorbance. This removal reduced the THMFP of the source water from 140 to below 10 microg L(-1), well below UK and US standards. An economic assessment revealed that such processes are not currently economical; with advances in technology and the tightening of water quality standards, they should become economically feasible options.
Centipod WEC, Advanced Controls, Resultant LCOE
McCall, Alan
2016-02-15
Project resultant LCOE model after implementation of MPC controller. Contains AEP, CBS, model documentation, and LCOE content model. This is meant for comparison with this project's baseline LCOE model.
ERIC Educational Resources Information Center
Turgut, Yildiz
2017-01-01
In view of the rapid advancement of technology, technological pedagogical content knowledge (TPACK) has been extensively studied. However, research on TPACK in teaching English appears to be scarce and has addressed either pre-service or in-service teachers, but not their comparison. Additionally, although…
Tremblay, Gabriel; Chandiwana, David; Dolph, Mike; Hearnden, Jaclyn; Forsythe, Anna; Monaco, Mauricio
2018-01-01
Ribociclib (RIBO) and palbociclib (PALBO), combined with letrozole (LET), have been evaluated as treatments for hormone receptor-positive, human epidermal growth factor receptor 2-negative advanced breast cancer in separate Phase III randomized controlled trials (RCTs), but not head-to-head. Population differences can lead to biased results by classical indirect treatment comparison (ITC). Matching-adjusted indirect comparison (MAIC) aims to correct these differences. We compared RIBO and PALBO in hormone receptor-positive/human epidermal growth factor receptor 2-negative advanced breast cancer using MAIC. Patient-level data were available for RIBO (MONALEESA-2), while only published summary data were available for PALBO (PALOMA-2). Weights were assigned to MONALEESA-2 patient data such that mean baseline characteristics matched those reported for PALOMA-2; the resulting matched cohort was used in comparisons. Given the results reported in PALOMA-2, progression-free survival (PFS) was the primary comparison. Cox regression models were used to calculate adjusted hazard ratios (HRs) for PFS before ITC was performed with 95% confidence intervals. An exploratory analysis was performed similarly for overall survival using earlier PALBO data (PALOMA-1). Grade 3/4 adverse events were also compared. Racial characteristics, prior chemotherapy setting, and the extent of metastasis were the most imbalanced baseline characteristics. The unadjusted PFS HRs were 0.556 (0.429, 0.721) for RIBO+LET versus LET alone and 0.580 (0.460, 0.720) for PALBO+LET versus LET alone. MAIC adjustment resulted in an HR of 0.524 (0.406, 0.676) for RIBO+LET versus LET. PFS ITC using unadjusted trial data produced an HR of 0.959 (0.681, 1.350) for RIBO versus PALBO, or 0.904 (0.644, 1.268) with MAIC. The unadjusted overall survival HR of RIBO versus PALBO was 0.918 (0.492, 1.710), while the exploratory MAIC estimate was 0.839 (0.440, 1.598).
ITC of grade 3/4 adverse events yielded a risk ratio of 0.806 (0.604, 1.076). MAIC was performed for RIBO and PALBO in the absence of a head-to-head trial: though not statistically significant, the results favored RIBO.
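MAIC reweights the individual patient data so that the weighted baseline means match the comparator trial's reported means. A one-covariate sketch using exponential tilting solved by bisection (an illustration only; the actual method matches several covariates simultaneously via a method-of-moments fit):

```python
import math

def maic_weights(x, target_mean, lo=-10.0, hi=10.0, tol=1e-10):
    """One-covariate MAIC sketch: weights w_i = exp(a * x_i), with the
    tilt a chosen by bisection so the weighted mean of x hits target_mean.

    The weighted mean is monotonically increasing in a (its derivative is
    the weighted variance), so bisection converges to the unique solution
    whenever target_mean lies strictly between min(x) and max(x).
    """
    def weighted_mean(a):
        w = [math.exp(a * xi) for xi in x]
        return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if weighted_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    a = 0.5 * (lo + hi)
    return [math.exp(a * xi) for xi in x]
```

After weighting, outcome analyses (here, the Cox models) are run on the reweighted cohort before the indirect comparison.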
NASA Astrophysics Data System (ADS)
Greenwald, Thomas J.; Stephens, Graeme L.; Vonder Haar, Thomas H.; Jackson, Darren L.
1993-10-01
A method of remotely sensing integrated cloud liquid water over the oceans using spaceborne passive measurements from the Special Sensor Microwave/Imager (SSM/I) is described. The technique comprises a simple physical model that uses the 19.35- and 37-GHz channels of the SSM/I. The most comprehensive validation to date of cloud liquid water estimated from satellites is presented. This is accomplished through a comparison to independent ground-based microwave radiometer measurements of liquid water on San Nicolas Island, over the North Sea, and on Kwajalein and Saipan Islands in the western Pacific. In areas of marine stratocumulus clouds off the coast of California a further comparison is made to liquid water inferred from advanced very high resolution radiometer (AVHRR) visible reflectance measurements. The results are also compared qualitatively with near-coincident satellite imagery and with other existing microwave methods in selected regions. These comparisons indicate that the liquid water amounts derived from the simple scheme are consistent with the ground-based measurements for nonprecipitating cloud systems in the subtropics and middle to high latitudes. The comparison in the tropics, however, was less conclusive. Nevertheless, the retrieval method appears to have general applicability over most areas of the global oceans. An observational measure of the minimum uncertainty in the retrievals is determined in a limited number of known cloud-free areas, where the liquid water amounts are found to have a low variability of 0.016 kg m-2. A simple sensitivity and error analysis suggests that the liquid water estimates have a theoretical relative error typically ranging from about 25% to near 40%, depending on the atmospheric/surface conditions and on the amount of liquid water present in the cloud. For the global oceans as a whole the average cloud liquid water is determined to be about 0.08 kg m-2.
The major conclusion of this paper is that reasonably accurate amounts of cloud liquid water can be retrieved from SSM/I observations for nonprecipitating cloud systems, particularly in areas of persistent stratocumulus clouds, with less accurate retrievals in tropical regions.
Turboprop Cargo Aircraft Systems study, phase 1
NASA Technical Reports Server (NTRS)
Muehlbauer, J. C.; Hewell, J. G., Jr.; Lindenbaum, S. P.; Randall, C. C.; Searle, N.; Stone, F. R., Jr.
1980-01-01
The effects of advanced propellers (propfans) on aircraft direct operating costs, fuel consumption, and noiseprints were determined. A comparison of three aircraft selected from the results with competitive turbofan aircraft shows that advanced turboprop aircraft offer these potential benefits, relative to advanced turbofan aircraft: 21 percent fuel saving, 26 percent higher fuel efficiency, 15 percent lower DOCs, and 25 percent shorter field lengths. Fuel consumption for the turboprop is nearly 40 percent less than for current commercial turbofan aircraft. Aircraft with both types of propulsion satisfy current federal noise regulations. Advanced turboprop aircraft have smaller noiseprints at 90 EPNdB than advanced turbofan aircraft, but larger noiseprints at the 70 and 80 EPNdB levels, which are usually suggested as quietness goals. Accelerated development of advanced turboprops is strongly recommended to permit early attainment of the potential fuel saving. Several areas of work are identified which may produce quieter turboprop aircraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowdy, M.; Burke, A.; Schneider, H.
Fuel economy, exhaust emissions, multifuel capability, advanced materials, and cost/manufacturability for both conventional and advanced alternative power systems were assessed. To ensure valid comparisons of vehicles with alternative power systems, the concept of an Otto-Engine-Equivalent (OEE) vehicle was utilized. Each engine type was sized to provide equivalent vehicle performance. Sensitivity to different performance criteria was evaluated. Fuel economy projections are made for each engine type considering both the legislated emission standards and possible future emissions requirements.
ERIC Educational Resources Information Center
Hansen, Kristine; Reeve, Suzanne; Gonzalez, Jennifer; Sudweeks, Richard R.; Hatch, Gary L.; Esplin, Patricia; Bradshaw, William S.
2006-01-01
This study was conducted to obtain empirical data to inform policy decisions about exempting incoming students from a first-year composition (FYC) course on the basis of Advanced Placement (AP) English exam scores. It examined the effect of avoiding first-year writing on the writing abilities of sophomore undergraduates. Two three-page writing…
NASA Technical Reports Server (NTRS)
Morris, S. J., Jr.
1979-01-01
Performance estimation, weights, and scaling laws for an eight-blade highly loaded propeller combined with an advanced turboshaft engine are presented. The data are useful for planned aircraft mission studies using the turboprop propulsion system. Comparisons are made between the performance of the 1990+ technology turboprop propulsion system and the performance of both a current technology turbofan and a 1990+ technology turbofan.
Wang, Weiya; Tang, Yuan; Li, Jinnan; Jiang, Lili; Jiang, Yong; Su, Xueying
2015-02-01
Surgical resections or tumor biopsies are often not available for patients with late-stage non-small cell lung cancer (NSCLC). Cytological specimens, such as malignant pleural effusion (MPE) cell blocks, are critical for molecular testing. Currently, diagnostic methods to identify anaplastic lymphoma kinase (ALK) rearrangements include fluorescence in situ hybridization (FISH), real-time reverse transcriptase-polymerase chain reaction (RT-PCR), and immunohistochemistry (IHC). In the current study, the authors compared Ventana ALK IHC assays and ALK FISH to detect ALK rearrangements in MPE cell blocks from patients with advanced NSCLC. The ALK IHC assay and ALK FISH were performed on 63 MPE cell blocks. RT-PCR analysis was performed as additional validation in cases in which a discrepancy was observed between the IHC assay and FISH results. The Ventana ALK IHC assay was found to be informative for all 63 samples, and 8 cases were positive. Fifty-eight cases were interpretable for FISH detection, and 6 were positive. The concordance between IHC and FISH was 100% among the 58 cases. Of the 5 uninterpretable ALK FISH cases, 2 cases and 3 cases, respectively, were ALK IHC positive and negative. One of the 2 ALK IHC-positive cases also demonstrated a positive result in the RT-PCR assay and the patient benefited from crizotinib treatment. MPE cell blocks can be used successfully for the detection of ALK rearrangement when tumor tissue is not available. The Ventana ALK IHC assay is an effective screening method for ALK rearrangement in MPE cell blocks from patients with advanced NSCLC, demonstrating high agreement with FISH results. © 2014 American Cancer Society.
Ryan, J E; Warrier, S K; Lynch, A C; Ramsay, R G; Phillips, W A; Heriot, A G
2016-03-01
Approximately 20% of patients treated with neoadjuvant chemoradiotherapy (nCRT) for locally advanced rectal cancer achieve a pathological complete response (pCR), while the remainder derive the benefit of improved local control and downstaging; a small proportion show a minimal response. The ability to predict which patients will benefit would allow for improved patient stratification, directing therapy to those who are likely to achieve a good response and thereby avoiding ineffective treatment in those unlikely to benefit. A systematic review of the English language literature was conducted to identify pathological factors, imaging modalities and molecular factors that predict pCR following chemoradiotherapy. PubMed, MEDLINE and Cochrane Database searches were conducted with the following keywords and MeSH search terms: 'rectal neoplasm', 'response', 'neoadjuvant', 'preoperative chemoradiation', 'tumor response'. After review of titles and abstracts, 85 articles addressing the prediction of pCR were selected. Clear methods to predict pCR before chemoradiotherapy have not been defined. Clinical and radiological features of the primary cancer have limited ability to predict response. Molecular profiling holds the greatest potential to predict pCR, but adoption of this technology will require greater concordance between cohorts for the biomarkers currently under investigation. At present no robust markers of the prediction of pCR have been identified and the topic remains an area for future research. This review critically evaluates the existing literature, providing an overview of the methods currently available to predict pCR to nCRT for locally advanced rectal cancer. The review also provides a comprehensive comparison of the accuracy of each modality. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.
ERIC Educational Resources Information Center
Syed, Mahbubur Rahman, Ed.
2009-01-01
The emerging field of advanced distance education delivers academic courses across time and distance, allowing educators and students to participate in a convenient learning method. "Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions" demonstrates communication technologies, intelligent…
All you need is shape: Predicting shear banding in sand with LS-DEM
NASA Astrophysics Data System (ADS)
Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.
2018-02-01
This paper presents discrete element method (DEM) simulations with experimental comparisons at multiple length scales, underscoring the crucial role of particle shape. The simulations build on technological advances in the DEM furnished by level sets (LS-DEM), which enable the mathematical representation of the surface of arbitrarily shaped particles such as grains of sand. We show that this ability to model shape enables unprecedented capture of the mechanics of granular materials across scales ranging from macroscopic behavior to local behavior to particle behavior. Specifically, the model is able to predict the onset and evolution of shear banding in sands, replicating the most advanced high-fidelity experiments in triaxial compression equipped with sequential X-ray tomography imaging. We present comparisons of the model and experiment at an unprecedented level of quantitative agreement, building a one-to-one model where every particle in the more than 53,000-particle array has its own avatar or numerical twin. Furthermore, the boundary conditions of the experiment are faithfully captured by modeling the membrane effect as well as the platen displacement and tilting. The results show a computational tool that can give insight into the physics and mechanics of granular materials undergoing shear deformation and failure, with computational times comparable to those of the experiment. One quantitative measure extracted from the LS-DEM simulations that is currently not available experimentally is the evolution of three-dimensional force chains inside and outside of the shear band. We show that the rotations in the force chains are correlated with the rotations in the stress principal directions.
Measuring respiration rates in marine fish larvae: challenges and advances.
Peck, M A; Moyano, M
2016-01-01
Metabolic costs can be extremely high in marine fish larvae and gaining reliable estimates of the effects of intrinsic and extrinsic factors on those costs is important to understand environmental constraints on early growth and survival. This review provides an historical perspective of measurements of larval marine fish respiration (O2 consumption) including the methods (Winkler, manometric, polarographic, paramagnetic and optodes) and systems (closed system to intermittent-flow) used. This study compares and systematically reviews the results (metabolic rates, ontogenetic changes and taxonomic differences) obtained from 59 studies examining 53 species from 30 families. Standard (anaesthetized or darkness), routine and active respiration rates were reported in 14, 94 and 8% of the studies and much more work has been performed on larvae of temperate (88%) compared with tropical (9%) and polar (3%) species. More than 35% of the studies have been published since 2000 owing to both advances in oxygen sensors and the growing emphasis on understanding physiological effects of environmental change. Common protocols are needed to facilitate cross-taxa comparisons such as the effect of temperature (Q10: 1.47-3.47), body mass (slope of allometric changes in O2 consumption rate from 0.5 to 1.3) and activity level on metabolic costs as measured via respiration rate. A set of recommendations is provided that will make it easier for researchers to design measurement systems, to judge the reliability of measurements and to make inter-comparisons among studies and species. © 2016 The Fisheries Society of the British Isles.
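The Q10 range and allometric slopes quoted above follow the standard definitions; as a minimal sketch (not code from the review itself), both can be expressed as:

```python
def q10(rate1, temp1, rate2, temp2):
    """Temperature coefficient: the factor by which a metabolic rate
    rises per 10 °C increase, Q10 = (R2/R1) ** (10 / (T2 - T1))."""
    return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

def allometric_rate(mass, a, b):
    """Allometric scaling R = a * M**b; the review reports slopes b
    between roughly 0.5 and 1.3 across larval taxa."""
    return a * mass ** b

# A rate that doubles between 10 and 20 °C has Q10 = 2:
coefficient = q10(1.0, 10.0, 2.0, 20.0)
```

Reporting both Q10 and the mass exponent b with each dataset is what makes the cross-taxa comparisons the authors call for possible.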
NASA Astrophysics Data System (ADS)
Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej; Schmidt, Michael; Erdogan, Eren; Goss, Andreas
2017-04-01
Since electromagnetic measurements show dispersive characteristics, accurate modelling of the ionospheric electron content plays an important role for positioning and navigation applications in mitigating the effect of ionospheric disturbances. Knowledge about the ionosphere contributes to a better understanding of space weather events, as well as to forecasting these events so that protective measures can be taken in advance for electronic systems and satellite missions. In the last decades, advances in satellite technologies, data analysis techniques and models, together with a rapidly growing number of analysis centres, allow modelling the ionospheric electron content with an unprecedented accuracy in (near) real-time. In this sense, the representation of electron content variations in time and space with spline basis functions has gained practical importance in global and regional ionosphere modelling. This is due to their compact support and their flexibility to handle unevenly distributed observations and data gaps. In this contribution, the performance of two ionosphere models from UWM and DGFI-TUM, both developed using spline functions, is evaluated. The VTEC model of DGFI-TUM is based on tensor products of trigonometric B-spline functions in longitude and polynomial B-spline functions in latitude for a global representation. The UWM model uses a two-dimensional planar thin plate spline (TPS) with the Universal Transverse Mercator representation of ellipsoidal coordinates. In order to provide a smooth VTEC model, the TPS minimizes both the squared norm of the Hessian matrix and the deviations between data points and the model. In the evaluations, the differenced STEC analysis method and Jason-2 altimetry comparisons are applied.
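Polynomial B-splines such as those in the DGFI-TUM model are conventionally evaluated with the Cox-de Boor recursion. The following is a generic sketch of that recursion only; the models' actual knot vectors, trigonometric longitude bases, and tensor products are not reproduced here:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the i-th polynomial B-spline of order k
    (degree k-1) over the given knot vector, evaluated at t."""
    if k == 1:
        # Order-1 basis: indicator of the knot span [knots[i], knots[i+1])
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right
```

The compact support mentioned in the abstract is visible here: each basis function is nonzero on only k consecutive knot spans, which is what keeps the fit local around data gaps.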
Setting health research priorities using the CHNRI method: IV. Key conceptual advances
Rudan, Igor
2016-01-01
Introduction Child Health and Nutrition Research Initiative (CHNRI) started as an initiative of the Global Forum for Health Research in Geneva, Switzerland. Its aim was to develop a method that could assist priority setting in health research investments. The first version of the CHNRI method was published in 2007–2008. The aim of this paper was to summarize the history of the development of the CHNRI method and its key conceptual advances. Methods The guiding principle of the CHNRI method is to expose the potential of many competing health research ideas to reduce disease burden and inequities that exist in the population in a feasible and cost-effective way. Results The CHNRI method introduced three key conceptual advances that led to its increased popularity in comparison to other priority-setting methods and processes. First, it proposed a systematic approach to listing a large number of possible research ideas, using the "4D" framework (description, delivery, development and discovery research) and a well-defined "depth" of proposed research ideas (research instruments, avenues, options and questions). Second, it proposed a systematic approach for discriminating between many proposed research ideas based on a well-defined context and criteria. The five "standard" components of the context are the population of interest, the disease burden of interest, geographic limits, time scale and the preferred style of investing with respect to risk. The five "standard" criteria proposed for prioritization between research ideas are answerability, effectiveness, deliverability, maximum potential for disease burden reduction and the effect on equity. However, both the context and the criteria can be flexibly changed to meet the specific needs of each priority-setting exercise. Third, it facilitated consensus development through measuring collective optimism on each component of each research idea among a larger group of experts using a simple scoring system. 
This enabled the use of the knowledge of many experts in the field, "visualising" their collective opinion and presenting the list of many research ideas with their ranks, based on an intuitive score that ranges between 0 and 100. Conclusions Two recent reviews showed that the CHNRI method, an approach essentially based on "crowdsourcing", has become the dominant approach to setting health research priorities in the global biomedical literature over the past decade. With more than 50 published examples of implementation to date, it is now widely used in many international organisations for collective decision-making on health research priorities. The applications have been helpful in promoting better balance between investments in fundamental research, translation research and implementation research. PMID:27418959
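The 0-100 score described above can be sketched as a simple collective-optimism average over expert answers. The 0.5 value for undecided answers and the exclusion of "don't know" responses follow common CHNRI practice, but the exact convention is an assumption in this sketch, not taken from the paper:

```python
def chnri_score(answers):
    """Collective-optimism score for one criterion of one research idea.

    answers: list of expert responses, where 1 means "yes", 0 means "no",
    0.5 means "informed but undecided", and None means "don't know"
    (excluded from both numerator and denominator).
    Returns a score between 0 and 100.
    """
    scored = [a for a in answers if a is not None]
    return 100.0 * sum(scored) / len(scored)

# Four informative answers out of five experts:
score = chnri_score([1, 1, 0.5, 0, None])
```

Averaging such scores across criteria (optionally with stakeholder weights) yields the intuitive 0-100 rank order of research ideas the method is known for.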
Trends in Social Science: The Impact of Computational and Simulative Models
NASA Astrophysics Data System (ADS)
Conte, Rosaria; Paolucci, Mario; Cecconi, Federico
This paper discusses current progress in the computational social sciences. Specifically, it examines the following questions: Are the computational social sciences exhibiting positive or negative developments? What are the roles of agent-based models and simulation (ABM), network analysis, and other "computational" methods within this dynamic? (Conte, The necessity of intelligent agents in social simulation, Advances in Complex Systems, 3(01n04), 19-38, 2000; Conte 2010; Macy, Annual Review of Sociology, 143-166, 2002). Are there objective indicators of scientific growth that can be applied to different scientific areas, allowing for comparison among them? In this paper, some answers to these questions are presented and discussed. In particular, comparisons among different disciplines in the social and computational sciences are shown, taking into account their respective growth trends in the number of publication citations over the last few decades (culled from Google Scholar). After a short discussion of the methodology adopted, results of keyword-based queries are presented, unveiling some unexpected local impacts of simulation on the takeoff of traditionally poorly productive disciplines.
NASA Technical Reports Server (NTRS)
1976-01-01
Ten advanced energy conversion systems for central-station, base-load electric power generation using coal and coal-derived fuels, studied by NASA, are presented. Various contractors were selected by competitive bidding to study these systems. A comparative evaluation is provided of the contractor results on both a system-by-system and an overall basis. Ground rules specified by NASA, such as coal specifications, fuel costs, labor costs, method of cost comparison, escalation and interest during construction, fixed charges, emission standards, and environmental conditions, are presented. Each system discussion includes the potential advantages of the system, the scope of each contractor's analysis, typical schematics of systems, comparison of cost of electricity and efficiency for each contractor, identification and reconciliation of differences, identification of future improvements, and discussion of outside comments. Considerations common to all systems, such as materials and furnaces, are also discussed. Results of selected in-house analyses are presented, in addition to contractor data. The results for all systems are then compared.
Integrating artificial and human intelligence into tablet production process.
Gams, Matjaž; Horvat, Matej; Ožek, Matej; Luštrek, Mitja; Gradišek, Anton
2014-12-01
We developed a new machine learning-based method to facilitate the manufacturing processes of pharmaceutical products, such as tablets, in accordance with the Process Analytical Technology (PAT) and Quality by Design (QbD) initiatives. Our approach combines data available from prior production runs with machine learning algorithms that are assisted by a human operator with expert knowledge of the production process. The process parameters encompass those that relate to the attributes of the precursor raw materials and those that relate to the manufacturing process itself. During manufacturing, our method allows the production operator to inspect the impacts of various settings of process parameters within their proven acceptable ranges, with the purpose of choosing the most promising values in advance of the actual batch manufacture. The interaction between the human operator and the artificial intelligence system provides improved performance and quality. We successfully implemented the method on data provided by a pharmaceutical company for a particular product, a tablet, under development. We tested the accuracy of the method in comparison with some other machine learning approaches. The method is especially suitable for analyzing manufacturing processes characterized by a limited amount of data.