Sample records for component test methods

  1. 16 CFR 1508.5 - Component spacing test method for § 1508.4(b).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    16 CFR, Commercial Practices (2010 ed.), § 1508.5: Component spacing test method for § 1508.4(b). CONSUMER PRODUCT SAFETY COMMISSION, FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS, REQUIREMENTS FOR FULL-SIZE BABY CRIBS. § 1508.5 Component spacing test method for...

  2. Control system health test system and method

    DOEpatents

    Hoff, Brian D.; Johnson, Kris W.; Akasam, Sivaprasad; Baker, Thomas M.

    2006-08-15

    A method is provided for testing multiple elements of a work machine, including a control system, a component, a sub-component that is influenced by operations of the component, and a sensor that monitors a characteristic of the sub-component. In one embodiment, the method is performed by the control system and includes sending a command to the component to adjust a first parameter associated with an operation of the component. Also, the method includes detecting a sensor signal from the sensor reflecting a second parameter associated with a characteristic of the sub-component and determining whether the second parameter is acceptable based on the command. The control system may diagnose at least one of the elements of the work machine when the second parameter of the sub-component is not acceptable.
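
    The command-and-check logic described in this record can be sketched as a small routine: issue a command, read back the dependent sub-component sensor, and flag a diagnostic when the response falls outside tolerance. The interfaces, the 2x response model, and the tolerance below are hypothetical stand-ins, not the patented control system.

```python
# Minimal sketch of the command/response health check described above.
# All names and numbers are hypothetical illustrations.
def health_check(send_command, read_sensor, setpoint, expected, tol):
    """Command the component, read the sub-component sensor, and
    report whether the response is within tolerance of expectation."""
    send_command(setpoint)
    response = read_sensor()
    return abs(response - expected) <= tol

# Toy component: the sub-component characteristic tracks ~2x the
# commanded parameter, plus a small offset.
state = {"param": 0.0}
send = lambda value: state.update(param=value)
read = lambda: 2.0 * state["param"] + 0.1

print(health_check(send, read, setpoint=5.0, expected=10.0, tol=0.5))  # healthy
print(health_check(send, read, setpoint=5.0, expected=12.0, tol=0.5))  # flagged
```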

  3. Multishaker modal testing

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.

    1985-01-01

    A component mode synthesis method for damped structures was developed and modal test methods were explored which could be employed to determine the relevant parameters required by the component mode synthesis method. Research was conducted on the following topics: (1) Development of a generalized time-domain component mode synthesis technique for damped systems; (2) Development of a frequency-domain component mode synthesis method for damped systems; and (3) Development of a system identification algorithm applicable to general damped systems. Abstracts are presented of the major publications which have been previously issued on these topics.

  4. Verification of International Space Station Component Leak Rates by Helium Accumulation Method

    NASA Technical Reports Server (NTRS)

    Underwood, Steve D.; Smith, Sherry L.

    2003-01-01

    Discovery of leakage on several International Space Station U.S. Laboratory Module ammonia system quick disconnects (QDs) led to the need for a process to quantify total leakage without removing the QDs from the system. An innovative solution was proposed allowing quantitative leak rate measurement at ambient external pressure without QD removal. The method utilizes a helium mass spectrometer configured in the detector probe mode to determine helium leak rates inside a containment hood installed on the test component. The method was validated through extensive developmental testing. Test results showed the method was viable, accurate and repeatable for a wide range of leak rates. The accumulation method has been accepted by NASA and is currently being used by Boeing Huntsville, Boeing Kennedy Space Center and Boeing Johnson Space Center to test welds and valves and will be used by Alenia to test the Cupola. The method has been used in place of more expensive vacuum chamber testing which requires removing the test component from the system.
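
    The accumulation idea reduces to simple arithmetic: with the component enclosed in a hood of known free volume, the leak rate is roughly the hood volume times the slope of helium concentration versus time. The numbers below are illustrative, not ISS test data.

```python
import numpy as np

hood_volume = 500.0  # cc, assumed hood free volume
t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])   # s, sample times
ppm = np.array([1.2, 13.1, 25.3, 37.0, 49.2])    # He concentration, ppm

# Slope of concentration vs. time via least squares, then convert
# ppm/s to a volumetric fraction rate and scale by hood volume.
slope_ppm_per_s = np.polyfit(t, ppm, 1)[0]
leak_rate = hood_volume * slope_ppm_per_s * 1e-6  # scc/s
print(f"leak rate ~ {leak_rate:.2e} scc/s")
```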

  5. Method and apparatus for using magneto-acoustic remanence to determine embrittlement

    NASA Technical Reports Server (NTRS)

    Allison, Sidney G. (Inventor); Namkung, Min (Inventor); Yost, William T. (Inventor); Cantrell, John H. (Inventor)

    1992-01-01

    A method and apparatus are presented for testing steel components for temper embrittlement, using magneto-acoustic emission to nondestructively evaluate the component. Acoustic emission signals occur more frequently at higher levels in embrittled components. A pair of electromagnets are used to create magnetic induction in the test component. Magneto-acoustic emission signals may be generated by applying an AC current to the electromagnets. The acoustic emission signals are analyzed to provide a comparison between a component known to be unembrittled and a test component. Magnetic remanence is determined by applying a DC current to the electromagnets and then turning the magnets off and observing the residual magnetic induction.

  6. A HPLC-DAD method for the simultaneous determination of five marker components in the traditional herbal medicine Bangpungtongsung-san

    PubMed Central

    Weon, Jin Bae; Yang, Hye Jin; Ma, Jin Yeul; Ma, Choong Je

    2011-01-01

    Background: Bangpungtongsung-san, one of the traditional herbal medicines, is known as a prescription for obesity. Objective: To establish a high-performance liquid chromatography with diode-array detection (HPLC-DAD) method for the simultaneous determination of five components (paeoniflorin, 6-gingerol, decursin, geniposide, and glycyrrhizin) in Bangpungtongsung-san. Materials and Methods: A reverse-phase column, DIONEX C18 (5 μm, 120 µ, 4.6 mm × 150 mm), was used. The mobile phase consisted of methanol and water with gradient elution. The UV wavelengths were set at 230, 240, and 254 nm. The method was validated for linearity, precision, and recovery. Results: All calibration curves of the components showed good linearity (R2 > 0.9959). The limit of detection (LOD) and limit of quantification (LOQ) ranged from 0.01 to 0.17 μg/ml and from 0.04 to 0.53 μg/ml, respectively. The relative standard deviation (RSD) values of the intraday and interday precision tests were less than 0.43% and 1.26%, respectively. In the recovery test, accuracy ranged from 95.27% to 107.70% with RSD values less than 2.21%. Conclusion: The developed method was applied to a commercial Bangpungtongsung-san sample, and the five marker components were separated effectively without interference from other peaks. PMID:21472081
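
    The validation quantities reported here (calibration linearity, LOD, LOQ) follow standard relationships; a minimal sketch using the common ICH-style limits LOD = 3.3*sd/slope and LOQ = 10*sd/slope, with made-up calibration points rather than the paper's data, is:

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/ml) vs. peak area.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([12.1, 24.3, 48.0, 121.5, 242.0, 485.9])

# Least-squares calibration line: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
residual_sd = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))

# ICH-style detection/quantification limits from the calibration curve.
lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope

r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"R2 = {r2:.5f}, LOD = {lod:.3f} ug/ml, LOQ = {loq:.3f} ug/ml")
```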

  7. 16 CFR 1509.6 - Component-spacing test method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    16 CFR, Commercial Practices (2010 ed.), § 1509.6: Component-spacing test method. CONSUMER PRODUCT SAFETY COMMISSION, FEDERAL HAZARDOUS SUBSTANCES ACT... applied to the wedge perpendicular to the plane of the crib side. ...

  8. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
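
    As a concrete illustration of Horn's parallel analysis (not the authors' simulations), one can compare observed covariance eigenvalues against a percentile of eigenvalues from random normal data of the same size; the planted two-component data below are an assumed example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with two strong planted components.
n, p = 200, 10
latent = rng.normal(size=(n, 2))
loadings = 3.0 * rng.normal(size=(2, p))
X = latent @ loadings + rng.normal(size=(n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)  # work on the correlation scale

def sorted_eigvals(data):
    # Eigenvalues of the sample covariance matrix, largest first.
    return np.sort(np.linalg.eigvalsh(np.cov(data, rowvar=False)))[::-1]

obs = sorted_eigvals(X)

# Horn's parallel analysis: retain components whose eigenvalues exceed
# the 95th percentile of eigenvalues from random normal data.
n_sim = 200
sim = np.array([sorted_eigvals(rng.normal(size=(n, p))) for _ in range(n_sim)])
threshold = np.percentile(sim, 95, axis=0)

# Count the leading run of eigenvalues above the random threshold.
n_retained = int((obs > threshold).cumprod().sum())
print("components retained:", n_retained)
```

    The Tracy-Widom alternative discussed in the abstract replaces the simulated threshold for the first eigenvalue with an asymptotic distribution.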

  9. Techniques employed by the NASA White Sands Test Facility to ensure oxygen system component safety

    NASA Technical Reports Server (NTRS)

    Stradling, J. S.; Pippen, D. L.; Frye, G. W.

    1983-01-01

    Methods of ascertaining the safety and suitability of a variety of oxygen system components are discussed. Additionally, qualification and batch control requirements for soft goods in oxygen systems are presented. Current oxygen system component qualification test activities in progress at White Sands Test Facility are described.

  10. Agreement between clinical and laboratory methods assessing tonic and cross-link components of accommodation and vergence.

    PubMed

    Neveu, Pascaline; Priot, Anne-Emmanuelle; Philippe, Matthieu; Fuchs, Philippe; Roumes, Corinne

    2015-09-01

    Several tests are available to optometrists for investigating accommodation and vergence. This study sought to investigate the agreement between clinical and laboratory methods and to clarify which components are actually measured when tonic and cross-link of accommodation and vergence are assessed. Tonic vergence, tonic accommodation, accommodative vergence (AC/A) and vergence accommodation (CA/C) were measured using several tests. Clinical tests were compared to the laboratory assessment, the latter being regarded as an absolute reference. The repeatability of each test and the degree of agreement between the tests were quantified using Bland-Altman analysis. The values obtained for each test were found to be stable across repetitions; however, in most cases, significant differences were observed between tests supposed to measure the same oculomotor component. Tonic and cross-link components cannot be easily assessed because proximal and instrumental responses interfere with the assessment. Other components interfere with oculomotor assessment. Specifically, accommodative divergence interferes with tonic vergence estimation and the type of accommodation considered in the AC/A ratio affects its magnitude. Results on clinical tonic accommodation and clinical CA/C show that further investigation is needed to clarify the limitations associated with the use of difference of Gaussian as visual targets to open the accommodative loop. Although different optometric tests of accommodation and vergence rely on the same basic principles, the results of this study indicate that clinical and laboratory methods actually involve distinct components. These differences, which are induced by methodological choices, must be taken into account, when comparing studies or when selecting a test to investigate a particular oculomotor component. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  11. EMC analysis of MOS-1

    NASA Astrophysics Data System (ADS)

    Ishizawa, Y.; Abe, K.; Shirako, G.; Takai, T.; Kato, H.

    The electromagnetic compatibility (EMC) control method, system EMC analysis method, and system test method which have been applied to test the components of the MOS-1 satellite are described. The merits and demerits of the problem solving, specification, and system approaches to EMC control are summarized, and the data requirements of the SEMCAP (specification and electromagnetic compatibility analysis program) computer program for verifying the EMI safety margin of the components are outlined. Examples of EMC design are mentioned, and the EMC design process and selection method for EMC critical points are shown along with sample EMC test results.

  12. A Method to Estimate Fabric Particle Penetration Performance

    DTIC Science & Technology

    2014-09-08

    ... within the fabric/component gap may be needed to improve the correlation between wind tunnel component sleeve tests and bench-top swatch tests. The ability to predict multi-layered... impermeable garment. Heat stress becomes a major problem with this approach, however, as normal physiological heat loss mechanisms (especially sweat...

  13. Validation of Helicopter Gear Condition Indicators Using Seeded Fault Tests

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula; Brandon, E. Bruce

    2013-01-01

    A "seeded fault test," in support of a rotorcraft condition-based maintenance (CBM) program, is an experiment in which a component is tested with a known fault while health monitoring data are collected. These tests are performed at operating conditions comparable to those the component would be exposed to while installed on the aircraft. Performance of seeded fault tests is one method used to provide evidence that a Health Usage Monitoring System (HUMS) can replace current maintenance practices required for aircraft airworthiness. Actual in-service experience of the HUMS detecting a component fault is another validation method. This paper discusses a hybrid validation approach that combines in-service data with seeded fault tests. For this approach, existing in-service HUMS flight data from a naturally occurring component fault are used to define a component seeded fault test. An example, using spiral bevel gears as the targeted component, is presented. Since the U.S. Army has begun to develop standards for using seeded fault tests for HUMS validation, the hybrid approach is mapped to the steps defined within its Aeronautical Design Standard Handbook for CBM. This paper steps through the defined processes and identifies additional steps that may be required when using component test rig fault tests to demonstrate helicopter condition indicator (CI) performance. The discussion within this paper provides the reader with a better appreciation for the challenges faced when defining a seeded fault test for HUMS validation.

  14. Magneto acoustic emission apparatus for testing materials for embrittlement

    NASA Technical Reports Server (NTRS)

    Allison, Sidney G. (Inventor); Min, Namkung (Inventor); Yost, William T. (Inventor); Cantrell, John H. (Inventor)

    1990-01-01

    A method and apparatus for testing steel components for temper embrittlement uses magneto-acoustic emission to nondestructively evaluate the component. Acoustic emission signals occur more frequently at higher levels in embrittled components. A pair of electromagnets are used to create magnetic induction in the test component. Magneto-acoustic emission signals may be generated by applying an ac current to the electromagnets. The acoustic emission signals are analyzed to provide a comparison between a component known to be unembrittled and a test component. Magnetic remanence is determined by applying a dc current to the electromagnets, then turning the magnets off and observing the residual magnetic induction.

  15. The Method Effect in Communicative Testing.

    ERIC Educational Resources Information Center

    Canale, Michael

    1981-01-01

    A focus on test validity includes a consideration of the way a test measures that which it proposes to test; in other words, the validity of a test depends on method as well as content. This paper examines three areas of concern: (1) some features of communication that test method should reflect, (2) the main components of method, and (3) some…

  16. Shuttle filter study. Volume 2: Contaminant generation and sensitivity studies

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Contaminant generation studies were conducted at the component level using two different methods, radioactive tracer technique and gravimetric analysis test procedure. Both of these were reduced to practice during this program. In the first of these methods, radioactively tagged components typical of those used in spacecraft were studied to determine their contaminant generation characteristics under simulated operating conditions. Because the purpose of the work was: (1) to determine the types and quantities of contaminants generated; and (2) to evaluate improved monitoring and detection schemes, no attempt was made to evaluate or qualify specific components. The components used in this test program were therefore not flight hardware items. Some of them had been used in previous tests; some were obsolete; one was an experimental device. In addition to the component tests, various materials of interest to contaminant and filtration studies were irradiated and evaluated for use as autotracer materials. These included test dusts, plastics, valve seat materials, and bearing cage materials.

  17. Methods for assessing wall interference in the 2- by 2-foot adaptive-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Schairer, E. T.

    1986-01-01

    Two methods are discussed for assessing two-dimensional wall interference in the adaptive-wall test section of the NASA Ames 2- by 2-Foot Transonic Wind Tunnel: (1) a method for predicting free-air conditions near the walls of the test section (adaptive-wall method); and (2) a method for estimating wall-induced velocities near the model (correction method). Both methods are based on measurements of one or two components of flow velocity near the walls of the test section. Each method is demonstrated using simulated wind tunnel data and is compared with other methods of the same type. The two-component adaptive-wall and correction methods were found to be preferable to the corresponding one-component methods because: (1) they are more sensitive to, and give a more complete description of, wall interference; (2) they require measurements at fewer locations; (3) they can be used to establish free-stream conditions; and (4) they are independent of a description of the model and constants of integration.

  18. Vector wind profile gust model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1981-01-01

    To enable development of a vector wind gust model suitable for orbital flight test operations and trade studies, hypotheses concerning the distributions of gust component variables were verified. Methods are presented for verifying the hypotheses that observed gust variables, including gust component magnitude, gust length, u range, and L range, are gamma distributed. The observed gust modulus is drawn from a bivariate gamma distribution that can be approximated with a Weibull distribution. Zonal and meridional gust components are bivariate gamma distributed. An analytical method for testing for bivariate gamma distributed variables is presented. Two distributions for gust modulus are described, and the results of extensive hypothesis testing of one of the distributions are presented. The validity of the gamma distribution for representation of gust component variables is established.
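
    A minimal sketch of the gamma-distribution step, using synthetic data and a simple method-of-moments fit (mean = k*theta, variance = k*theta^2) rather than whatever estimation procedure the report used:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "gust magnitude" sample standing in for an observed gust
# variable (assumed data, not the report's measurements).
sample = rng.gamma(shape=2.0, scale=3.0, size=2000)

# Method-of-moments gamma fit: mean = k*theta, variance = k*theta**2.
mean, var = sample.mean(), sample.var()
theta_hat = var / mean       # scale estimate
k_hat = mean ** 2 / var      # shape estimate
print(f"shape k ~ {k_hat:.2f}, scale theta ~ {theta_hat:.2f}")
```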

  19. A new class of high-G and long-duration shock testing machines

    NASA Astrophysics Data System (ADS)

    Rastegar, Jahangir

    2018-03-01

    Currently available methods and systems for testing components for survival and performance under shock loading suffer from several shortcomings when used to simulate high-G acceleration events of relatively long duration. Such events include most munitions firing and target impact, vehicular accidents, drops from relatively large heights, air drops, impacts between machine components, and other similar events. In this paper, a new class of shock testing machines is presented that can subject components under test to high-G acceleration pulses of prescribed amplitudes and relatively long durations. The machines provide for highly repeatable testing of components. The components are mounted on an open platform for ease of instrumentation and video recording of their dynamic behavior during shock loading tests.

  20. The t-CWT: a new ERP detection and quantification method based on the continuous wavelet transform and Student's t-statistics.

    PubMed

    Bostanov, Vladimir; Kotchoubey, Boris

    2006-12-01

    This study was aimed at developing a method for extraction and assessment of event-related brain potentials (ERP) from single-trials. This method should be applicable in the assessment of single persons' ERPs and should be able to handle both single ERP components and whole waveforms. We adopted a recently developed ERP feature extraction method, the t-CWT, for the purposes of hypothesis testing in the statistical assessment of ERPs. The t-CWT is based on the continuous wavelet transform (CWT) and Student's t-statistics. The method was tested in two ERP paradigms, oddball and semantic priming, by assessing individual-participant data on a single-trial basis, and testing the significance of selected ERP components, P300 and N400, as well as of whole ERP waveforms. The t-CWT was also compared to other univariate and multivariate ERP assessment methods: peak picking, area computation, discrete wavelet transform (DWT) and principal component analysis (PCA). The t-CWT produced better results than all of the other assessment methods it was compared with. The t-CWT can be used as a reliable and powerful method for ERP-component detection and testing of statistical hypotheses concerning both single ERP components and whole waveforms extracted from either single persons' or group data. The t-CWT is the first such method based explicitly on the criteria of maximal statistical difference between two average ERPs in the time-frequency domain and is particularly suitable for ERP assessment of individual data (e.g. in clinical settings), but also for the investigation of small and/or novel ERP effects from group data.
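
    A simplified sketch of the t-CWT idea, point-wise two-sample t-statistics on continuous wavelet coefficients, using simulated trials and a hand-rolled Morlet transform. This illustrates the principle only; the simulated signals, scales, and wavelet are assumptions, not the authors' full algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-trial "ERPs" for two conditions; condition A has a brief
# positive deflection near 0.5 s as a stand-in for an ERP component.
n_trials, n_samples, fs = 40, 256, 256.0
t = np.arange(n_samples) / fs
bump = 2.0 * np.exp(-((t - 0.5) ** 2) / (2 * 0.02 ** 2))
cond_a = rng.normal(size=(n_trials, n_samples)) + bump
cond_b = rng.normal(size=(n_trials, n_samples))

def morlet_cwt(x, scales, w0=6.0):
    # Minimal real Morlet CWT by direct convolution (kernel ~ +/-3 scales).
    rows = []
    for s in scales:
        u = np.arange(-3 * s, 3 * s + 1) / s
        psi = np.cos(w0 * u) * np.exp(-u ** 2 / 2) / np.sqrt(s)
        rows.append(np.convolve(x, psi, mode="same"))
    return np.array(rows)

scales = [4, 8, 16, 32]
cwt_a = np.array([morlet_cwt(x, scales) for x in cond_a])
cwt_b = np.array([morlet_cwt(x, scales) for x in cond_b])

# Point-wise two-sample t-statistic over the time-frequency plane; the
# t-CWT then looks for extrema of this map.
ma, mb = cwt_a.mean(axis=0), cwt_b.mean(axis=0)
va, vb = cwt_a.var(axis=0, ddof=1), cwt_b.var(axis=0, ddof=1)
t_map = (ma - mb) / np.sqrt(va / n_trials + vb / n_trials)

peak_scale, peak_time = np.unravel_index(np.abs(t_map).argmax(), t_map.shape)
print(f"peak |t| = {np.abs(t_map).max():.1f} at t = {t[peak_time]:.2f} s")
```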

  1. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
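
    As a rough sketch of this kind of analysis: a Poisson failure rate is failures per unit test time, and two groups can be compared through the log rate ratio. The counts below are assumed, and normal-approximation intervals are used for brevity rather than whatever exact procedures the report derives.

```python
import math

# Hypothetical data for two groups of components under thermal-vacuum test.
fail_a, hours_a = 8, 20000.0
fail_b, hours_b = 3, 18000.0

rate_a = fail_a / hours_a
rate_b = fail_b / hours_b

# Approximate 95% CI for each Poisson rate (normal approximation;
# exact chi-square intervals are preferred for small counts).
z = 1.96
def rate_ci(k, t):
    half = z * math.sqrt(k) / t
    return (max(0.0, k / t - half), k / t + half)

ci_a, ci_b = rate_ci(fail_a, hours_a), rate_ci(fail_b, hours_b)

# Compare the two groups via the log rate ratio.
log_ratio = math.log(rate_a / rate_b)
se = math.sqrt(1 / fail_a + 1 / fail_b)
ci_ratio = (math.exp(log_ratio - z * se), math.exp(log_ratio + z * se))
print(ci_a, ci_b, ci_ratio)
```

    If the rate-ratio interval contains 1, the two groups cannot be distinguished at this confidence level.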

  2. Component-based target recognition inspired by human vision

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2009-05-01

    In contrast with machine vision, humans can recognize an object against a complex background with great flexibility. For example, given the task of finding and circling all cars (with no further information) in a picture, you may build a virtual image in mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates this human recognition process. The component templates (equivalent to the virtual image in mind) of the target (car) are manually decomposed from the target feature image. Meanwhile, the edges of the testing image are extracted by using a difference of Gaussian (DOG) model that simulates the spatiotemporal response of the visual process. A phase correlation matching algorithm is then applied to match the templates with the testing edge image. If all key component templates are matched with the object under examination, the object is recognized as the target. Besides recognition accuracy, we also investigate whether this method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
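
    Phase correlation itself is standard: the normalized cross-power spectrum of two images inverse-transforms to an impulse at their relative shift. A minimal sketch on a toy image pair (not the paper's car templates or DOG edge maps):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy image and a circularly shifted copy standing in for the
# template-vs-edge-image match (assumed setup, not the paper's data).
img = rng.normal(size=(64, 64))
true_shift = (5, 12)
shifted = np.roll(img, true_shift, axis=(0, 1))

# Phase correlation: normalize the cross-power spectrum to unit
# magnitude, then inverse-transform; the peak marks the shift.
F1 = np.fft.fft2(img)
F2 = np.fft.fft2(shifted)
cross_power = np.conj(F1) * F2
cross_power /= np.abs(cross_power) + 1e-12
corr = np.fft.ifft2(cross_power).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print("recovered shift:", peak)
```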

  3. Determination of Parachute Joint Factors using Seam and Joint Testing

    NASA Technical Reports Server (NTRS)

    Mollmann, Catherine

    2015-01-01

    This paper details the methodology for determining the joint factor for all parachute components. This method has been successfully implemented on the Capsule Parachute Assembly System (CPAS) for the NASA Orion crew module for use in determining the margin of safety for each component under peak loads. Also discussed are concepts behind the joint factor and what drives the loss of material strength at joints. The joint factor is defined as a "loss in joint strength...relative to the basic material strength" that occurs when "textiles are connected to each other or to metals." During the CPAS engineering development phase, a conservative joint factor of 0.80 was assumed for each parachute component. In order to refine this factor and eliminate excess conservatism, a seam and joint testing program was implemented as part of the structural validation. This method split each of the parachute structural joints into discrete tensile tests designed to duplicate the loading of each joint. Breaking strength data collected from destructive pull testing were then used to calculate the joint factor in the form of an efficiency. Joint efficiency is the percentage of the base material strength that remains after degradation due to sewing or interaction with other components; it is used interchangeably with joint factor in this paper. Parachute materials vary in type (mainly cord, tape, webbing, and cloth), which requires different test fixtures and joint sample construction methods. This paper defines guidelines for designing and testing samples based on materials and test goals. Using the test methodology and analysis approach detailed in this paper, the minimum joint factor for each parachute component can be formulated. The joint factors can then be used to calculate the design factor and margin of safety for that component, a critical part of the design verification process.
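
    The joint-efficiency bookkeeping can be sketched in a few lines: efficiency is the fraction of base material strength retained at the joint, and one common margin-of-safety form divides the allowable by the factored limit load. All values below are made up, not CPAS data:

```python
# Hypothetical strengths illustrating the joint-efficiency calculation.
base_strength = 1000.0                        # lbf, rated base webbing strength
joint_breaks = [842.0, 861.0, 855.0, 838.0]   # lbf, destructive pull tests

# Joint efficiency (joint factor): fraction of base strength retained,
# taken conservatively from the weakest test article.
joint_factor = min(joint_breaks) / base_strength

# Margin of safety against a peak limit load with a design factor
# (one common form: MS = allowable / (DF * load) - 1).
design_factor = 1.6
limit_load = 400.0  # lbf
allowable = base_strength * joint_factor
margin = allowable / (design_factor * limit_load) - 1.0
print(f"joint factor {joint_factor:.3f}, margin of safety {margin:+.3f}")
```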

  4. Improvements in High Speed, High Resolution Dynamic Digital Image Correlation for Experimental Evaluation of Composite Drive System Components

    NASA Technical Reports Server (NTRS)

    Kohlman, Lee W.; Ruggeri, Charles R.; Roberts, Gary D.; Handschuh, Robert Frederick

    2013-01-01

    Composite materials have the potential to reduce the weight of rotating drive system components. However, these components are more complex to design and evaluate than static structural components in part because of limited ability to acquire deformation and failure initiation data during dynamic tests. Digital image correlation (DIC) methods have been developed to provide precise measurements of deformation and failure initiation for material test coupons and for structures under quasi-static loading. Attempts to use the same methods for rotating components (presented at the AHS International 68th Annual Forum in 2012) are limited by high speed camera resolution, image blur, and heating of the structure by high intensity lighting. Several improvements have been made to the system resulting in higher spatial resolution, decreased image noise, and elimination of heating effects. These improvements include the use of a high intensity synchronous microsecond pulsed LED lighting system, different lenses, and changes in camera configuration. With these improvements, deformation measurements can be made during rotating component tests with resolution comparable to that which can be achieved in static tests.

  6. [Testing method research for key performance indicator of imaging acousto-optic tunable filter (AOTF)].

    PubMed

    Hu, Shan-Zhou; Chen, Fen-Fei; Zeng, Li-Bo; Wu, Qiong-Shui

    2013-01-01

    Imaging AOTF is an important optical filter component for new spectral imaging instruments developed in recent years. The principle of the imaging AOTF component was demonstrated, and a set of testing methods for key performance indicators was studied, covering diffraction efficiency, wavelength shift with temperature, spatial homogeneity of diffraction efficiency, and image shift.

  7. Vibration Testing of Electrical Cables to Quantify Loads at Tie-Down Locations

    NASA Technical Reports Server (NTRS)

    Dutson, Joseph D.

    2013-01-01

    The standard method for defining static equivalent structural load factors for components is based on Miles' equation. Unless test data are available, 5% critical damping is assumed for all components when calculating loads. Application of this method to electrical cable tie-down hardware often results in high loads, which can exceed the capability of typical tie-down options such as cable ties and P-clamps. Random vibration testing of electrical cables was used to better understand the factors that influence component loads: natural frequency, damping, and mass participation. An initial round of vibration testing successfully identified variables of interest, checked out the test fixture and instrumentation, and provided justification for removing some conservatism in the standard method. Additional testing is planned that will include a larger range of cable sizes for the most significant contributors to load as variables to further refine loads at cable tie-down points. Completed testing has provided justification to reduce loads at cable tie-downs by 45%, with additional refinement based on measured cable natural frequencies.
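
    Miles' equation itself is compact: for a single-degree-of-freedom system under a flat random-vibration input, G_rms = sqrt((pi/2) * f_n * Q * ASD). A sketch with illustrative numbers (5% critical damping gives Q = 10; the frequency and input level below are assumed):

```python
import math

# Miles' equation for the RMS acceleration response of a single-DOF
# system under a flat random-vibration input (illustrative values).
f_n = 120.0   # natural frequency, Hz
Q = 10.0      # amplification; Q = 1/(2*zeta), zeta = 0.05 -> Q = 10
asd = 0.04    # input acceleration spectral density at f_n, g^2/Hz

g_rms = math.sqrt((math.pi / 2.0) * f_n * Q * asd)

# A common static-equivalent load factor is the 3-sigma response.
load_factor_3sigma = 3.0 * g_rms
print(f"G_rms = {g_rms:.2f} g, 3-sigma load = {load_factor_3sigma:.2f} g")
```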

  8. Method for Reducing Pumping Damage to Blood

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor); Akkerman, James W. (Inventor); Aber, Gregory S. (Inventor); VanDamm, George Arthur (Inventor); Bacak, James W. (Inventor); Svejkovsky, Robert J. (Inventor); Benkowski, Robert J. (Inventor)

    1997-01-01

    Methods are provided for minimizing damage to blood in a blood pump wherein the blood pump comprises a plurality of pump components that may affect blood damage such as clearance between pump blades and housing, number of impeller blades, rounded or flat blade edges, variations in entrance angles of blades, impeller length, and the like. The process comprises selecting a plurality of pump components believed to affect blood damage such as those listed herein before. Construction variations for each of the plurality of pump components are then selected. The pump components and variations are preferably listed in a matrix for easy visual comparison of test results. Blood is circulated through a pump configuration to test each variation of each pump component. After each test, total blood damage is determined for the blood pump. Preferably each pump component variation is tested at least three times to provide statistical results and check consistency of results. The least hemolytic variation for each pump component is preferably selected as an optimized component. If no statistical difference as to blood damage is produced for a variation of a pump component, then the variation that provides preferred hydrodynamic performance is selected. To compare the variation of pump components such as impeller and stator blade geometries, the preferred embodiment of the invention uses a stereolithography technique for realizing complex shapes within a short time period.
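
    The selection matrix described in this patent can be sketched as picking, per pump component, the construction variation with the lowest mean blood damage across repeated tests. The component names and hemolysis values below are hypothetical:

```python
import statistics

# Hypothetical normalized hemolysis results: each pump component has
# construction variations, each tested in three runs (all numbers are
# made up for illustration).
results = {
    "impeller_blades": {"3_blades": [0.012, 0.011, 0.013],
                        "4_blades": [0.009, 0.010, 0.009]},
    "blade_edges": {"flat": [0.015, 0.014, 0.016],
                    "rounded": [0.008, 0.009, 0.008]},
}

# Select the least hemolytic variation for each component.
optimized = {
    component: min(variations, key=lambda v: statistics.mean(variations[v]))
    for component, variations in results.items()
}
print(optimized)
```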

  9. Life Prediction/Reliability Data of Glass-Ceramic Material Determined for Radome Applications

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.

    2002-01-01

    Brittle ceramic materials are candidates for a variety of structural applications over a wide range of temperatures. However, the process of slow crack growth, occurring in any loading configuration, limits the service life of structural components. Therefore, it is important to accurately determine the slow crack growth parameters required for component life prediction using an appropriate test methodology. This test methodology should also be useful in determining the influence of component processing and composition variables on the slow crack growth behavior of newly developed or existing materials, thereby allowing the component processing and composition to be tailored and optimized to specific needs. Through the American Society for Testing and Materials (ASTM), the authors recently developed two test methods to determine the life prediction parameters of ceramics. The two test standards, ASTM C 1368 for room temperature and ASTM C 1465 for elevated temperatures, were published in the 2001 Annual Book of ASTM Standards, Vol. 15.01. Briefly, the test method employs constant stress-rate (or dynamic fatigue) testing to determine flexural strengths as a function of the applied stress rate. The merit of this test method lies in its simplicity: strengths are measured in a routine manner in flexure at four or more applied stress rates with an appropriate number of test specimens at each applied stress rate. The slow crack growth parameters necessary for life prediction are then determined from a simple relationship between the strength and the applied stress rate. Extensive life prediction testing was conducted at the NASA Glenn Research Center using the developed ASTM C 1368 test method to determine the life prediction parameters of a glass-ceramic material that the Navy will use for radome applications.

  10. Novel method for detecting the hadronic component of extensive air showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gromushkin, D. M., E-mail: DMGromushkin@mephi.ru; Volchenko, V. I.; Petrukhin, A. A.

    2015-05-15

    A novel method for studying the hadronic component of extensive air showers (EAS) is proposed. The method is based on recording thermal neutrons accompanying EAS with en-detectors that are sensitive to two EAS components: an electromagnetic (e) component and a hadron component in the form of neutrons (n). In contrast to hadron calorimeters used in some arrays, the proposed method makes it possible to record the hadronic component over the whole area of the array. The efficiency of a prototype array that consists of 32 en-detectors was tested for a long time, and some parameters of the neutron EAS component were determined.

  11. Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration

    NASA Technical Reports Server (NTRS)

    Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.

    1993-01-01

    Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.

  12. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    Summary: We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801

  13. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.

  14. Testing procedures for carbon fiber reinforced plastic components

    NASA Technical Reports Server (NTRS)

    Gosse, H. J.; Kaitatzidi, M.; Roth, S.

    1977-01-01

    Tests for studying the basic material are considered and quality control investigations involving preimpregnated materials (prepreg) are discussed. Attention is given to the prepreg area weight, the fiber area weight of prepregs, the resin content, volatile components, the effective thickness, resin flow, the resistance to bending strain, tensile strength, and shear strength. A description of tests conducted during the manufacturing process is also presented, taking into account X-ray methods, approaches of neutron radiography, ultrasonic procedures, resonance methods and impedance studies.

  15. Segments from red blood cell units should not be used for quality testing.

    PubMed

    Kurach, Jayme D R; Hansen, Adele L; Turner, Tracey R; Jenkins, Craig; Acker, Jason P

    2014-02-01

    Nondestructive testing of blood components could permit in-process quality control and reduce discards. Tubing segments, generated during red blood cell (RBC) component production, were tested to determine their suitability as a sample source for quality testing. Leukoreduced RBC components were produced from whole blood (WB) by two different methods: WB filtration and buffy coat (BC). Components and their corresponding segments were tested on Days 5 and 42 of hypothermic storage (HS) for spun hematocrit (Hct), hemoglobin (Hb) content, percentage hemolysis, hematologic indices, and adenosine triphosphate concentration to determine whether segment quality represents unit quality. Segment samples overestimated hemolysis on Days 5 and 42 of HS in both BC- and WB filtration-produced RBCs (p < 0.001 for all). Hct and Hb levels in the segments were also significantly different from the units at both time points for both production methods (p < 0.001 for all). Indeed, for all variables tested different results were obtained from segment and unit samples, and these differences were not consistent across production methods. The quality of samples from tubing segments is not representative of the quality of the corresponding RBC unit. Segments are not suitable surrogates with which to assess RBC quality. © 2013 American Association of Blood Banks.

  16. Re-Tooling the Agency's Engineering Predictive Practices for Durability and Damage Tolerance

    NASA Technical Reports Server (NTRS)

    Piascik, Robert S.; Knight, Norman F., Jr.

    2017-01-01

    Over the past decade, the Agency has placed less emphasis on testing and has increasingly relied on computational methods to assess durability and damage tolerance (D&DT) behavior when evaluating design margins for fracture-critical components. With increased emphasis on computational D&DT methods as the standard practice, it is paramount that capabilities of these methods are understood, the methods are used within their technical limits, and validation by well-designed tests confirms understanding. The D&DT performance of a component is highly dependent on parameters in the neighborhood of the damage. This report discusses D&DT method vulnerabilities.

  17. Proposal of a calculation method to determine the structural components' contribution on the deceleration of a passenger compartment based on the energy-derivative method.

    PubMed

    Nagasaka, Kei; Mizuno, Koji; Ito, Daisuke; Saida, Naoya

    2017-05-29

    In car crashes, the passenger compartment deceleration significantly influences the occupant loading. Hence, it is important to consider how each structural component deforms in order to control the passenger compartment deceleration. In frontal impact tests, the passenger compartment deceleration depends on the energy absorption properties of the front structures. However, few papers to date quantify the components' contributions to the passenger compartment deceleration. Generally, the cross-sectional force is used to examine each component's contribution. However, it is difficult to determine each component's contribution from cross-sectional forces, especially within segments of an individual member such as a front rail, because the force is transmitted continuously and the cross-sectional forces remain the same through the component. The deceleration of a particle can be determined from the derivative of its kinetic energy. Using this energy-derivative method, the contribution of each component to the passenger compartment deceleration can be determined. Using finite element (FE) car models, this method was applied to full-width and offset impact tests. It was also applied to evaluate the deceleration of the powertrain. The finite impulse response (FIR) coefficient relating the vehicle deceleration (input) to the driver chest deceleration (output) was calculated from Japan New Car Assessment Program (JNCAP) tests. These coefficients were applied to the components' contributions to the vehicle deceleration in the FE analysis, and each component's contribution to the deceleration of the driver's chest was determined. The sum of the contributions of the components coincides with the passenger compartment deceleration in all types of impacts; therefore, the validity of the method was confirmed.
In the full-width impact, the contribution of the crush box was large in the initial phases, and the contribution of the passenger compartment was large in the final phases. For the powertrain deceleration, the crush box had a positive contribution and the passenger compartment had a negative contribution. In the offset test, the contribution of the honeycomb and the passenger compartment deformation to the passenger compartment deceleration was large. Based on the FIR analysis, the passenger compartment deformation contributed the most to the chest deceleration of the driver dummy in the full-width impact. Based on the energy-derivative method, the contribution of the components' deformation to deceleration of the passenger compartment can be calculated for various types of crash configurations more easily, directly, and quantitatively than by using conventional methods. In addition, by combining the energy-derivative method and FIR, each structure's contribution to the occupant deceleration can be obtained. The energy-derivative method is useful in investigating how the deceleration develops from component deformations and also in designing deceleration curves for various impact configurations.
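
    The core idea of the energy-derivative method (the deceleration attributable to a component follows from the time derivative of the energy it absorbs, since dE/dt = m*v*a) can be sketched as below. The discretization and names are illustrative, not taken from the paper:

```python
def component_decelerations(times, velocity, energies, mass):
    """Energy-derivative method (sketch): the deceleration attributable to
    component i is a_i = (dE_i/dt) / (mass * v), where E_i is the energy
    absorbed by component i and v is the compartment velocity. When the
    E_i account for all absorbed energy, the a_i sum to the total
    deceleration. Uses simple finite differences over the time history."""
    n = len(times)
    contrib = {name: [] for name in energies}
    for k in range(1, n):
        dt = times[k] - times[k - 1]
        v = 0.5 * (velocity[k] + velocity[k - 1])  # midpoint velocity
        for name, e in energies.items():
            de = e[k] - e[k - 1]
            contrib[name].append((de / dt) / (mass * v) if v else 0.0)
    return contrib
```

    As a sanity check, if a 1 kg particle decelerates uniformly from 10 m/s and a single component absorbs all of its kinetic energy, the method attributes the full 1 m/s^2 deceleration to that component at every step.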

  18. 7 CFR 3201.8 - Determining life cycle costs, environmental and health benefits, and performance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Test Method for Determining Aerobic Biodegradation of Plastic Materials Under Controlled Composting Conditions”; (2) D5864 “Standard Test Method for Determining the Aerobic Aquatic Biodegradation of Lubricants... “Standard Test Method for Determining the Aerobic Aquatic Biodegradation of Lubricants or Their Components...

  19. 7 CFR 3201.8 - Determining life cycle costs, environmental and health benefits, and performance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Test Method for Determining Aerobic Biodegradation of Plastic Materials Under Controlled Composting Conditions”; (2) D5864 “Standard Test Method for Determining the Aerobic Aquatic Biodegradation of Lubricants... “Standard Test Method for Determining the Aerobic Aquatic Biodegradation of Lubricants or Their Components...

  20. 7 CFR 3201.8 - Determining life cycle costs, environmental and health benefits, and performance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Test Method for Determining Aerobic Biodegradation of Plastic Materials Under Controlled Composting Conditions”; (2) D5864 “Standard Test Method for Determining the Aerobic Aquatic Biodegradation of Lubricants... “Standard Test Method for Determining the Aerobic Aquatic Biodegradation of Lubricants or Their Components...

  1. Implementation of Leak Test Methods for the International Space Station (ISS) Elements, Systems and Components

    NASA Technical Reports Server (NTRS)

    Underwood, Steve; Lvovsky, Oleg

    2007-01-01

    The International Space Station (ISS) Qualification and Acceptance Environmental Test Requirements document, SSP 41172, includes many environmental tests, such as thermal vacuum and cycling, depress/repress, sinusoidal, random, and acoustic vibration, pyro shock, acceleration, humidity, pressure, and Electromagnetic Interference (EMI)/Electromagnetic Compatibility (EMC). This document also includes 13 leak test methods for pressure integrity verification of the ISS elements, systems, and components. These leak test methods are well known; however, the test procedure for a specific leak test method shall be written and implemented paying attention to the important procedural steps/details that, if omitted or deviated from, could impact the quality of the final product and affect crew safety. Such procedural steps/details for the different methods include, but are not limited to: - Sequence of testing, for example, pressurization and submersion steps for Method I (Immersion); - Stabilization of the mass spectrometer leak detector outputs for Method II (Vacuum Chamber or Bell Jar); - Proper data processing and taking a conservative approach while making predictions of on-orbit leakage rate for Method III (Pressure Change); - Proper calibration of the mass spectrometer leak detector for all the tracer gas (mostly helium) methods, such as Method V (Detector Probe), Method VI (Hood), Method VII (Tracer Probe), and Method VIII (Accumulation); - Usage of visibility aids for Method I (Immersion), Method IV (Chemical Indicator), Method XII (Foam/Liquid Application), and Method XIII (Hydrostatic/Visual Inspection). While some methods can be used to verify the total leakage (either internal-to-external or external-to-internal) rate requirement (Vacuum Chamber, Pressure Decay, Hood, Accumulation), other methods shall be used only as a pass/fail test for individual joints (e.g., welds, fittings, and plugs) or for troubleshooting purposes (Chemical Indicator, Detector Probe, Tracer Probe, Local Vacuum Chamber, Foam/Liquid Application, and Hydrostatic/Visual Inspection). Deviations from SSP 41172 requirements have led to either retesting of hardware or accepting a risk associated with a potential system or component pressure integrity problem during flight.

  2. Vibroacoustic test plan evaluation: Parameter variation study

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloef, H. R.

    1976-01-01

    Statistical decision models are shown to provide a viable method of evaluating the cost effectiveness of alternate vibroacoustic test plans and the associated test levels. The methodology developed provides a major step toward a realistic tool for quantitatively tailoring test programs to specific payloads. Testing is considered at the no-test, component, subassembly, or system level of assembly. Component redundancy and partial loss of flight data are considered. Costs are treated probabilistically, and incipient failures resulting from ground tests are accounted for. Optimums defining both component and assembly test levels are indicated for the modified test plans considered. Modeling simplifications must be considered when interpreting the results relative to a particular payload. New parameters introduced were a no-test option, flight-by-flight failure probabilities, and a cost to design components for higher vibration requirements. Parameters varied were the Shuttle payload bay internal acoustic environment, the STS launch cost, the component retest/repair cost, and the amount of redundancy in the housekeeping section of the payload reliability model.

  3. Comparison of batch and column tests for the elution of artificial turf system components.

    PubMed

    Krüger, O; Kalbe, U; Berger, W; Nordhauß, K; Christoph, G; Walzel, H-P

    2012-12-18

    Synthetic athletic tracks and turf areas for outdoor sporting grounds may release contaminants due to the chemical composition of some components. A primary example is zinc from reused scrap tires (main constituent: styrene butadiene rubber, SBR), which might be harmful to the environment. Thus, methods for the risk assessment of those materials are required. Laboratory leaching methods such as batch and column tests are widely used to examine the soil-groundwater pathway. We tested several components for artificial sporting grounds with batch tests at a liquid-to-solid (LS) ratio of 2 L/kg and column tests up to an LS ratio of 26.5 L/kg. We found higher zinc release in the batch test eluates for all granules, ranging from 15% to 687% above the column test data for SBR granules. Accompanying parameters, especially the very high turbidity of some ethylene propylene diene monomer rubber (EPDM) and thermoplastic elastomer (TPE) eluates, reflect the stronger mechanical stress of batch testing. This indicates that batch test procedures might not be suitable for the risk assessment of synthetic sporting ground components. Column tests, on the other hand, represent field conditions more closely and allow for determination of time-dependent contaminant release.

  4. Construction and component testing of TAMU3, a 14 Tesla stress-managed Nb3Sn model dipole

    NASA Astrophysics Data System (ADS)

    Holik, Eddie Frank, III; Benson, Chris; Blackburn, Raymond; Diaczenko, Nick; Elliott, Timothy; Jaisle, Andrew; McInturff, A.; McIntyre, P.; Sattarov, Akhdiyor

    2012-06-01

    We report the construction and testing of components of TAMU3, a 14 Tesla Nb3Sn block-coil dipole. A primary goal in developing this model dipole is to test a method of stress management in which Lorentz stress is intercepted within the coil assembly and bypassed so that it cannot accumulate to a level that would cause strain degradation in the superconducting windings. Details of the fabrication, tooling, and results of construction and magnet component testing will be presented.

  5. Improved Measurement of Dispersion in an Optical Fiber

    NASA Technical Reports Server (NTRS)

    Huang, Shouhua; Le, Thanh; Maleki, Lute

    2004-01-01

    An improved method of measuring chromatic dispersion in an optical fiber or other device affords a lower limit of measurable dispersion (relative to prior such methods). This method is a modified version of the amplitude-modulation (AM) method, which is one of the prior methods. In comparison with the other prior methods, the AM method is less complex. However, the AM method is limited to dispersion levels ≥160 ps/nm and cannot be used to measure the sign of the dispersion. In contrast, the present modified version of the AM method can measure the sign of the dispersion and affords a measurement range from about 2 ps/nm to several thousand ps/nm with a resolution of 0.27 ps/nm or finer. The figure schematically depicts the measurement apparatus. The source of light for the measurement is a laser, the wavelength of which is monitored by an optical spectrum analyzer. A light-component analyzer amplitude-modulates the light with a scanning radio-frequency signal. The modulated light is passed through a buffer (described below) and through the device under test (e.g., an optical fiber, the dispersion of which one seeks to measure), then back to the light-component analyzer for spectrum analysis. Dispersion in the device under test gives rise to phase shifts among the carrier and the upper and lower sideband components of the modulated signal. These phase shifts affect the modulation-frequency component of the output of a photodetector exposed to the signal that emerges from the device under test. One of the effects is that this component goes to zero periodically as the modulation frequency is varied.
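
    The periodic nulls mentioned at the end of the abstract follow from the textbook dispersion-induced fading relation, in which the detected modulation-frequency power varies as cos^2(pi*D*lambda^2*f^2/c) for total dispersion D. A minimal sketch (the standard relation, not the paper's modified variant) locating the first null:

```python
import math

def first_null_frequency_hz(total_dispersion_ps_per_nm, wavelength_nm, c=3.0e8):
    """First modulation frequency at which the detected RF component
    vanishes: cos^2(pi*D*lam^2*f^2/c) = 0 when D*lam^2*f^2/c = 1/2,
    so f = sqrt(c / (2*D*lam^2)). D is total dispersion of the device
    under test; units are converted to SI internally."""
    d = total_dispersion_ps_per_nm * 1e-3   # 1 ps/nm = 1e-12 s / 1e-9 m
    lam = wavelength_nm * 1e-9
    return math.sqrt(c / (2.0 * d * lam ** 2))
```

    For 160 ps/nm at 1550 nm this gives a first null near 20 GHz, consistent with the quoted ~160 ps/nm floor of the basic AM method when the modulation sweep is limited to a few tens of GHz.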

  6. A novel dynamic mechanical testing technique for reverse shoulder replacements.

    PubMed

    Dabirrahmani, Danè; Bokor, Desmond; Appleyard, Richard

    2014-04-01

    In vitro mechanical testing of orthopedic implants provides information regarding their mechanical performance under simulated biomechanical conditions. Current in vitro component stability testing methods for reverse shoulder implants are based on anatomical shoulder designs, which do not capture the dynamic nature of the joint loads. With glenoid component loosening one of the most prevalent modes of failure in reverse shoulder replacements, it is important to establish a testing protocol with a more realistic loading regime. This paper introduces a novel method of mechanically testing reverse shoulder implants using more realistic load magnitudes and vectors than is currently practiced. Using a custom-made jig setup within an Instron mechanical testing system, it is possible to simulate the change in magnitude and direction of the joint load during arm abduction. This method is a step towards a more realistic testing protocol for measuring reverse shoulder implant stability.

  7. Sensitivity test of derivative matrix isopotential synchronous fluorimetry and least squares fitting methods.

    PubMed

    Makkai, Géza; Buzády, Andrea; Erostyák, János

    2010-01-01

    Determination of the concentrations of spectrally overlapping compounds presents special difficulties. Several methods are available to calculate the constituents' concentrations in moderately complex mixtures. A method that can provide information about spectrally hidden components in mixtures is very useful. Two methods powerful in resolving spectral components are compared in this paper. The first method tested is Derivative Matrix Isopotential Synchronous Fluorimetry (DMISF). It is based on derivative analysis of MISF spectra, which are constructed using isopotential trajectories in the Excitation-Emission Matrix (EEM) of the background solution. For the DMISF method, a mathematical routine fitting the 3D data of EEMs was developed. The other method tested uses a classical Least Squares Fitting (LSF) algorithm, wherein Rayleigh- and Raman-scattering bands may lead to complications. Both methods give excellent sensitivity, and each has advantages over the other. Detection limits of DMISF and LSF have been determined at very different concentration and noise levels.

  8. A review of polymer electrolyte membrane fuel cell durability test protocols

    NASA Astrophysics Data System (ADS)

    Yuan, Xiao-Zi; Li, Hui; Zhang, Shengsheng; Martin, Jonathan; Wang, Haijiang

    Durability is one of the major barriers to polymer electrolyte membrane fuel cells (PEMFCs) being accepted as a commercially viable product. It is therefore important to understand their degradation phenomena and analyze degradation mechanisms from the component level to the cell and stack level so that novel component materials can be developed and novel designs for cells/stacks can be achieved to mitigate insufficient fuel cell durability. It is generally impractical and costly to operate a fuel cell under its normal conditions for several thousand hours, so accelerated test methods are preferred to facilitate rapid learning about key durability issues. Based on the US Department of Energy (DOE) and US Fuel Cell Council (USFCC) accelerated test protocols, as well as degradation tests performed by researchers and published in the literature, we review degradation test protocols at both component and cell/stack levels (driving cycles), aiming to gather the available information on accelerated test methods and degradation test protocols for PEMFCs, and thereby provide practitioners with a useful toolbox to study durability issues. These protocols help prevent the prolonged test periods and high costs associated with real lifetime tests, assess the performance and durability of PEMFC components, and ensure that the generated data can be compared.

  9. Quantification and Statistical Analysis Methods for Vessel Wall Components from Stained Images with Masson's Trichrome

    PubMed Central

    Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco

    2016-01-01

    Purpose: To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods: The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of the observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results: The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson’s trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion: The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
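
    The permutation-test idea described in the Methods section (resampling without replacement from the observed data to build the sampling distribution of an estimator) can be illustrated with a generic two-sample version; the function name and the difference-of-means statistic are illustrative, not the paper's exact procedure:

```python
import random

def permutation_test(sample_a, sample_b, n_resamples=10000, seed=0):
    """Two-sample permutation test on the difference of means: pool the
    observations, repeatedly reassign them to the two groups without
    replacement, and count how often the permuted statistic is at least
    as extreme as the observed one (two-sided). Returns an add-one
    adjusted p-value so it is never exactly zero."""
    rng = random.Random(seed)
    observed = abs(sum(sample_a) / len(sample_a) - sum(sample_b) / len(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_resamples + 1)
```

    Because each resample is independent, the loop parallelizes trivially across cores, which is the speedup the authors exploit.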

  10. Testing variance components by two jackknife methods

    USDA-ARS?s Scientific Manuscript database

    The jackknife method, a resampling technique, has been widely used for statistical tests for years. The pseudo-value-based jackknife method (defined as the pseudo jackknife method) is commonly used to reduce the bias of an estimate; however, sometimes it can result in large variation for an estimate a...
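
    The pseudo-value jackknife mentioned in this (truncated) abstract can be sketched generically; the estimator and data below are illustrative:

```python
def jackknife_pseudovalues(data, estimator):
    """Pseudo-value jackknife: for each leave-one-out subsample compute
    p_i = n*theta(all) - (n-1)*theta(without i). The mean of the pseudo
    values is the bias-corrected estimate."""
    n = len(data)
    theta_all = estimator(data)
    return [n * theta_all - (n - 1) * estimator(data[:i] + data[i + 1:])
            for i in range(n)]

def jackknife_estimate(data, estimator):
    """Bias-corrected estimate and its jackknife variance (the sample
    variance of the pseudo values divided by n), whose ratio supports an
    approximate t test of the parameter."""
    pseudo = jackknife_pseudovalues(data, estimator)
    n = len(pseudo)
    mean = sum(pseudo) / n
    var = sum((p - mean) ** 2 for p in pseudo) / (n - 1) / n
    return mean, var
```

    For the sample mean the pseudo values reduce to the observations themselves, so the jackknife variance matches the usual variance of the mean; for biased estimators (such as variance components) the pseudo values differ from the data, which is where the bias correction, and the extra variation the abstract warns about, comes from.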

  11. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    Methods for the development of logic design together with algorithms for failure testing, a method for the design of logic for ultra-large-scale integration, an extension of a quantum calculus to describe the functional behavior of a mechanism component-by-component and to compute tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output 2-level minimization problem are discussed.

  12. The influence of various test plans on mission reliability. [for Shuttle Spacelab payloads

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    Methods have been developed for the evaluation of cost effective vibroacoustic test plans for Shuttle Spacelab payloads. The shock and vibration environments of components have been statistically represented, and statistical decision theory has been used to evaluate the cost effectiveness of five basic test plans with structural test options for two of the plans. Component, subassembly, and payload testing have been performed for each plan along with calculations of optimum test levels and expected costs. The tests have been ranked according to both minimizing expected project costs and vibroacoustic reliability. It was found that optimum costs may vary up to $6 million with the lowest plan eliminating component testing and maintaining flight vibration reliability via subassembly tests at high acoustic levels.

  13. Modified cleaning method for biomineralized components

    NASA Astrophysics Data System (ADS)

    Tsutsui, Hideto; Jordan, Richard W.

    2018-02-01

    The extraction and concentration of biomineralized components from sediment or living materials is time consuming and laborious and often involves steps that remove either the calcareous or siliceous part, in addition to organic matter. However, a relatively quick and easy method using a commercial cleaning fluid for kitchen drains, sometimes combined with a kerosene soaking step, can produce remarkable results. In this study, the method is applied to sediments and living materials bearing calcareous (e.g., coccoliths, foraminiferal tests, holothurian ossicles, ichthyoliths, and fish otoliths) and siliceous (e.g., diatom valves, silicoflagellate skeletons, and sponge spicules) components. The method preserves both components in the same sample, without etching or partial dissolution, but is not applicable to unmineralized components such as dinoflagellate thecae, tintinnid loricae, pollen, or plant fragments.

  14. Sand and Dust Testing of Wheeled and Tracked Vehicles and Stationary Equipment

    DTIC Science & Technology

    2009-11-18

    early stages of development of an automotive system or stationary equipment there may be components and sub-systems that need to be tested for...suitability to operate in dust before final prototypes are available for end item or system-level testing. At this stage of development, component or...Method 510, or TOP 1-2-621 does not include functional operation during active testing. Additionally, testing by using either of these two documents is

  15. Check-Standard Testing Across Multiple Transonic Wind Tunnels with the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard

    2012-01-01

    This paper reports the results of an analysis of wind tunnel data acquired in support of the Facility Analysis Verification & Operational Reliability (FAVOR) project. The analysis uses methods referred to collectively at Langley Research Center as the Modern Design of Experiments (MDOE). These methods quantify the total variance in a sample of wind tunnel data and partition it into explained and unexplained components. The unexplained component is further partitioned into random and systematic components. This analysis was performed on data acquired in similar wind tunnel tests executed in four different U.S. transonic facilities. The measurement environment of each facility was quantified and compared.
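
    The variance partition described in this record can be sketched numerically. The following is an illustrative reconstruction with simulated check-standard data, not the FAVOR analysis itself; the quadratic response model, the three "tunnel entries", and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated check-standard data: 3 tunnel entries (blocks), 10 replicated
# set points each, with a small systematic shift between entries.
x = np.tile(np.linspace(-4.0, 4.0, 10), 3)
block = np.repeat([0, 1, 2], 10)
shift = np.array([0.00, 0.05, -0.04])[block]
y = 0.1 * x + 0.02 * x**2 + shift + rng.normal(0.0, 0.01, x.size)

# Explained component: ordinary least squares fit of a quadratic response model.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

ss_total = np.sum((y - y.mean()) ** 2)
ss_explained = np.sum((X @ beta - y.mean()) ** 2)
ss_unexplained = np.sum(resid ** 2)

# Partition the unexplained variance: between-entry (systematic) vs
# within-entry (random) scatter of the residuals.
ss_systematic = sum(resid[block == b].size * resid[block == b].mean() ** 2
                    for b in range(3))
ss_random = sum(np.sum((resid[block == b] - resid[block == b].mean()) ** 2)
                for b in range(3))
```

    The two identities (total = explained + unexplained; unexplained = systematic + random) hold exactly for a least-squares fit with an intercept, which is what makes this decomposition well defined.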

  16. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM 2.5 or PM 10-2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  17. Scaling Methods for Simulating Aircraft In-Flight Icing Encounters

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Ruff, Gary A.

    1997-01-01

    This paper discusses scaling methods which permit the use of subscale models in icing wind tunnels to simulate natural flight in icing. Natural icing conditions exist when air temperatures are below freezing but cloud water droplets are super-cooled liquid. Aircraft flying through such clouds are susceptible to the accretion of ice on the leading edges of unprotected components such as wings, tailplane and engine inlets. To establish the aerodynamic penalties of such ice accretion and to determine what parts need to be protected from ice accretion (by heating, for example), extensive flight and wind-tunnel testing is necessary for new aircraft and components. Testing in icing tunnels is less expensive than flight testing, is safer, and permits better control of the test conditions. However, because of limitations on both model size and operating conditions in wind tunnels, it is often necessary to perform tests with either size or test conditions scaled. This paper describes the theoretical background to the development of icing scaling methods, discusses four methods, and presents results of tests to validate them.

  18. Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models

    ERIC Educational Resources Information Center

    Williams, Jason; MacKinnon, David P.

    2008-01-01

    Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…

  19. Computed tomography (CT) as a nondestructive test method used for composite helicopter components

    NASA Astrophysics Data System (ADS)

    Oster, Reinhold

    1991-09-01

    The first components of primary helicopter structures to be made of glass fiber reinforced plastics were the main and tail rotor blades of the Bo105 and BK 117 helicopters. These blades are now successfully produced in series. New developments in rotor components, e.g., the rotor blade technology of the Bo108 and PAH2 programs, make use of very complex fiber reinforced structures to achieve simplicity and strength. Computer tomography was found to be an outstanding nondestructive test method for examining the internal structure of components. A CT scanner generates x-ray attenuation measurements which are used to produce computer reconstructed images of any desired part of an object. The system images a range of flaws in composites in a number of views and planes. Several CT investigations and their results are reported taking composite helicopter components as an example.

  1. Testing the performance of pure spectrum resolution from Raman hyperspectral images of differently manufactured pharmaceutical tablets.

    PubMed

    Vajna, Balázs; Farkas, Attila; Pataki, Hajnalka; Zsigmond, Zsolt; Igricz, Tamás; Marosi, György

    2012-01-27

    Chemical imaging is a rapidly emerging analytical method in pharmaceutical technology. Due to the numerous chemometric solutions available, characterization of pharmaceutical samples with unknown components present has also become possible. This study compares the performance of current state-of-the-art curve resolution methods (multivariate curve resolution-alternating least squares, positive matrix factorization, simplex identification via split augmented Lagrangian and self-modelling mixture analysis) in the estimation of pure component spectra from Raman maps of differently manufactured pharmaceutical tablets. The batches of different technologies differ in the homogeneity level of the active ingredient, thus, the curve resolution methods are tested under different conditions. An empirical approach is shown to determine the number of components present in a sample. The chemometric algorithms are compared regarding the number of detected components, the quality of the resolved spectra and the accuracy of scores (spectral concentrations) compared to those calculated with classical least squares, using the true pure component (reference) spectra. It is demonstrated that using appropriate multivariate methods, Raman chemical imaging can be a useful tool in the non-invasive characterization of unknown (e.g. illegal or counterfeit) pharmaceutical products. Copyright © 2011 Elsevier B.V. All rights reserved.
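
    One of the curve resolution methods compared above, multivariate curve resolution-alternating least squares (MCR-ALS), can be sketched in a few lines. This is a minimal non-negativity-constrained ALS on synthetic two-component spectra, not the authors' implementation; the peak positions, noise level, and initialization are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic "Raman map": 200 pixels, 2 pure components, 300 spectral channels.
wn = np.linspace(0.0, 1.0, 300)
S_true = np.vstack([gaussian(wn, 0.3, 0.03), gaussian(wn, 0.7, 0.05)])  # pure spectra
C_true = rng.uniform(0.0, 1.0, (200, 2))                                # concentrations
D = C_true @ S_true + rng.normal(0.0, 0.005, (200, 300))                # mixed data

# MCR-ALS: alternate non-negativity-constrained least squares for C and S.
S = S_true + rng.normal(0.0, 0.05, S_true.shape)  # rough initial spectral estimates
for _ in range(50):
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)

lof = np.linalg.norm(D - C @ S) / np.linalg.norm(D)  # lack of fit
```

    In practice the initial spectral estimates come from the data (e.g. purest-pixel selection) rather than from the known truth as in this toy example.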

  2. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  3. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  4. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  5. Development of dynamic Bayesian models for web application test management

    NASA Astrophysics Data System (ADS)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool for modelling complex stochastic dynamic processes. The results of the research show that the models and methods of dynamic Bayesian networks cover a large share of the stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. A formalized representation of the discrete test process as a dynamic Bayesian model organizes the logical connections between individual test assets across multiple time slices. This approach makes it possible to treat testing as a discrete process with defined structural components responsible for generating test assets. Dynamic Bayesian network-based models also allow individual units and testing components with different functionalities, which directly influence one another, to be combined in one management area during comprehensive testing of various groups of software defects. The proposed models support a consistent approach to formalizing test principles and procedures, methods for treating situational error signs, and methods for drawing analytical conclusions from test results.
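
    The forward (filtering) pass over the unrolled time slices of a dynamic Bayesian network can be sketched as follows. The two-state defect model, its transition and observation probabilities, and the test outcomes are hypothetical, chosen only to illustrate the mechanics:

```python
import numpy as np

# Hypothetical 2-state hidden variable per time slice: 0 = module OK, 1 = defect present.
T = np.array([[0.9, 0.1],    # P(state_t | state_{t-1}): defects occasionally introduced
              [0.2, 0.8]])   # ...and occasionally fixed between test cycles
O = np.array([[0.95, 0.05],  # P(test outcome | state): rows = state, cols = (pass, fail)
              [0.30, 0.70]])

def filter_beliefs(outcomes, prior=np.array([0.5, 0.5])):
    """Forward (filtering) pass over the unrolled time slices."""
    belief = prior.copy()
    history = []
    for obs in outcomes:               # obs: 0 = test passed, 1 = test failed
        belief = belief @ T            # predict the next slice
        belief = belief * O[:, obs]    # weigh by the test evidence
        belief = belief / belief.sum() # normalise
        history.append(belief.copy())
    return history

beliefs = filter_beliefs([0, 1, 1, 1])  # one pass, then three consecutive failures
```

    Repeated failing outcomes drive the posterior probability of "defect present" upward across slices, which is the kind of per-asset inference the record describes.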

  6. Split-cross-bridge resistor for testing for proper fabrication of integrated circuits

    NASA Technical Reports Server (NTRS)

    Buehler, M. G. (Inventor)

    1985-01-01

    An electrical test structure and method are described in which the test structure is fabricated on a large-scale integrated circuit wafer along with the circuit components. It comprises a van der Pauw cross resistor in conjunction with a bridge resistor and a split bridge resistor, the latter having two channels, each one line width wide (corresponding to the line width of the wafer circuit components) and separated by a space equal to the line spacing of the wafer circuit components. Associated voltage and current contact pads, arranged in a two-by-four array, allow currents to be passed conveniently through the test structure and voltages to be measured at appropriate points, so that the sheet resistance, line width, line spacing, and line pitch of the circuit components on the wafer can be determined electrically.
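
    The electrical determinations the structure supports follow from standard relations: the van der Pauw cross gives the sheet resistance, the bridge gives the line width, and the split bridge (two lines in parallel) gives the combined channel width, from which spacing and pitch follow. A sketch with made-up measurement values (the formulas are textbook relations, not taken from the patent text):

```python
import math

def sheet_resistance(v_vdp, i_vdp):
    """van der Pauw cross: R_s = (pi / ln 2) * V / I, in ohms per square."""
    return math.pi / math.log(2) * v_vdp / i_vdp

def conducting_width(r_s, length, v, i):
    """Width of a bridge segment from its resistance: W = R_s * L / (V / I)."""
    return r_s * length / (v / i)

# Illustrative (invented) measurements: 100 uA forced, voltages in volts.
r_s = sheet_resistance(31.0e-3, 100e-6)                  # ~1405 ohm/sq
w_bridge = conducting_width(r_s, 200e-6, 2.81, 100e-6)   # full bridge, L = 200 um
w_split = conducting_width(r_s, 200e-6, 5.62, 100e-6)    # split bridge: both channels

spacing = w_bridge - w_split    # S = W_bridge - 2*W_channel (combined channel width)
pitch = w_split / 2 + spacing   # line pitch = W_channel + S
```

    The split-bridge relation assumes the two channels carry current in parallel and together occupy the bridge width minus the space, consistent with the geometry described in the record.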

  7. Ballistic Resistance of Armored Passenger Vehicles: Test Protocols and Quality Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey M. Lacy; Robert E. Polk

    2005-07-01

    This guide establishes a test methodology for determining the overall ballistic resistance of the passenger compartment of assembled nontactical armored passenger vehicles (APVs). Because ballistic testing of every piece of every component of an armored vehicle is impractical, if not impossible, this guide describes a testing scheme based on statistical sampling of exposed component surface areas. Results from the test of the sampled points are combined to form a test score that reflects the probability of ballistic penetration into the passenger compartment of the vehicle.
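
    The statistical scoring idea can be sketched as a binomial estimate over the sampled aim points. The Wilson confidence interval used here is one reasonable choice, not necessarily the guide's actual scoring rule, and the shot counts are invented:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical trial: 60 sampled aim points on the exposed passenger-compartment
# surface, 2 rounds penetrated.  Score = estimated probability of stopping a round.
n_shots, n_penetrations = 60, 2
stopped = n_shots - n_penetrations
p_hat = stopped / n_shots
lo, hi = wilson_interval(stopped, n_shots)
```

    Reporting an interval rather than the bare proportion makes explicit how much the limited sample of aim points constrains the vehicle-level penetration probability.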

  8. Component spectra extraction from terahertz measurements of unknown mixtures.

    PubMed

    Li, Xian; Hou, D B; Huang, P J; Cai, J H; Zhang, G X

    2015-10-20

    The aim of this work is to extract component spectra from unknown mixtures in the terahertz region. To that end, a method, hard modeling factor analysis (HMFA), was applied to resolve terahertz spectral matrices collected from the unknown mixtures. This method does not require any expertise from the user and allows the consideration of nonlinear effects such as peak variations or peak shifts. It describes the spectra using a peak-based nonlinear mathematic model and builds the component spectra automatically by recombination of the resolved peaks through correlation analysis. Meanwhile, modifications on the method were made to take the features of terahertz spectra into account and to deal with the artificial baseline problem that troubles the extraction process of some terahertz spectra. In order to validate the proposed method, simulated wideband terahertz spectra of binary and ternary systems and experimental terahertz absorption spectra of amino acid mixtures were tested. In each test, not only the number of pure components could be correctly predicted but also the identified pure spectra had a good similarity with the true spectra. Moreover, the proposed method associated the molecular motions with the component extraction, making the identification process more physically meaningful and interpretable compared to other methods. The results indicate that the HMFA method with the modifications can be a practical tool for identifying component terahertz spectra in completely unknown mixtures. This work reports the solution to this kind of problem in the terahertz region for the first time, to the best of the authors' knowledge, and represents a significant advance toward exploring physical or chemical mechanisms of unknown complex systems by terahertz spectroscopy.

  9. Leap-frog-based BPM (LF-BPM) method for solving nanophotonic structures

    NASA Astrophysics Data System (ADS)

    Ayoub, Ahmad B.; Swillam, Mohamed A.

    2018-02-01

    In this paper, we propose an efficient approach to solving the BPM equation. By splitting the complex field into real and imaginary parts, the method is shown to be at least 30% faster than the conventional BPM. The method was applied to several optical components to assess its accuracy.
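
    The splitting idea can be sketched for the 1D free-space paraxial equation. The equation form, discretization, and parameters below are assumptions for illustration, not the authors' LF-BPM code:

```python
import numpy as np

# 1D free-space paraxial BPM:  dE/dz = (i / 2k) d^2E/dx^2,  with E = u + i*v.
# Splitting into real equations:  du/dz = -(1/2k) v_xx,  dv/dz = (1/2k) u_xx,
# advanced with a staggered (leap-frog) update so each step uses only real arithmetic.
nx, dx, dz, k, nz = 256, 1.0, 0.1, 1.0, 200

def d2(f):
    """Second difference with zero boundaries (beam kept well away from the edges)."""
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
    return out

x = (np.arange(nx) - nx / 2) * dx
u = np.exp(-(x / 10.0) ** 2)   # Gaussian beam, real part
v = np.zeros_like(u)           # imaginary part, staggered half a step in z

power0 = np.sum(u**2 + v**2)
for _ in range(nz):            # leap-frog: alternate the real/imaginary updates
    u -= dz / (2 * k) * d2(v)
    v += dz / (2 * k) * d2(u)
power = np.sum(u**2 + v**2)
```

    The staggered update is explicit and stable for dz < k*dx**2, and it approximately conserves the beam power, which is a convenient sanity check on the propagation.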

  10. Revealing Optical Components

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Optical Vector Analyzer (OVA) 1550 significantly reduces the time and cost of testing sophisticated optical components. The technology grew from the research Luna Technologies' Dr. Mark Froggatt conducted on optical fiber strain measurement while working at Langley Research Center. Dr. Froggatt originally developed the technology for nondestructive evaluation testing at Langley. The new technique can provide 10,000 independent strain measurements while adding less than 10 grams to the weight of the vehicle. The OVA is capable of complete linear characterization of single-mode optical components used in high-bit-rate applications. The device can test most components over their full range in less than 30 seconds, compared to the more than 20 minutes required by other testing methods. The dramatically shortened measurement time results in increased efficiency in final acceptance tests of optical devices, and the comprehensive data produced by the instrument adds considerable value for component consumers. The device eliminates manufacturing bottlenecks, while reducing labor costs and wasted materials during production.

  11. Test-Anchored Vibration Response Predictions for an Acoustically Energized Curved Orthogrid Panel with Mounted Components

    NASA Technical Reports Server (NTRS)

    Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.

    2011-01-01

    A rich body of vibroacoustic test data was recently generated at Marshall Space Flight Center for a curved orthogrid panel typical of launch vehicle skin structures. Several test article configurations were produced by adding component equipment of differing weights to the flight-like vehicle panel. The test data were used to anchor computational predictions of a variety of spatially distributed responses including acceleration, strain and component interface force. Transfer functions relating the responses to the input pressure field were generated from finite element based modal solutions and test-derived damping estimates. A diffuse acoustic field model was employed to describe the assumed correlation of phased input sound pressures across the energized panel. This application demonstrates the ability to quickly and accurately predict a variety of responses to acoustically energized skin panels with mounted components. Favorable comparisons between the measured and predicted responses were established. The validated models were used to examine vibration response sensitivities to relevant modeling parameters such as pressure patch density, mesh density, weight of the mounted component and model form. Convergence metrics include spectral densities and cumulative root-mean-square (RMS) functions for acceleration, velocity, displacement, strain and interface force. Minimum frequencies for response convergence were established as well as recommendations for modeling techniques, particularly in the early stages of a component design when accurate structural vibration requirements are needed relatively quickly. The results were compared with long-established guidelines for modeling accuracy of component-loaded panels. A theoretical basis for the Response/Pressure Transfer Function (RPTF) approach provides insight into trends observed in the response predictions and confirmed in the test data.
The software modules developed for the RPTF method can be easily adapted for quick replacement of the diffuse acoustic field with other pressure field models; for example a turbulent boundary layer (TBL) model suitable for vehicle ascent. Wind tunnel tests have been proposed to anchor the predictions and provide new insight into modeling approaches for this type of environment. Finally, component vibration environments for design were developed from the measured and predicted responses and compared with those derived from traditional techniques such as Barrett scaling methods for unloaded and component-loaded panels.
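
    The core RPTF relation, response PSD = |H(f)|^2 times input pressure PSD, with the overall RMS obtained by integrating the response PSD, can be sketched for a hypothetical single-mode panel. The actual method uses finite element modal solutions and a diffuse-field correlation model; all parameters below are assumptions:

```python
import numpy as np

# Hypothetical single-mode panel: response/pressure transfer function of a
# damped oscillator, excited by a flat ("white") pressure PSD over the band.
f = np.linspace(10.0, 2000.0, 4000)     # frequency grid, Hz
fn, zeta = 300.0, 0.03                  # modal frequency and damping ratio (assumed)
gain = 50.0                             # static response per unit pressure (assumed)

r = f / fn
H = gain / (1 - r**2 + 2j * zeta * r)   # frequency response function
Spp = np.full_like(f, 1e-4)             # input pressure PSD (flat, assumed level)

Saa = np.abs(H) ** 2 * Spp              # response PSD via |H|^2 * input PSD
cum_ms = np.cumsum(Saa) * (f[1] - f[0]) # running mean-square (rectangle rule)
rms = np.sqrt(cum_ms[-1])               # overall RMS response
```

    The cumulative RMS curve is the same convergence metric the record describes: it shows which frequency bands contribute most of the response energy.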

  12. Characterization of DUT impedance in immunity test setups

    NASA Astrophysics Data System (ADS)

    Hassanpour Razavi, Seyyed Ali; Frei, Stephan

    2016-09-01

    Several immunity test procedures for narrowband radiated electromagnetic energy are available for automotive components; the ISO 11452 series describes the most commonly used test methods. The absorber-lined shielded enclosure (ALSE) method is often considered the most reliable, but testing with bulk current injection (BCI) requires less effort and is often preferred. Because the test setups of the two procedures are quite similar, several attempts have been made to modify the BCI method so that its results match those of the ALSE method more closely. However, because the impedance of the tested component is unknown, the equivalent current to be injected by the BCI cannot be determined and a good match cannot be achieved. In this paper, three approaches are proposed to estimate the termination impedance indirectly by using different current probes.

  13. A Program to Improve the Triangulated Surface Mesh Quality Along Aircraft Component Intersections

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.

    2005-01-01

    A computer program has been developed for improving the quality of unstructured triangulated surface meshes in the vicinity of component intersections. The method relies solely on point removal and edge swapping for improving the triangulations. It can be applied to any lifting surface component such as a wing, canard or horizontal tail component intersected with a fuselage, or it can be applied to a pylon that is intersected with a wing, fuselage or nacelle. The lifting surfaces or pylon are assumed to be aligned in the axial direction with closed trailing edges. The method currently maintains salient edges only at leading and trailing edges of the wing or pylon component. This method should work well for any shape of fuselage that is free of salient edges at the intersection. The method has been successfully demonstrated on a total of 125 different test cases that include both blunt and sharp wing leading edges. The code is targeted for use in the automated environment of numerical optimization where geometric perturbations to individual components can be critical to the aerodynamic performance of a vehicle. Histograms of triangle aspect ratios are reported to assess the quality of the triangles attached to the intersection curves before and after application of the program. Large improvements to the quality of the triangulations were obtained for the 125 test cases; the quality was sufficient for use with an automated tetrahedral mesh generation program that is used as part of an aerodynamic shape optimization method.

  14. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  15. Solid explosive plane-wave lenses pressed-to-shape with dies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olinger, B.

    2007-11-01

    Solid-explosive plane-wave lenses 1", 2" and 4¼" in diameter have been mass-produced from components pressed-to-shape with aluminum dies. The method used to calculate the contour between the solid plane-wave lens components pressed-to-shape with the dies is explained. The steps taken to press, machine, and assemble the lenses are described. The method of testing the lenses, the results of those tests, and the corrections to the dies are reviewed. The work on the ½", 8", and 12" diameter lenses is also discussed.

  16. Preliminary vibration, acoustic, and shock design and test criteria for components on the Lightweight External Tank (LWT)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The Space Shuttle LWT is divided into zones and subzones. Zones are designated primarily to assist in determining the applicable specifications. A subzone (general Specification) is available for use when the location of the component is known but component design and weight are not well defined. When the location, weight, and mounting configuration of the component are known, specifications for appropriate subzone weight ranges are available. Along with the specifications are vibration, acoustic, shock, transportation, handling, and acceptance test requirements and procedures. A method of selecting applicable vibration, acoustic, and shock specifications is presented.

  17. 49 CFR 180.203 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... atmosphere) and free of components that will adversely react with the cylinder (e.g. chemical stress... pressure. The volumetric expansion test is conducted using the water jacket or direct expansion methods: (1) Water jacket method means a volumetric expansion test to determine a cylinder's total and permanent...

  18. Impact design methods for ceramic components in gas turbine engines

    NASA Technical Reports Server (NTRS)

    Song, J.; Cuccio, J.; Kington, H.

    1991-01-01

    Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.

  19. Sediment Toxicity Testing

    EPA Science Inventory

    Sediment toxicity testing has become a fundamental component of regulatory frameworks for assessing the risks posed by contaminated sediments and for development of chemical sediment quality guidelines. Over the past two decades, sediment toxicity testing methods have advanced co...

  20. Novel ways to explore surgical interventions in randomised controlled trials: applying case study methodology in the operating theatre.

    PubMed

    Blencowe, Natalie S; Blazeby, Jane M; Donovan, Jenny L; Mills, Nicola

    2015-12-28

    Multi-centre randomised controlled trials (RCTs) in surgery are challenging. It is particularly difficult to establish standards of surgery and ensure that interventions are delivered as intended. This study developed and tested methods for identifying the key components of surgical interventions and standardising interventions within RCTs. Qualitative case studies of surgical interventions were undertaken within the internal pilot phase of a surgical RCT for obesity (the By-Band study). Each case study involved video data capture and non-participant observation of gastric bypass surgery in the operating theatre and interviews with surgeons. Methods were developed to transcribe and synchronise data from video recordings with observational data to identify key intervention components, which were then explored in the interviews with surgeons. Eight qualitative case studies were undertaken. A novel combination of video data capture, observation and interview data identified variations in intervention delivery between surgeons and centres. Although surgeons agreed that the most critical intervention component was the size and shape of the gastric pouch, there was no consensus regarding other aspects of the procedure. They conceded that evidence about the 'best way' to perform bypass was lacking and, combined with the pragmatic nature of the By-Band study, agreed that strict standardisation of bypass might not be required. This study has developed and tested methods for understanding how surgical interventions are designed and delivered in RCTs. Applying these methods more widely may help identify key components of interventions to be delivered by surgeons in trials, enabling monitoring of key components and adherence to the protocol. These methods are now being tested in the context of other surgical RCTs. Current Controlled Trials ISRCTN00786323, 05/09/2011.

  1. Characterisation of the joining zone of serially arranged hybrid semi-finished components

    NASA Astrophysics Data System (ADS)

    Behrens, B.-A.; Chugreev, A.; Matthias, T.

    2018-05-01

    Forming of already joined semi-finished products is an innovative approach to manufacturing components that are well adapted to external loads. This approach results in economically and ecologically improved production through the targeted use of high-quality materials in component areas that undergo high stresses. One possible production method for hybrid semi-finished products is friction welding, which allows hybrid semi-finished products to be made of aluminium and steel as well as steel and steel. In this paper, the thermomechanical tensile and shear stresses causing failure of the joining zone are experimentally determined through tension tests. These tests are performed on specimens whose joint zones are aligned at different angles to the load direction.

  2. Spectrophotometric Determination of Phenolic Antioxidants in the Presence of Thiols and Proteins.

    PubMed

    Avan, Aslı Neslihan; Demirci Çekiç, Sema; Uzunboy, Seda; Apak, Reşat

    2016-08-12

    Development of easy, practical, and low-cost spectrophotometric methods is required for the selective determination of phenolic antioxidants in the presence of other similar substances. As electron transfer (ET)-based total antioxidant capacity (TAC) assays generally measure the reducing ability of antioxidant compounds, thiols and phenols cannot be differentiated since they are both responsive to the probe reagent. In this study, three of the most common TAC determination methods, namely cupric ion reducing antioxidant capacity (CUPRAC), 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt/trolox equivalent antioxidant capacity (ABTS/TEAC), and ferric reducing antioxidant power (FRAP), were tested for the assay of phenolics in the presence of selected thiol and protein compounds. Although the FRAP method is almost non-responsive to thiol compounds individually, surprising overoxidations with large positive deviations from additivity were observed when using this method for (phenols + thiols) mixtures. Among the tested TAC methods, CUPRAC gave the most additive results for all studied (phenol + thiol) and (phenol + protein) mixtures with minimal relative error. As ABTS/TEAC and FRAP methods gave small and large deviations, respectively, from additivity of absorbances arising from these components in mixtures, mercury(II) compounds were added to stabilize the thiol components in the form of Hg(II)-thiol complexes so as to enable selective spectrophotometric determination of phenolic components. This error compensation was most efficient for the FRAP method in testing (thiols + phenols) mixtures.

  3. An overview of the refinements and improvements to the USEPA’s sediment toxicity methods for freshwater sediment

    EPA Science Inventory

    Sediment toxicity tests are used for contaminated sediments, chemical registration, and water quality criteria evaluations and can be a core component of ecological risk assessments at contaminated sediments sites. Standard methods for conducting sediment toxicity tests have been...

  4. Testing methods and techniques: Testing electrical and electronic devices: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The methods, techniques, and devices used in testing various electrical and electronic apparatus are presented. The items described range from semiconductor package leak detectors to automatic circuit analyzer and antenna simulators for system checkout. In many cases the approaches can result in considerable cost savings and improved quality control. The testing of various electronic components, assemblies, and systems; the testing of various electrical devices; and the testing of cables and connectors are explained.

  5. Detecting epistasis with the marginal epistasis test in genetic mapping studies of quantitative traits

    PubMed Central

    Zeng, Ping; Mukherjee, Sayan; Zhou, Xiang

    2017-01-01

    Epistasis, commonly defined as the interaction between multiple genes, is an important genetic component underlying phenotypic variation. Many statistical methods have been developed to model and identify epistatic interactions between genetic variants. However, because of the large combinatorial search space of interactions, most epistasis mapping methods face enormous computational challenges and often suffer from low statistical power due to multiple test correction. Here, we present a novel, alternative strategy for mapping epistasis: instead of directly identifying individual pairwise or higher-order interactions, we focus on mapping variants that have non-zero marginal epistatic effects—the combined pairwise interaction effects between a given variant and all other variants. By testing marginal epistatic effects, we can identify candidate variants that are involved in epistasis without the need to identify the exact partners with which the variants interact, thus potentially alleviating much of the statistical and computational burden associated with standard epistatic mapping procedures. Our method is based on a variance component model, and relies on a recently developed variance component estimation method for efficient parameter inference and p-value computation. We refer to our method as the “MArginal ePIstasis Test”, or MAPIT. With simulations, we show how MAPIT can be used to estimate and test marginal epistatic effects, produce calibrated test statistics under the null, and facilitate the detection of pairwise epistatic interactions. We further illustrate the benefits of MAPIT in a QTL mapping study by analyzing the gene expression data of over 400 individuals from the GEUVADIS consortium. PMID:28746338

  6. 7 CFR 2902.8 - Determining life cycle costs, environmental and health benefits, and performance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

... for Determining Aerobic Biodegradation of Plastic Materials Under Controlled Composting Conditions”; (2) D5864 “Standard Test Method for Determining the Aerobic Aquatic Biodegradation of Lubricants or... “Standard Test Method for Determining the Aerobic Aquatic Biodegradation of Lubricants or Their Components...

  7. Radar, target and ranging

    NASA Astrophysics Data System (ADS)

    1984-09-01

This Test Operations Procedure (TOP) provides conventional test methods, employing conventional instrumentation, for testing conventional radars. Single tests and subtests designed to evaluate radar components (transmitters, receivers, antennas, etc.) and overall system performance are conducted with single-item instruments such as meters, generators, attenuators, counters, oscillators, and plotters, and with adequate land areas for conducting field tests.

  8. The Effect of Multidimensional Motivation Interventions on Cognitive and Behavioral Components of Motivation: Testing Martin's Model

    PubMed Central

    Pooragha Roodbarde, Fatemeh; Talepasand, Siavash; Rahimian Boogar, Issac

    2017-01-01

Objective: The present study aimed at examining the effect of multidimensional motivation interventions based on Martin's model on cognitive and behavioral components of motivation. Method: The research design was prospective with pretest, posttest, and follow-up, and 2 experimental groups. In this study, 90 students (45 participants in the experimental group and 45 in the control group) constituted the sample of the study, and they were selected by convenience sampling. Motivation interventions were implemented for fifteen 60-minute sessions, 3 times a week, lasting about 2 months. Data were analyzed using repeated-measures multivariate analysis of variance. Results: The findings revealed that multidimensional motivation interventions resulted in a significant increase in the scores of cognitive components such as self-efficacy, mastery goal, test anxiety, and feeling of lack of control, and behavioral components such as task management. The results of one-month follow-up indicated the stability of the created changes in test anxiety and cognitive strategies; however, no significant difference was found between the 2 groups at the follow-up in self-efficacy, mastery goals, source of control, and motivation. Conclusion: The research evidence indicated that academic motivation is a multidimensional component and is affected by cognitive and behavioral factors; therefore, researchers, teachers, and other authorities should attend to these factors to increase academic motivation. PMID:28659984

  9. The Effect of Multidimensional Motivation Interventions on Cognitive and Behavioral Components of Motivation: Testing Martin's Model.

    PubMed

    Pooragha Roodbarde, Fatemeh; Talepasand, Siavash; Rahimian Boogar, Issac

    2017-04-01

Objective: The present study aimed at examining the effect of multidimensional motivation interventions based on Martin's model on cognitive and behavioral components of motivation. Method: The research design was prospective with pretest, posttest, and follow-up, and 2 experimental groups. In this study, 90 students (45 participants in the experimental group and 45 in the control group) constituted the sample of the study, and they were selected by convenience sampling. Motivation interventions were implemented for fifteen 60-minute sessions, 3 times a week, lasting about 2 months. Data were analyzed using repeated-measures multivariate analysis of variance. Results: The findings revealed that multidimensional motivation interventions resulted in a significant increase in the scores of cognitive components such as self-efficacy, mastery goal, test anxiety, and feeling of lack of control, and behavioral components such as task management. The results of one-month follow-up indicated the stability of the created changes in test anxiety and cognitive strategies; however, no significant difference was found between the 2 groups at the follow-up in self-efficacy, mastery goals, source of control, and motivation. Conclusion: The research evidence indicated that academic motivation is a multidimensional component and is affected by cognitive and behavioral factors; therefore, researchers, teachers, and other authorities should attend to these factors to increase academic motivation.

  10. Research on the Fatigue Flexural Performance of RC Beams Attacked by Salt Spray

    NASA Astrophysics Data System (ADS)

    Mao, Jiang-hong; Xu, Fang-yuan; Jin, Wei-liang; Zhang, Jun; Wu, Xi-xi; Chen, Cai-sheng

    2018-04-01

The fatigue flexural performance of RC beams attacked by salt spray was studied. A testing method involving electro-osmosis, electrically accelerated corrosion, and salt spray was proposed. This corrosion process effectively simulates the real-world salt spray and fatigue loading exerted on the RC components of sea bridges. Four RC beams subjected to different stress amplitudes were tested. It was found that the combination of corrosion deterioration and fatigue loading reduces the fatigue life of the RC beams and decreases their deformation capacity. Fatigue life and deflection capacity are further reduced by increasing the stress amplitude and the corrosion duration. The test results demonstrate that this experimental method reasonably couples corrosion deterioration with fatigue loading. The procedure may be applied to evaluate the fatigue life and concrete durability of RC components located in natural salt spray environments.

  11. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

In a company producing refined sugar, the production floor has not reached the target availability level for its critical machine because the machine frequently breaks down, causing sudden losses of production time and production opportunities. This problem can be addressed with reliability engineering methods, in which a statistical distribution is fitted to historical failure data. The approach yields the reliability, failure rate, and availability of a machine over a given maintenance interval. Distribution testing of the time-between-failures (MTTF) data showed that the flexible hose component follows a lognormal distribution and the teflon cone lifting component a Weibull distribution; for the time-to-repair (MTTR) data, the flexible hose component follows an exponential distribution and the teflon cone lifting component a Weibull distribution. On a replacement schedule of every 720 hours, the flexible hose component has a reliability of 0.2451 and an availability of 0.9960; on a replacement schedule of every 1944 hours, the critical teflon cone lifting component has a reliability of 0.4083 and an availability of 0.9927.
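The reported reliability and availability figures follow from standard formulas; a minimal sketch with illustrative parameters, since the fitted shape and scale values behind the paper's 0.2451 and 0.4083 figures are not given in the abstract:

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull reliability: R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def availability(mttf, mttr):
    """Steady-state availability: A = MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

# illustrative parameters only (beta, eta, MTTF, MTTR are hypothetical)
r_720 = weibull_reliability(720.0, beta=1.5, eta=600.0)
a = availability(mttf=900.0, mttr=10.0)
```

With such functions, reliability and availability can be tabulated over candidate replacement intervals to pick a schedule that meets an availability target.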

  12. Testing military grade magnetics (transformers, inductors and coils).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

Engineers and designers are constantly searching for test methods to qualify or 'prove in' new designs. In the high-reliability world of military parts, design tests, qualification tests, in-process tests, and product characteristic tests become even more important. In-process and functional tests have been adopted as a way of demonstrating that parts will operate correctly and survive their 'use' environments. This paper discusses various types of tests used to qualify magnetic components: the current-carrying capability of coils, a next-assembly 'as used' test, a corona test, and an inductance-at-temperature test. Each of these tests addresses a different potential failure mode of a component. The entire process from design to implementation is described.

  13. Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory

    NASA Technical Reports Server (NTRS)

    Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.

    2013-01-01

    Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.

  14. Observational Tests of Recent MHD Turbulence Perspectives

    NASA Technical Reports Server (NTRS)

    Ghosh, Sanjoy; Guhathakurta, M. (Technical Monitor)

    2001-01-01

    This grant seeks to analyze the Heliospheric Missions data to test current theories on the angular dependence (with respect to mean magnetic field direction) of magnetohydrodynamic (MHD) turbulence in the solar wind. Solar wind turbulence may be composed of two or more dynamically independent components. Such components include magnetic pressure-balanced structures, velocity shears, quasi-2D turbulence, and slab (Alfven) waves. We use a method, developed during the first two years of this grant, for extracting the individual reduced spectra of up to three separate turbulence components from a single spacecraft time series. The method has been used on ISEE-3 data, Pioneer Venus Orbiter, Ulysses, and Voyager data samples. The correlation of fluctuations as a function of angle between flow direction and magnetic-field direction is the focus of study during the third year.

  15. Automated Classification and Removal of EEG Artifacts With SVM and Wavelet-ICA.

    PubMed

    Sai, Chong Yeh; Mokhtar, Norrima; Arof, Hamzah; Cumming, Paul; Iwahashi, Masahiro

    2018-05-01

Brain electrical activity recordings by electroencephalography (EEG) are often contaminated with signal artifacts. Procedures for automated removal of EEG artifacts are frequently sought for clinical diagnostics and brain-computer interface applications. In recent years, a combination of independent component analysis (ICA) and discrete wavelet transform has been introduced as a standard technique for EEG artifact removal. However, in performing the wavelet-ICA procedure, visual inspection or arbitrary thresholding may be required for identifying artifactual components in the EEG signal. We now propose a novel approach for identifying artifactual components separated by wavelet-ICA using a pretrained support vector machine (SVM). Our method presents a robust and extendable system that enables fully automated identification and removal of artifacts from EEG signals, without applying any arbitrary thresholding. Using test data contaminated by eye blink artifacts, we show that our method performed better in identifying artifactual components than did existing thresholding methods. Furthermore, wavelet-ICA in conjunction with SVM successfully removed target artifacts, while largely retaining the EEG source signals of interest. We propose a set of features including kurtosis, variance, Shannon's entropy, and range of amplitude as training and test data of SVM to identify eye blink artifacts in EEG signals. This combinatorial method is also extendable to accommodate multiple types of artifacts present in multichannel EEG. We envision future research to explore other descriptive features corresponding to other types of artifactual components.
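A rough sketch of the proposed feature set (kurtosis, variance, Shannon's entropy, amplitude range) computed on a synthetic component, assuming a histogram-based entropy estimate; an SVM would then be trained on such feature vectors to separate artifactual from neural components:

```python
import numpy as np
from scipy.stats import kurtosis

def component_features(x, bins=32):
    """Feature vector for one separated component: kurtosis, variance,
    histogram-based Shannon entropy, and amplitude range."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return np.array([kurtosis(x), np.var(x), entropy, np.ptp(x)])

rng = np.random.default_rng(1)
neural = rng.normal(size=2000)          # EEG-like background component
blink = neural.copy()
blink[1000:1040] += 8.0                 # superimposed eye-blink-like burst

f_neural = component_features(neural)
f_blink = component_features(blink)
```

The blink-contaminated component shows markedly higher kurtosis and amplitude range than the background component, which is what lets a classifier separate the two without a hand-tuned threshold.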

  16. Experimental Observations for Determining the Maximum Torque Values to Apply to Composite Components Mechanically Joined With Fasteners (MSFC Center Director's Discretionary Fund Final Report, Proj. 03-13)

    NASA Technical Reports Server (NTRS)

    Thomas, F. P.

    2006-01-01

Aerospace structures utilize innovative, lightweight composite materials for exploration activities. These structural components, due to various reasons including size limitations, manufacturing facilities, contractual obligations, or particular design requirements, must be joined. The common methodologies for joining composite components are adhesively bonded and mechanically fastened joints; in certain instances, both methods are incorporated into the same design. Guidelines and recommendations exist for engineers to develop design criteria and to analyze and test composites. However, no guidelines or recommendations based on analysis or test data specify a torque or torque range to apply to the metallic mechanical fasteners used to join composite components. Utilizing the torque-tension machine at NASA's Marshall Space Flight Center, an initial series of tests was conducted to determine the maximum torque that could be applied to a composite specimen. Acoustic emission was used to nondestructively assess the specimens during the tests, and thermographic imaging was used after the tests.

  17. Disentangling giant component and finite cluster contributions in sparse random matrix spectra.

    PubMed

    Kühn, Reimer

    2016-04-01

    We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.
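A quick numerical illustration of the setting (not the disentangling method itself): generate an Erdős-Rényi graph with mean degree above the percolation threshold, separate the giant component from the finite clusters, and compute their adjacency spectra independently:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(2)
N, c = 400, 4.0                  # nodes, mean degree (c > 1: giant component)
A = (rng.random((N, N)) < c / N).astype(float)
A = np.triu(A, 1)
A = A + A.T                      # symmetric adjacency, zero diagonal

n_comp, labels = connected_components(csr_matrix(A), directed=False)
sizes = np.bincount(labels)
giant = labels == np.argmax(sizes)

# spectra of the giant component and of the finite clusters, separately
eig_giant = np.linalg.eigvalsh(A[np.ix_(giant, giant)])
eig_finite = np.linalg.eigvalsh(A[np.ix_(~giant, ~giant)])
```

With mean degree 4, the giant component covers nearly all nodes; the finite clusters contribute a discrete set of small-matrix eigenvalues that the paper's method separates analytically rather than by explicit component extraction as done here.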

  18. Validation of electronic structure methods for isomerization reactions of large organic molecules.

    PubMed

    Luo, Sijie; Zhao, Yan; Truhlar, Donald G

    2011-08-14

    In this work the ISOL24 database of isomerization energies of large organic molecules presented by Huenerbein et al. [Phys. Chem. Chem. Phys., 2010, 12, 6940] is updated, resulting in the new benchmark database called ISOL24/11, and this database is used to test 50 electronic model chemistries. To accomplish the update, the very expensive and highly accurate CCSD(T)-F12a/aug-cc-pVDZ method is first exploited to investigate a six-reaction subset of the 24 reactions, and by comparison of various methods with the benchmark, MCQCISD-MPW is confirmed to be of high accuracy. The final ISOL24/11 database is composed of six reaction energies calculated by CCSD(T)-F12a/aug-cc-pVDZ and 18 calculated by MCQCISD-MPW. We then tested 40 single-component density functionals (both local and hybrid), eight doubly hybrid functionals, and two other methods against ISOL24/11. It is found that the SCS-MP3/CBS method, which is used as benchmark for the original ISOL24, has an MUE of 1.68 kcal mol(-1), which is close to or larger than some of the best tested DFT methods. Using the new benchmark, we find ωB97X-D and MC3MPWB to be the best single-component and doubly hybrid functionals respectively, with PBE0-D3 and MC3MPW performing almost as well. The best single-component density functionals without molecular mechanics dispersion-like terms are M08-SO, M08-HX, M05-2X, and M06-2X. The best single-component density functionals without Hartree-Fock exchange are M06-L-D3 when MM terms are included and M06-L when they are not.

  19. Spectrophotometric Determination of Phenolic Antioxidants in the Presence of Thiols and Proteins

    PubMed Central

    Avan, Aslı Neslihan; Demirci Çekiç, Sema; Uzunboy, Seda; Apak, Reşat

    2016-01-01

    Development of easy, practical, and low-cost spectrophotometric methods is required for the selective determination of phenolic antioxidants in the presence of other similar substances. As electron transfer (ET)-based total antioxidant capacity (TAC) assays generally measure the reducing ability of antioxidant compounds, thiols and phenols cannot be differentiated since they are both responsive to the probe reagent. In this study, three of the most common TAC determination methods, namely cupric ion reducing antioxidant capacity (CUPRAC), 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt/trolox equivalent antioxidant capacity (ABTS/TEAC), and ferric reducing antioxidant power (FRAP), were tested for the assay of phenolics in the presence of selected thiol and protein compounds. Although the FRAP method is almost non-responsive to thiol compounds individually, surprising overoxidations with large positive deviations from additivity were observed when using this method for (phenols + thiols) mixtures. Among the tested TAC methods, CUPRAC gave the most additive results for all studied (phenol + thiol) and (phenol + protein) mixtures with minimal relative error. As ABTS/TEAC and FRAP methods gave small and large deviations, respectively, from additivity of absorbances arising from these components in mixtures, mercury(II) compounds were added to stabilize the thiol components in the form of Hg(II)-thiol complexes so as to enable selective spectrophotometric determination of phenolic components. This error compensation was most efficient for the FRAP method in testing (thiols + phenols) mixtures. PMID:27529232

  20. FUELS IN SOIL TEST KIT: FIELD USE OF DIESEL DOG SOIL TEST KITS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unknown

    2001-05-31

Western Research Institute (WRI) is commercializing Diesel Dog Portable Soil Test Kits for performing analysis of fuel-contaminated soils in the field. The technology consists of a method developed by WRI (U.S. Patents 5,561,065 and 5,976,883) and hardware developed by WRI that allows the method to be performed in the field (patent pending). The method is very simple and does not require the use of highly toxic reagents. The aromatic components in a soil extract are measured by absorption at 254 nm with a field-portable photometer. WRI added significant value to the technology by taking the method through the American Society for Testing and Materials (ASTM) approval and validation processes. The method is designated ASTM Method D-5831-96, Standard Test Method for Screening Fuels in Soils. This ASTM designation allows the method to be used for federal compliance activities. In FY 99, twenty-five preproduction kits were successfully constructed in cooperation with CF Electronics, Inc., of Laramie, Wyoming. The kit components work well and the kits are fully operational. In the calendar year 2000, kits were provided to the following entities, who agreed to participate as FY 99 and FY 00 JSR (Jointly Sponsored Research) cosponsors and to use the kits as opportunities arose for field site work: Wyoming Department of Environmental Quality (DEQ) (3 units), F.E. Warren Air Force Base, Gradient Corporation, The Johnson Company (2 units), IT Corporation (2 units), TRC Environmental Corporation, Stone Environmental, ENSR, Action Environmental, Laco Associates, Barenco, Brown and Caldwell, Dames and Moore Lebron LLP, Phillips Petroleum, GeoSyntek, and the State of New Mexico. By early 2001, ten kits had been returned to WRI following the six-month evaluation period. On return, the components of all ten kits were fully functional. The kits were upgraded with circuit modifications, new polyethylene foam inserts, and updated instruction manuals.

  1. Test-Anchored Vibration Response Predictions for an Acoustically Energized Curved Orthogrid Panel with Mounted Components

    NASA Technical Reports Server (NTRS)

    Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.

    2011-01-01

A rich body of vibroacoustic test data was recently generated at Marshall Space Flight Center for component-loaded curved orthogrid panels typical of launch vehicle skin structures. The test data were used to anchor computational predictions of a variety of spatially distributed responses including acceleration, strain and component interface force. Transfer functions relating the responses to the input pressure field were generated from finite element based modal solutions and test-derived damping estimates. A diffuse acoustic field model was applied to correlate the measured input sound pressures across the energized panel. This application quantifies the ability to quickly and accurately predict a variety of responses to acoustically energized skin panels with mounted components. Favorable comparisons between the measured and predicted responses were established. The validated models were used to examine vibration response sensitivities to relevant modeling parameters such as pressure patch density, mesh density, weight of the mounted component and model form. Convergence metrics include spectral densities and cumulative root-mean squared (RMS) functions for acceleration, velocity, displacement, strain and interface force. Minimum frequencies for response convergence were established as well as recommendations for modeling techniques, particularly in the early stages of a component design when accurate structural vibration requirements are needed relatively quickly. The results were compared with long-established guidelines for modeling accuracy of component-loaded panels. A theoretical basis for the Response/Pressure Transfer Function (RPTF) approach provides insight into trends observed in the response predictions and confirmed in the test data. The software developed for the RPTF method allows easy replacement of the diffuse acoustic field with other pressure fields such as a turbulent boundary layer (TBL) model suitable for vehicle ascent.
Structural responses using a TBL model were demonstrated, and wind tunnel tests have been proposed to anchor the predictions and provide new insight into modeling approaches for this environment. Finally, design load factors were developed from the measured and predicted responses and compared with those derived from traditional techniques such as historical Mass Acceleration Curves and Barrett scaling methods for acreage and component-loaded panels.

  2. A Residual Mass Ballistic Testing Method to Compare Armor Materials or Components (Residual Mass Ballistic Testing Method)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benjamin Langhorst; Thomas M Lillo; Henry S Chu

    2014-05-01

A statistics-based ballistic test method is presented for use when comparing multiple groups of test articles of unknown relative ballistic perforation resistance. The method is intended to be more efficient than many traditional methods for research and development testing. To establish the validity of the method, it is employed in this study to compare test groups of known relative ballistic performance. Multiple groups of test articles were perforated using consistent projectiles and impact conditions. Test groups were made of rolled homogeneous armor (RHA) plates and differed in thickness. After perforation, each residual projectile was captured behind the target and its mass was measured. The residual masses measured for each test group were analyzed to provide ballistic performance rankings with associated confidence levels. When compared to traditional V50 methods, the residual mass (RM) method was found to require fewer test events and to be more tolerant of variations in impact conditions.
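A minimal sketch of the statistical comparison behind the RM idea, with hypothetical residual-mass data and a Welch t-test; the report's actual ranking and confidence-level procedure may differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# hypothetical residual projectile masses (grams) captured behind two
# target groups; a more resistant plate erodes more of the projectile
thin_plate = rng.normal(loc=9.0, scale=0.4, size=8)
thick_plate = rng.normal(loc=8.3, scale=0.4, size=8)

# Welch t-test: does mean residual mass differ between the two groups?
t_stat, p_value = stats.ttest_ind(thin_plate, thick_plate, equal_var=False)
```

Because every perforating shot yields a data point, group means can be compared with far fewer test events than a V50 up-down sequence requires.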

  3. Improving Papanicolaou test quality and reducing medical errors by using Toyota production system methods.

    PubMed

    Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J

    2006-01-01

    The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.
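The improvement in transformation-zone sampling can be checked with a standard 2x2 contingency test; the counts below are illustrative reconstructions from the reported percentages, not the study's raw data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# illustrative counts reconstructed from the reported rates
# (roughly 9.9% of 639 tests before the intervention, 4.7% of 464 after)
before = [63, 639 - 63]   # [missing transformation zone, present]
after = [22, 464 - 22]

chi2, p, dof, expected = chi2_contingency(np.array([before, after]))
```

With these counts the test rejects equality of the two rates, matching the study's reported P = .001 in direction if not in exact value.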

  4. Correlation of full-scale drag predictions with flight measurements on the C-141A aircraft. Phase 2: Wind tunnel test, analysis, and prediction techniques. Volume 1: Drag predictions, wind tunnel data analysis and correlation

    NASA Technical Reports Server (NTRS)

    Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.

    1974-01-01

The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data, and flight test results. An analysis of wind tunnel tests on a 0.0275 scale model at Reynolds numbers up to 3.05 million based on mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
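A sketch of the flat-plate extrapolation idea, using the Prandtl-Schlichting turbulent skin-friction correlation; the flight Reynolds number here is hypothetical, and the report's actual method additionally applies component shape factors and interference corrections:

```python
import math

def cf_prandtl_schlichting(reynolds):
    """Turbulent flat-plate skin-friction coefficient,
    Prandtl-Schlichting correlation: Cf = 0.455 / (log10 Re)^2.58."""
    return 0.455 / (math.log10(reynolds) ** 2.58)

cf_tunnel = cf_prandtl_schlichting(3.05e6)   # wind-tunnel Reynolds number
cf_flight = cf_prandtl_schlichting(40.0e6)   # hypothetical flight value
scale = cf_flight / cf_tunnel                # friction drag falls with Re
```

Scaling each component's measured profile drag by the ratio of flight to tunnel skin-friction coefficients is the essence of extrapolating tunnel data to full-scale conditions.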

  5. Comparison of three-dimensional fluorescence analysis methods for predicting formation of trihalomethanes and haloacetic acids.

    PubMed

    Peleato, Nicolás M; Andrews, Robert C

    2015-01-01

This work investigated the application of several fluorescence excitation-emission matrix analysis methods as natural organic matter (NOM) indicators for use in predicting the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). Waters from four different sources (two rivers and two lakes) were subjected to jar testing followed by 24-h disinfection by-product formation tests using chlorine. NOM was quantified using three common measures: dissolved organic carbon, ultraviolet absorbance at 254 nm, and specific ultraviolet absorbance as well as by principal component analysis, peak picking, and parallel factor analysis of fluorescence spectra. Based on multi-linear modeling of THMs and HAAs, principal component (PC) scores resulted in the lowest mean squared prediction error of cross-folded test sets (THMs: 43.7 (μg/L)(2), HAAs: 233.3 (μg/L)(2)). Inclusion of principal components representative of protein-like material significantly decreased prediction error for both THMs and HAAs. Parallel factor analysis did not identify a protein-like component and resulted in prediction errors similar to traditional NOM surrogates as well as fluorescence peak picking. These results support the value of fluorescence excitation-emission matrix-principal component analysis as a suitable NOM indicator in predicting the formation of THMs and HAAs for the water sources studied. Copyright © 2014. Published by Elsevier B.V.
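A minimal sketch of the winning pipeline, PC scores feeding a multi-linear model, on synthetic stand-in spectra; scikit-learn is assumed, and the study's preprocessing and cross-folded error estimation are omitted:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
# synthetic stand-in for vectorized excitation-emission matrices:
# n water samples x m fluorescence intensities driven by two latent factors
n, m = 80, 200
latent = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, m))
X = latent @ loadings + 0.1 * rng.normal(size=(n, m))
thm = 40.0 + 12.0 * latent[:, 0] + 5.0 * latent[:, 1] + rng.normal(size=n)

# PCA compresses the spectra to scores; linear regression predicts THMs
model = make_pipeline(PCA(n_components=2), LinearRegression())
model.fit(X, thm)
r2 = model.score(X, thm)
```

In practice the number of retained components and the prediction error would be chosen and reported on held-out folds, as in the study.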

  6. Testing for intracycle determinism in pseudoperiodic time series.

    PubMed

    Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A

    2008-06-01

A determinism test is proposed based on the well-known surrogate data method. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in pseudoperiodic time series for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series, and the results show the applicability of the proposed test.
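A compact version of a generic surrogate-data determinism test, using phase-randomized surrogates and a time-reversal-asymmetry statistic on a noisy sawtooth as a stand-in pseudoperiodic signal; the paper's preprocessing step and choice of statistic may differ:

```python
import numpy as np

rng = np.random.default_rng(5)

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but random Fourier
    phases, destroying any intracycle (nonlinear) determinism."""
    f = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=f.size)
    phases[0] = 0.0                      # keep the mean component real
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=x.size)

def trev(x):
    """Time-reversal asymmetry statistic; near zero for linear stochastic data."""
    d = np.diff(x)
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

# pseudoperiodic test signal: noisy sawtooth (asymmetric within each cycle)
t = np.arange(2000)
x = (t % 20) / 20.0 + 0.05 * rng.normal(size=t.size)

stat = abs(trev(x))
surr_stats = [abs(trev(phase_randomized_surrogate(x, rng))) for _ in range(99)]
rank = sum(s >= stat for s in surr_stats)  # small rank -> reject the null
```

If few or no surrogates beat the observed statistic, the null hypothesis of a linear stochastic process is rejected, which is taken as evidence of intracycle determinism.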

  7. Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components.

    PubMed

    Rahman, Mohd Nizam Ab; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A; Mahmood, Wan Mohd Faizal Wan

    2014-12-02

The significant increase in metal costs has forced the electronics industry to provide new materials and methods to reduce costs, while maintaining customers' high-quality expectations. This paper considers the problem of most electronic industries in reducing costly materials, by introducing a solder paste with alloy composition tin 98.3%, silver 0.3%, and copper 0.7%, used for the construction of the surface mount fine-pitch component on a Printed Wiring Board (PWB). The reliability of the solder joint between electronic components and PWB is evaluated through the dynamic characteristic test, thermal shock test, and Taguchi method after the printing process. After experimenting with the dynamic characteristic test and thermal shock test on 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine that affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The result shows that the most significant parameter for solder volume is squeegee pressure (2.0 mm) and, for solder height, the table speed of the SP machine (2.5 mm/s).
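Taguchi analyses rank parameter settings by signal-to-noise (S/N) ratios; a small sketch with the standard formulas and hypothetical solder-volume readings, since the study's orthogonal array and response tables are not reproduced here:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) when larger responses are better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_nominal_is_best(y):
    """Taguchi S/N ratio (dB) when the response should hit a nominal target."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# hypothetical normalized solder-volume readings at two squeegee pressures
low_pressure = [0.92, 0.95, 0.90, 0.93]
high_pressure = [0.99, 1.01, 1.00, 0.98]
```

The setting with the higher S/N ratio is preferred; in a full Taguchi study these ratios are averaged over the orthogonal-array runs for each level of each control factor.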

  8. Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components

    PubMed Central

    Rahman, Mohd Nizam Ab.; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A.; Mahmood, Wan Mohd Faizal Wan

    2014-01-01

The significant increase in metal costs has forced the electronics industry to provide new materials and methods to reduce costs, while maintaining customers’ high-quality expectations. This paper considers the problem of most electronic industries in reducing costly materials, by introducing a solder paste with alloy composition tin 98.3%, silver 0.3%, and copper 0.7%, used for the construction of the surface mount fine-pitch component on a Printed Wiring Board (PWB). The reliability of the solder joint between electronic components and PWB is evaluated through the dynamic characteristic test, thermal shock test, and Taguchi method after the printing process. After experimenting with the dynamic characteristic test and thermal shock test on 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine that affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The result shows that the most significant parameter for solder volume is squeegee pressure (2.0 mm) and, for solder height, the table speed of the SP machine (2.5 mm/s). PMID:28788270

  9. Development of a statistically proven injection molding method for reaction bonded silicon nitride, sintering reaction bonded silicon nitride, and sintered silicon nitride

    NASA Astrophysics Data System (ADS)

    Steiner, Matthias

    A statistically proven, series injection molding technique for ceramic components was developed for the construction of engines and gas turbines. The flow behavior of silicon injection-molding materials was characterized and improved. Hot-isostatic-pressing reaction bonded silicon nitride (HIPRBSN) was developed, along with a nondestructive component evaluation method. An injection molding line for the HIPRBSN engine components (precombustion chamber, flame spreader, and valve guide) was developed; this line allows the production of small series for engine tests.

  10. Spatial contrast sensitivity - Effects of age, test-retest, and psychophysical method

    NASA Technical Reports Server (NTRS)

    Higgins, Kent E.; Jaffe, Myles J.; Caruso, Rafael C.; Demonasterio, Francisco M.

    1988-01-01

    Two different psychophysical methods were used to test the spatial contrast sensitivity in normal subjects from five age groups. The method of adjustment showed a decline in sensitivity with increasing age at all spatial frequencies, while the forced-choice procedure showed an age-related decline predominantly at high spatial frequencies. It is suggested that a neural component is responsible for this decline.

  11. A new quasi-relativistic approach for density functional theory based on the normalized elimination of the small component

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2002-01-01

    A recently developed variationally stable quasi-relativistic method, which is based on the low-order approximation to the method of normalized elimination of the small component, was incorporated into density functional theory (DFT). The new method was tested for diatomic molecules involving Ag, Cd, Au, and Hg by calculating equilibrium bond lengths, vibrational frequencies, and dissociation energies. The method is easy to implement into standard quantum chemical programs and leads to accurate results for the benchmark systems studied.

  12. Primer Stepper Motor Nomenclature, Definition, Performance and Recommended Test Methods

    NASA Technical Reports Server (NTRS)

    Starin, Scott; Shea, Cutter

    2014-01-01

    There has been an unfortunate lack of standardization of the terms and components of stepper motor performance, requirements definition, application of torque margin, and implementation of test methods. This paper addresses these inconsistencies and discusses in detail the implications of performance parameters, effects of load inertia, control electronics, operational resonances, and recommended test methods. Additionally, this paper recommends parameters for defining and specifying stepper motor actuators. A useful description of terms as well as consolidated equations and recommended requirements is included.

  13. Testing methods and techniques: Environmental testing: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Various devices and techniques are described for testing hardware and components in four special environments: low temperature, high temperature, high pressure, and vibration. Items ranging from an automatic calibrator for pressure transducers to a fixture for testing the susceptibility of materials to ignition by electric spark are included.

  14. 78 FR 77646 - Proposed Information Collection; Comment Request; 2014 Census Site Test

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ... pre-notification containing instructions about how to respond to the test online. Some households will... Adaptive Design Strategies portion will test a method of managing data collection by dynamically adapting... methodology. The objectives of this component of the test are to: Design and develop software solutions...

  15. Major advances in testing of dairy products: milk component and dairy product attribute testing.

    PubMed

    Barbano, D M; Lynch, J M

    2006-04-01

    Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. Therefore, there is high incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance combined with significant improvements in both the chemical and instrumental methods have allowed achievement of improved analytical performance for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.

  16. [Establishment of simultaneous measurement method of 8 salivary components using urinary test paper and clinical evaluation of oral environment].

    PubMed

    Yuuki, Kenji; Tsukasaki, Hiroaki; Kawawa, Tadaharu; Shiba, Akihiko; Shiba, Kiyoko

    2008-07-01

    Clinical findings were compared with glucose, protein, albumin, bilirubin, creatinine, pH, occult blood, ketone body, nitrite, and white blood cells contained in whole saliva to investigate the components that most markedly reflect the periodontal condition. The subjects were staff of the Prosthodontics Department, Showa University, and patients who visited for dental treatment (57 subjects in total). Whole saliva was collected three times: first, after the subject gargled with 1.5 ml of distilled water for 15 seconds, the sample was collected by spitting into a paper cup; second, saliva was collected by the same method; and third, saliva was collected after chewing paraffin gum for 60 seconds, again by spitting into a paper cup. After sampling, 8 μl of each saliva sample was dripped onto reagent sticks for the 10 items of urinary test paper, and the reflectance was measured using a specific reflectometer. In the periodontal tissue evaluation, the degree of alveolar bone resorption, probing value, tooth mobility, and the presence or absence of lesions in the root furcation were examined and classified into 4 ranks. The mean values in each periodontal disease rank and the correlation between the periodontal disease ranks and the components were statistically analyzed. Bilirubin and ketone body were not measurable. The concentrations of the 8 measurable components increased as the periodontal disease rank increased. Regarding the correlation between the periodontal disease ranks and the components, high correlations were noted for protein, albumin, creatinine, pH, and white blood cells. The simultaneous measurement method of 8 salivary components using test paper may be very useful for the diagnosis of periodontal disease of abutment teeth.

  17. Clean Water Act Analytical Methods

    EPA Pesticide Factsheets

    EPA publishes laboratory analytical methods (test procedures) that are used by industries and municipalities to analyze the chemical, physical and biological components of wastewater and other environmental samples required by the Clean Water Act.

  18. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method considers only the uncertainty contributions of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric, where each layer can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, with the different uncertainty components represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in how uncertainty components are grouped. The variance-based sensitivity analysis is thus improved to investigate the importance of an extended range of uncertainty sources: scenario, model, and other combinations of uncertainty components that represent key model system processes (e.g., the groundwater recharge process or the reactive transport process). For test and demonstration purposes, the developed methodology was applied to a real-world case of groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty source formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and for decision-makers formulating policies and strategies.
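The variance-based (Sobol-type) sensitivity measure underlying this framework is the fraction of output variance explained by one input, S_i = Var(E[y|x_i]) / Var(y). A minimal brute-force sketch for a two-input toy model (double-loop Monte Carlo; the model and sample sizes are illustrative assumptions, not from the paper):

```python
import random

def sobol_first_order(model, i, n_outer=300, n_inner=300, seed=7):
    """Brute-force estimate of the first-order index S_i = Var(E[y|x_i]) / Var(y)
    for a model with two independent U(0,1) inputs (double-loop Monte Carlo)."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                      # fix the i-th input
        ys = []
        for _ in range(n_inner):
            xo = rng.random()                  # resample the other input
            x = (xi, xo) if i == 0 else (xo, xi)
            ys.append(model(*x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    mean_y = sum(all_y) / len(all_y)
    var_y = sum((y - mean_y) ** 2 for y in all_y) / (len(all_y) - 1)
    mean_c = sum(cond_means) / n_outer
    var_c = sum((m - mean_c) ** 2 for m in cond_means) / (n_outer - 1)
    return var_c / var_y

# Toy model: x1 carries four times the variance of x2,
# so analytically S_1 = 0.8 and S_2 = 0.2.
model = lambda x1, x2: 2.0 * x1 + 1.0 * x2
```

Production estimators (e.g. Saltelli's scheme) avoid the quadratic cost of this double loop, but the definition is the same; grouping inputs before computing S_i is what the paper's Bayesian-network layering makes flexible.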

  19. Reduced-gravity environment hardware demonstrations of a prototype miniaturized flow cytometer and companion microfluidic mixing technology.

    PubMed

    Phipps, William S; Yin, Zhizhong; Bae, Candice; Sharpe, Julia Z; Bishara, Andrew M; Nelson, Emily S; Weaver, Aaron S; Brown, Daniel; McKay, Terri L; Griffin, DeVon; Chan, Eugene Y

    2014-11-13

    Until recently, astronaut blood samples were collected in-flight, transported to earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition towards space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight. Countless POC devices have been developed to mimic laboratory scale counterparts, but most have narrow applications and few have demonstrable use in an in-flight, reduced-gravity environment. In fact, demonstrations of biomedical diagnostics in reduced gravity are limited altogether, making component choice and certain logistical challenges difficult to approach when seeking to test new technology. To help fill the void, we are presenting a modular method for the construction and operation of a prototype blood diagnostic device and its associated parabolic flight test rig that meet the standards for flight-testing onboard a parabolic flight, reduced-gravity aircraft. The method first focuses on rig assembly for in-flight, reduced-gravity testing of a flow cytometer and a companion microfluidic mixing chip. Components are adaptable to other designs and some custom components, such as a microvolume sample loader and the micromixer may be of particular interest. The method then shifts focus to flight preparation, by offering guidelines and suggestions to prepare for a successful flight test with regard to user training, development of a standard operating procedure (SOP), and other issues. Finally, in-flight experimental procedures specific to our demonstrations are described.

  20. Reduced-gravity Environment Hardware Demonstrations of a Prototype Miniaturized Flow Cytometer and Companion Microfluidic Mixing Technology

    PubMed Central

    Bae, Candice; Sharpe, Julia Z.; Bishara, Andrew M.; Nelson, Emily S.; Weaver, Aaron S.; Brown, Daniel; McKay, Terri L.; Griffin, DeVon; Chan, Eugene Y.

    2014-01-01

    Until recently, astronaut blood samples were collected in-flight, transported to earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition towards space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight. Countless POC devices have been developed to mimic laboratory scale counterparts, but most have narrow applications and few have demonstrable use in an in-flight, reduced-gravity environment. In fact, demonstrations of biomedical diagnostics in reduced gravity are limited altogether, making component choice and certain logistical challenges difficult to approach when seeking to test new technology. To help fill the void, we are presenting a modular method for the construction and operation of a prototype blood diagnostic device and its associated parabolic flight test rig that meet the standards for flight-testing onboard a parabolic flight, reduced-gravity aircraft. The method first focuses on rig assembly for in-flight, reduced-gravity testing of a flow cytometer and a companion microfluidic mixing chip. Components are adaptable to other designs and some custom components, such as a microvolume sample loader and the micromixer may be of particular interest. The method then shifts focus to flight preparation, by offering guidelines and suggestions to prepare for a successful flight test with regard to user training, development of a standard operating procedure (SOP), and other issues. Finally, in-flight experimental procedures specific to our demonstrations are described. PMID:25490614

  1. Construction of a system for single-cell transgene induction in Caenorhabditis elegans using a pulsed infrared laser

    PubMed Central

    Churgin, Matthew A.; He, Liping; Murray, John I.; Fang-Yen, Christopher

    2014-01-01

    The spatial and temporal control of transgene expression is an important tool in C. elegans biology. We previously described a method for evoking gene expression in arbitrary cells by using a focused pulsed infrared laser to induce a heat shock response (Churgin et al 2013). Here we describe detailed methods for building and testing a system for performing single-cell heat shock. Steps include setting up the laser and associated components, coupling the laser beam to a microscope, and testing heat shock protocols. All steps can be carried out using readily available off-the-shelf components. PMID:24835576

  2. Environmental test planning, selection and standardization aids available

    NASA Technical Reports Server (NTRS)

    Copeland, E. H.; Foley, J. T.

    1968-01-01

    Requirements for instrumentation, equipment, and methods to be used in conducting environmental tests on components, intended for use by a wide variety of technical personnel of different educational backgrounds, experience, and interests, are announced.

  3. Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
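The root-sum-square combination named above can be sketched in a few lines; the individual contribution values below are hypothetical stand-ins, not those of the cited analysis:

```python
import math

def rss_uncertainty(components):
    """Combine independent (uncorrelated) uncertainty contributions by
    root-sum-square, per standard propagation-of-uncertainty practice."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical relative (1-sigma) contributions for a leak-rate measurement:
# calibration standard, K-factor, resolution, repeatability, hysteresis, drift
contributions = [0.020, 0.015, 0.005, 0.030, 0.010, 0.012]
total = rss_uncertainty(contributions)
```

Note how the combined value always exceeds the largest single contribution but stays below the arithmetic sum, which is why reporting instrument resolution alone understates the total uncertainty.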

  4. The development of pyro shock test requirements for Viking Lander Capsule components

    NASA Technical Reports Server (NTRS)

    Barrett, S.

    1975-01-01

    The procedure used to derive component-level pyro shock specifications for the Viking Lander Capsule (VLC) is described. Effects of shock path distance and mechanical joints between the device and the point at which the environment is to be estimated are accounted for in the method. The validity of the prediction technique was verified by a series of shock tests on a full-scale structural model of the lander body.

  5. The development and testing of the thermal break divertor monoblock target design delivering 20 MW m-2 heat load capability

    NASA Astrophysics Data System (ADS)

    Fursdon, M.; Barrett, T.; Domptail, F.; Evans, Ll M.; Luzginova, N.; Greuner, N. H.; You, J.-H.; Li, M.; Richou, M.; Gallay, F.; Visca, E.

    2017-12-01

    The design and development of a novel plasma facing component (for fusion power plants) is described. The component uses the existing ‘monoblock’ construction, which consists of a tungsten ‘block’ joined via a copper interlayer to a CuCrZr cooling pipe that passes through it. In the new concept, the interlayer stiffness and conductivity properties are tuned so that stress in the principal structural element of the component (the cooling pipe) is reduced. Following initial trials with off-the-shelf materials, the concept was realized by machining features into an otherwise solid copper interlayer. The shape and distribution of the features were tuned by finite element analyses subject to the design rules of the ITER structural design criteria for in-vessel components (SDC-IC). Proof-of-concept mock-ups were manufactured using a two-stage brazing process verified by tomography and micrographic inspection. Full assemblies were inspected using ultrasound and thermographic (SATIR) test methods at ENEA and CEA, respectively. High heat flux tests in IPP’s GLADIS facility showed that 200 cycles at 20 MW m-2 and five cycles at 25 MW m-2 could be sustained without apparent component damage. Further testing and component development is planned.

  6. [Analytic methods for seed models with genotype x environment interactions].

    PubMed

    Zhu, J

    1996-01-01

    Genetic models with genotype effects (G) and genotype x environment interaction effects (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into the seed direct genetic effect (G0), the cytoplasm genetic effect (C), and the maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can likewise be partitioned into the direct genetic by environment interaction effect (G0E), the cytoplasm genetic by environment interaction effect (CE), and the maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components, and GmE into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2, and backcrosses. A set of parents together with their reciprocal F1 and F2 seeds is sufficient for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components, and it also yields unbiased estimates of covariance components between two traits. Random genetic effects in the seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating sampling variances of the estimated variance and covariance components and of the predicted genetic effects, which can then be used in t-tests of the parameters. Unbiasedness and efficiency in estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.
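The delete-one jackknife suggested above for obtaining sampling variances can be illustrated generically; the statistic (a sample mean) and the data here are stand-ins, not the seed-model quantities of the paper:

```python
def jackknife(estimator, data):
    """Delete-one jackknife: return (point estimate, jackknife standard error)
    for any statistic computed on a sample."""
    n = len(data)
    theta_hat = estimator(data)
    # Recompute the statistic n times, each time leaving one observation out
    leave_one_out = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    theta_bar = sum(leave_one_out) / n
    var = (n - 1) / n * sum((t - theta_bar) ** 2 for t in leave_one_out)
    return theta_hat, var ** 0.5

# Stand-in statistic and data: the jackknife SE of a sample mean
mean = lambda xs: sum(xs) / len(xs)
est, se = jackknife(mean, [4.1, 3.9, 4.3, 4.0, 4.2])
```

The resulting standard error feeds directly into a t-test of the form t = estimate / SE, which is how the paper proposes testing estimated variance components and predicted effects.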

  7. Additive Manufacturing Thermal Performance Testing of Single Channel GRCop-84 SLM Components

    NASA Technical Reports Server (NTRS)

    Garcia, Chance P.; Cross, Matthew

    2014-01-01

    The surface finish found on components manufactured by selective laser melting (SLM) is rougher (0.0006 - 0.013 inches) than that of parts made using traditional fabrication methods, and internal features and passages built into SLM components do not readily allow for roughness-reduction processes. Engineering literature suggests, however, that surface roughness can enhance thermal performance within a pressure drop regime. To further investigate the thermal performance of SLM-fabricated pieces, several GRCop-84 SLM single-channel components were tested using a thermal conduction rig at MSFC. A 20 kW power source running at 25% duty cycle and 25% power level applied heat to each component while water flow rates were varied between 2.1 and 6.2 gallons/min (GPM) at a supply pressure of 550 to 700 psi. Each test was allowed to reach quasi-steady-state conditions, where pressure, temperature, and thermal imaging data were recorded. Presented in this work are the heat transfer responses compared to a traditionally machined OFHC copper test section. An analytical thermal model was constructed to anchor theoretical models with the empirical data.

  8. University of Virginia infrared sensor experiment (UVIRSE)

    NASA Astrophysics Data System (ADS)

    Dawson, Jeffrey R.; Bell, Meredith A.; Powers, Michael C.; Laufer, Gabriel

    2001-03-01

    A suite consisting of an infrared sensor, optical sensors, and a video camera is being prepared for launch by a group of students at the University of Virginia (UVA) and James Madison University (JMU). The sensors are a first step in the development of a Gas Filter Correlation Radiometer (GFCR) that will detect stratospheric methane (CH4) when flown on sub-orbital sounding rockets and/or from the hypersonic X-34 reusable launch vehicle. The current payload has a threefold purpose: (a) to provide space heritage to a thermoelectrically cooled mercury cadmium telluride sensor, (b) to demonstrate methods for correlating the IR reading of the sensor with ground topography, and (c) to flight test all the payload components that will become part of the sub-orbital methane GFCR sensor. Once completed, the system will serve as host to other undergraduate research design projects that require space environment, microgravity, or remote sensing capabilities. The payload components have been received and tested, and the supporting structure has been designed and built. Data from previous rocket flights were used to analyze the environmental strains placed on the experiment and its components. Payload components are being integrated and tested as a system to ensure functionality in the flight environment; this includes thermal testing of individual components, vibration testing of individual components and of the overall payload, and load testing of the external structure. Launch is scheduled for Spring 2001.

  9. A Full-Envelope Air Data Calibration and Three-Dimensional Wind Estimation Method Using Global Output-Error Optimization and Flight-Test Techniques

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2012-01-01

    A novel, efficient air data calibration method is proposed for aircraft with limited envelopes. This method uses output-error optimization on three-dimensional inertial velocities to estimate calibration and wind parameters. Calibration parameters are based on assumed calibration models for static pressure, angle of attack, and flank angle. Estimated wind parameters are the north, east, and down components. The only assumptions needed for this method are that the inertial velocities and Euler angles are accurate, the calibration models are correct, and that the steady-state component of wind is constant throughout the maneuver. A two-minute maneuver was designed to excite the aircraft over the range of air data calibration parameters and de-correlate the angle-of-attack bias from the vertical component of wind. Simulation of the X-48B (The Boeing Company, Chicago, Illinois) aircraft was used to validate the method, ultimately using data derived from wind-tunnel testing to simulate the un-calibrated air data measurements. Results from the simulation were accurate and robust to turbulence levels comparable to those observed in flight. Future experiments are planned to evaluate the proposed air data calibration in a flight environment.
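The core idea of estimating a constant wind from inertial and air-relative velocities can be sketched in its simplest noise-free form (a toy illustration with made-up velocity samples; the paper's actual output-error method simultaneously estimates the air data calibration parameters):

```python
def estimate_wind(inertial, air_relative):
    """Least-squares estimate of a constant wind vector: with unbiased
    measurements, the residual v_inertial - v_air equals the wind, so its
    mean over the maneuver is the least-squares solution (toy sketch)."""
    n = len(inertial)
    return tuple(
        sum(vi[k] - va[k] for vi, va in zip(inertial, air_relative)) / n
        for k in range(3)
    )

# Hypothetical NED velocity samples (m/s); inertial = air-relative + wind
air = [(100.0, 0.0, 0.0), (0.0, 100.0, 0.0), (70.0, 70.0, -5.0)]
inertial = [(105.0, -2.0, 0.0), (5.0, 98.0, 0.0), (75.0, 68.0, -5.0)]
wind = estimate_wind(inertial, air)
```

This also illustrates why the paper's maneuver design matters: without attitude variation, an angle-of-attack bias and the down component of wind produce identical residuals and cannot be separated.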

  10. Lunar Dust Simulant in Mechanical Component Testing - Paradigm and Practicality

    NASA Technical Reports Server (NTRS)

    Jett, T.; Street, K.; Abel, P.; Richmond, R.

    2008-01-01

    Due to the uniquely harsh lunar surface environment, terrestrial test activities may not adequately represent the abrasive wear by lunar dust likely to be experienced in mechanical systems used in lunar exploration. Testing to identify potential moving-mechanism problems has recently begun within the NASA Engineering and Safety Center Mechanical Systems Lunar Dust Assessment activity, in coordination with the Exploration Technology and Development Program Dust Management Project, and these complementary efforts will be described. Specific concerns about differences between simulant and lunar dust, and procedures for mechanical component testing with lunar simulant, will be considered. In preparing for long-term operations in a dusty lunar environment, the three fundamental approaches to keeping mechanical equipment functioning are dust avoidance, dust removal, and dust tolerance, with some combination of the three likely to be found in most engineering designs. Methods to exclude dust from contact with mechanical components constitute mitigation by dust avoidance, so testing seals for dust-exclusion efficacy as a function of particle size provides useful information for mechanism design. The impact of dust particles smaller than a micron on lunar mechanical components is not well documented; therefore, creating a standardized lunar dust simulant in the particulate size range of ca. 0.1 to 1.0 micrometer is useful for testing effects on mechanical components such as bearings, gears, seals, bushings, and other moving mechanical assemblies. In approaching actual wear testing of mechanical components, it is beneficial to first establish relative wear rates caused by dust on commonly used mechanical component materials. The wear mode due to dust within mechanical components, such as abrasion caused by dust in greases, needs to be considered, as well as the effects of vacuum, the lunar thermal cycle, and electrostatics on wear rate.

  11. Apollo experience report environmental acceptance testing

    NASA Technical Reports Server (NTRS)

    Laubach, C. H. M.

    1976-01-01

    Environmental acceptance testing was used extensively to screen selected spacecraft hardware for workmanship defects and manufacturing flaws. The minimum acceptance levels and durations and methods for their establishment are described. Component selection and test monitoring, as well as test implementation requirements, are included. Apollo spacecraft environmental acceptance test results are summarized, and recommendations for future programs are presented.

  12. Instrumentation used for hydraulic testing of potential water-bearing formations at the Waste Isolation Pilot Plant site in southeastern New Mexico

    USGS Publications Warehouse

    Basler, J.A.

    1983-01-01

    Requirements for testing hydrologic test wells at the proposed Waste Isolation Pilot Plant near Carlsbad, New Mexico, necessitated the use of inflatable formation packers and pressure transducers. Observations during drilling and initial development indicated small formation yields which would require considerable test times by conventional open-casing methods. A pressure-monitoring system was assembled for performance evaluation utilizing commercially available components. Formation pressures were monitored with a down-hole strain-gage transducer. An inflatable packer equipped with a 1/4-inch-diameter steel tube extending through the inflation element permitted sensing formation pressures in isolated test zones. Surface components of the monitoring system provided AC transducer excitation, signal conditioning for recording directly in engineering units, and both analog and digital recording. Continuous surface monitoring of formation pressures provided a means of determining test status and projecting completion times during any phase of testing. Maximum portability was afforded by battery operation with all surface components mounted in a small self-contained trailer. (USGS)

  13. Field methods to measure surface displacement and strain with the Video Image Correlation method

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.

    1994-01-01

    The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.

  14. Apparatus For Tests Of Percussion Primers

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.; Bailey, James W.; Schimmel, Morry L.

    1991-01-01

    Test apparatus and method developed to measure ignition capability of percussion primers. Closely simulates actual conditions and interfaces encountered in such applications as in munitions and rocket motors. Ignitability-testing apparatus is small bomb instrumented with pressure transducers. Sizes, shapes, and positions of bomb components and materials under test selected to obtain quantitative data on ignition.

  15. Nonlinear seismic analysis of a reactor structure impact between core components

    NASA Technical Reports Server (NTRS)

    Hill, R. G.

    1975-01-01

    The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF, which includes small clearances between core components, is used as a "driver" for a fine-mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.

  16. Isotropic source terms of San Jacinto fault zone earthquakes based on waveform inversions with a generalized CAP method

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.; Zhu, L.

    2015-02-01

    We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized `Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ˜0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.
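The bootstrap-resampling step used to quantify station variability can be sketched generically (this is the standard percentile bootstrap, not the gCAP code; the sample values are invented for illustration):

```python
import random
import statistics

def bootstrap_ci(samples, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean of
    per-station estimates (e.g. isotropic-component strength)."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

station_iso = [0.04, 0.06, 0.05, 0.07, 0.05, 0.06, 0.04, 0.05]
lo, hi = bootstrap_ci(station_iso)
print(lo > 0.0)  # an interval excluding zero suggests a significant ISO term
```

An interval whose lower bound stays above zero is the kind of evidence the abstract describes for a statistically significant explosive isotropic component.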

  17. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for developing logic designs together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, along with algorithms and heuristics for minimizing the computation of tests; and (2) a method of logic design for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
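The idea of computing a test for a failure can be illustrated with a brute-force sketch (not Roth's D-algorithm, which prunes this search): a test vector for a stuck-at fault is any input that makes the good and faulty circuits disagree.

```python
from itertools import product

def good_circuit(a, b, c):
    # small example network: out = (a AND b) OR c
    return (a & b) | c

def faulty_circuit(a, b, c):
    # the same network with the AND-gate output stuck-at-0
    return 0 | c

# exhaustive test generation: any input vector that distinguishes the
# good machine from the faulty one is a test for that fault
tests = [v for v in product([0, 1], repeat=3)
         if good_circuit(*v) != faulty_circuit(*v)]
print(tests)  # → [(1, 1, 0)]
```

Only (1, 1, 0) both activates the fault (drives the AND output to 1) and propagates the difference to the output (c = 0), which is exactly the activate-and-propagate structure that algorithmic test generation exploits.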

  18. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
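The first processing step — grouping hit/miss observations into flaw-size classes — can be sketched as follows. This shows only generic binning with an assumed class width; the patented method's logic for choosing the *optimal* class width and case number is not reproduced here.

```python
def pod_by_class(flaw_sizes, hits, class_width):
    """Group hit/miss observations into flaw-size classes of a chosen
    width and return the observed probability of detection per class."""
    classes = {}
    for size, hit in zip(flaw_sizes, hits):
        key = int(size // class_width)
        classes.setdefault(key, []).append(hit)
    return {round(k * class_width, 6): sum(v) / len(v)
            for k, v in sorted(classes.items())}

sizes = [0.1, 0.15, 0.22, 0.27, 0.33, 0.38, 0.45]   # flaw sizes (illustrative)
hits  = [0,   0,    1,    0,    1,    1,    1]      # 1 = detected, 0 = missed
print(pod_by_class(sizes, hits, class_width=0.1))
# → {0.1: 0.0, 0.2: 0.5, 0.3: 1.0, 0.4: 1.0}
```

POD rising with flaw size, as in this toy data, is the expected shape a validation procedure checks against.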

  19. Reliability demonstration test for load-sharing systems with exponential and Weibull components

    PubMed Central

    Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn’t yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics. PMID:29284030
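The MTTF-as-a-sum-of-mean-times idea can be checked with a small Monte Carlo sketch for the simplest case the abstract covers, a two-component load-sharing system with exponential components (the rates and acceleration factor below are illustrative assumptions, not values from the paper):

```python
import random

def simulate_mttf(lam=1.0, lam_accel=2.0, n_runs=200_000, seed=7):
    """Monte Carlo MTTF of a two-component load-sharing system: both
    components start with rate `lam`; after the first failure the
    survivor carries the full load at the higher rate `lam_accel`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t_first = rng.expovariate(2 * lam)     # min of two exp(lam) lifetimes
        t_second = rng.expovariate(lam_accel)  # survivor, memoryless restart
        total += t_first + t_second
    return total / n_runs

est = simulate_mttf()
exact = 1 / (2 * 1.0) + 1 / 2.0   # sum of mean times between successive failures
print(abs(est - exact) < 0.01)    # → True
```

The simulated MTTF matches the closed-form sum 1/(2λ) + 1/λ', which is the decomposition the demonstration target is built on.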

  1. Item response theory and factor analysis as a mean to characterize occurrence of response shift in a longitudinal quality of life study in breast cancer patients

    PubMed Central

    2014-01-01

    Background The occurrence of response shift (RS) in longitudinal health-related quality of life (HRQoL) studies, reflecting patient adaptation to disease, has already been demonstrated. Several methods have been developed to detect the three different types of RS, i.e. 1) recalibration RS, 2) reprioritization RS, and 3) reconceptualization RS. We investigated two complementary methods that characterize the occurrence of RS: factor analysis, comprising Principal Component Analysis (PCA) and Multiple Correspondence Analysis (MCA), and a method of Item Response Theory (IRT). Methods Breast cancer patients (n = 381) completed the EORTC QLQ-C30 and EORTC QLQ-BR23 questionnaires at baseline, immediately following surgery, and three and six months after surgery, according to the “then-test/post-test” design. Recalibration was explored using MCA and an IRT model, the Linear Logistic Model with Relaxed Assumptions (LLRA), together with the then-test method. PCA was used to explore reconceptualization and reprioritization. Results MCA highlighted the main profiles of recalibration: patients with a high HRQoL level report a slightly worse HRQoL level retrospectively, and vice versa. The LLRA model indicated a downward or upward recalibration for each dimension. At six months, the recalibration effect was statistically significant for 11/22 dimensions of the QLQ-C30 and BR23 according to the LLRA model (p ≤ 0.001). Regarding the QLQ-C30, PCA indicated a reprioritization of symptom scales and reconceptualization via an increased correlation between functional scales. Conclusions Our findings demonstrate the usefulness of these analyses in characterizing the occurrence of RS. The MCA and IRT models gave results convergent with the then-test method in characterizing the recalibration component of RS. PCA is an indirect method for investigating the reprioritization and reconceptualization components of RS. PMID:24606836

  2. Fiber Optics at the JLab CLAS12 Detector

    NASA Astrophysics Data System (ADS)

    Kroon, John; Giovanetti, Kevin

    2008-10-01

    The performance of wavelength shifting fibers, WLS, and method of coupling these fibers to extruded polystyrene scintillators are currently under study at James Madison University. These components are two of the main elements for the PCAL, preshower calorimeter, proposed as part of the 12 GeV upgrade for the CLAS detector at Jefferson Laboratory. The WLS fibers have been prepared, optically coupled to scintillator, and tested in order to determine their overall performance as a method of readout. Methods of coupling fiber to scintillator, a description of the test setup, test methods, PCAL readout performance, and fabrication recommendations will be presented.

  3. Kinetics and mechanism of the oxidation process of two-component Fe-Al alloys

    NASA Technical Reports Server (NTRS)

    Przewlocka, H.; Siedlecka, J.

    1982-01-01

    The oxidation process of two-component Fe-Al alloys containing up to 7.2% Al and from 18 to 30% Al was studied. Kinetic measurements were conducted using the isothermal gravimetric method in the ranges of 1073-1223 K and 1073-1373 K for 50 hours. The methods used in studies of the mechanism of oxidation included: X-ray microanalysis, X-ray structural analysis, metallographic analysis and marker tests.
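Isothermal gravimetric data of this kind are commonly reduced by fitting a parabolic rate law, (Δm)² = kp·t. A minimal sketch with invented (synthetic) data, not values from the study:

```python
import numpy as np

# synthetic isothermal gravimetric data: time (h) vs. mass gain per area (mg/cm^2)
t = np.array([5.0, 10.0, 20.0, 30.0, 50.0])
dm = np.array([0.32, 0.45, 0.63, 0.78, 1.00])

# parabolic rate law: (dm)^2 = kp * t, so fit (dm)^2 linearly against t
kp = np.polyfit(t, dm**2, 1)[0]
print(round(kp, 3))  # → 0.02 (mg^2 cm^-4 h^-1 for this synthetic data)
```

A good linear fit of (Δm)² versus t indicates diffusion-controlled oxide growth; systematic deviation would point to a different rate law.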

  4. Dynamic analysis for shuttle design verification

    NASA Technical Reports Server (NTRS)

    Fralich, R. W.; Green, C. E.; Rheinfurth, M. H.

    1972-01-01

    Two approaches that are used for determining the modes and frequencies of space shuttle structures are discussed. The first method, direct numerical analysis, involves finite element mathematical modeling of the space shuttle structure in order to use computer programs for dynamic structural analysis. The second method utilizes modal-coupling techniques with experimental verification, obtained by vibrating only spacecraft components and deducing the modes and frequencies of the complete vehicle from the results of the component tests.
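In the direct numerical approach, modes and frequencies come from the generalized eigenproblem K·v = ω²·M·v of the finite element model. A toy 2-DOF sketch (masses and stiffnesses are illustrative, not a shuttle model):

```python
import numpy as np

# 2-DOF lumped-mass model: M x'' + K x = 0; natural frequencies come
# from the generalized eigenproblem K v = w^2 M v
M = np.diag([2.0, 1.0])
K = np.array([[3.0, -1.0],
              [-1.0, 1.0]])

w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)   # eigenvalues are w^2
freqs_hz = np.sqrt(np.sort(w2.real)) / (2 * np.pi)
print(np.round(freqs_hz, 4))  # → [0.1125 0.2251]
```

Modal-coupling (component mode synthesis) methods assemble the same kind of eigenproblem for the full vehicle from modes measured or computed for each component separately.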

  5. High-frequency ultrasonic methods for determining corrosion layer thickness of hollow metallic components.

    PubMed

    Liu, Hongwei; Zhang, Lei; Liu, Hong Fei; Chen, Shuting; Wang, Shihua; Wong, Zheng Zheng; Yao, Kui

    2018-05-16

    Corrosion of internal cavities is one of the most common problems in hollow metallic components, such as pipes containing corrosive fluids and high-temperature turbines in aircraft. There is a strong demand for non-destructive methods to detect corrosion inside hollow components and determine its extent from the external side. In this work, we present two high-frequency ultrasonic non-destructive testing (NDT) technologies, piezoelectric pulse-echo and laser-ultrasonic methods, for detecting corrosion of Ni superalloy from the opposite side. The determination of corrosion layer thickness below ∼100 µm has been demonstrated by both methods, in comparison with X-CT and SEM. With electron microscopic examination, it is found that once a multilayer corrosion structure has formed over a prolonged corrosion time, the ultrasonic NDT methods can only reliably reveal the outer corrosion layer thickness, because of the acoustic contrast among the multiple layers due to their respective mechanical parameters. A time-frequency signal analysis algorithm is employed to effectively enhance the high-frequency ultrasonic signal contrast for the piezoelectric pulse-echo method. Finally, a blind test on a Ni superalloy turbine blade with internal corrosion is conducted with the high-frequency piezoelectric pulser-receiver method.
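The underlying thickness measurement in pulse-echo ultrasonics is a time-of-flight conversion: the wave travels down and back, so thickness is v·t/2. A minimal sketch (the delay and velocity values are illustrative, not data from the paper):

```python
def layer_thickness(echo_delay_s, velocity_m_s):
    """Pulse-echo thickness: the wave travels down and back through the
    layer, so the one-way thickness is v * t / 2."""
    return velocity_m_s * echo_delay_s / 2.0

# e.g. a 33 ns delay between the surface echo and the layer-interface echo
# at ~6000 m/s longitudinal velocity implies a ~99 um layer
print(round(layer_thickness(33e-9, 6000.0) * 1e6, 1))  # → 99.0 (um)
```

Resolving sub-100 µm layers this way requires very short (high-frequency) pulses, which is why both methods in the paper are high-frequency techniques.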

  6. Distribution of Dengue Virus Types 1 and 4 in Blood Components from Infected Blood Donors from Puerto Rico

    PubMed Central

    Añez, Germán; Heisey, Daniel A. R.; Chancey, Caren; Fares, Rafaelle C. G.; Espina, Luz M.; Souza, Kátia P. R.; Teixeira-Carvalho, Andréa; Krysztof, David E.; Foster, Gregory A.; Stramer, Susan L.; Rios, Maria

    2016-01-01

    Background Dengue is a mosquito-borne viral disease caused by the four dengue viruses (DENV-1 to 4) that can also be transmitted by blood transfusion and organ transplantation. The distribution of DENV in the components of blood from infected donors is poorly understood. Methods We used an in-house TaqMan qRT-PCR assay to test residual samples of plasma, cellular components of whole blood (CCWB), serum and clot specimens from the same collection from blood donors who were DENV-RNA-reactive in a parallel blood safety study. To assess whether DENV RNA detected by TaqMan was associated with infectious virus, DENV infectivity in available samples was determined by culture in mosquito cells. Results DENV RNA was detected by TaqMan in all tested blood components, albeit more consistently in the cellular components; 78.8% of CCWB, 73.3% of clots, 86.7% of sera and 41.8% of plasma samples. DENV-1 was detected in 48 plasma and 97 CCWB samples while DENV-4 was detected in 21 plasma and 31 CCWB samples. In mosquito cell cultures, 29/111 (26.1%) plasma and 32/97 (32.7%) CCWB samples were infectious. A subset of samples from 29 donors was separately analyzed to compare DENV viral loads in the available blood components. DENV viral loads did not differ significantly between components and ranged from 3–8 log10 PCR-detectable units/ml. Conclusions DENV was present in all tested components from most donors, and viral RNA was not preferentially distributed in any of the tested components. Infectious DENV was also present in similar proportions in cultured plasma, clot and CCWB samples, indicating that these components may serve as a resource when sample sizes are limited. However, these results suggest that the sensitivity of the nucleic acid tests (NAT) for these viruses would not be improved by testing whole blood or components other than plasma. PMID:26871560

  7. New preparation method of β″-alumina and application for AMTEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishi, Toshiro; Tsuru, Yasuhiko; Yamamoto, Hirokazu

    1995-12-31

    The Alkali Metal Thermo-Electric Converter (AMTEC) is an energy conversion system that converts heat to electrical energy with high efficiency. The β″-alumina solid electrolyte (BASE) is the most important component in the AMTEC system. In this paper, the relationship among the conduction property, the microstructure and the amount of each chemical component of BASE is studied. Based on an analysis of the chemical reaction for each component, the authors established a new BASE preparation method in place of the conventional one. They also report the AMTEC cell performance using this electrolyte tube, on which a Mo or TiC electrode is deposited by the screen printing method. An electrochemical analysis and a heat cycle test of the AMTEC cell are also presented.

  8. Testing of optical components to assure performance in a high-average-power environment

    NASA Astrophysics Data System (ADS)

    Chow, Robert; Taylor, John R.; Eickelberg, William K.; Primdahl, Keith A.

    1997-11-01

    Evaluation and testing of the optical components used in the atomic vapor laser isotope separation plant is critical for qualification of suppliers, development of new optical multilayer designs and manufacturing processes, and assurance of performance in the production cycle. The range of specifications requires development of specialized test equipment and methods which are not routine or readily available in industry. Specifications are given on material characteristics such as index homogeneity, subsurface damage left after polishing, microscopic surface defects and contamination, coating absorption, and high-average-power laser damage. The approach to testing these performance characteristics and assuring quality throughout the production cycle is described.

  9. Degradation of components in drug formulations: a comparison between HPLC and DSC methods.

    PubMed

    Ceschel, G C; Badiello, R; Ronchi, C; Maffei, P

    2003-08-08

    Information about the stability of drug components and drug formulations is needed to predict the shelf-life of the final products. Studies of the interaction between the drug and the excipients may be carried out by means of accelerated stability tests followed by analytical determination of the active principle (HPLC and other methods) and by means of differential scanning calorimetry (DSC). This research focused on the physical-chemical characterisation of acetyl salicylic acid (ASA) using the DSC method, in order to evaluate its compatibility with some of the most used excipients. With the DSC method it was possible to show the incompatibility of magnesium stearate with ASA; the HPLC data confirm the reduction of ASA concentration in the presence of magnesium stearate. With the other excipients the characteristic endotherms of the drug were always present, and no or little degradation was observed in the accelerated stability tests. Therefore, the results with the DSC method are comparable and in good agreement with the results obtained with other methods.

  10. Nondestructive Examination Guidance for Dry Storage Casks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Ryan M.; Suffield, Sarah R.; Hirt, Evelyn H.

    In this report, an assessment of NDE methods is performed for NUHOMS 80 and 102 dry storage system components in an effort to assist NRC staff with review of license renewal applications. The report considers concrete components associated with the horizontal storage modules (HSMs) as well as metal components in the HSMs. In addition, the report considers the dry shielded canister (DSC). Scope is limited to NDE methods that are considered most likely to be proposed by licensees. The document ACI 349.3R, Evaluation of Existing Nuclear Safety-Related Concrete Structures, is used as the basis for the majority of the NDE methods summarized for inspecting HSM concrete components. Two other documents, ACI 228.2R, Nondestructive Test Methods for Evaluation of Concrete in Structures, and ORNL/TM-2007/191, Inspection of Nuclear Power Plant Structure--Overview of Methods and Related Application, supplement the list with additional technologies that are considered applicable. For the canister, the ASME B&PV Code is used as the basis for the NDE methods considered, along with efforts currently funded through industry (Electric Power Research Institute [EPRI]) and the U.S. Department of Energy (DOE) to develop inspection technologies for canisters. The report provides a description of HSM and DSC components with a focus on those aspects of design considered relevant to inspection. This is followed by a brief description of other concrete structural components, such as bridge decks, dams, and reactor containment structures, to facilitate comparison between these structures and HSM concrete components and to infer which NDE methods may work best for certain HSM concrete components based on experience with these other structures. Brief overviews of the NDE methods are provided with a focus on issues and influencing factors that may impact implementation or performance. An analysis is performed to determine which NDE methods are most applicable to specific components.

  11. E-learning platform for automated testing of electronic circuits using signature analysis method

    NASA Astrophysics Data System (ADS)

    Gherghina, Cǎtǎlina; Bacivarov, Angelica; Bacivarov, Ioan C.; Petricǎ, Gabriel

    2016-12-01

    Dependability of electronic circuits can be ensured only through testing of circuit modules, done by generating test vectors and applying them to the circuit. Testability should be viewed as a concerted effort to ensure maximum efficiency throughout the product life cycle, from the conception and design stage, through production, to repairs during operation. This paper presents the platform developed by the authors for training in testability in electronics in general, and in using the signature analysis method in particular. The platform highlights the two approaches in the field, namely the analog and digital signatures of circuits. As part of this e-learning platform, a database of signatures of different electronic components has been developed to spotlight different techniques for fault detection and, building on these, self-repairing techniques for systems built from such components. An approach for realizing self-testing circuits based on the MATLAB environment and using the signature analysis method is proposed. The paper also analyses the benefits of the signature analysis method and simulates signature analyzer performance based on the use of pseudo-random sequences.
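The essence of signature analysis is compacting a long response stream into a short register value with a linear feedback shift register (LFSR), so that a faulty response almost certainly produces a different signature. A minimal sketch (the tap set and bit stream are illustrative, not from the platform described above):

```python
def signature(bits, taps=(16, 14, 13, 11), width=16):
    """Serial-input signature analyzer: compact a bit stream into a
    CRC-like register using a 16-bit LFSR with an illustrative tap set."""
    reg = 0
    for b in bits:
        fb = b
        for t in taps:
            fb ^= (reg >> (t - 1)) & 1      # XOR feedback from tapped bits
        reg = ((reg << 1) | fb) & ((1 << width) - 1)
    return reg

good = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]   # fault-free circuit response
faulty = good.copy()
faulty[5] ^= 1                                # single stuck-bit fault
print(signature(good) != signature(faulty))   # → True
```

Because the LFSR update is linear and invertible over GF(2), any single-bit error in the stream is guaranteed to change the signature; only longer error patterns can alias.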

  12. The opportunity of silicate product manufacturing with simultaneous pig iron reduction from slag technogenic formations

    NASA Astrophysics Data System (ADS)

    Sheshukov, O. Yu.; Lobanov, D. A.; Mikheenkov, M. A.; Nekrasov, I. V.; Egiazaryan, D. K.

    2017-09-01

    There are two main kinds of slag in the modern steelmaking industry: electric arc furnace slag (EAF slag) and ladle furnace slag (LF slag). All known slag processing schemes provide for reduction of the iron-containing component while the silicate component stays unprocessed; conversely, the silicate processing schemes do not utilize the iron-containing component. Neither approach solves the problem of total slag utilization. The aim of this work is to investigate the possibility of obtaining a silicate product with simultaneous pig iron reduction from EAF and LF slags. The tests are designed by the method of simplex-lattice design. The test samples are heated and melted under reducing conditions, slowly cooled and then analyzed by XRD methods. The experimental results confirm the possibility: Portland clinker and pig iron can be produced simultaneously on the basis of these slags with a limestone addition.
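A simplex-lattice design, as used above, places mixture experiments at all compositions whose proportions are multiples of 1/m and sum to one. Generating the standard {q, m} lattice points is straightforward (a generic sketch, not the authors' design matrix):

```python
from fractions import Fraction
from itertools import product

def simplex_lattice(q, m):
    """All q-component {q, m} simplex-lattice mixture points: each
    proportion is i/m and the proportions sum to 1."""
    pts = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            pts.append(tuple(Fraction(i, m) for i in combo))
    return pts

pts = simplex_lattice(q=3, m=2)
print(len(pts))  # → 6 (three vertices plus three edge midpoints)
```

For three mixture components and m = 2 this yields the classic six-point design: the pure components and the 50/50 binary blends.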

  13. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use the known time series for all of the components of the dynamical equations to estimate the parameters of a single component at a time, instead of estimating all of the parameters in all of the components simultaneously. Thus, all of the parameters can be estimated stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
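The EA-based estimation idea — propose a parameter, simulate the system, score against the observed series, and keep improvements — can be sketched minimally. For clarity this uses a simple (1+1) evolution strategy on a smooth one-parameter exponential-decay model rather than the chaotic Rössler system; it illustrates the fit-by-evolution loop, not the authors' scheme.

```python
import math
import random

def decay_series(lam, n=30, dt=0.1):
    """Sampled solution of x' = -lam * x with x(0) = 1."""
    return [math.exp(-lam * k * dt) for k in range(n)]

def estimate_lam(observed, generations=400, seed=3):
    """(1+1) evolutionary search: mutate the candidate parameter and keep
    it only if the squared error against the observed series improves."""
    rng = random.Random(seed)
    def loss(p):
        return sum((a - b) ** 2 for a, b in zip(decay_series(p), observed))
    best = rng.uniform(0.1, 5.0)
    best_err = loss(best)
    sigma = 0.5
    for _ in range(generations):
        cand = max(0.0, best + rng.gauss(0.0, sigma))
        err = loss(cand)
        if err < best_err:
            best, best_err = cand, err
            sigma *= 1.1                    # widen after a success
        else:
            sigma = max(sigma * 0.97, 1e-3)  # shrink the search otherwise
    return best

true_lam = 1.8
est = estimate_lam(decay_series(true_lam))
print(abs(est - true_lam) < 0.05)
```

The staged scheme in the paper applies this kind of search to one component's parameters at a time, which keeps each search low-dimensional.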

  14. Wear resistance of ductile irons

    NASA Astrophysics Data System (ADS)

    Lerner, Y. S.

    1994-06-01

    This study was undertaken to evaluate the wear resistance of different grades of ductile iron as alternatives to high-tensile-strength alloyed and inoculated gray irons and bronzes for machine-tool and high-pressure hydraulic components. Special test methods were employed to simulate typical conditions of reciprocating sliding wear with and without abrasive-contaminated lubricant for machine and press guideways. Quantitative relationships were established among wear rate, microstructure and micro-hardness of structural constituents, and nodule size of ductile iron. The frictional wear resistance of ductile iron as a bearing material was tested with hardened steel shafts using standard test techniques under continuous rotating movement with lubricant. Lubricated sliding wear tests on specimens and components for hydraulic equipment and apparatus were carried out on a special rig with reciprocating motion, simulating the working conditions in a piston/cylinder unit in a pressure range from 5 to 32 MPa. Rig and field tests on machine-tool components and units and on hydraulic parts have confirmed the test data.

  15. UGV acceptance testing

    NASA Astrophysics Data System (ADS)

    Kramer, Jeffrey A.; Murphy, Robin R.

    2006-05-01

    With over 100 models of unmanned vehicles now available for military and civilian safety, security or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. Also, the structured testing environment showed that sensor systems can be used to predict errors and changes in performance, as well as uncovering unmodeled behavior in subsystems.

  16. An Alternative Method Of Specifying Shock Test Criteria

    NASA Technical Reports Server (NTRS)

    Ferebee, R. C.; Clayton, J.; Alldredge, D.; Irvine, T.

    2008-01-01

    Shock testing of aerospace vehicle hardware has presented many challenges over the years due to the high magnitude and short duration of the specifications. Recently, component structural failures have occurred during testing that have not manifested themselves on over 200 Space Shuttle solid rocket booster (SRB) flights (two boosters per flight). It is suspected that the method of specifying shock test criteria may be leaving important information out of the test process. The traditional test criteria specification, the shock response spectrum, can be duplicated by any number of waveforms that may not resemble the actual flight test recorded time history. One method of overcoming this limitation is described herein, which may prove useful for qualifying hardware for the upcoming Constellation Program.
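The shock response spectrum referred to above is the peak response of a bank of single-degree-of-freedom oscillators, each tuned to a different natural frequency, driven by the base acceleration. A compact numerical sketch (a simple symplectic-Euler integrator and an invented half-sine pulse, not a flight time history or the Smallwood algorithm):

```python
import math

def srs(accel, dt, freqs_hz, damping=0.05):
    """Shock response spectrum: peak absolute acceleration of a bank of
    base-driven SDOF oscillators, integrated with small time steps."""
    out = []
    for fn in freqs_hz:
        wn = 2 * math.pi * fn
        z = zd = 0.0
        peak = 0.0
        for a in accel:
            # semi-implicit Euler on z'' + 2*zeta*wn*z' + wn^2*z = -a_base
            zdd = -a - 2 * damping * wn * zd - wn * wn * z
            zd += zdd * dt
            z += zd * dt
            abs_acc = -(2 * damping * wn * zd + wn * wn * z)
            peak = max(peak, abs(abs_acc))
        out.append(peak)
    return out

# 1 ms, 100 g half-sine pulse sampled at 100 kHz, plus residual ringing time
dt = 1e-5
pulse = [100.0 * math.sin(math.pi * i * dt / 1e-3) for i in range(100)]
pulse += [0.0] * 2000
spectrum = srs(pulse, dt, [100.0, 1000.0, 2000.0])
print(all(p > 0 for p in spectrum))  # → True
```

The limitation the abstract notes follows directly from this definition: many different acceleration time histories drive the oscillator bank to the same peak values, so the SRS alone does not pin down the waveform.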

  17. Thermal stress characterization using the electro-mechanical impedance method

    NASA Astrophysics Data System (ADS)

    Zhu, Xuan; Lanza di Scalea, Francesco; Fateh, Mahmood

    2017-04-01

    This study examines the potential of the Electro-Mechanical Impedance (EMI) method to estimate the thermal stress developed in constrained bar-like structures. This non-invasive method is easy to implement and interpret, but is known to be vulnerable to environmental variability. A comprehensive analytical model is proposed to relate the measured electric admittance signatures of the PZT element to the temperature and uniaxial stress applied to the underlying structure. The model results compare favorably with the experimental ones, where the sensitivities of features extracted from the admittance signatures to the varying stress levels and temperatures are determined. Two temperature compensation frameworks are proposed to characterize the thermal stress states: (a) a regression model is established based on temperature-only tests, and the residuals from the thermal stress tests are then used to isolate the stress measurand; (b) the temperature-only tests are decomposed by Principal Component Analysis (PCA) and the feature vectors of the thermal stress tests are reconstructed after removal of the temperature-sensitive components. For both methods, the features were selected based on their performance in Receiver Operating Characteristic (ROC) curves. Experimental results on Continuous Welded Rails (CWR) are shown to demonstrate the effectiveness of these temperature compensation methods.
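Framework (a) above — regress a signature feature on temperature using temperature-only baselines, then treat the residual as the stress indicator — can be sketched with synthetic numbers (the temperatures and feature values are invented for illustration, not CWR data):

```python
import numpy as np

# temperature-only baseline tests: a feature of the admittance signature
# (e.g. a resonance frequency) drifts linearly with temperature
temps_baseline = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
feat_baseline = np.array([100.2, 100.0, 99.8, 99.6, 99.4])

coef = np.polyfit(temps_baseline, feat_baseline, 1)   # regression model

def stress_residual(temp, feature):
    """Residual after removing the temperature trend; in framework (a)
    this residual isolates the stress measurand."""
    return feature - np.polyval(coef, temp)

# a thermal-stress test at 22 C whose feature departs from the baseline trend
print(round(stress_residual(22.0, 99.5), 3))  # → -0.22
```

A residual near zero indicates temperature-only behavior; a systematic offset, as here, is attributed to the developed stress.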

  18. Recognition by Rats of Binary Taste Solutions and Their Components.

    PubMed

    Katagawa, Yoshihisa; Yasuo, Toshiaki; Suwabe, Takeshi; Yamamura, Tomoki; Gen, Keika; Sako, Noritaka

    2016-09-13

    This behavioral study investigated how rats conditioned to binary mixtures of preferred and aversive taste stimuli, respectively, responded to the individual components in a conditioned taste aversion (CTA) paradigm. The preference of stimuli was determined from the initial results of a two-bottle preference test. The preferred stimuli included 5mM sodium saccharin (Sacc), 0.03M NaCl (Na), 0.1M Na, 5mM Sacc + 0.03M Na, and 5mM Sacc + 0.2mM quinine hydrochloride (Q), whereas the aversive stimuli tested were 1.0M Na, 0.2mM Q, 0.3mM Q, 5mM Sacc + 1.0M Na, and 5mM Sacc + 0.3mM Q. In CTA tests where LiCl was the unconditioned stimulus, the number of licks to the preferred binary mixtures and to all tested preferred components was significantly lower than in control rats. No significant difference resulted between the number of licks to the aversive binary mixtures or to any tested aversive components. However, when rats pre-exposed to the aversive components contained in the aversive binary mixtures were conditioned to these mixtures, the number of licks to all the tested stimuli was significantly lower than in controls. Rats conditioned to components of the aversive binary mixtures generalized to the binary mixtures containing those components. These results suggest that rats recognize and remember preferred and aversive taste mixtures as well as the preferred and aversive components of the binary mixtures, and that pre-exposure before CTA is a viable method for studying the recognition of aversive taste stimuli.

  19. Force Reconstruction from Ejection Tests of Stores from Aircraft Used for Model Predictions and Missing/Bad Gages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Michael; Cap, Jerome S.; Starr, Michael J.

    One of the more severe environments for a store on an aircraft is the ejection of the store. During this environment it is not possible to instrument all component responses, and it is also likely that some instruments may fail during environment testing. This work provides a method for developing these responses for failed gages and uninstrumented locations. First, the forces observed by the store during the environment are reconstructed; a simple sampling method is used to reconstruct these forces given various parameters. Then, these forces are applied to a model to generate the component responses. Validation is performed on this methodology.

  20. Preparation of a Frozen Regolith Simulant Bed for ISRU Component Testing in a Vacuum Chamber

    NASA Technical Reports Server (NTRS)

    Kleinhenz, Julie; Linne, Diane

    2013-01-01

    In-Situ Resource Utilization (ISRU) systems and components have undergone extensive laboratory and field tests to expose hardware to relevant soil environments. The next step is to combine these soil environments with relevant pressure and temperature conditions. Previous testing has demonstrated how to incorporate large bins of unconsolidated lunar regolith into sufficiently sized vacuum chambers. In order to create the appropriate depth-dependent soil characteristics needed to test drilling operations for the lunar surface, the regolith simulant bed must be properly compacted and frozen. While small cryogenic simulant beds have been created for laboratory tests, this larger-scale effort will allow testing of a full 1 m drill developed for a potential lunar prospector mission. Compacted bulk densities were measured at various moisture contents for GRC-3 and Chenobi regolith simulants. Vibrational compaction methods were compared with the previously used hammer compaction, or "Proctor", method. All testing was done per ASTM standard methods. A full 6.13 m3 simulant bed with 6 percent moisture by weight was prepared, compacted in layers, and frozen in a commercial freezer. Temperature and desiccation data were collected to determine logistics for preparation and transport of the simulant bed for thermal vacuum testing. Once in the vacuum facility, the simulant bed will be cryogenically frozen with liquid nitrogen. These cryogenic vacuum tests are underway, but results will not be included in this manuscript.

  1. An Analysis of Turnover Intentions: A Reexamination of Air Force Civil Engineering Company Grade Officers

    DTIC Science & Technology

    2012-03-01

    Appendix C Factor Analysis of Measurement Items Interrole conflict Factor Analysis (FA): Table: KMO and Bartlett’s Test Kaiser-Meyer...Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. POS FA: Table: KMO and Bartlett’s...Tempo FA: Table: KMO and Bartlett’s Test Kaiser-Meyer-Olkin Measure of Sampling Adequacy. .733 Bartlett’s Test of Sphericity Approx. Chi-Square

  2. Guidelines for Design and Analysis of Large, Brittle Spacecraft Components

    NASA Technical Reports Server (NTRS)

    Robinson, E. Y.

    1993-01-01

    There were two related parts to this work. The first, conducted at The Aerospace Corporation was to develop and define methods for integrating the statistical theory of brittle strength with conventional finite element stress analysis, and to carry out a limited laboratory test program to illustrate the methods. The second part, separately funded at Aerojet Electronic Systems Division, was to create the finite element postprocessing program for integrating the statistical strength analysis with the structural analysis. The second part was monitored by Capt. Jeff McCann of USAF/SMC, as Special Study No.11, which authorized Aerojet to support Aerospace on this work requested by NASA. This second part is documented in Appendix A. The activity at Aerojet was guided by the Aerospace methods developed in the first part of this work. This joint work of Aerospace and Aerojet stemmed from prior related work for the Defense Support Program (DSP) Program Office, to qualify the DSP sensor main mirror and corrector lens for flight as part of a shuttle payload. These large brittle components of the DSP sensor are provided by Aerojet. This document defines rational methods for addressing the structural integrity and safety of large, brittle, payload components, which have low and variable tensile strength and can suddenly break or shatter. The methods are applicable to the evaluation and validation of such components, which, because of size and configuration restrictions, cannot be validated by direct proof test.
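    The statistical theory of brittle strength invoked here is conventionally the two-parameter Weibull weakest-link model. A minimal sketch of how element-level stresses from a finite element analysis might be integrated into a component failure probability is shown below; the model form is standard, but the parameters and element data are purely illustrative and not from this work.

```python
import math

def weibull_failure_probability(element_stresses, element_volumes,
                                m, sigma0, v0=1.0):
    """Two-parameter Weibull weakest-link failure probability for a
    component discretized into finite elements:
    P_f = 1 - exp(-sum((sigma_i/sigma0)^m * V_i/V0)).

    element_stresses: max principal tensile stress per element (MPa)
    element_volumes:  element volume (same units as v0)
    m, sigma0:        Weibull modulus and characteristic strength
    """
    risk = sum((max(s, 0.0) / sigma0) ** m * v / v0
               for s, v in zip(element_stresses, element_volumes))
    return 1.0 - math.exp(-risk)

# toy model: three elements; compressive stress contributes no risk
p = weibull_failure_probability([120.0, 80.0, -40.0], [1.0, 2.0, 1.0],
                                m=10, sigma0=200.0)
print(f"P_f = {p:.4f}")
```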

  3. Resolving the percentage of component terrains within single resolution elements

    NASA Technical Reports Server (NTRS)

    Marsh, S. E.; Switzer, P.; Kowalik, W. S.; Lyon, R. J. P.

    1980-01-01

    An approximate maximum likelihood technique employing a widely available discriminant analysis program is discussed that has been developed for resolving the percentage of component terrains within single resolution elements. The method uses all four channels of Landsat data simultaneously and does not require prior knowledge of the percentage of components in mixed pixels. It was tested in five cases chosen to represent mixtures of outcrop, soil and vegetation that would typically be encountered in geologic studies with Landsat data. For all five cases, the method proved superior to single-band weighted average and linear regression techniques and permitted an estimate of the total area occupied by component terrains to within plus or minus 6% of the true area covered. Its major drawback is a consistent overestimation of the pixel component percentage of the darker materials (vegetation) and an underestimation of the pixel component percentage of the brighter materials (sand).
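    As a point of reference, the linear mixing model underlying the comparison techniques can be sketched as follows: a mixed pixel is a fraction-weighted sum of component signatures, and the fractions can be recovered by least squares with a sum-to-one constraint. The endmember values below are invented for illustration, not actual Landsat band signatures.

```python
import numpy as np

# endmember signatures: 4 Landsat bands x 3 terrain components
# (columns: outcrop, soil, vegetation) -- illustrative values only
E = np.array([[0.45, 0.30, 0.05],
              [0.50, 0.35, 0.08],
              [0.55, 0.40, 0.06],
              [0.60, 0.45, 0.40]])

true_f = np.array([0.2, 0.5, 0.3])      # ground-truth fractions
pixel = E @ true_f                       # simulated mixed pixel

# enforce sum-to-one by appending it as a heavily weighted extra row
A = np.vstack([E, 1e3 * np.ones(3)])
b = np.append(pixel, 1e3)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f.round(3))                        # recovers [0.2 0.5 0.3]
```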

  4. Environmental Exposure and Accelerated Testing of Rubber-to-Metal Vulcanized Bonded Assemblies

    DTIC Science & Technology

    1974-11-01

    by weapon components in the field and to determine the effect of this exposure on the vulcanized bond The purpose is also to duplicate these long term...storage and environmental exposure, and to develop accelerated methods for use in predicting this resistance. BACKGROUND: The most effective method of... the rubber coatings on the M60 machine gun components, the shock isolator and recoil adapter on the CAU 28/A Minigun, rubber pads for all tracked

  5. Rapid test for the detection of hazardous microbiological material

    NASA Astrophysics Data System (ADS)

    Mordmueller, Mario; Bohling, Christian; John, Andreas; Schade, Wolfgang

    2009-09-01

    Since anthrax attacks were committed around the world beginning in 2001, the fast detection and identification of biological samples has attracted interest. A very promising method for a rapid test is Laser Induced Breakdown Spectroscopy (LIBS). LIBS is an optical method which uses time-resolved or time-integrated spectral analysis of optical plasma emission after pulsed laser excitation. Even though LIBS is well established for the determination of metals and other inorganic materials, the analysis of microbiological organisms is difficult due to their very similar stoichiometric composition. To distinguish similar LIBS spectra, computer-assisted chemometrics is a very useful approach. In this paper we report first results of developing a compact and fully automated rapid test for the detection of hazardous microbiological material. Experiments have been carried out with two setups: a bulky one composed of standard laboratory components and a compact one consisting of miniaturized industrial components. Both setups work at an excitation wavelength of λ = 1064 nm (Nd:YAG). Data analysis is done by Principal Component Analysis (PCA) with an adjacent neural network for fully automated sample identification.

  6. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, good SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  7. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, good SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
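    The core idea of the ASTF, removing frequency components that look like a noise reference, can be sketched roughly as follows. This is a simplified stand-in, not the paper's algorithm: a fixed z-score threshold on bin amplitudes replaces the PSO-tuned significance level α, and the data are synthetic.

```python
import numpy as np

def statistic_test_filter(signal, noise_ref, alpha=3.0):
    """ASTF-like sketch: suppress frequency bins whose amplitude is
    statistically indistinguishable from a noise reference spectrum.

    alpha is a simple z-score threshold standing in for the
    significance level the paper tunes with PSO.
    """
    S = np.fft.rfft(signal)
    N = np.abs(np.fft.rfft(noise_ref))
    thresh = N.mean() + alpha * N.std()     # bins below this look like noise
    S[np.abs(S) < thresh] = 0.0             # remove high-similarity bins
    return np.fft.irfft(S, n=len(signal))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024, endpoint=False)
fault = np.sin(2 * np.pi * 120 * t)          # weak periodic fault feature
noisy = fault + 2.0 * rng.standard_normal(1024)
clean = statistic_test_filter(noisy, 2.0 * rng.standard_normal(1024))
# the 120 Hz fault line survives while broadband noise is suppressed
spectrum = np.abs(np.fft.rfft(clean))
print(spectrum.argmax())
```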

  8. Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors* is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is the comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

  9. Training Manuals and Technical Publications.

    ERIC Educational Resources Information Center

    Army Test and Evaluation Command, Aberdeen Proving Ground, MD.

    The objective of the Materiel Test Procedure is to describe methods for determining the adequacy, comprehensiveness, and clarity of training manuals and technical publications (or other pertinent types of literature) relating to the ammunition or ammunition components being tested. (Author)

  10. The Effectiveness of the Component Impact Test Method for the Side Impact Injury Assessment of the Door Trim

    NASA Astrophysics Data System (ADS)

    Youn, Younghan; Koo, Jeong-Seo

    A complete evaluation of the side vehicle structure and occupant protection is only possible by means of a full-scale side impact crash test. However, auto part manufacturers such as door trim makers cannot conduct that test, especially while the vehicle is still under development. The main objective of this study is to obtain design guidelines from a simple component-level impact test. The relationship between the target absorption energy and impactor speed was examined using the energy absorbed by the door trim, since each vehicle type requires a different energy level of the door trim. A simple impact test method was developed to estimate abdominal injury by measuring the reaction force of the impactor; the reaction force is converted to an energy level by the proposed formula. The target absorption energy for the door trim alone and the impact speed of the simple impactor are derived theoretically from conservation of energy. With the calculated speed of the dummy and the effective mass of the abdomen, the energy allocated to the abdomen area of the door trim was calculated, and the impactor speed was derived from the equivalent energy absorbed by the door trim during the full crash test. The proposed design procedure using the simple impact test method was demonstrated by evaluating abdominal injury. This paper also describes a study conducted to determine the sensitivity of several design factors for reducing abdominal injury values using an orthogonal array matrix. In conclusion, based on theoretical considerations and empirical test data, the main objective, standardization of door trim design using the simple impact test method, was achieved.
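    The theoretical step from equivalent energy to impactor speed rests on conservation of kinetic energy, E = ½mv², so the component-test impactor must carry the energy the trim absorbed in the full-scale test. A minimal sketch with invented numbers (not values from the paper):

```python
import math

def impactor_speed(absorbed_energy_j, impactor_mass_kg):
    """Impactor speed giving the same kinetic energy the door trim
    absorbed in the full-scale test: E = 1/2 * m * v^2."""
    return math.sqrt(2.0 * absorbed_energy_j / impactor_mass_kg)

# illustrative numbers: the trim absorbed 180 J in the full crash test,
# and the component-test impactor has an effective mass of 17.5 kg
v = impactor_speed(180.0, 17.5)
print(f"required impactor speed: {v:.2f} m/s")
```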

  11. Conformational states and folding pathways of peptides revealed by principal-independent component analyses.

    PubMed

    Nguyen, Phuong H

    2007-05-15

    Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from e.g. molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly due to the fact that the principal components are not independent of each other, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. 2007 Wiley-Liss, Inc.

  12. Nondestructive Testing Residual Stress Using Ultrasonic Critical Refracted Longitudinal Wave

    NASA Astrophysics Data System (ADS)

    Xu, Chunguang; Song, Wentao; Pan, Qinxue; Li, Huanxin; Liu, Shuai

    Residual stress has significant impacts on the performance of mechanical components, especially on their strength, fatigue life, corrosion resistance and dimensional stability. Based on the theory of acoustoelasticity, the testing principle of the ultrasonic LCR wave method is analyzed and a residual stress testing system is built. A method for calibrating the stress coefficient is proposed in order to improve the detection precision. Finally, through experiments and applications on residual stress testing of oil pipeline weld joints, a vehicle's torsion shaft, glass and ceramics, gear tooth roots, and so on, the results show that the application and popularization of the ultrasonic LCR wave method deserve further study.
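    The acoustoelastic principle behind the LCR method is that uniaxial stress shifts the travel time of the critically refracted longitudinal wave approximately linearly, so with a calibrated stress coefficient the stress follows directly from the time shift. The calibration values below are hypothetical, for illustration only:

```python
def lcr_stress(travel_time_ns, t0_ns, k_ns_per_mpa):
    """Residual stress from the LCR travel-time shift (acoustoelastic
    relation): sigma = (t - t0) / K, with K the calibrated stress
    coefficient obtained from a loading test on a reference specimen."""
    return (travel_time_ns - t0_ns) / k_ns_per_mpa

# hypothetical calibration: K = 0.08 ns/MPa, stress-free time t0 = 29500 ns
sigma = lcr_stress(29512.0, 29500.0, 0.08)
print(f"estimated residual stress: {sigma:.0f} MPa")
```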

  13. Analysis of mathematical literacy ability based on goal orientation in model eliciting activities learning with murder strategy

    NASA Astrophysics Data System (ADS)

    Wijayanti, R.; Waluya, S. B.; Masrukan

    2018-03-01

    The purposes of this research are (1) to analyze the learning quality of MEAs with the MURDER strategy, and (2) to analyze students' mathematical literacy ability based on goal orientation in MEAs learning with the MURDER strategy. This research is a mixed-methods study of the concurrent embedded type, with the qualitative method as the primary method. The data were obtained using scales, observation, tests and interviews. The results showed that (1) MEAs learning with the MURDER strategy is of good quality with respect to students' mathematical literacy ability, and (2) students with mastery goal characteristics are able to master the seven components of the mathematical literacy process, although for two components their solutions are less than maximal. Students with performance goal characteristics have not fully mastered the components of the mathematical literacy process; they are only able to master the ability of using mathematical tools, and their command of the other components of the mathematical literacy process is merely adequate.

  14. A new method for aerodynamic test of high altitude propellers

    NASA Astrophysics Data System (ADS)

    Gong, Xiying; Zhang, Lin

    A ground test system is designed for aerodynamic performance tests of high altitude propellers. The system consists of a stable power supply, servo motors, a two-component balance constructed from tension-compression sensors, an ultrasonic anemometer, and a data acquisition module. It is loaded on a truck to simulate a propeller wind-tunnel test at different wind velocities in a low-density environment. The graphical programming language LABVIEW for developing virtual instruments is used to realize the test system control and data acquisition. An aerodynamic performance test of a propeller with a 6.8 m diameter was completed using this system. The results verify the feasibility of the ground test method.

  15. Wind Assessment for Aerial Payload Delivery Systems Using GPS and IMU Sensors

    DTIC Science & Technology

    2016-09-01

    post-processing of the resultant test data were the research methods used in development of this thesis. Ultimately, this thesis presents two models for winds...THESIS OBJECTIVE AND ORGANIZATION...II. BLIZZARD SYSTEM COMPONENTS

  16. Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis

    NASA Technical Reports Server (NTRS)

    Mcanelly, W. B.; Young, C. T. K.

    1973-01-01

    Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data is applied to the design of adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.

  17. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

    A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults, but not both, this method considers sensor, actuator, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.

  18. Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.

    PubMed

    Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A

    2016-03-01

    Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.
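    The two-way random intraclass correlation reported here (Shrout and Fleiss ICC(2,1) single-measure and ICC(2,k) average-measure, absolute agreement) can be computed from an ANOVA decomposition of the subject-by-rater score matrix. A sketch with invented TBS-like scores, not the study's data:

```python
def icc_2way_random(data):
    """Two-way random, absolute-agreement intraclass correlations:
    Shrout & Fleiss ICC(2,1) single measure and ICC(2,k) average measure.

    data: list of rows, one per subject; one column per rater.
    """
    n, k = len(data), len(data[0])
    grand = sum(map(sum, data)) / (n * k)
    row_means = [sum(r) / k for r in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in data for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    average = (msr - mse) / (msr + (msc - mse) / n)
    return single, average

# four cadavers scored by three observers (illustrative TBS-like scores)
scores = [[9, 10, 9], [17, 18, 18], [25, 24, 26], [31, 32, 31]]
single, average = icc_2way_random(scores)
print(f"ICC(2,1) = {single:.3f}, ICC(2,k) = {average:.3f}")
```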

  19. 40 CFR Appendix A to Part 63 - Test Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... components by a different analyst). 3.3Surrogate Reference Materials. The analyst may use surrogate compounds... the variance of the proposed method is significantly different from that of the validated method by... variables can be determined in eight experiments rather than 128 (W.J. Youden, Statistical Manual of the...

  20. Interpersonal Complexity: A Cognitive Component of Person-Centered Care

    ERIC Educational Resources Information Center

    Medvene, Louis; Grosch, Kerry; Swink, Nathan

    2006-01-01

    Purpose: This study concerns one component of the ability to provide person-centered care: the cognitive skill of perceiving others in relatively complex terms. This study tested the effectiveness of a social motivation for increasing the number of psychological constructs used to describe an unfamiliar senior citizen. Design and Methods:…

  1. How Magnetic Disturbance Influences the Attitude and Heading in Magnetic and Inertial Sensor-Based Orientation Estimation.

    PubMed

    Fan, Bingfei; Li, Qingguo; Liu, Tao

    2017-12-28

    With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size and lower in cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affect the attitude and heading estimation for a magnetic and inertial sensor. First, we reviewed four major components dealing with magnetic disturbance, namely decoupling attitude estimation from magnetic readings, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and analyzed the features of existing methods for each component. Second, to understand each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented, including a gradient descent algorithm, an improved explicit complementary filter, a dual-linear Kalman filter and an extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods were easily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing new sensor fusion methods.
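    The fusion idea common to the implemented algorithms, blending a drift-free but noisy attitude reference (accelerometer/magnetometer) with drifting gyro integration, can be illustrated by a minimal one-axis complementary filter. This is not one of the four reviewed methods, just the underlying principle, and the data are synthetic:

```python
def complementary_filter(gyro_rate, accel_angle, dt=0.01, k=0.98):
    """Minimal 1-D attitude filter: high-pass the integrated gyro rate,
    low-pass the accelerometer tilt angle.

    gyro_rate:   angular rate samples (rad/s), subject to bias drift
    accel_angle: tilt angle from the accelerometer (rad), noisy but
                 drift-free
    """
    angle = accel_angle[0]
    estimates = []
    for w, a in zip(gyro_rate, accel_angle):
        angle = k * (angle + w * dt) + (1.0 - k) * a
        estimates.append(angle)
    return estimates

# constant true tilt of 0.5 rad; the gyro has a 0.05 rad/s bias
n = 2000
gyro = [0.05] * n                # pure bias, no real rotation
accel = [0.5] * n                # accelerometer sees the true tilt
est = complementary_filter(gyro, accel)
# gyro integration alone would diverge; the filter settles near 0.5 rad,
# offset only by the small steady-state term k*b*dt/(1-k)
print(f"final estimate: {est[-1]:.4f} rad")
```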

  2. Testing large aspheric surfaces with complementary annular subaperture interferometric method

    NASA Astrophysics Data System (ADS)

    Hou, Xi; Wu, Fan; Lei, Baiping; Fan, Bin; Chen, Qiang

    2008-07-01

    Annular subaperture interferometric method has provided an alternative solution to testing rotationally symmetric aspheric surfaces with low cost and flexibility. However, some new challenges, particularly in the motion and algorithm components, appear when applied to large aspheric surfaces with large departure in the practical engineering. Based on our previously reported annular subaperture reconstruction algorithm with Zernike annular polynomials and matrix method, and the experimental results for an approximate 130-mm diameter and f/2 parabolic mirror, an experimental investigation by testing an approximate 302-mm diameter and f/1.7 parabolic mirror with the complementary annular subaperture interferometric method is presented. We have focused on full-aperture reconstruction accuracy, and discuss some error effects and limitations of testing larger aspheric surfaces with the annular subaperture method. Some considerations about testing sector segment with complementary sector subapertures are provided.

  3. Spectral gene set enrichment (SGSE).

    PubMed

    Frost, H Robert; Li, Zhigang; Moore, Jason H

    2015-03-03

    Gene set testing is typically performed in a supervised context to quantify the association between groups of genes and a clinical phenotype. In many cases, however, a gene set-based interpretation of genomic data is desired in the absence of a phenotype variable. Although methods exist for unsupervised gene set testing, they predominantly compute enrichment relative to clusters of the genomic variables with performance strongly dependent on the clustering algorithm and number of clusters. We propose a novel method, spectral gene set enrichment (SGSE), for unsupervised competitive testing of the association between gene sets and empirical data sources. SGSE first computes the statistical association between gene sets and principal components (PCs) using our principal component gene set enrichment (PCGSE) method. The overall statistical association between each gene set and the spectral structure of the data is then computed by combining the PC-level p-values using the weighted Z-method with weights set to the PC variance scaled by Tracy-Widom test p-values. Using simulated data, we show that the SGSE algorithm can accurately recover spectral features from noisy data. To illustrate the utility of our method on real data, we demonstrate the superior performance of the SGSE method relative to standard cluster-based techniques for testing the association between MSigDB gene sets and the variance structure of microarray gene expression data. Unsupervised gene set testing can provide important information about the biological signal held in high-dimensional genomic data sets. Because it uses the association between gene sets and samples PCs to generate a measure of unsupervised enrichment, the SGSE method is independent of cluster or network creation algorithms and, most importantly, is able to utilize the statistical significance of PC eigenvalues to ignore elements of the data most likely to represent noise.
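    The weighted Z-method step of SGSE can be sketched directly: the PC-level one-sided p-values are converted to z-scores, combined with weights, and mapped back to a single p-value. The weights below stand in for PC variances and are illustrative only; SGSE additionally scales them by Tracy-Widom test p-values.

```python
import math
from statistics import NormalDist

def weighted_z(p_values, weights):
    """Weighted Z-method (Stouffer): combine one-sided p-values with
    Z = sum(w_i * z_i) / sqrt(sum(w_i^2)),  z_i = Phi^{-1}(1 - p_i)."""
    nd = NormalDist()
    z = sum(w * nd.inv_cdf(1.0 - p) for p, w in zip(p_values, weights))
    z /= math.sqrt(sum(w * w for w in weights))
    return 1.0 - nd.cdf(z)

# PC-level p-values weighted by PC variance (illustrative numbers)
p_combined = weighted_z([0.01, 0.20, 0.60], [5.0, 2.0, 1.0])
print(f"combined p = {p_combined:.4f}")
```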

  4. Experience with helium leak and thermal shocks test of SST-1 cryo components

    NASA Astrophysics Data System (ADS)

    Sharma, Rajiv; Nimavat, Hiren; Srikanth, G. L. N.; Bairagi, Nitin; Shah, Pankil; Tanna, V. L.; Pradhan, S.

    2012-11-01

    The steady state superconducting tokamak SST-1 is presently in its assembly stage at the Institute for Plasma Research. The SST-1 machine has a family of superconducting (SC) coils for both the toroidal field (TF) and the poloidal field. An ultra-high-vacuum-compatible vacuum vessel, placed in the bore of the TF coils, houses the plasma facing components. A high vacuum cryostat encloses all the SC coils and the vacuum vessel, with liquid nitrogen (LN2) cooled thermal shields between the vacuum vessel and the SC coils as well as between the cryostat and the SC coils. There are a number of crucial cryogenic components, such as electrical isolators, the 80 K thermal shield and cryogenic flexible hoses, which have to pass performance validation tests as part of the stringent QA/QC requirements before being incorporated in the main assembly. Individual leak tests of the components at room temperature, as well as after a thermal cycle from 300 K to 77 K, ensure a leak-tight final system. These components include large numbers of electrical isolators for helium as well as LN2 services, flexible bellows and hoses for helium as well as LN2 services, and large numbers of 80 K bubble shields subjected to thermal shock tests. In order to validate the helium leak tightness of these components, we have used a calibrated mass spectrometer leak detector (MSLD) at 300 K, 77 K and 4.2 K. Since it is very difficult to locate leaks that appear only at lower temperatures, e.g. less than 20 K, we have developed different approaches to resolve the issue of such leaks. This paper describes the design of the cryogenic flexible hoses, the assembly and couplings for leak testing, the test methods and techniques of thermal cycle tests at 77 K in flow conditions, and leak testing aspects of different cryogenic components. The test results, the problems encountered and the solution techniques are discussed.

  5. Development of laboratory test methods to replace the simulated high-temperature grout fluidity test : [summary].

    DOT National Transportation Integrated Search

    2014-06-01

    Concrete's remarkable role in construction depends on its marriage with reinforcing steel. Concrete is very strong in compression but weak in tension, so reinforcing steel is added to increase tensile strength, yielding structural components capab...

  6. Ultrasonic detection technology based on joint robot on composite component with complex surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Juan; Xu, Chunguang; Zhang, Lan

    Some components, such as airplane wings and the shells of pressure vessels, have complex surfaces. The quality of these components determines the reliability and safety of the related equipment. Ultrasonic nondestructive detection is one of the main methods currently used for testing material defects. In order to improve the testing precision, the acoustic axis of the ultrasonic transducer should be consistent with the normal direction of the measured points. When a joint robot is used, automatic ultrasonic scanning along the component surface normal direction can be realized through motion trajectory planning and coordinate transformation. In order to express the defects accurately and truly, the robot position and the signal of the ultrasonic transducer should be synchronized.
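Aligning the transducer's acoustic axis with the surface normal presupposes estimating that normal from measured surface points; a minimal sketch (not the authors' trajectory-planning code) using the cross product of two edge vectors:

```python
def surface_normal(p0, p1, p2):
    """Unit normal of the plane through three neighbouring surface points,
    computed from the cross product of two edge vectors."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = sum(c * c for c in n) ** 0.5
    return [c / mag for c in n]

# A flat patch in the xy-plane has normal +z
print(surface_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0.0, 0.0, 1.0]
```

In practice the three points would come from a prior scan or a CAD model of the component, and the robot's tool frame would be rotated so its z-axis matches the returned vector.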

  7. Test Standard Developed for Determining the Slow Crack Growth of Advanced Ceramics at Ambient Temperature

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Salem, Jonathan A.

    1998-01-01

    The service life of structural ceramic components is often limited by the process of slow crack growth. Therefore, it is important to develop an appropriate testing methodology for accurately determining the slow crack growth design parameters necessary for component life prediction. In addition, an appropriate test methodology can be used to determine the influences of component processing variables and composition on the slow crack growth and strength behavior of newly developed materials, thus allowing the component process to be tailored and optimized to specific needs. At the NASA Lewis Research Center, work to develop a standard test method to determine the slow crack growth parameters of advanced ceramics was initiated by the authors in early 1994 in the C 28 (Advanced Ceramics) committee of the American Society for Testing and Materials (ASTM). After about 2 years of required balloting, the draft written by the authors was approved and established as a new ASTM test standard: ASTM C 1368-97, Standard Test Method for Determination of Slow Crack Growth Parameters of Advanced Ceramics by Constant Stress-Rate Flexural Testing at Ambient Temperature. Briefly, the test method uses constant stress-rate testing to determine strengths as a function of stress rate at ambient temperature. Strengths are measured in a routine manner at four or more stress rates by applying constant displacement or loading rates. The slow crack growth parameters required for design are then estimated from a relationship between strength and stress rate. This new standard will be published in the Annual Book of ASTM Standards, Vol. 15.01, in 1998. Currently, a companion draft ASTM standard for determination of the slow crack growth parameters of advanced ceramics at elevated temperatures is being prepared by the authors and will be presented to the committee by the middle of 1998. 
Consequently, Lewis will maintain an active leadership role in advanced ceramics standardization within ASTM. In addition, the authors have been and are involved with several international standardization organizations including the Versailles Project on Advanced Materials and Standards (VAMAS), the International Energy Agency (IEA), and the International Organization for Standardization (ISO). The associated standardization activities involve fracture toughness, strength, elastic modulus, and the machining of advanced ceramics.
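The standard's estimation step is a log-linear fit of strength against stress rate, with slope 1/(n+1); a minimal pure-Python sketch under that relationship (the data and parameter values below are synthetic, not taken from the standard):

```python
import math

def estimate_scg_parameters(stress_rates, strengths):
    """Least-squares fit of log10(strength) vs log10(stress rate); the
    slope equals 1/(n+1), giving the slow crack growth exponent n and
    the intercept parameter D."""
    xs = [math.log10(r) for r in stress_rates]
    ys = [math.log10(s) for s in strengths]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return 1.0 / slope - 1.0, 10.0 ** intercept   # n, D

# Synthetic noise-free strengths generated with n = 19, D = 300 MPa
rates = [0.1, 1.0, 10.0, 100.0]                      # four stress rates, MPa/s
sigma_f = [300.0 * r ** (1.0 / 20.0) for r in rates]
n_hat, D_hat = estimate_scg_parameters(rates, sigma_f)
print(round(n_hat, 1), round(D_hat, 1))              # 19.0 300.0
```

Real data would scatter about the fitted line, and the standard prescribes replicate specimens at each of the four or more rates before the regression is performed.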

  8. Engineered Polymer Composites Through Electrospun Nanofiber Coating of Fiber Tows

    NASA Technical Reports Server (NTRS)

    Kohlman, Lee W.; Bakis, Charles; Williams, Tiffany S.; Johnston, James C.; Kuczmarski, Maria A.; Roberts, Gary D.

    2014-01-01

    Composite materials offer significant weight savings in many aerospace applications. The toughness of the interface of fibers crossing at different angles often determines failure of composite components. A method for toughening the interface in fabric and filament wound components using directly electrospun thermoplastic nanofiber on carbon fiber tow is presented. The method was first demonstrated with limited trials, and then was scaled up to a continuous lab scale process. Filament wound tubes were fabricated and tested using unmodified baseline towpreg material and nanofiber coated towpreg.

  9. Application of advanced coating techniques to rocket engine components

    NASA Technical Reports Server (NTRS)

    Verma, S. K.

    1988-01-01

    The materials problem in the space shuttle main engine (SSME) is reviewed. Potential coatings and the method of their application for improved life of SSME components are discussed. A number of advanced coatings for turbine blade components and disks are being developed and tested in a multispecimen thermal fatigue fluidized bed facility at IIT Research Institute. This facility is capable of producing severe strains of the degree present in blades and disk components of the SSME. The potential coating systems and current efforts at IITRI being taken for life extension of the SSME components are summarized.

  10. Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

    PubMed

    Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika

    2017-01-01

    Computer work is associated with musculoskeletal disorders (MSDs), and several methods have been developed to assess computer work risk factors related to MSDs. This review aims to give an overview of the techniques currently available for pen-and-paper-based observational methods in assessing ergonomic risk factors of computer work. We searched an electronic database for materials from 1992 until 2015. The selected methods were focused on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders. This review was developed to assess the risk factors, reliability and validity of pen-and-paper observational methods associated with computer work. Two evaluators independently carried out this review. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors covered by current pen-and-paper-based observational tools were postures, office components, force and repetition. Of the seven methods, only five had been tested for reliability; these proved reliable and were rated as moderate to good. For validity, only four of the seven methods had been tested, and the results were moderate. Many observational tools already exist, but no single tool appears to cover all of the risk factors, including working posture, office components, force, repetition and office environment, at office workstations and computer work. Although proper validation of exposure assessment techniques is the most important factor in developing a tool, existing observational methods have often not been tested for reliability and validity. Furthermore, this review could provide researchers with ways to improve pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

  11. Pressure activated interconnection of micro transfer printed components

    NASA Astrophysics Data System (ADS)

    Prevatte, Carl; Guven, Ibrahim; Ghosal, Kanchan; Gomez, David; Moore, Tanya; Bonafede, Salvatore; Raymond, Brook; Trindade, António Jose; Fecioru, Alin; Kneeburg, David; Meitl, Matthew A.; Bower, Christopher A.

    2016-05-01

    Micro transfer printing and other forms of micro assembly deterministically produce heterogeneously integrated systems of miniaturized components on non-native substrates. Most micro assembled systems include electrical interconnections to the miniaturized components, typically accomplished by metal wires formed on the non-native substrate after the assembly operation. An alternative scheme establishing interconnections during the assembly operation is a cost-effective manufacturing method for producing heterogeneous microsystems, and facilitates the repair of integrated microsystems, such as displays, by ex post facto addition of components to correct defects after system-level tests. This letter describes pressure-concentrating conductor structures formed on silicon (1 0 0) wafers to establish connections to preexisting conductive traces on glass and plastic substrates during micro transfer printing with an elastomer stamp. The pressure concentrators penetrate a polymer layer to form the connection, and reflow of the polymer layer bonds the components securely to the target substrate. The experimental yield of series-connected test systems with >1000 electrical connections demonstrates the suitability of the process for manufacturing, and robustness of the test systems against exposure to thermal shock, damp heat, and mechanical flexure shows reliability of the resulting bonds.

  12. Methods to Measure, Predict and Relate Friction, Wear and Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gravante, Steve; Fenske, George; Demas, Nicholas

    High-fidelity measurements of the coefficient of friction and the parasitic friction power of the power cylinder components have been made for the Isuzu 5.2L 4H on-highway engine. In particular, measurements of the asperity friction coefficient were made with test coupons using Argonne National Laboratory's (ANL) reciprocating test rig for the ring-on-liner and skirt-on-liner component pairs. These measurements correlated well with independent measurements made by Electro-Mechanical Associates (EMA). In addition, surface roughness measurements of the Isuzu components were made using a white light interferometer (WLI). The asperity friction and surface characterization are key inputs to advanced CAE simulation tools such as RINGPAK and PISDYN, which are used to predict the friction power and wear rates of power cylinder components. Finally, motored friction tests were successfully performed to quantify the friction mean effective pressure (FMEP) of the power cylinder components for various oils (high-viscosity 15W40, low-viscosity 5W20 with friction modifier (FM), and a specially blended oil consisting of PAO/ZDDP/MoDTC) at 25, 50, and 110°C.

  13. Methodological reporting in qualitative, quantitative, and mixed methods health services research articles.

    PubMed

    Wisdom, Jennifer P; Cavaleri, Mary A; Onwuegbuzie, Anthony J; Green, Carla A

    2012-04-01

    Methodologically sound mixed methods research can improve our understanding of health services by providing a more comprehensive picture of health services than either method can alone. This study describes the frequency of mixed methods in published health services research and compares the presence of methodological components indicative of rigorous approaches across mixed methods, qualitative, and quantitative articles. Data sources comprised all empirical articles (n = 1,651) published between 2003 and 2007 in four top-ranked health services journals. All mixed methods articles (n = 47) and random samples of qualitative and quantitative articles were evaluated to identify reporting of key components indicating rigor for each method, based on accepted standards for evaluating the quality of research reports (e.g., use of p-values in quantitative reports, description of context in qualitative reports, and integration in mixed method reports). We used chi-square tests to evaluate differences between article types for each component. Mixed methods articles comprised 2.85 percent (n = 47) of empirical articles, quantitative articles 90.98 percent (n = 1,502), and qualitative articles 6.18 percent (n = 102). There was a statistically significant difference (χ2(1) = 12.20, p = .0005, Cramer's V = 0.09, odds ratio = 1.49 [95% confidence interval = 1.27, 1.74]) in the proportion of quantitative methodological components present in mixed methods compared to quantitative papers (21.94 versus 47.07 percent, respectively) but no statistically significant difference (χ2(1) = 0.02, p = .89, Cramer's V = 0.01) in the proportion of qualitative methodological components in mixed methods compared to qualitative papers (21.34 versus 25.47 percent, respectively). Few published health services research articles use mixed methods. The frequency of key methodological components is variable. 
Suggestions are provided to increase the transparency of mixed methods studies and the presence of key methodological components in published reports. © Health Research and Educational Trust.
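    The group comparisons above are chi-square tests on proportions; a minimal sketch of the df = 1 statistic for a 2x2 table (the counts below are illustrative only, not the study's raw data):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 table [[a, b], [c, d]] comparing two proportions."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: a methodological component present/absent in
# mixed methods articles vs quantitative articles.
stat = chi2_2x2(21, 79, 47, 53)
print(round(stat, 2), stat > 3.841)   # 3.841 = critical value at p = .05, df = 1
```

Comparing the statistic against the df = 1 critical value reproduces the significant/non-significant decision pattern reported in the abstract.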

  14. Methodological Reporting in Qualitative, Quantitative, and Mixed Methods Health Services Research Articles

    PubMed Central

    Wisdom, Jennifer P; Cavaleri, Mary A; Onwuegbuzie, Anthony J; Green, Carla A

    2012-01-01

    Objectives Methodologically sound mixed methods research can improve our understanding of health services by providing a more comprehensive picture of health services than either method can alone. This study describes the frequency of mixed methods in published health services research and compares the presence of methodological components indicative of rigorous approaches across mixed methods, qualitative, and quantitative articles. Data Sources All empirical articles (n = 1,651) published between 2003 and 2007 from four top-ranked health services journals. Study Design All mixed methods articles (n = 47) and random samples of qualitative and quantitative articles were evaluated to identify reporting of key components indicating rigor for each method, based on accepted standards for evaluating the quality of research reports (e.g., use of p-values in quantitative reports, description of context in qualitative reports, and integration in mixed method reports). We used chi-square tests to evaluate differences between article types for each component. Principal Findings Mixed methods articles comprised 2.85 percent (n = 47) of empirical articles, quantitative articles 90.98 percent (n = 1,502), and qualitative articles 6.18 percent (n = 102). There was a statistically significant difference (χ2(1) = 12.20, p = .0005, Cramer's V = 0.09, odds ratio = 1.49 [95% confidence interval = 1.27, 1.74]) in the proportion of quantitative methodological components present in mixed methods compared to quantitative papers (21.94 versus 47.07 percent, respectively) but no statistically significant difference (χ2(1) = 0.02, p = .89, Cramer's V = 0.01) in the proportion of qualitative methodological components in mixed methods compared to qualitative papers (21.34 versus 25.47 percent, respectively). Conclusion Few published health services research articles use mixed methods. The frequency of key methodological components is variable. 
Suggestions are provided to increase the transparency of mixed methods studies and the presence of key methodological components in published reports. PMID:22092040

  15. Regulatory Physiology

    NASA Technical Reports Server (NTRS)

    Lane, Helen W.; Whitson, Peggy A.; Putcha, Lakshmi; Baker, Ellen; Smith, Scott M.; Stewart, Karen; Gretebeck, Randall; Nimmagudda, R. R.; Schoeller, Dale A.; Davis-Street, Janis

    1999-01-01

    As noted elsewhere in this report, a central goal of the Extended Duration Orbiter Medical Project (EDOMP) was to ensure that cardiovascular and muscle function were adequate to perform an emergency egress after 16 days of spaceflight. The goals of the Regulatory Physiology component of the EDOMP were to identify and subsequently ameliorate those biochemical and nutritional factors that deplete physiological reserves or increase risk for disease, and to facilitate the development of effective muscle, exercise, and cardiovascular countermeasures. The component investigations designed to meet these goals focused on biochemical and physiological aspects of nutrition and metabolism, the risk of renal (kidney) stone formation, gastrointestinal function, and sleep in space. Investigations involved both ground-based protocols to validate proposed methods and flight studies to test those methods. Two hardware tests were also completed.

  16. Probabilistic Component Mode Synthesis of Nondeterministic Substructures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1996-01-01

    Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. We present a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.
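    The quantity such methods deliver without sampling, the cumulative distribution function of a natural frequency, can be illustrated by the Monte Carlo baseline they avoid; a one-degree-of-freedom sketch with an assumed random stiffness (all values hypothetical):

```python
import bisect
import math
import random

random.seed(42)

# One-DOF surrogate: natural frequency f = sqrt(k/m)/(2*pi) with a
# normally distributed stiffness k and a fixed mass m (assumed values).
mu_k, sigma_k, m = 1.0e6, 5.0e4, 10.0   # N/m, N/m, kg
freqs = sorted(math.sqrt(random.gauss(mu_k, sigma_k) / m) / (2.0 * math.pi)
               for _ in range(20000))

def freq_cdf(f0):
    """Empirical P(natural frequency <= f0) from the sampled population."""
    return bisect.bisect_right(freqs, f0) / len(freqs)

nominal = math.sqrt(mu_k / m) / (2.0 * math.pi)   # deterministic answer, ~50.3 Hz
print(0.4 < freq_cdf(nominal) < 0.6)              # the nominal value sits mid-CDF
```

The point of the probabilistic synthesis method is to obtain this CDF analytically from measured substructure statistics instead of from the 20,000 simulations above.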

  17. Current Space Station Experiments Investigating Component Level Electronics Repair

    NASA Technical Reports Server (NTRS)

    Easton, John W.; Struk, Peter M.

    2010-01-01

    The Soldering in a Reduced Gravity Experiment (SoRGE) and the Component Repair Experiment (CRE)-1 are tests performed on the International Space Station to determine the techniques, tools, and training necessary to allow future crews to perform manual electronics repairs at the component level. SoRGE provides information on the formation and internal structure of through-hole solder joints, illustrating the challenges and implications of soldering in reduced gravity. SoRGE showed a significant increase in internal void defects for joints formed in low gravity compared to normal gravity. Methods for mitigating these void defects were evaluated using a modified soldering process. CRE-1 demonstrated the removal, cleaning, and replacement of electronics components by manual means on functional circuit boards. The majority of components successfully passed a post-repair functional test, demonstrating the feasibility of component-level repair within the confines of a spacecraft. Together, these tasks provide information to recommend material and tool improvements, training improvements, and future work to help enable electronics repairs in future space missions.

  18. Physical fitness is associated with anxiety levels in women with fibromyalgia: the al-Ándalus project.

    PubMed

    Córdoba-Torrecilla, S; Aparicio, V A; Soriano-Maldonado, A; Estévez-López, F; Segura-Jiménez, V; Álvarez-Gallardo, I; Femia, P; Delgado-Fernández, M

    2016-04-01

    To assess the independent associations of individual physical fitness components with anxiety in women with fibromyalgia and to test which physical fitness component shows the greatest association. This population-based cross-sectional study included 439 women with fibromyalgia (age 52.2 ± 8.0 years). Anxiety symptoms were measured with the State Trait Anxiety Inventory (STAI) and the anxiety item of the Revised Fibromyalgia Impact Questionnaire (FIQR). Physical fitness was assessed through the Senior Fitness Test battery and the handgrip strength test. Overall, lower physical fitness was associated with higher anxiety levels (all, p < 0.05). The coefficients of the optimal regression model (stepwise selection method) between anxiety symptoms and physical fitness components, adjusted for age, body fat percentage and anxiolytics intake, showed that the back scratch test (b = -0.18), the chair sit-and-reach test (b = -0.12; p = 0.027) and the 6-min walk test (b = -0.02; p = 0.024) were independently and inversely associated with STAI. The back scratch test and the arm-curl test were associated with FIQR-anxiety (b = -0.05; p < 0.001 and b = -0.07; p = 0.021, respectively). Physical fitness was inversely and consistently associated with anxiety in women with fibromyalgia, regardless of the fitness component evaluated. In particular, upper-body flexibility was an independent indicator of anxiety levels, followed by cardiorespiratory fitness and muscular strength.

  19. Reliability approach to rotating-component design. [fatigue life and stress concentration

    NASA Technical Reports Server (NTRS)

    Kececioglu, D. B.; Lalli, V. R.

    1975-01-01

    A probabilistic methodology for designing rotating mechanical components using reliability to relate stress to strength is explained. The experimental test machines and data obtained for steel to verify this methodology are described. A sample mechanical rotating component design problem is solved by comparing a deterministic design method with the new design-by-reliability approach. The new method shows that a smaller size and weight can be obtained for a specified rotating shaft life and reliability, and uses the statistical distortion-energy theory with statistical fatigue diagrams for optimum shaft design. Statistical methods are presented for (1) determining strength distributions for steel experimentally, (2) determining a failure theory for stress variations in a rotating shaft subjected to reversed bending and steady torque, and (3) relating strength to stress by reliability.
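    The stress-strength relation behind design-by-reliability has a closed form when both quantities are treated as independent normal variables; a minimal sketch with hypothetical shaft values (not the paper's steel data):

```python
from statistics import NormalDist

def reliability(strength_mu, strength_sd, stress_mu, stress_sd):
    """Reliability = P(strength > stress) for independent, normally
    distributed strength and stress (stress-strength interference)."""
    margin = strength_mu - stress_mu
    sd = (strength_sd ** 2 + stress_sd ** 2) ** 0.5
    return NormalDist().cdf(margin / sd)

# Hypothetical shaft: fatigue strength 620 +/- 40 MPa, applied stress 480 +/- 30 MPa
R = reliability(620, 40, 480, 30)
print(round(R, 4))   # 0.9974
```

Sizing the shaft changes the stress distribution, so the designer can iterate the diameter until the computed reliability meets the specified target, which is how the probabilistic approach trades size and weight against reliability.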

  20. CR TKA UHMWPE wear tested after artificial aging of the vitamin E treated gliding component by simulating daily patient activities.

    PubMed

    Schwiesau, Jens; Fritz, Bernhard; Kutzner, Ines; Bergmann, Georg; Grupp, Thomas M

    2014-01-01

    The wear behaviour of total knee arthroplasty (TKA) is dominated by two wear mechanisms: the abrasive wear and the delamination of the gliding components, where the second is strongly linked to aging processes and stress concentration in the material. The addition of vitamin E to the bulk material is a potential way to reduce the aging processes. This study evaluates the wear behaviour and delamination susceptibility of the gliding components of a vitamin E blended, ultra-high molecular weight polyethylene (UHMWPE) cruciate retaining (CR) total knee arthroplasty. Daily activities such as level walking, ascending and descending stairs, bending of the knee, and sitting and rising from a chair were simulated with a data set received from an instrumented knee prosthesis. After 5 million test cycles, no structural failure of the gliding components was observed. The wear rate of 5.62 ± 0.53 mg per million cycles fell within the limits of previous reports for established wear test methods.

  1. CR TKA UHMWPE Wear Tested after Artificial Aging of the Vitamin E Treated Gliding Component by Simulating Daily Patient Activities

    PubMed Central

    Schwiesau, Jens; Fritz, Bernhard; Kutzner, Ines; Bergmann, Georg; Grupp, Thomas M.

    2014-01-01

    The wear behaviour of total knee arthroplasty (TKA) is dominated by two wear mechanisms: the abrasive wear and the delamination of the gliding components, where the second is strongly linked to aging processes and stress concentration in the material. The addition of vitamin E to the bulk material is a potential way to reduce the aging processes. This study evaluates the wear behaviour and delamination susceptibility of the gliding components of a vitamin E blended, ultra-high molecular weight polyethylene (UHMWPE) cruciate retaining (CR) total knee arthroplasty. Daily activities such as level walking, ascending and descending stairs, bending of the knee, and sitting and rising from a chair were simulated with a data set received from an instrumented knee prosthesis. After 5 million test cycles, no structural failure of the gliding components was observed. The wear rate of 5.62 ± 0.53 mg per million cycles fell within the limits of previous reports for established wear test methods. PMID:25506594

  2. Identification and partial purification of pollen allergens from Artemisia princeps.

    PubMed

    Park, H S; Hong, C S; Choi, H J; Hahm, K S

    1989-12-01

    The pollen of Artemisia has been considered the main late summer-autumn allergen source in this country. To identify its allergenic components, Artemisia princeps pollen extracts were separated by 10% sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a nitrocellulose membrane, where IgE-binding components were detected by reaction with sera of twenty Artemisia-allergic patients and 125I-anti-human IgE; sixteen components in the molecular weight range of 10,000 to 85,000 daltons were detected. Twelve bands bound IgE from 50% of the sera tested, and two bands (37,000 and 23,000 daltons) showed the highest frequency (85%) of IgE binding in the twenty sera tested. When the SDS-PAGE gel with Artemisia pollen extracts was sliced into 11 allergenic groups (AG) and the protein of each AG was obtained by the gel elution method, the wormwood-RAST inhibition test showed AG 10 to be the most potent, with AG 7 next. Six AGs showed significant responses (more than 100% of the wheal size to histamine, 1 mg/ml) on the skin prick test in more than 50% of the patients tested. It is suggested that electrophoretic transfer analysis with SDS-PAGE may be a valuable method for Artemisia allergen identification, and the possibility of partial purification of allergens by gel elution is discussed.

  3. Usefulness of component resolved analysis of cat allergy in routine clinical practice.

    PubMed

    Eder, Katharina; Becker, Sven; San Nicoló, Marion; Berghaus, Alexander; Gröger, Moritz

    2016-01-01

    Cat allergy is of great importance, and its prevalence is increasing worldwide. Cat allergens and house dust mite allergens represent the major indoor allergens; however, they are ubiquitous. Cat sensitization and allergy are known risk factors for rhinitis, bronchial hyperreactivity and asthma. Thus, the diagnosis of sensitization to cats is important for any allergist. 70 patients with positive skin prick tests for cats were retrospectively compared regarding their skin prick test results and their specific immunoglobulin E antibody responses to the native cat extract, rFel d 1, nFel d 2 and rFel d 4. 35 patients were allergic to cats, as determined by positive anamnesis and/or nasal provocation with cat allergens, and 35 patients exhibited clinically non-relevant sensitization, as indicated by negative anamnesis and/or a negative nasal allergen challenge. Native cat extract serology testing detected 100% of patients who were allergic to cats but missed eight patients who showed sensitization in the skin prick test and did not have allergic symptoms. The median values of the skin prick test, as well as those of the specific immunoglobulin E antibodies against the native cat extract, were significantly higher for allergic patients than for patients with clinically non-relevant sensitization. Component based diagnostic testing to rFel d 1 was not as reliable. Sensitization to nFel d 2 and rFel d 4 was seen only in individual patients. Extract based diagnostic methods for identifying cat allergy and sensitization, such as the skin prick test and native cat extract serology, remain crucial in routine clinical practice. In our study, component based diagnostic testing could not replace these methods with regard to the detection of sensitization to cats and differentiation between allergy and sensitization without clinical relevance. 
However, component resolved allergy diagnostic tools have individual implications, and future studies may facilitate a better understanding of their use and may subsequently improve the clinical management of allergic patients.

  4. Active tower damping and pitch balancing - design, simulation and field test

    NASA Astrophysics Data System (ADS)

    Duckwitz, Daniel; Shan, Martin

    2014-12-01

    The tower is one of the major components in wind turbines, with a contribution to the cost of energy of 8 to 12% [1]. In this overview the load situation of the tower is described in terms of sources of loads, load components and fatigue contribution. Two load reduction control schemes are then described along with simulation and field test results. Pitch Balancing is a method to reduce aerodynamic asymmetry and the resulting fatigue loads. Active Tower Damping reduces tower oscillations by applying appropriate pitch angle changes. A field test was conducted on an Areva M5000 wind turbine.
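    As a rough illustration of the Active Tower Damping idea (not the controller described in the paper), velocity feedback through the pitch system can be modeled as added damping on a one-degree-of-freedom tower mode; all parameters below are assumed:

```python
import math

def residual_amplitude(extra_damping, steps=20000, dt=0.001):
    """Integrate x'' = -w^2 x - (c + extra_damping) x' from an initial
    deflection (semi-implicit Euler); return the peak amplitude over
    the final quarter of the 20 s run."""
    w, c = 2.0 * math.pi * 0.3, 0.05   # ~0.3 Hz first tower mode, light damping
    x, v, peak = 1.0, 0.0, 0.0
    for i in range(steps):
        v += (-w * w * x - (c + extra_damping) * v) * dt
        x += v * dt
        if i > 3 * steps // 4:
            peak = max(peak, abs(x))
    return peak

# Feedback gain of zero = passive tower; positive gain = active damping on
passive, active = residual_amplitude(0.0), residual_amplitude(0.5)
print(active < passive)   # True: the oscillation decays much faster
```

The real controller derives the feedback signal from a tower-top accelerometer and superimposes the damping pitch command on the normal power-control pitch demand.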

  5. Psychometric Evaluation of a Triage Decision Making Inventory

    DTIC Science & Technology

    2011-06-27

    the correlation matrix and inter-item correlations were reviewed. The Bartlett's test of sphericity and the Kaiser-Meyer-Olkin (KMO) were examined to...nursing experience. Principal component factor analysis with Varimax rotation was conducted using SPSS version 16. The Kaiser-Meyer-Olkin Measure of...Component Analysis. Rotation Method: Varimax with Kaiser Normalization. a. Rotation converged in 7 iterations
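    The snippet names principal component factor analysis; as a sketch of the extraction step alone, the two-variable case admits a closed-form eigen-decomposition of the covariance matrix (the scores below are illustrative, not the inventory data):

```python
import math

def principal_components_2d(xs, ys):
    """Eigenvalues of the 2x2 sample covariance matrix, largest first,
    i.e. the variances along the two principal components."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 + disc, tr / 2.0 - disc

# Illustrative scores on two highly correlated inventory items
xs = [1, 2, 3, 4, 5]
ys = [1.1, 1.9, 3.2, 3.8, 5.0]
l1, l2 = principal_components_2d(xs, ys)
print(round(l1 / (l1 + l2), 3))   # share of variance on the first component
```

A full factor analysis then rotates the retained components (e.g., varimax) to aid interpretation, which statistical packages such as SPSS perform automatically.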

  6. A phase one AR/C system design

    NASA Technical Reports Server (NTRS)

    Kachmar, Peter M.; Polutchko, Robert J.; Matusky, Martin; Chu, William; Jackson, William; Montez, Moises

    1991-01-01

    The Phase One AR&C System Design integrates an evolutionary design based on the legacy of previous mission successes, flight tested components from manned Rendezvous and Proximity Operations (RPO) space programs, and additional AR&C components validated using proven methods. The Phase One system has a modular, open architecture with the standardized interfaces proposed for Space Station Freedom system architecture.

  7. Development of an improved system of wood-frame house construction

    Treesearch

    L.O. Anderson

    1965-01-01

    A new system of wood-frame house construction has been developed which combines increased use of low-grade wood, prefinished components, and rapid field assembly methods without much divergence from conventional construction. Laboratory evaluations of the components of the Nu-frame system indicated that: (a) 4-foot spacing of the W-trusses tested provides a safety...

  8. Corrosion Control Test Method for Avionic Components

    DTIC Science & Technology

    1981-09-25

    pin connector assemblies *Electronic test articles exposed in an avionic box The following test parameters were used: Environment A - Modified Sulfur Dic...carrier correlation criteria in Table IV. The modified sulfur dioxide/salt fog test showed the best correlation with the carrier exposed test arti...capacitor. The HCl/H2SO3 environment and the S2Cl2 environment, as expected, produced more electrical failures than the modified sulfur dioxide test

  9. Design Evaluation of High Reliability Lithium Batteries

    NASA Technical Reports Server (NTRS)

    Buchman, R. C.; Helgeson, W. D.; Istephanous, N. S.

    1985-01-01

    Within one year, a lithium battery design can be qualified for device use through the application of accelerated discharge testing, calorimetry measurements, real time tests and other supplemental testing. Materials and corrosion testing verify that the battery components remain functional during expected battery life. By combining these various methods, a high reliability lithium battery can be manufactured for applications which require zero defect battery performance.

  10. Structural Design Considerations for an 8-m Space Telescope

    NASA Technical Reports Server (NTRS)

    Arnold, William R. Sr.; Stahl, H. Philip

    2009-01-01

    NASA's upcoming ARES V launch vehicle, with its immense payload capacity (both volume and mass), has opened the possibility of a whole new paradigm of space observatories. It becomes practical to consider a monolithic mirror of sufficient size to permit significant scientific advantages, both in collection area and in smoothness of figure, at a reasonable price. The technologies and engineering needed to manufacture and test 8-meter-class monoliths are mature, with nearly a dozen such mirrors already in operation around the world. This paper discusses the design requirements for adapting an 8-m meniscus mirror into a space telescope system; both launch and operational considerations are included. With objects this massive and structurally sensitive, the mirror design must account for all stages of the process. Based upon the experience of the Hubble Space Telescope, testing and verification at both the component and integrated-system levels are considered vital to mission success. To this end, two different component-level test methods for gravity sag (the so-called zero-gravity simulation or test mount) are proposed, with one of these methods suitable for full-up system-level testing as well.

  11. Structural design considerations for an 8-m space telescope

    NASA Astrophysics Data System (ADS)

    Arnold, William r., Sr.; Stahl, H. Philip

    2009-08-01

    NASA's upcoming ARES V launch vehicle, with its immense payload capacity (both volume and mass), has opened the possibility of a whole new paradigm of space observatories. It becomes practical to consider a monolithic mirror of sufficient size to permit significant scientific advantages, both in collection area and in smoothness of figure, at a reasonable price. The technologies and engineering needed to manufacture and test 8-meter-class monoliths are mature, with nearly a dozen such mirrors already in operation around the world. This paper discusses the design requirements for adapting an 8-m meniscus mirror into a space telescope system; both launch and operational considerations are included. With objects this massive and structurally sensitive, the mirror design must account for all stages of the process. Based upon the experience of the Hubble Space Telescope, testing and verification at both the component and integrated-system levels are considered vital to mission success. To this end, two different component-level test methods for gravity sag (the so-called zero-gravity simulation or test mount) are proposed, with one of these methods suitable for full-up system-level testing as well.

  12. Testing methods and techniques: Strength of materials and components. A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The methods, techniques, and devices used in testing the mechanical properties of various materials are presented. Although metals and metal alloys are featured prominently, some of the items describe tests on a variety of other materials, from concrete to plastics. Many of the tests described are modifications of standard testing procedures, intended either to adapt them to different materials and conditions, or to make them more rapid and accurate. In either case, the approaches presented can result in considerable cost savings and improved quality control. The compilation is presented in two sections. The first deals specifically with material strength testing; the second treats the special category of fracture and fatigue testing.

  13. The generation of monoclonal antibodies and their use in rapid diagnostic tests

    USDA-ARS?s Scientific Manuscript database

    Antibodies are the most important component of an immunoassay. In these proceedings we outline novel methods used to generate and select monoclonal antibodies that meet performance criteria for use in rapid lateral flow and microfluidic immunoassay tests for the detection of agricultural pathogens ...

  14. CRAX/Cassandra Reliability Analysis Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.

    1999-02-10

    Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf (COTS) components. In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.
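
    The record describes propagating input uncertainty through a deterministic model to characterize component reliability. A minimal Monte Carlo sketch of that idea, with hypothetical normal distributions standing in for the real material-property and load data (this is not the CRAX/CMX code itself):

```python
import random

def failure_probability(n_samples=100_000, seed=42):
    """Monte Carlo sketch: draw uncertain strength and load, and count
    how often the deterministic limit state g = strength - load goes
    negative. All distributions and moments here are hypothetical."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(500.0, 40.0)  # hypothetical material strength, MPa
        load = rng.gauss(350.0, 30.0)      # hypothetical service load, MPa
        if strength - load < 0.0:
            failures += 1
    return failures / n_samples
```

    For these inputs the limit state is g ~ N(150, 50) MPa, so the analytic failure probability is Phi(-3), about 0.00135; the Monte Carlo estimate converges toward that value as the sample count grows.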

  15. Simultaneous quantitation of 14 active components in Yinchenhao decoction by using ultra high performance liquid chromatography with diode array detection: Method development and ingredient analysis of different commonly prepared samples.

    PubMed

    Yi, YaXiong; Zhang, Yong; Ding, Yue; Lu, Lu; Zhang, Tong; Zhao, Yuan; Xu, XiaoJun; Zhang, YuXin

    2016-11-01

    We developed a novel quantitative analysis method based on ultra high performance liquid chromatography coupled with diode array detection for the simultaneous determination of the 14 main active components in Yinchenhao decoction. All components were separated on an Agilent SB-C18 column by using a gradient solvent system of acetonitrile/0.1% phosphoric acid solution at a flow rate of 0.4 mL/min for 35 min. Subsequently, linearity, precision, repeatability, and accuracy tests were implemented to validate the method. Furthermore, the method was applied to analyze compositional differences in the 14 components across eight normal-extraction Yinchenhao decoction samples, accompanied by hierarchical clustering analysis and similarity analysis. All samples fell into three groups based on component contents, demonstrating that the extraction method (decocting, refluxing, or ultrasonication) and the extraction solvent (water or ethanol) affect component composition, which should be relevant to clinical applications. The results also indicated that a sample prepared at home by water extraction in a casserole was almost the same as one prepared in a stainless-steel kettle, which is mostly used in pharmaceutical factories. This research should help patients select the best and most convenient method for preparing Yinchenhao decoction. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
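
    The similarity analysis mentioned above compares samples by their component-content profiles. One common form of such analysis in fingerprint studies is a cosine similarity matrix; the sketch below uses hypothetical contents for three preparation methods (the paper's exact metric and data are not reproduced here):

```python
import numpy as np

def similarity_matrix(contents):
    """Cosine similarity between samples' component-content vectors
    (rows = samples, columns = quantified components). Values near 1
    indicate near-identical component profiles."""
    X = np.asarray(contents, dtype=float)
    U = X / np.linalg.norm(X, axis=1, keepdims=True)
    return U @ U.T

# hypothetical contents (mg/g) of four components for three preparations
samples = [
    [12.0, 3.1, 0.8, 5.5],   # water decoction, casserole
    [11.8, 3.0, 0.9, 5.4],   # water decoction, stainless-steel kettle
    [4.0, 9.5, 2.1, 1.2],    # ethanol reflux
]
S = similarity_matrix(samples)
```

    With these numbers the two water decoctions are nearly identical (similarity close to 1) while the ethanol extract stands apart, mirroring the grouping reported in the record.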

  16. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
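
    The traditional likelihood ratio test the method relies on compares the maximized log-likelihoods of a nested pair of models. A minimal sketch for the special case of two extra parameters (df = 2), where the chi-square tail probability has the closed form exp(-LR/2); the log-likelihood values below are hypothetical:

```python
import math

def lr_test_df2(llf_nested, llf_general):
    """Likelihood ratio test sketch for nested models differing by two
    parameters (df = 2). LR = 2*(llf_general - llf_nested) is compared
    against a chi-square(2) tail, whose survival function is exp(-x/2)."""
    lr = 2.0 * (llf_general - llf_nested)
    p_value = math.exp(-lr / 2.0)
    return lr, p_value

# hypothetical fits: the semi-nonparametric model nests the MNL model
lr, p = lr_test_df2(llf_nested=-1050.0, llf_general=-1046.0)
```

    Here LR = 8.0 and p = exp(-4), about 0.018, so the restricted (standard Gumbel) model would be rejected at the 5% level under these hypothetical numbers.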

  17. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  18. Application of differential similarity to finding nondimensional groups important in tests of cooled engine components

    NASA Technical Reports Server (NTRS)

    Sucec, J.

    1977-01-01

    The method of differential similarity is applied to the partial differential equations and boundary conditions which govern the temperature, velocity, and pressure fields in the flowing gases and the solid stationary components in air-cooled engines. This procedure yields the nondimensional groups which must have the same value in both the test rig and the engine to produce similarity between the test results and the engine performance. These results guide the experimentalist in the design and selection of test equipment that properly scales quantities to actual engine conditions. They also provide a firm fundamental foundation for substantiation of previous similarity analyses which employed heuristic, physical reasoning arguments to arrive at the nondimensional groups.
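
    Matching nondimensional groups between a test rig and an engine can be illustrated with two of the standard groups. The values below are hypothetical: a pressurized (denser) low-speed rig reproduces the engine Reynolds number, which is the kind of scaling the analysis is meant to justify:

```python
def reynolds(rho, velocity, length, mu):
    """Reynolds number: ratio of inertial to viscous forces."""
    return rho * velocity * length / mu

def prandtl(mu, cp, k):
    """Prandtl number: ratio of momentum to thermal diffusivity."""
    return mu * cp / k

# hypothetical conditions: a 4x-pressurized rig at 1/4 the velocity
# matches the engine Reynolds number for the same component scale
rig_re = reynolds(rho=4.8, velocity=25.0, length=0.05, mu=1.8e-5)
engine_re = reynolds(rho=1.2, velocity=100.0, length=0.05, mu=1.8e-5)
```

    The Prandtl number, being a fluid property ratio, is matched automatically when the same working gas is used in rig and engine (about 0.7 for air).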

  19. Automatic classification of artifactual ICA-components for artifact removal in EEG signals.

    PubMed

    Winkler, Irene; Haufe, Stefan; Tangermann, Michael

    2011-08-02

    Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Based on six features only, the optimized linear classifier performed on level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data from the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data from the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components.
We propose a universal and efficient classifier of ICA components for the subject independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye- and muscle artifacts. Its performance and generalization ability is demonstrated on data of different EEG studies.
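
    The decision rule of such a linear classifier is simply a weighted sum of per-component features compared against a threshold. A sketch with hypothetical weights and a three-feature toy input (the actual classifier uses six LPM-selected features trained on expert labels):

```python
import numpy as np

def flag_artifact(features, weights, bias):
    """Linear decision rule sketch: a component is flagged as
    artifactual when w.x + b > 0. Weights are assumed pre-trained,
    as with the LPM-selected classifier in the record."""
    return float(np.dot(weights, features) + bias) > 0.0

# hypothetical 3-feature example: [high-frequency power ratio,
# spatial focality, temporal kurtosis], with hypothetical weights
w = np.array([2.0, 1.5, 0.5])
b = -1.0
muscle_like = np.array([0.9, 0.2, 0.1])  # strong high-frequency power
brain_like = np.array([0.1, 0.1, 0.2])
```

    Muscle artifacts typically carry broadband high-frequency power, so a component like `muscle_like` scores above the threshold while `brain_like` does not.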

  20. Selective adsorption of flavor-active components on hydrophobic resins.

    PubMed

    Saffarionpour, Shima; Sevillano, David Mendez; Van der Wielen, Luuk A M; Noordman, T Reinoud; Brouwer, Eric; Ottens, Marcel

    2016-12-09

    This work aims to propose an optimum resin for use in an industrial adsorption process for tuning flavor-active components or removing ethanol to produce an alcohol-free beer. A procedure is reported for the selective adsorption of volatile aroma components from water/ethanol mixtures on synthetic hydrophobic resins. High-throughput 96-well microtiter-plate batch uptake experimentation is applied to screen resins for adsorption of esters (i.e. isoamyl acetate and ethyl acetate), higher alcohols (i.e. isoamyl alcohol and isobutyl alcohol), a diketone (diacetyl), and ethanol. The miniaturized batch uptake method is adapted for adsorption of volatile components and validated with column breakthrough analysis. The results of single-component adsorption tests on Sepabeads SP20-SS are fitted with single-component Langmuir, Freundlich, and Sips isotherm models, and multi-component versions of the Langmuir and Sips models are applied to the multi-component adsorption results obtained on several tested resins. The adsorption parameters are regressed and the selectivity over ethanol is calculated for each tested component and resin. Resin scores for four different scenarios of selective adsorption of esters, higher alcohols, diacetyl, and ethanol are obtained. The optimal resin for adsorption of esters is Sepabeads SP20-SS with a resin score of 87%; for selective removal of higher alcohols, XAD16N and XAD4 from the Amberlite resin series are proposed with scores of 80% and 74%, respectively. For adsorption of diacetyl, the XAD16N and XAD4 resins with a score of 86% are the optimum choice, and the Sepabeads SP2MGS and XAD761 resins showed the highest affinity toward ethanol. Copyright © 2016 Elsevier B.V. All rights reserved.
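
    The Langmuir isotherm used above relates equilibrium loading q to concentration C as q = q_max*K*C/(1 + K*C). One illustrative way to regress its parameters is the classic linearized form; the data below are noise-free synthetic values, not the paper's measurements, and the fitting route is an assumption for illustration:

```python
import numpy as np

def langmuir(c, q_max, k):
    """Single-component Langmuir isotherm: loading vs. concentration."""
    return q_max * k * c / (1.0 + k * c)

def fit_langmuir_linearized(c, q):
    """Linearized Langmuir fit: C/q = C/q_max + 1/(K*q_max), so a
    straight-line fit of C/q against C recovers both parameters."""
    slope, intercept = np.polyfit(c, np.asarray(c) / np.asarray(q), 1)
    q_max = 1.0 / slope
    k = slope / intercept
    return q_max, k

c = np.array([0.1, 0.5, 1.0, 2.0, 5.0])   # hypothetical mmol/L
q = langmuir(c, q_max=2.0, k=1.5)         # synthetic loadings
```

    With noise-free data the linearized fit recovers q_max = 2.0 and K = 1.5 exactly; with real data, nonlinear regression on the original form is usually preferred because linearization distorts the error weighting.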

  1. Exploring the construct validity of the social cognition and object relations scale in a clinical sample.

    PubMed

    Stein, Michelle B; Slavin-Mulford, Jenelle; Sinclair, S Justin; Siefert, Caleb J; Blais, Mark A

    2012-01-01

    The Social Cognition and Object Relations Scale-Global rating method (SCORS-G; Stein, Hilsenroth, Slavin-Mulford, & Pinsker, 2011; Westen, 1995) measures the quality of object relations in narrative material. This study employed a multimethod approach to explore the structure and construct validity of the SCORS-G. The Thematic Apperception Test (TAT; Murray, 1943) was administered to 59 patients referred for psychological assessment at a large Northeastern U.S. hospital. The resulting 301 TAT narratives were rated using the SCORS-G method. The 8 SCORS variables were found to have high interrater reliability and good internal consistency. Principal components analysis revealed a 3-component solution with components tapping emotions/affect regulation in relationships, self-image, and aspects of cognition. Next, the construct validity of the SCORS-G components was explored using measures of intellectual and executive functioning, psychopathology, and normal personality. The 3 SCORS-G components showed unique and theoretically meaningful relationships across these broad and diverse psychological measures. This study demonstrates the value of using a standardized scoring method, like the SCORS-G, to reveal the rich and complex nature of narrative material.
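
    The principal components analysis reported above amounts to an eigendecomposition of the inter-variable correlation matrix of the ratings. A generic sketch of the variance-explained computation (random data stand in for the actual SCORS-G ratings, which are not reproduced here):

```python
import numpy as np

def pca_variance_explained(X):
    """PCA sketch: eigenvalues of the inter-variable correlation matrix
    (rows = narratives, columns = rated variables) give the proportion
    of variance captured by each principal component."""
    R = np.corrcoef(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(R)[::-1]   # sorted descending
    return eigvals / eigvals.sum()

# stand-in data: 59 narratives rated on 8 variables (random, for shape only)
rng = np.random.default_rng(0)
ratios = pca_variance_explained(rng.normal(size=(59, 8)))
```

    In a study like this one, a 3-component solution would be retained when the first three ratios dominate (e.g. by a scree or eigenvalue-greater-than-one criterion); with random data the ratios are nearly flat.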

  2. Image Fusion of CT and MR with Sparse Representation in NSST Domain

    PubMed Central

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to produce an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. First, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach, and a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. PMID:29250134
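
    The absolute-maximum rule for the high-frequency subbands is simple to sketch: for each coefficient position, keep whichever modality's coefficient has the larger magnitude. The NSST transform itself and the SR/DGSR low-frequency merge are omitted here:

```python
import numpy as np

def fuse_high_freq(a, b):
    """Absolute-maximum fusion rule for high-frequency subband
    coefficients: keep the coefficient with larger magnitude,
    preserving the stronger edge/detail response of either modality."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

    For example, fusing coefficients [1, -5] (CT subband) with [-2, 3] (MR subband) keeps -2 at the first position and -5 at the second.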

  3. Image Fusion of CT and MR with Sparse Representation in NSST Domain.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to produce an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. First, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach, and a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation.

  4. How Magnetic Disturbance Influences the Attitude and Heading in Magnetic and Inertial Sensor-Based Orientation Estimation

    PubMed Central

    Li, Qingguo

    2017-01-01

    With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more and more accurate, lightweight, smaller in size as well as low-cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affects the attitude and heading estimation for a magnetic and inertial sensor. First, we reviewed four major components dealing with magnetic disturbance, namely decoupling attitude estimation from magnetic reading, gyro bias estimation, adaptive strategies of compensating magnetic disturbance and sensor fusion algorithms. We review and analyze the features of existing methods of each component. Second, to understand each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented, including gradient descent algorithms, improved explicit complementary filter, dual-linear Kalman filter and extended Kalman filter. Finally, a new standardized testing procedure has been developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strength and weakness of the existing sensor fusion methods were easily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing new sensor fusion method. PMID:29283432
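
    The fusion idea common to the methods compared above can be sketched with a one-axis complementary filter: trust the integrated gyro rate at high frequency and the accelerometer tilt at low frequency. This is a generic baseline, not any of the four filters tested in the paper; heading estimation would additionally need the magnetometer, which is where magnetic disturbance enters:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One-axis complementary filter sketch: blend the gyro-integrated
    angle (high-frequency trust) with the accelerometer-derived tilt
    angle (low-frequency trust). alpha sets the crossover."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

    In a static pose (zero gyro rate, accelerometer reading a constant tilt), the estimate converges geometrically to the accelerometer angle, with the gyro term suppressing accelerometer noise during motion.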

  5. 46 CFR 160.049-5 - Inspections and tests. 1

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Specification for a Buoyant Cushion Plastic Foam § 160.049-5... maintain quality control of the materials used, manufacturing methods and the finished product so as to... samples and components produced to maintain the quality of the finished product. Records of tests...

  6. 46 CFR 160.049-5 - Inspections and tests. 1

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Specification for a Buoyant Cushion Plastic Foam § 160.049-5... maintain quality control of the materials used, manufacturing methods and the finished product so as to... samples and components produced to maintain the quality of the finished product. Records of tests...

  7. 46 CFR 160.049-5 - Inspections and tests. 1

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Specification for a Buoyant Cushion Plastic Foam § 160.049-5... maintain quality control of the materials used, manufacturing methods and the finished product so as to... samples and components produced to maintain the quality of the finished product. Records of tests...

  8. Verification test report on a solar heating and hot water system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Information is provided on the development, qualification, and acceptance verification of commercial solar heating and hot water systems and components. The verification covers performance, efficiency, and the various methods used, such as similarity, analysis, inspection, test, etc., that are applicable to satisfying the verification requirements.

  9. Blade loss transient dynamic analysis of turbomachinery

    NASA Technical Reports Server (NTRS)

    Stallone, M. J.; Gallardo, V.; Storace, A. F.; Bach, L. J.; Black, G.; Gaffney, E. F.

    1982-01-01

    This paper reports on work completed to develop an analytical method for predicting the transient non-linear response of a complete aircraft engine system due to the loss of a fan blade, and to validate the analysis by comparing the results against actual blade loss test data. The solution, which is based on the component element method, accounts for rotor-to-casing rubs, high damping, and the rapid deceleration rates associated with the blade loss event. A comparison of test results and predicted response shows good agreement except for an initial overshoot spike not observed in test. The method is effective for analysis of large systems.

  10. Concordant 241Pu-241Am Dating of Environmental Samples: Results from Forest Fire Ash

    NASA Astrophysics Data System (ADS)

    Goldstein, S. J.; Oldham, W. J.; Murrell, M. T.; Katzman, D.

    2010-12-01

    We have measured the Pu, 237Np, 241Am, and 151Sm isotopic systematics for a set of forest fire ash samples from various locations in the western U.S. including Montana, Wyoming, Idaho, and New Mexico. The goal of this study is to develop a concordant 241Pu (t1/2 = 14.4 y)-241Am dating method for environmental collections. Environmental samples often contain mixtures of components including global fallout. There are a number of approaches for subtracting the global fallout component for such samples. One approach is to use 242Pu/239Pu as a normalizing isotope ratio in a three-isotope plot, where this ratio for the non-global fallout component can be estimated or assumed to be small. This study investigates a new, complementary method of normalization using the long-lived fission product, 151Sm (t1/2 = 90 y). We find that forest fire ash concentrates actinides and fission products with ~1E10 atoms 239Pu/g and ~1E8 atoms 151Sm/g, allowing us to measure these nuclides by mass spectrometric (MIC-TIMS) and radiometric (liquid scintillation counting) methods. The forest fire ash samples are characterized by a western U.S. regional isotopic signature representing varying mixtures of global fallout with a local component from atmospheric testing of nuclear weapons at the Nevada Test Site (NTS). Our results also show that 151Sm is well correlated with the Pu nuclides in the forest fire ash, suggesting that these nuclides have similar geochemical behavior in the environment. Results of this correlation indicate that the 151Sm/239Pu atom ratio for global fallout is ~0.164, in agreement with an independent estimate of 0.165 based on 137Cs fission yields for atmospheric weapons tests at the NTS. 241Pu-241Am dating of the non-global fallout component in the forest fire ash samples yields ages in the late 1950s to early 1960s, consistent with a peak in NTS weapons testing at that time.
The age results for this component are in agreement using both 242Pu and 151Sm normalizations, although the errors for the 151Sm correction are currently larger due to the greater uncertainty of their measurements. Additional efforts to develop a concordant 241Pu-241Am dating method for environmental collections are underway with emphasis on soil cores.
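
    The 241Pu-241Am chronometer rests on the standard Bateman ingrowth of 241Am from 241Pu decay. A sketch of the forward equation and its inversion, using the 14.4 y half-life stated in the record and an assumed 432.2 y half-life for 241Am (not stated in the record):

```python
import math

LAMBDA_PU241 = math.log(2) / 14.4     # 1/yr, 241Pu half-life (14.4 yr)
LAMBDA_AM241 = math.log(2) / 432.2    # 1/yr, 241Am half-life (assumed 432.2 yr)

def am_pu_ratio(age_years):
    """241Am/241Pu atom ratio grown in after age_years, assuming an
    initially pure 241Pu end-member (Bateman ingrowth, no initial Am)."""
    lp, la = LAMBDA_PU241, LAMBDA_AM241
    return lp / (la - lp) * (1.0 - math.exp((lp - la) * age_years))

def age_from_ratio(ratio):
    """Invert the ingrowth equation to date a sample from its measured,
    fallout-corrected 241Am/241Pu atom ratio."""
    lp, la = LAMBDA_PU241, LAMBDA_AM241
    return math.log(1.0 - ratio * (la - lp) / lp) / (lp - la)
```

    After about 50 years (roughly the age of the NTS-era component when these samples were measured) the 241Am/241Pu atom ratio has grown to nearly 10, which is why the chronometer is sensitive for mid-20th-century sources.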

  11. Concordant plutonium-241-americium-241 dating of environmental samples: results from forest fire ash

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, Steven J; Oldham, Warren J; Murrell, Michael T

    2010-12-07

    We have measured the Pu, {sup 237}Np, {sup 241}Am, and {sup 151}Sm isotopic systematics for a set of forest fire ash samples from various locations in the western U.S. including Montana, Wyoming, Idaho, and New Mexico. The goal of this study is to develop a concordant {sup 241}Pu (t{sub 1/2} = 14.4 y)-{sup 241}Am dating method for environmental collections. Environmental samples often contain mixtures of components including global fallout. There are a number of approaches for subtracting the global fallout component for such samples. One approach is to use {sup 242}Pu/{sup 239}Pu as a normalizing isotope ratio in a three-isotope plot, where this ratio for the non-global fallout component can be estimated or assumed to be small. This study investigates a new, complementary method of normalization using the long-lived fission product, {sup 151}Sm (t{sub 1/2} = 90 y). We find that forest fire ash concentrates actinides and fission products with {approx}1E10 atoms {sup 239}Pu/g and {approx}1E8 atoms {sup 151}Sm/g, allowing us to measure these nuclides by mass spectrometric (MIC-TIMS) and radiometric (liquid scintillation counting) methods. The forest fire ash samples are characterized by a western U.S. regional isotopic signature representing varying mixtures of global fallout with a local component from atmospheric testing of nuclear weapons at the Nevada Test Site (NTS). Our results also show that {sup 151}Sm is well correlated with the Pu nuclides in the forest fire ash, suggesting that these nuclides have similar geochemical behavior in the environment. Results of this correlation indicate that the {sup 151}Sm/{sup 239}Pu atom ratio for global fallout is {approx}0.164, in agreement with an independent estimate of 0.165 based on {sup 137}Cs fission yields for atmospheric weapons tests at the NTS. 
    {sup 241}Pu-{sup 241}Am dating of the non-global fallout component in the forest fire ash samples yields ages in the late 1950s to early 1960s, consistent with a peak in NTS weapons testing at that time. The age results for this component are in agreement using both {sup 242}Pu and {sup 151}Sm normalizations, although the errors for the {sup 151}Sm correction are currently larger due to the greater uncertainty of their measurements. Additional efforts to develop a concordant {sup 241}Pu-{sup 241}Am dating method for environmental collections are underway with emphasis on soil cores.

  12. COMMUNITY-ORIENTED DESIGN AND EVALUATION PROCESS FOR SUSTAINABLE INFRASTRUCTURE

    EPA Science Inventory

    We met our first objective by completing the physical infrastructure of the La Fortuna-Tule water and sanitation project using the CODE-PSI method. This physical component of the project was important in providing a real, relevant, community-scale test case for the methods ...

  13. Manufacturing and testing of a prototypical divertor vertical target for ITER

    NASA Astrophysics Data System (ADS)

    Merola, M.; Plöchl, L.; Chappuis, Ph; Escourbiac, F.; Grattarola, M.; Smid, I.; Tivey, R.; Vieider, G.

    2000-12-01

    After an extensive R&D activity, a medium-scale divertor vertical target prototype has been manufactured by the EU Home Team. This component contains all the main features of the corresponding ITER divertor design and consists of two units with one cooling channel each, assembled together and having an overall length and width of about 600 and 50 mm, respectively. The upper part of the prototype has a tungsten macro-brush armour, whereas the lower part is covered by CFC monoblocks. A number of joining techniques were required to manufacture this component, as well as an appreciable effort in the development of suitable non-destructive testing methods. The component was high-heat-flux tested in the FE200 electron beam facility at Le Creusot, France. It endured 100 cycles at 5 MW/m2, 1000 cycles at 10 MW/m2, and more than 1000 cycles at 15-20 MW/m2. The final critical heat flux test reached a value in excess of 30 MW/m2.

  14. Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals

    PubMed Central

    2011-01-01

Background Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used different channel setups and new subjects. Results Based on six features only, the optimized linear classifier performed on level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components.
Conclusions We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable to different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data from different EEG studies. PMID:21810266

  15. Study of advanced techniques for determining the long-term performance of components

    NASA Technical Reports Server (NTRS)

    1972-01-01

A study was conducted of techniques having the capability of determining the performance and reliability of components for spacecraft liquid propulsion applications on long-term missions. The study utilized two major approaches: improvement of the existing technology, and the evolution of new technology. The criteria established and methods evolved are applicable to valve components. Primary emphasis was placed on the oxygen difluoride and diborane propellant combination. The investigation included analysis, fabrication, and tests of experimental equipment to provide data and performance criteria.

  16. Conservatism implications of shock test tailoring for multiple design environments

    NASA Technical Reports Server (NTRS)

    Baca, Thomas J.; Bell, R. Glenn; Robbins, Susan A.

    1987-01-01

A method for analyzing shock conservation in test specifications that have been tailored to qualify a structure for multiple design environments is discussed. Shock test conservation is quantified for shock response spectra, shock intensity spectra and ranked peak acceleration data in terms of an Index of Conservation (IOC) and an Overtest Factor (OTF). The multi-environment conservation analysis addresses the issue of both absolute and average conservation. The method is demonstrated in a case where four laboratory tests have been specified to qualify a component which must survive seven different field environments. Final judgment of the tailored test specification is shown to require an understanding of the predominant failure modes of the test item.

  17. Color measurement and discrimination

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

Theories of color measurement attempt to provide a quantitative means for predicting whether two lights will be discriminable to an average observer. All color measurement theories can be characterized as follows: suppose lights a and b evoke responses from three color channels characterized as vectors, v(a) and v(b); the vector difference v(a) - v(b) corresponds to a set of channel responses that would be generated by some real light, call it *. According to the theory, a and b will be discriminable when * is detectable. A detailed development and test of the classic color measurement approach are reported. In the absence of a luminance component in the test stimuli, a and b, the theory holds well. In the presence of a luminance component, the theory is clearly false. When a luminance component is present, discrimination judgments depend largely on whether the lights being discriminated fall in separate, categorical regions of color space. The results suggest that sensory estimation of surface color uses different methods, and the choice of method depends upon properties of the image. When there is significant luminance variation a categorical method is used, while in the absence of significant luminance variation judgments are continuous and consistent with the measurement approach.

  18. Proportion estimation using prior cluster purities

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    The prior distribution of CLASSY component purities is studied, and this information incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.

  19. Accelerated test program for sealed nickel-cadmium spacecraft batteries/cells

    NASA Technical Reports Server (NTRS)

    Goodman, L. A.

    1976-01-01

The feasibility of conducting an accelerated test on sealed nickel-cadmium batteries or cells was examined as a tool for spacecraft projects and battery users to determine: (1) the prediction of life capability; (2) a method of evaluating the effect of design and component changes in cells; and (3) a means of reducing the time and cost of cell testing.

  20. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
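The two-component mixture just described lends itself to a compact illustration. The sketch below is not the authors' implementation: it fits f(z) = π₀·N(0,1) + (1 − π₀)·N(μ₁, σ₁²) to gene-wise z-scores by EM, holding the null component at the theoretical N(0,1); the E-step posterior τ₀ is then the estimated probability that each gene is null. Function names, starting values, and the iteration count are illustrative assumptions.

```python
import numpy as np

def normal_pdf(z, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at z."""
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def fit_two_component(z, n_iter=200):
    """EM fit of f(z) = pi0*N(0,1) + (1-pi0)*N(mu1, sigma1^2).

    The null component is fixed at the theoretical N(0,1); only the
    mixing proportion pi0 and the non-null parameters are estimated.
    Returns (pi0, mu1, sigma1, tau0) where tau0[i] is the posterior
    probability that gene i is null.
    """
    pi0, mu1, sigma1 = 0.9, float(np.mean(z)), float(np.std(z)) + 1.0
    for _ in range(n_iter):
        # E-step: posterior probability that each gene belongs to the null
        f0 = pi0 * normal_pdf(z, 0.0, 1.0)
        f1 = (1.0 - pi0) * normal_pdf(z, mu1, sigma1)
        tau0 = f0 / (f0 + f1)
        # M-step: update mixing proportion and non-null mean/spread
        pi0 = tau0.mean()
        w = 1.0 - tau0
        mu1 = (w * z).sum() / w.sum()
        sigma1 = np.sqrt((w * (z - mu1) ** 2).sum() / w.sum())
    return pi0, mu1, sigma1, tau0
```

On simulated data with 90% null genes and a well-separated non-null component, the recovered π₀ lands near the true proportion and extreme z-scores receive near-zero null posteriors.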

  1. Fourier-transform and global contrast interferometer alignment methods

    DOEpatents

    Goldberg, Kenneth A.

    2001-01-01

    Interferometric methods are presented to facilitate alignment of image-plane components within an interferometer and for the magnified viewing of interferometer masks in situ. Fourier-transforms are performed on intensity patterns that are detected with the interferometer and are used to calculate pseudo-images of the electric field in the image plane of the test optic where the critical alignment of various components is being performed. Fine alignment is aided by the introduction and optimization of a global contrast parameter that is easily calculated from the Fourier-transform.
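The patent's actual global contrast parameter is not specified in this abstract. As a hypothetical illustration of the idea, the sketch below scores a detected one-dimensional fringe pattern by the ratio of its strongest non-DC Fourier peak to the DC term, which equals the fringe visibility for an ideal cosine fringe; alignment could then be tuned to maximize such a figure of merit.

```python
import numpy as np

def global_contrast(intensity):
    """Illustrative global-contrast figure of merit: twice the strongest
    non-DC Fourier magnitude divided by the DC term. For an ideal fringe
    I(x) = 1 + V*cos(2*pi*f*x) this recovers the visibility V."""
    spectrum = np.abs(np.fft.rfft(intensity))
    return 2.0 * spectrum[1:].max() / spectrum[0]
```

For a fringe with visibility 0.6 sampled over an integer number of periods, the function returns 0.6; for a flat (fringe-free) intensity pattern it returns essentially zero.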

  2. Consistent Practices for the Probability of Detection (POD) of Fracture Critical Metallic Components Project

    NASA Technical Reports Server (NTRS)

Hughitt, Brian; Generazio, Edward (Principal Investigator); Nichols, Charles; Myers, Mika (Principal Investigator); Spencer, Floyd (Principal Investigator); Waller, Jess (Principal Investigator); Wladyka, Jordan (Principal Investigator); Aldrin, John; Burke, Eric; Cerecerez, Laura; et al.

    2016-01-01

NASA-STD-5009 requires that successful flaw detection by NDE methods be statistically qualified for use on fracture critical metallic components, but does not standardize practices. This task works towards standardizing calculations and record retention with a web-based tool, the NNWG POD Standards Library (NPSL). Test methods will also be standardized with an appropriately flexible appendix to NASA-STD-5009 identifying best practices. Additionally, this appendix will describe how specimens used to qualify NDE systems will be cataloged, stored and protected from corrosion, damage, or loss.

  3. Partitioning Ocean Wave Spectra Obtained from Radar Observations

    NASA Astrophysics Data System (ADS)

    Delaye, Lauriane; Vergely, Jean-Luc; Hauser, Daniele; Guitton, Gilles; Mouche, Alexis; Tison, Celine

    2016-08-01

2D wave spectra of ocean waves can be partitioned into several wave components to better characterize the scene. We present here two methods of component detection: one based on a watershed algorithm and the other based on a Bayesian approach. We tested both methods on a set of simulated SWIM data; SWIM is the Ku-band real aperture radar embarked on the CFOSAT (China-France Oceanography Satellite) mission, whose launch is planned for mid-2018. We present the results and the limits of both approaches and show that the Bayesian method can also be applied to other kinds of wave spectra observations, such as those obtained with the radar KuROS, an airborne radar wave spectrometer.
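The watershed-style partitioning mentioned above can be sketched generically: assign every bin of the 2D spectrum to the local maximum it reaches by steepest ascent, so that each wave system's energy peak collects its own region. This is an illustrative inverse-watershed labeling in plain NumPy, not the SWIM processing code; the neighborhood rule and grid shapes are assumptions.

```python
import numpy as np

def partition_spectrum(E):
    """Label each bin of a 2D spectrum E with the local maximum reached
    by steepest ascent over its 8 neighbours (inverse-watershed)."""
    ny, nx = E.shape
    # For every cell, record the coordinates of its steepest-ascent
    # neighbour (or itself, if it is a local maximum).
    parent = np.empty((ny, nx, 2), dtype=int)
    for iy in range(ny):
        for ix in range(nx):
            best, by, bx = E[iy, ix], iy, ix
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    y, x = iy + dy, ix + dx
                    if 0 <= y < ny and 0 <= x < nx and E[y, x] > best:
                        best, by, bx = E[y, x], y, x
            parent[iy, ix] = by, bx
    # Follow ascent paths to their terminal peaks and label partitions.
    labels = np.zeros((ny, nx), dtype=int)
    peak_ids = {}
    for iy in range(ny):
        for ix in range(nx):
            y, x = iy, ix
            while tuple(parent[y, x]) != (y, x):
                y, x = parent[y, x]
            labels[iy, ix] = peak_ids.setdefault((y, x), len(peak_ids) + 1)
    return labels
```

On a synthetic spectrum made of two well-separated energy bumps, the labeling recovers exactly two partitions, one per peak.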

  4. Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method

    NASA Astrophysics Data System (ADS)

    De Waal, Sybrand A.

    1996-07-01

    A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959-summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.

  5. Which factors may affect the willingness to take the HIV test? A research on Italian adults' sample.

    PubMed

    Mancini, Tiziana; Foà, Chiara

    2013-09-01

Background and aim. Why do people not take the HIV test? The literature on health-related behaviors associated with HIV infection has highlighted the role played by socio-demographic, behavioral, and cognitive variables. Less often studied is the impact of the psychosocial and normative factors that can affect willingness to take the HIV test. The aim of this study was to investigate the main psycho-social factors that promote or inhibit the intention to take the HIV test. Method. A questionnaire was administered to a sample of 775 Italian adults (50.7% female; mean age = 37.24; SD = 10.94; range 17-66 years). Results. Logistic regression analysis showed that age, risk behaviors, and personal concern are significant predictors of the intention, although a positive attitude towards the HIV test is the strongest predictor. Results also showed that the normative component of attitude (perception of social disapproval) and the emotional component (shame and embarrassment) discouraged people from taking the test, while the cognitive-rational component did not. Conclusions. It is the perception of social disapproval by "significant others" and the social emotions of shame and embarrassment that discourage people from taking the test. Implications are discussed.

  6. Degradation mechanisms and accelerated testing in PEM fuel cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borup, Rodney L; Mukundan, Rangachary

    2010-01-01

The durability of PEM fuel cells is a major barrier to the commercialization of these systems for stationary and transportation power applications. Although there has been recent progress in improving durability, further improvements are needed to meet the commercialization targets. Past improvements have largely been made possible by the fundamental understanding of the underlying degradation mechanisms. By investigating component and cell degradation modes and defining the fundamental degradation mechanisms of components and component interactions, new materials can be designed to improve durability. Various factors have been shown to affect the useful life of PEM fuel cells. Other issues arise from component optimization. Operational conditions (such as impurities in either the fuel or oxidant stream), cell environment, temperature (including subfreezing exposure), pressure, current, voltage, etc., or transient versus continuous operation, including start-up and shutdown procedures, represent other factors that can affect cell performance and durability. The need for Accelerated Stress Tests (ASTs) can be quickly understood given the target lives for fuel cell systems: 5,000 hours (~7 months) for automotive, and 40,000 hours (~4.6 years) for stationary systems. Thus testing methods that enable more rapid screening of individual components to determine their durability characteristics, such as off-line environmental testing, are needed for evaluating new component durability in a reasonable turn-around time. This allows proposed improvements in a component to be evaluated rapidly and independently, subsequently allowing rapid advancement in PEM fuel cell durability. These tests are also crucial to developers in order to make sure that they do not sacrifice durability while making improvements in costs (e.g. lower platinum group metal [PGM] loading) and performance (e.g. thinner membrane or a GDL with better water management properties). To achieve a deeper understanding and improve PEM fuel cell durability, LANL is conducting research to better define fuel cell component degradation mechanisms and correlate AST measurements to component degradation in 'real-world' situations.

  7. [Psychometric properties of the Polish version of the Oldenburg Burnout Inventory (OLBI)].

    PubMed

    Baka, Łukasz; Basińska, Beata A

    2016-01-01

The objective of this study was to test the psychometric properties of the Polish version of the Oldenburg Burnout Inventory (OLBI) - its factor structure, reliability, validity and standard norms. The study was conducted on 3 independent samples of 1804, 366 and 48 workers employed in social service and general service professions. To test the OLBI structure the exploratory factor analysis was conducted. The reliability was assessed by means of Cronbach's α coefficient (internal consistency) and the test-retest (stability over time) method, with a 6-week follow-up. The construct validity of the OLBI was tested by means of correlation analysis, using perceived stress and work engagement as the criterion variables. The result of the factor analysis confirmed a 2-factor structure of the Inventory, but the construction of each factor differed from that in the original OLBI version. Therefore, 2 separate factor analyses - one for each component of job burnout (exhaustion and disengagement from work) - were conducted. The analyses revealed that each of the components consisted of 2 subscales. The reliability of the OLBI was supported by both methods. It was also shown that job burnout and its 2 components, exhaustion and disengagement from work, were positively correlated with perceived stress and negatively correlated with work engagement and its 3 components - vigor, absorption and dedication. Despite certain limitations, the Polish version of the OLBI shows satisfactory psychometric properties and can be used to measure job burnout in Polish conditions. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  8. Development of the GPM Observatory Thermal Vacuum Test Model

    NASA Technical Reports Server (NTRS)

    Yang, Kan; Peabody, Hume

    2012-01-01

A software-based thermal modeling process was documented for generating the thermal panel settings necessary to simulate worst-case on-orbit flight environments in an observatory-level thermal vacuum test setup. The method for creating such a thermal model involved four major steps: (1) determining the major thermal zones for test as indicated by the major dissipating components on the spacecraft, then mapping the major heat flows between these components; (2) finding the flight-equivalent sink temperatures for these test thermal zones; (3) determining the thermal test ground support equipment (GSE) design and initial thermal panel settings based on the equivalent sink temperatures; and (4) adjusting the panel settings in the test model to match heat flows and temperatures with the flight model. The observatory test thermal model developed from this process allows quick predictions of the performance of the thermal vacuum test design. In this work, the method described above was applied to the Global Precipitation Measurement (GPM) core observatory spacecraft, a joint project between NASA and the Japan Aerospace Exploration Agency (JAXA) which is currently being integrated at NASA Goddard Space Flight Center for launch in early 2014. Preliminary results show that the heat flows and temperatures of the test thermal model match those of the flight thermal model fairly well, indicating that the test model can simulate the on-orbit conditions fairly accurately. However, further analysis is needed to determine the best test configuration possible to validate the GPM thermal design before the start of environmental testing later this year. Also, while this analysis method has been applied solely to GPM, it should be emphasized that the same process can be applied to any mission to develop an effective test setup and panel settings which accurately simulate on-orbit thermal environments.
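Step (2) of the process above, finding flight-equivalent sink temperatures, can be illustrated with a simple radiative collapse. This is an assumption for illustration only (the actual GPM model bookkeeping is more involved): a node radiatively coupled to several sinks sees the same net heat flow as a single sink at the coupling-weighted fourth-power mean temperature.

```python
def equivalent_sink_temperature(couplings_W_per_K4, temps_K):
    """Single equivalent sink temperature for a node with several
    radiative couplings G_j (W/K^4) to sinks at temperatures T_j (K):

        Tsink^4 = sum(G_j * T_j^4) / sum(G_j)

    so the net radiative heat flow from the node is unchanged."""
    num = sum(g * t ** 4 for g, t in zip(couplings_W_per_K4, temps_K))
    return (num / sum(couplings_W_per_K4)) ** 0.25
```

With a single sink the function simply returns that sink's temperature; with several, the result lies between the extremes but above the arithmetic mean, because hot sinks dominate through the fourth power.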

  9. Method of detecting system function by measuring frequency response

    DOEpatents

    Morrison, John L.; Morrison, William H.

    2008-07-01

A real-time battery impedance spectrum is acquired using one time record via Compensated Synchronous Detection (CSD). This parallel method enables battery diagnostics. The excitation current to a test battery is a sum of equal-amplitude sine waves at a few frequencies spread over the range of interest. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average removed, represents the impedance of the battery in the time domain. Since the excitation frequencies are known, synchronous detection processes the time record and each component, both magnitude and phase, is obtained. For compensation, the components, except the one of interest, are reassembled in the time domain. The resulting signal is subtracted from the original signal and the component of interest is synchronously detected again. This process is repeated for each component.
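The compensation loop described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the patented implementation; it assumes the record spans an integer number of periods of every excitation frequency, so the sine references are orthogonal over the record.

```python
import numpy as np

def synchronous_detect(signal, t, freq):
    """Recover magnitude and phase of one frequency component by
    correlating the record with in-phase and quadrature references."""
    i = 2.0 * np.mean(signal * np.sin(2.0 * np.pi * freq * t))
    q = 2.0 * np.mean(signal * np.cos(2.0 * np.pi * freq * t))
    return np.hypot(i, q), np.arctan2(q, i)

def csd(signal, t, freqs):
    """Compensated synchronous detection: for each frequency, rebuild
    the other components from their first-pass estimates, subtract them
    from the record, and re-detect the component of interest."""
    first_pass = [synchronous_detect(signal, t, f) for f in freqs]
    result = []
    for k, f in enumerate(freqs):
        others = sum(
            m * np.sin(2.0 * np.pi * fj * t + p)
            for j, (fj, (m, p)) in enumerate(zip(freqs, first_pass))
            if j != k
        )
        result.append(synchronous_detect(signal - others, t, f))
    return result
```

Detecting a three-tone record built from known amplitudes and phases recovers those parameters to numerical precision when the tones fall on exact bins of the record.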

  10. The Application of Strain Range Partitioning Method to Torsional Creep-Fatigue Interaction

    NASA Technical Reports Server (NTRS)

    Zamrik, S. Y.

    1975-01-01

The method of strain range partitioning was applied to a series of torsional fatigue tests conducted on tubular 304 stainless steel specimens at 1200 F. Creep strain was superimposed on cycling strain, and the resulting strain range was partitioned into four components: completely reversed plastic shear strain, plastic shear strain followed by creep strain, creep strain followed by plastic strain, and completely reversed creep strain. Each strain component was related to the cyclic life of the material. The damaging effects of the individual strain components were expressed by a linear life fraction rule. The plastic shear strain component was the least detrimental when compared to creep strain reversed by plastic strain; in the latter case, a reduction of torsional fatigue life by a factor on the order of 1.5 was observed.

  11. Method of Detecting System Function by Measuring Frequency Response

    NASA Technical Reports Server (NTRS)

    Morrison, John L. (Inventor); Morrison, William H. (Inventor)

    2008-01-01

A real-time battery impedance spectrum is acquired using one time record via Compensated Synchronous Detection (CSD). This parallel method enables battery diagnostics. The excitation current to a test battery is a sum of equal-amplitude sine waves at a few frequencies spread over the range of interest. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average removed, represents the impedance of the battery in the time domain. Since the excitation frequencies are known, synchronous detection processes the time record and each component, both magnitude and phase, is obtained. For compensation, the components, except the one of interest, are reassembled in the time domain. The resulting signal is subtracted from the original signal and the component of interest is synchronously detected again. This process is repeated for each component.

  12. Space flight requirements for fiber optic components: qualification testing and lessons learned

    NASA Astrophysics Data System (ADS)

    Ott, Melanie N.; Jin, Xiaodan Linda; Chuska, Richard; Friedberg, Patricia; Malenab, Mary; Matuszeski, Adam

    2006-04-01

    "Qualification" of fiber optic components holds a very different meaning than it did ten years ago. In the past, qualification meant extensive prolonged testing and screening that led to a programmatic method of reliability assurance. For space flight programs today, the combination of using higher performance commercial technology, with shorter development schedules and tighter mission budgets makes long term testing and reliability characterization unfeasible. In many cases space flight missions will be using technology within years of its development and an example of this is fiber laser technology. Although the technology itself is not a new product the components that comprise a fiber laser system change frequently as processes and packaging changes occur. Once a process or the materials for manufacturing a component change, even the data that existed on its predecessor can no longer provide assurance on the newer version. In order to assure reliability during a space flight mission, the component engineer must understand the requirements of the space flight environment as well as the physics of failure of the components themselves. This can be incorporated into an efficient and effective testing plan that "qualifies" a component to specific criteria defined by the program given the mission requirements and the component limitations. This requires interaction at the very initial stages of design between the system design engineer, mechanical engineer, subsystem engineer and the component hardware engineer. Although this is the desired interaction what typically occurs is that the subsystem engineer asks the components or development engineers to meet difficult requirements without knowledge of the current industry situation or the lack of qualification data. This is then passed on to the vendor who can provide little help with such a harsh set of requirements due to high cost of testing for space flight environments. 
This presentation is designed to guide the engineers of design, development and components, and vendors of commercial components with how to make an efficient and effective qualification test plan with some basic generic information about many space flight requirements. Issues related to the physics of failure, acceptance criteria and lessons learned will also be discussed to assist with understanding how to approach a space flight mission in an ever changing commercial photonics industry.

  13. Space Flight Requirements for Fiber Optic Components; Qualification Testing and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Ott, Melanie N.; Jin, Xiaodan Linda; Chuska, Richard; Friedberg, Patricia; Malenab, Mary; Matuszeski, Adam

    2007-01-01

    "Qualification" of fiber optic components holds a very different meaning than it did ten years ago. In the past, qualification meant extensive prolonged testing and screening that led to a programmatic method of reliability assurance. For space flight programs today, the combination of using higher performance commercial technology, with shorter development schedules and tighter mission budgets makes long term testing and reliability characterization unfeasible. In many cases space flight missions will be using technology within years of its development and an example of this is fiber laser technology. Although the technology itself is not a new product the components that comprise a fiber laser system change frequently as processes and packaging changes occur. Once a process or the materials for manufacturing a component change, even the data that existed on its predecessor can no longer provide assurance on the newer version. In order to assure reliability during a space flight mission, the component engineer must understand the requirements of the space flight environment as well as the physics of failure of the components themselves. This can be incorporated into an efficient and effective testing plan that "qualifies" a component to specific criteria defined by the program given the mission requirements and the component limitations. This requires interaction at the very initial stages of design between the system design engineer, mechanical engineer, subsystem engineer and the component hardware engineer. Although this is the desired interaction what typically occurs is that the subsystem engineer asks the components or development engineers to meet difficult requirements without knowledge of the current industry situation or the lack of qualification data. This is then passed on to the vendor who can provide little help with such a harsh set of requirements due to high cost of testing for space flight environments. 
This presentation is designed to guide the engineers of design, development and components, and vendors of commercial components with how to make an efficient and effective qualification test plan with some basic generic information about many space flight requirements. Issues related to the physics of failure, acceptance criteria and lessons learned will also be discussed to assist with understanding how to approach a space flight mission in an ever changing commercial photonics industry.

  14. Graphical method for comparative statistical study of vaccine potency tests.

    PubMed

    Pay, T W; Hingley, P J

    1984-03-01

    Producers and consumers are interested in some of the intrinsic characteristics of vaccine potency assays for the comparative evaluation of suitable experimental design. A graphical method is developed which represents the precision of test results, the sensitivity of such results to changes in dosage, and the relevance of the results in the way they reflect the protection afforded in the host species. The graphs can be constructed from Producer's scores and Consumer's scores on each of the scales of test score, antigen dose and probability of protection against disease. A method for calculating these scores is suggested and illustrated for single and multiple component vaccines, for tests which do or do not employ a standard reference preparation, and for tests which employ quantitative or quantal systems of scoring.

  15. A reduced order, test verified component mode synthesis approach for system modeling applications

    NASA Astrophysics Data System (ADS)

    Butland, Adam; Avitabile, Peter

    2010-05-01

Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed-interface normal modes, and those based on a combination of free-interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed-interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint-mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test-verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed-interface normal modes. The CMS approach is then used with this test-verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique, with both numerical and simulated experimental components used to describe the system and validate the proposed approach. Actual test data are then used in the proposed approach. Because typical measurement contaminants are always included in any test, the measured data are further processed to remove contaminants and then used in the proposed approach. The final case, using improved data with the reduced order, test-verified components, is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. The strengths and weaknesses of the technique are discussed.
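The fixed-interface formulation that the Craig-Bampton synthesis rests on can be illustrated generically. This is a sketch under textbook assumptions, not the authors' test-verified procedure: interior degrees of freedom are reduced to a few fixed-interface normal modes plus static constraint modes, while connection (boundary) degrees of freedom are retained physically.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Craig-Bampton reduction of (K, M): keep `n_modes` fixed-interface
    normal modes for the interior DOFs plus static constraint modes for
    the boundary DOFs. Returns the reduced (K_r, M_r)."""
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]
    # Static constraint modes: interior response to unit boundary motion
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes: eigenproblem with the boundary clamped
    _, Phi = eigh(Kii, Mii)
    Phi = Phi[:, :n_modes]
    # Transformation T maps (modal coords, boundary DOFs) -> physical DOFs
    T = np.zeros((n, n_modes + len(b)))
    T[np.ix_(i, np.arange(n_modes))] = Phi
    T[np.ix_(i, n_modes + np.arange(len(b)))] = Psi
    T[np.ix_(b, n_modes + np.arange(len(b)))] = np.eye(len(b))
    return T.T @ K @ T, T.T @ M @ T
```

As a sanity check, reducing a 5-DOF spring-mass chain while keeping all three interior modes is an exact congruence transformation, so the reduced model reproduces the full model's natural frequencies.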

  16. Test-retest reliability of cognitive EEG

    NASA Technical Reports Server (NTRS)

    McEvoy, L. K.; Smith, M. E.; Gevins, A.

    2000-01-01

OBJECTIVE: Task-related EEG is sensitive to changes in cognitive state produced by increased task difficulty and by transient impairment. If task-related EEG has high test-retest reliability, it could be used as part of a clinical test to assess changes in cognitive function. The aim of this study was to determine the reliability of the EEG recorded during the performance of a working memory (WM) task and a psychomotor vigilance task (PVT). METHODS: EEG was recorded while subjects rested quietly and while they performed the tasks. Within-session (test-retest interval of approximately 1 h) and between-session (test-retest interval of approximately 7 days) reliability was calculated for four EEG components: frontal midline theta at Fz, posterior theta at Pz, and slow and fast alpha at Pz. RESULTS: Task-related EEG was highly reliable within and between sessions (r > 0.9 for all components in the WM task, and r > 0.8 for all components in the PVT). Resting EEG also showed high reliability, although the magnitude of the correlation was somewhat smaller than that of the task-related EEG (r > 0.7 for all 4 components). CONCLUSIONS: These results suggest that under appropriate conditions, task-related EEG has sufficient retest reliability for use in assessing clinical changes in cognitive status.

  17. An application of holographic interferometry for dynamic vibration analysis of a jet engine turbine compressor rotor

    NASA Astrophysics Data System (ADS)

    Fein, Howard

    2003-09-01

    Holographic Interferometry has been successfully employed to characterize the materials and behavior of diverse types of structures under dynamic stress. Specialized variations of this technology have also been applied to define dynamic and vibration related structural behavior. Such applications of holographic techniques offer some of the most effective methods of modal and dynamic analysis available. Real-time dynamic testing of the modal and mechanical behavior of jet engine turbine, rotor, vane, and compressor structures has always required advanced instrumentation for data collection in either simulated flight operation tests or computer-based modeling and simulations. Advanced optical holography techniques are alternate methods which result in actual full-field behavioral data in a noninvasive, noncontact environment. These methods offer significant insight in both the development and subsequent operational test and modeling of advanced jet engine turbine and compressor rotor structures and their integration with total vehicle system dynamics. Structures and materials can be analyzed with very low amplitude excitation and the resultant data can be used to adjust the accuracy of mathematically derived structural and behavioral models. Holographic Interferometry offers a powerful tool to aid in the developmental engineering of turbine rotor and compressor structures for high stress applications. Aircraft engine applications in particular must consider operational environments where extremes in vibration and impulsive as well as continuous mechanical stress can affect both operation and structural stability. These considerations present ideal requisites for analysis using advanced holographic methods in the initial design and test of turbine rotor components. Holographic techniques are nondestructive, real-time, and definitive in allowing the identification of vibrational modes, displacements, and motion geometries.
Such information can be crucial to the determination of mechanical configurations and designs as well as critical operational parameters of turbine structural components or unit turbine components fabricated from advanced and exotic new materials or using new fabrication methods. Anomalous behavioral characteristics can be directly related to hidden structural or mounting anomalies and defects.

  18. Development of a Probabilistic Component Mode Synthesis Method for the Analysis of Non-Deterministic Substructures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1995-01-01

    Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. This paper presents a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.
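
    For contrast with the paper's analytical approach, the Monte Carlo baseline it is designed to avoid can be sketched: sample the measured dynamic properties, compute the system eigenvalue each time, and build an empirical cumulative distribution function. The 2-DOF chain and 5% stiffness scatter below are assumptions for illustration only.

```python
import numpy as np

def first_eigenvalue(k1, k2):
    """Smallest eigenvalue of a unit-mass 2-DOF spring chain."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.min(np.linalg.eigvalsh(K))

# Treat measured substructure stiffnesses as random (assumed 5% scatter)
rng = np.random.default_rng(1)
samples = np.sort([first_eigenvalue(rng.normal(100, 5), rng.normal(100, 5))
                   for _ in range(2000)])
cdf = np.arange(1, samples.size + 1) / samples.size  # empirical CDF
```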

  19. Deformable known component model-based reconstruction for coronary CT angiography

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Tilley, S.; Xu, S.; Mathews, A.; McVeigh, E. R.; Stayman, J. W.

    2017-03-01

    Purpose: Atherosclerosis detection remains challenging in coronary CT angiography for patients with cardiac implants. Pacing electrodes of a pacemaker or lead components of a defibrillator can create substantial blooming and streak artifacts in the heart region, severely hindering the visualization of a plaque of interest. We present a novel reconstruction method that incorporates a deformable model for metal leads to eliminate metal artifacts and improve anatomy visualization even near the boundary of the component. Methods: The proposed reconstruction method, referred to as STF-dKCR, includes a novel parameterization of the component that integrates deformation, a 3D-2D preregistration process that estimates component shape and position, and a polyenergetic forward model for x-ray propagation through the component where the spectral properties are jointly estimated. The methodology was tested on physical data of a cardiac phantom acquired on a CBCT testbench. The phantom included a simulated vessel, a metal wire emulating a pacing lead, and a small Teflon sphere attached to the vessel wall, mimicking a calcified plaque. The proposed method was also compared to the traditional FBP reconstruction and an interpolation-based metal correction method (FBP-MAR). Results: Metal artifacts present in the standard FBP reconstruction were significantly reduced in both FBP-MAR and STF-dKCR, yet only the STF-dKCR approach significantly improved the visibility of the small Teflon target (within 2 mm of the metal wire). The attenuation of the Teflon bead improved to 0.0481 mm^-1 with STF-dKCR from 0.0166 mm^-1 with FBP and from 0.0301 mm^-1 with FBP-MAR - much closer to the expected 0.0414 mm^-1. Conclusion: The proposed method has the potential to improve plaque visualization in coronary CT angiography in the presence of wire-shaped metal components.
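
    A toy version of the polyenergetic forward model shows why it is needed: with a multi-energy spectrum, the effective attenuation inferred from -ln(y)/L drops as the metal thickens (beam hardening), which is what produces blooming and streaks under a monoenergetic assumption. The two-bin spectrum and coefficients below are invented, not the paper's jointly estimated spectral properties.

```python
import numpy as np

def poly_projection(spectrum, mu, thickness):
    """Polyenergetic Beer-Lambert model: y = sum_E s(E) * exp(-mu(E) * L)."""
    s = np.asarray(spectrum, float)
    s = s / s.sum()                      # normalize spectral weights
    return float(np.sum(s * np.exp(-np.asarray(mu) * thickness)))

spectrum = [0.5, 0.5]        # hypothetical two-bin x-ray spectrum
mu = np.array([0.5, 0.2])    # 1/mm; the low-energy bin attenuates more
eff_1 = -np.log(poly_projection(spectrum, mu, 1.0)) / 1.0
eff_5 = -np.log(poly_projection(spectrum, mu, 5.0)) / 5.0  # hardened beam
```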

  20. A Review of Treatment Adherence Measurement Methods

    ERIC Educational Resources Information Center

    Schoenwald, Sonja K.; Garland, Ann F.

    2013-01-01

    Fidelity measurement is critical for testing the effectiveness and implementation in practice of psychosocial interventions. Adherence is a critical component of fidelity. The purposes of this review were to catalogue adherence measurement methods and assess existing evidence for the valid and reliable use of the scores that they generate and the…

  1. Evaluation of Criterion Validity for Scales with Congeneric Measures

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2007-01-01

    A method for estimating criterion validity of scales with homogeneous components is outlined. It accomplishes point and interval estimation of interrelationship indices between composite scores and criterion variables and is useful for testing hypotheses about criterion validity of measurement instruments. The method can also be used with missing…

  2. Rapid screening of guar gum using portable Raman spectral identification methods.

    PubMed

    Srivastava, Hirsch K; Wolfgang, Steven; Rodriguez, Jason D

    2016-01-25

    Guar gum is a well-known inactive ingredient (excipient) used in a variety of oral pharmaceutical dosage forms as a thickener and stabilizer of suspensions and as a binder of powders. It is also widely used as a food ingredient in which case alternatives with similar properties, including chemically similar gums, are readily available. Recent supply shortages and price fluctuations have caused guar gum to come under increasing scrutiny for possible adulteration by substitution of cheaper alternatives. One way that the U.S. FDA is attempting to screen pharmaceutical ingredients at risk for adulteration or substitution is through field-deployable spectroscopic screening. Here we report a comprehensive approach to evaluate two field-deployable Raman methods--spectral correlation and principal component analysis--to differentiate guar gum from other gums. We report a comparison of the sensitivity of the spectroscopic screening methods with current compendial identification tests. The ability of the spectroscopic methods to perform unambiguous identification of guar gum compared to other gums makes them an enhanced surveillance alternative to the current compendial identification tests, which are largely subjective in nature. Our findings indicate that Raman spectral identification methods perform better than compendial identification methods and are able to distinguish guar gum from other gums with 100% accuracy for samples tested by spectral correlation and principal component analysis. Published by Elsevier B.V.
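
    Of the two field-deployable methods compared, spectral correlation is the simpler: score a query spectrum against a library reference and threshold the result. The sketch below uses one common squared-correlation form of the hit quality index; the band shapes are invented stand-ins for guar gum and a substitute gum, not real Raman data.

```python
import numpy as np

def spectral_correlation(ref, query):
    """Squared correlation between two spectra (one common form of the
    'hit quality index' used in spectral library matching)."""
    ref = ref - ref.mean()
    query = query - query.mean()
    return (ref @ query) ** 2 / ((ref @ ref) * (query @ query))

# Invented band shapes standing in for guar gum and a substitute gum
x = np.linspace(0.0, 10.0, 500)
guar = np.exp(-(x - 3) ** 2) + 0.6 * np.exp(-((x - 7) ** 2) / 0.5)
other = np.exp(-(x - 4) ** 2) + 0.6 * np.exp(-((x - 6) ** 2) / 0.5)
noisy_guar = guar + np.random.default_rng(0).normal(0.0, 0.01, x.size)
```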

  3. Correlational structure of 'frontal' tests and intelligence tests indicates two components with asymmetrical neurostructural correlates in old age.

    PubMed

    Cox, Simon R; MacPherson, Sarah E; Ferguson, Karen J; Nissan, Jack; Royle, Natalie A; MacLullich, Alasdair M J; Wardlaw, Joanna M; Deary, Ian J

    2014-09-01

    Both general fluid intelligence (gf) and performance on some 'frontal tests' of cognition decline with age. Both types of ability are at least partially dependent on the integrity of the frontal lobes, which also deteriorate with age. Overlap between these two methods of assessing complex cognition in older age remains unclear. Such overlap could be investigated using inter-test correlations alone, as in previous studies, but this would be enhanced by ascertaining whether frontal test performance and gf share neurobiological variance. To this end, we examined relationships between gf and 6 frontal tests (Tower, Self-Ordered Pointing, Simon, Moral Dilemmas, Reversal Learning and Faux Pas tests) in 90 healthy males, aged ~73 years. We interpreted their correlational structure using principal component analysis, and in relation to MRI-derived regional frontal lobe volumes (relative to maximal healthy brain size). gf correlated significantly and positively (0.24 ≤ r ≤ 0.53) with the majority of frontal test scores. Some frontal test scores also exhibited shared variance after controlling for gf. Principal component analysis of test scores identified units of gf-common and gf-independent variance. The former was associated with variance in the left dorsolateral (DL) and anterior cingulate (AC) regions, and the latter with variance in the right DL and AC regions. Thus, we identify two biologically-meaningful components of variance in complex cognitive performance in older age and suggest that age-related changes to DL and AC have the greatest cognitive impact.

  4. Seismic damage diagnosis of a masonry building using short-term damping measurements

    NASA Astrophysics Data System (ADS)

    Kouris, Leonidas Alexandros S.; Penna, Andrea; Magenes, Guido

    2017-04-01

    It is of considerable importance to perform dynamic identification and detect damage in existing structures. This paper describes a new and practical method for damage diagnosis of masonry buildings requiring minimum computational effort. The method is based on the relative variation of modal damping and validated against experimental data from a full-scale two-storey shake table test. The experiment involves a building subjected to uniaxial vibrations of progressively increasing intensity at the facilities of EUCENTRE laboratory (Pavia, Italy) up to a near collapse damage state. Five time-histories are applied scaling the Montenegro (1979) accelerogram. These strong motion tests are preceded by random vibration tests (RVTs) which are used to perform modal analysis. Two deterministic methods are applied: the single degree of freedom (SDOF) assumption together with the peak-picking method in the discrete frequency domain and the eigensystem realisation algorithm with data correlations (ERA-DC) in the discrete time domain. Regarding the former procedure, some improvements are incorporated to rigorously locate the natural frequencies and estimate the modal damping. The progressive evolution of the modal damping is used as a key indicator to characterise damage to the building. Modal damping is connected to the structural mass and stiffness. A proportional (classical) damping expression with only two components is proposed to better fit the experimental measurements of modal damping ratios. Using this Rayleigh-type formulation, the contribution of each damping component is evaluated. The stiffness component coefficient is proposed as an effective index to detect damage and quantify its intensity.
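
    The two-component (Rayleigh) damping model referenced here writes each modal damping ratio as a mass-proportional plus a stiffness-proportional term, zeta_i = alpha/(2*w_i) + beta*w_i/2. Two measured (frequency, damping) pairs determine (alpha, beta); the modal data below are invented for illustration.

```python
import numpy as np

def rayleigh_coefficients(w1, z1, w2, z2):
    """Solve zeta_i = alpha/(2*w_i) + beta*w_i/2 for the mass- and
    stiffness-proportional coefficients (alpha, beta)."""
    A = np.array([[1.0 / (2 * w1), w1 / 2.0],
                  [1.0 / (2 * w2), w2 / 2.0]])
    return np.linalg.solve(A, np.array([z1, z2]))

# Invented modal data: 3 Hz mode at 5% damping, 9 Hz mode at 8%
alpha, beta = rayleigh_coefficients(2 * np.pi * 3, 0.05, 2 * np.pi * 9, 0.08)

def zeta(w):
    """Modal damping ratio predicted by the two-component model."""
    return alpha / (2 * w) + beta * w / 2
```

    The stiffness coefficient beta is the quantity the abstract proposes tracking as a damage index.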

  5. Application of a Physics-Based Stabilization Criterion to Flight System Thermal Testing

    NASA Technical Reports Server (NTRS)

    Baker, Charles; Garrison, Matthew; Cottingham, Christine; Peabody, Sharon

    2010-01-01

    The theory shown here can provide thermal stability criteria based on physics and a goal steady-state error rather than on an arbitrary "X% Q/mC(sub P)" method. The ability to accurately predict steady-state temperatures well before thermal balance is reached could be very useful during testing. This holds true for systems where components are changing temperature at different rates, although it works better for the components closest to the sink. However, the application to these test cases shows some significant limitations: this theory quickly falls apart if the thermal control system in question is tightly coupled to a large mass not accounted for in the calculations, so it is more useful in subsystem-level testing than in full orbiter tests. Tight couplings to a fluctuating sink cause noise in the steady-state temperature predictions.
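
    The steady-state prediction idea can be illustrated with a first-order (single-time-constant) model, a sketch rather than the criterion actually used in these tests: three equally spaced samples of an exponential transient determine its asymptote in closed form.

```python
import numpy as np

def predict_steady_state(y1, y2, y3):
    """Extrapolate the asymptote of a first-order exponential transient,
    T(t) = T_ss + (T0 - T_ss)*exp(-t/tau), from three equally spaced
    samples: (T1*T3 - T2^2) / (T1 + T3 - 2*T2) = T_ss exactly."""
    return (y1 * y3 - y2 ** 2) / (y1 + y3 - 2.0 * y2)

# Synthetic transient: T_ss = 50 C, T0 = 20 C, tau = 30 min, dt = 10 min
T_ss, T0, tau = 50.0, 20.0, 30.0
T = lambda t: T_ss + (T0 - T_ss) * np.exp(-t / tau)
est = predict_steady_state(T(0.0), T(10.0), T(20.0))
```

    The single-time-constant assumption is exactly the limitation the abstract flags: tight coupling to an unmodeled large mass adds a second time constant and breaks the extrapolation.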

  6. Evaluation of Amino Acid and Energy Utilization in Feedstuff for Swine and Poultry Diets

    PubMed Central

    Kong, C.; Adeola, O.

    2014-01-01

    An accurate feed formulation is essential for optimizing feed efficiency and minimizing feed cost for swine and poultry production. Because energy and amino acid (AA) account for the major cost of swine and poultry diets, a precise determination of the availability of energy and AA in feedstuffs is essential for accurate diet formulations. Therefore, the methodology for determining the availability of energy and AA should be carefully selected. The total collection and index methods are 2 major procedures for estimating the availability of energy and AA in feedstuffs for swine and poultry diets. The total collection method is based on the laborious production of quantitative records of feed intake and output, whereas the index method can avoid the laborious work, but greatly relies on accurate chemical analysis of the index compound. The direct method, in which the test feedstuff in a diet is the sole source of the component of interest, is widely used to determine the digestibility of nutritional components in feedstuffs. In some cases, however, it may be necessary to formulate a basal diet and a test diet in which a portion of the basal diet is replaced by the feed ingredient to be tested because of poor palatability or a low level of the component of interest in the test ingredient. For the digestibility of AA, due to the confounding effect on AA composition of protein in feces by microorganisms in the hind gut, ileal digestibility rather than fecal digestibility has been preferred as the reliable method for estimating AA digestibility. Depending on the contribution of ileal endogenous AA losses in the ileal digestibility calculation, ileal digestibility estimates can be expressed as apparent, standardized, and true ileal digestibility, and are usually determined using the ileal cannulation method for pigs and the slaughter method for poultry.
Among these digestibility estimates, the standardized ileal AA digestibility that corrects apparent ileal digestibility for basal endogenous AA losses, provides appropriate information for the formulation of swine and poultry diets. The total quantity of energy in feedstuffs can be partitioned into different components including gross energy (GE), digestible energy (DE), metabolizable energy (ME), and net energy based on the consideration of sequential energy losses during digestion and metabolism from GE in feeds. For swine, the total collection method is suggested for determining DE and ME in feedstuffs whereas for poultry the classical ME assay and the precision-fed method are applicable. Further investigation for the utilization of ME may be conducted by measuring either heat production or energy retention using indirect calorimetry or comparative slaughter method, respectively. This review provides information on the methodology used to determine accurate estimates of AA and energy availability for formulating swine and poultry diets. PMID:25050031
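
    The relation between apparent and standardized ileal digestibility described above reduces to simple per-unit-intake formulas. A minimal sketch with hypothetical lysine flows (a simplification; real assays work from analyzed diet and digesta compositions):

```python
def apparent_ileal_digestibility(aa_intake, aa_ileal_outflow):
    """AID = (intake - ileal outflow) / intake."""
    return (aa_intake - aa_ileal_outflow) / aa_intake

def standardized_ileal_digestibility(aa_intake, aa_ileal_outflow,
                                     basal_endogenous_loss):
    """SID corrects AID for basal endogenous AA losses."""
    aid = apparent_ileal_digestibility(aa_intake, aa_ileal_outflow)
    return aid + basal_endogenous_loss / aa_intake

# Hypothetical lysine flows, g per kg of dry matter intake
aid = apparent_ileal_digestibility(10.0, 2.0)
sid = standardized_ileal_digestibility(10.0, 2.0, 0.5)
```

    SID always exceeds AID because part of the ileal outflow is endogenous rather than undigested feed AA.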

  8. Design and analysis of the federal aviation administration next generation fire test burner

    NASA Astrophysics Data System (ADS)

    Ochs, Robert Ian

    The United States Federal Aviation Administration makes use of threat-based fire test methods for the certification of aircraft cabin materials to enhance the level of safety in the event of an in-flight or post-crash fire on a transport airplane. The global nature of the aviation industry results in these test methods being performed at hundreds of laboratories around the world; in some cases identical materials tested at multiple labs yield different results. Maintenance of this standard for an elevated level of safety requires that the test methods be as well defined as possible, necessitating a comprehensive understanding of critical test method parameters. The tests have evolved from simple Bunsen burner material tests to larger, more complicated apparatuses, requiring greater understanding of the device for proper application. The FAA specifies a modified home heating oil burner to simulate the effects of large, intense fires for testing of aircraft seat cushions, cargo compartment liners, power plant components, and thermal acoustic insulation. Recently, the FAA has developed a Next Generation (NexGen) Fire Test burner to replace the original oil burner, which has become commercially unavailable. The NexGen burner design is based on the original oil burner but with more precise control of the air and fuel flow rates with the addition of a sonic nozzle and a pressurized fuel system. Knowledge of the fundamental flow properties created by various burner configurations is desired to develop an updated and standardized burner configuration for use around the world for aircraft materials fire testing and airplane certification. To that end, the NexGen fire test burner was analyzed with Particle Image Velocimetry (PIV) to resolve the non-reacting exit flow field and determine the influence of the configuration of burner components.
The correlation between the measured flow fields and the standard burner performance metrics of flame temperature and burnthrough time was studied. Potential design improvements were also evaluated that could simplify burner set up and operation.

  9. 40 CFR 60.564 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the Administrator. If the carrier component of the gas stream is nitrogen, then an average molecular... from materials balance by good engineering practice. (i) The owner or operator shall determine...

  10. Response of an Impact Test Apparatus for Fall Protective Headgear Testing Using a Hybrid-III Head/Neck Assembly

    PubMed Central

    Caccese, V.; Ferguson, J.; Lloyd, J.; Edgecomb, M.; Seidi, M.; Hajiaghamemar, M.

    2017-01-01

    A test method based upon a Hybrid-III head and neck assembly that includes measurement of both linear and angular acceleration is investigated for potential use in impact testing of protective headgear. The test apparatus is based upon a twin wire drop test system modified with the head/neck assembly and associated flyarm components. This study represents a preliminary assessment of the test apparatus for use in the development of protective headgear designed to prevent injury due to falls. By including angular acceleration in the test protocol it becomes possible to assess and intentionally reduce this component of acceleration. Comparisons of standard and reduced durometer necks, various anvils, front, rear, and side drop orientations, and response data on performance of the apparatus are provided. Injury measures summarized for an unprotected drop include maximum linear and angular acceleration, head injury criteria (HIC), rotational injury criteria (RIC), and power rotational head injury criteria (PRHIC). Coefficient of variation for multiple drops ranged from 0.4 to 6.7% for linear acceleration. Angular acceleration recorded in a side drop orientation resulted in highest coefficient of variation of 16.3%. The drop test apparatus results in a reasonably repeatable test method that has potential to be used in studies of headgear designed to reduce head impact injury. PMID:28216804
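
    Of the injury measures listed, HIC is the classic one: the maximum over time windows of window duration times the 2.5-power of the mean acceleration (in g) within it. A brute-force sketch on an invented half-sine pulse (window limit and pulse parameters are assumptions):

```python
import numpy as np

def hic(accel_g, dt, max_window=0.015):
    """Head Injury Criterion: max over windows [t1, t2] (up to
    max_window seconds) of (t2 - t1) * (mean acceleration in g)**2.5.
    Assumes a nonnegative resultant-acceleration trace."""
    n = accel_g.size
    cum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))
    steps = int(round(max_window / dt))
    best = 0.0
    for i in range(n):
        for j in range(i + 1, min(i + steps, n) + 1):
            dur = (j - i) * dt
            avg = (cum[j] - cum[i]) / dur
            if avg > 0.0:
                best = max(best, dur * avg ** 2.5)
    return best

# Hypothetical half-sine impact pulse: 100 g peak, 10 ms duration
dt = 1e-4
t = np.arange(0.0, 0.010, dt)
value = hic(100.0 * np.sin(np.pi * t / 0.010), dt)
```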

  11. Comparison of Measured Flapwise Structural Bending Moments on a Teetering Rotor Blade With Results Calculated From the Measured Pressure Distribution

    NASA Technical Reports Server (NTRS)

    Mayo, Alton P.

    1959-01-01

    Flapwise bending moments were calculated for a teetering rotor blade using a reasonably rapid theoretical method in which airloads obtained from wind-tunnel tests were employed. The calculated moments agreed reasonably well with those measured with strain gages under the same test conditions. The range of the tests included one hovering and two forward-flight conditions. The rotor speed for the test was very near blade resonance, and difficult-to-calculate resonance effects apparently were responsible for the largest differences between the calculated and measured harmonic components of blade bending moments. These differences, moreover, were largely nullified when the harmonic components were combined to give a comparison of the calculated and measured blade total-moment time histories. The degree of agreement shown is therefore considered adequate to warrant the use of the theoretical method in establishing and applying methods of prediction of rotor-blade fatigue loads. At the same time, the validity of the experimental methods of obtaining both airload and blade stress measurements is also indicated to be adequate for use in establishing improved methods for prediction of rotor-blade fatigue loads during the design stage. The blade stiffnesses and natural frequencies were measured and found to be in close agreement with calculated values; however, for a condition of blade resonance the use of the experimental stiffness values resulted in better agreement between calculated and measured blade stresses.

  12. Three-beam interferogram analysis method for surface flatness testing of glass plates and wedges

    NASA Astrophysics Data System (ADS)

    Sunderland, Zofia; Patorski, Krzysztof

    2015-09-01

    When testing transparent plates with high quality flat surfaces and a small angle between them, the three-beam interference phenomenon is observed. Since the reference beam and the object beams reflected from both the front and back surface of a sample are detected, the recorded intensity distribution may be regarded as a sum of three fringe patterns. Images of that type cannot be successfully analyzed with standard interferogram analysis methods. They contain, however, useful information on the tested plate surface flatness and its optical thickness variations. Several methods were elaborated to decode the plate parameters. Our technique represents a competitive solution which allows for retrieval of phase components of the three-beam interferogram. It requires recording two images: a three-beam interferogram and the two-beam one with the reference beam blocked. Subtracting these images leads to the intensity distribution which, under some assumptions, provides access to the two component fringe sets which encode surfaces flatness. At various stages of processing we take advantage of nonlinear operations as well as single-frame interferogram analysis methods. Two-dimensional continuous wavelet transform (2D CWT) is used to separate a particular fringe family from the overall interferogram intensity distribution as well as to estimate the phase distribution from a pattern. We distinguish two processing paths depending on the relative density of fringe sets which is connected with geometry of a sample and optical setup. The proposed method is tested on simulated data.
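
    The subtraction step can be checked with synthetic beams: for a real reference amplitude r and a combined object field u, |r + u|^2 - |u|^2 = r^2 + 2*r*Re(u), so the difference image contains exactly the two reference-object fringe sets plus a constant. Fringe frequencies and amplitudes below are invented.

```python
import numpy as np

# Synthetic 1-D interferograms: real reference beam plus two object beams
x = np.linspace(0.0, 1.0, 1000)
phi1 = 2 * np.pi * 10 * x          # fringe phase from the front surface
phi2 = 2 * np.pi * 14 * x          # fringe phase from the back surface
r, b1, b2 = 1.0, 0.6, 0.5          # beam amplitudes (invented)
obj = b1 * np.exp(1j * phi1) + b2 * np.exp(1j * phi2)
I3 = np.abs(r + obj) ** 2          # three-beam interferogram
I2 = np.abs(obj) ** 2              # two-beam image, reference blocked
diff = I3 - I2
# Only the reference-object cross terms (and a constant) remain:
expected = r ** 2 + 2 * r * (b1 * np.cos(phi1) + b2 * np.cos(phi2))
```

    The object-object cross term, which entangles the two surfaces, cancels in the subtraction; this is what makes the component fringe sets accessible.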

  13. Density of ocular components of the bovine eye.

    PubMed

    Su, Xiao; Vesco, Christina; Fleming, Jacquelyn; Choh, Vivian

    2009-10-01

    Density is essential for acoustic characterization of tissues and provides a basic input for ultrasound backscatter and absorption models. Despite the existence of extensive compilations of acoustic properties, neither unified data on ocular density nor comparisons of the densities between all ocular components can be found. This study was undertaken to determine the mass density of all the ocular components of the bovine eye. Liquid components were measured through mass/volume ratio, whereas solid tissues were measured with two different densitometry techniques based on Archimedes Principle. The first method determines the density by measuring dry and wet weight of the tissues. The second method consists of immersing the tissues in sucrose solutions of varying densities and observing their buoyancy. Although the mean densities for all tissues were found to be within 0.02 g/cm^3 by both methods, only the sucrose solution method offered a consistent relative order for all measured ocular components, as well as a considerably smaller standard deviation (a maximum standard deviation of 0.004 g/cm^3 for cornea). The lens was found to be the densest component, followed by the sclera, cornea, choroid, retina, aqueous, and vitreous humors. The consistent results of the sucrose solution tests suggest that the ocular mass density is a physical property that is more dependent on the compositional and structural characteristics of the tissue than on population variability.
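
    The first (dry/wet weight) technique follows directly from Archimedes' principle: the buoyant mass loss equals the mass of displaced fluid, which gives the sample volume. A sketch with an invented sample (water density at ~20 C assumed):

```python
def archimedes_density(mass_air, mass_submerged, fluid_density=0.9982):
    """Density from mass in air and apparent mass submerged: the buoyant
    loss (mass_air - mass_submerged) equals the displaced fluid mass."""
    volume = (mass_air - mass_submerged) / fluid_density  # cm^3
    return mass_air / volume                              # g/cm^3

# Hypothetical lens sample: 1.50 g in air, 0.14 g apparent mass in water
rho = archimedes_density(1.50, 0.14)
```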

  14. Full waveform inversion using a decomposed single frequency component from a spectrogram

    NASA Astrophysics Data System (ADS)

    Ha, Jiho; Kim, Seongpil; Koo, Namhyung; Kim, Young-Ju; Woo, Nam-Sub; Han, Sang-Mok; Chung, Wookeen; Shin, Sungryul; Shin, Changsoo; Lee, Jaejoon

    2018-06-01

    Although many full waveform inversion methods have been developed to construct velocity models of the subsurface, various approaches have been presented to obtain an inversion result with long-wavelength features even though seismic data lacking low-frequency components were used. In this study, a new full waveform inversion algorithm was proposed to recover a long-wavelength velocity model that reflects the inherent characteristics of each frequency component of seismic data using a single-frequency component decomposed from the spectrogram. We utilized the wavelet transform method to obtain the spectrogram, and the decomposed signal from the spectrogram was used as transformed data. The Gauss-Newton method with the diagonal elements of an approximate Hessian matrix was used to update the model parameters at each iteration. Based on the results of time-frequency analysis in the spectrogram, numerical tests with some decomposed frequency components were performed using a modified SEG/EAGE salt dome (A-A′) line to demonstrate the feasibility of the proposed inversion algorithm. This demonstrated that a reasonable inverted velocity model with long-wavelength structures can be obtained using a single frequency component. It was also confirmed that when strong noise occurs in part of the frequency band, it is feasible to obtain a long-wavelength velocity model from the noisy data using a frequency component that is less affected by the noise. Finally, it was confirmed that the results obtained from the spectrogram inversion can be used as an initial velocity model in conventional inversion methods.
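
    The decomposition step, pulling a single frequency component out of a time-frequency representation, can be sketched with a complex Morlet wavelet (an assumed kernel; the paper's exact spectrogram construction may differ). Signal frequencies below are invented.

```python
import numpy as np

def single_frequency_component(signal, fs, f0, cycles=6):
    """Narrow-band component at f0 via convolution with a complex
    Morlet wavelet."""
    sigma = cycles / (2 * np.pi * f0)              # wavelet time spread, s
    tw = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw ** 2 / (2 * sigma ** 2))
    wavelet /= np.abs(wavelet).sum()               # unit-gain normalization
    return np.convolve(signal, wavelet, mode="same")

fs = 500.0
t = np.arange(0.0, 4.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
comp5 = single_frequency_component(trace, fs, 5.0)  # keep only the ~5 Hz part
```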

  15. Advanced Turbine Technology Applications Project (ATTAP)

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Reports technical effort by AlliedSignal Engines in sixth year of DOE/NASA funded project. Topics include: gas turbine engine design modifications of production APU to incorporate ceramic components; fabrication and processing of silicon nitride blades and nozzles; component and engine testing; and refinement and development of critical ceramics technologies, including: hot corrosion testing and environmental life predictive model; advanced NDE methods for internal flaws in ceramic components; and improved carbon pulverization modeling during impact. ATTAP project is oriented toward developing high-risk technology of ceramic structural component design and fabrication to carry forward to commercial production by 'bridging the gap' between structural ceramics in the laboratory and near-term commercial heat engine application. Current ATTAP project goal is to support accelerated commercialization of advanced, high-temperature engines for hybrid vehicles and other applications. Project objectives are to provide essential and substantial early field experience demonstrating ceramic component reliability and durability in modified, available, gas turbine engine applications; and to scale-up and improve manufacturing processes of ceramic turbine engine components and demonstrate application of these processes in the production environment.

  16. Periodic component analysis as a spatial filter for SSVEP-based brain-computer interface.

    PubMed

    Kiran Kumar, G R; Reddy, M Ramasubba

    2018-06-08

    Traditional spatial filters used for steady-state visual evoked potential (SSVEP) extraction, such as minimum energy combination (MEC), require estimation of the background electroencephalogram (EEG) noise components. Although this leads to improved performance in low signal-to-noise ratio (SNR) conditions, the additional computational cost makes such algorithms slow compared to standard detection methods like canonical correlation analysis (CCA). In this paper, periodic component analysis (πCA) is presented as an alternative spatial filtering approach that extracts the SSVEP component effectively without extensive modelling of the noise. πCA can separate out components corresponding to a given frequency of interest from the background EEG by capturing temporal information, and it does not generalize the SSVEP from rigid templates. Data from ten test subjects were used to evaluate the proposed method, and the results demonstrate that periodic component analysis acts as a reliable spatial filter for SSVEP extraction. Statistical tests were performed to validate the results: πCA provides a significant improvement in accuracy over standard CCA in low SNR conditions, with detection accuracy on par with that of MEC at a lower computational cost. Hence πCA is a reliable and efficient alternative detection algorithm for SSVEP-based brain-computer interfaces (BCI). Copyright © 2018. Published by Elsevier B.V.
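
    For context, the standard CCA baseline that πCA is compared against can be sketched in a few lines of NumPy: correlate the multichannel EEG with sine/cosine references at each candidate stimulus frequency and pick the frequency with the largest canonical correlation. The channel count, frequencies, and number of harmonics below are illustrative, not from the paper:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def detect_ssvep(eeg, fs, candidates, n_harmonics=2):
    """Pick the stimulus frequency whose sine/cosine reference set
    correlates best with the multichannel EEG segment (standard CCA)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidates:
        refs = []
        for h in range(1, n_harmonics + 1):
            refs += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
        scores.append(max_canonical_corr(eeg, np.column_stack(refs)))
    return candidates[int(np.argmax(scores))]
```

    A quick usage example: simulate a weak 12 Hz SSVEP buried in noise on four channels and recover the stimulus frequency from the candidate set.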

  17. The Effect of Simulated Flash-Heat Pasteurization on Immune Components of Human Milk.

    PubMed

    Daniels, Brodie; Schmidt, Stefan; King, Tracy; Israel-Ballard, Kiersten; Amundson Mansen, Kimberly; Coutsoudis, Anna

    2017-02-22

    A pasteurization temperature monitoring system has been designed using FoneAstra, a cellphone-based networked sensing system, to monitor simulated flash-heat (FH) pasteurization. This study compared the effect of the FoneAstra FH (F-FH) method with the Sterifeed Holder method currently used by human milk banks on human milk immune components (immunoglobulin A (IgA), lactoferrin activity, lysozyme activity, interleukin (IL)-8 and IL-10). Donor milk samples (N = 50) were obtained from a human milk bank, and pasteurized. Concentrations of IgA, IL-8, IL-10, lysozyme activity and lactoferrin activity were compared to their controls using Student's t-test. Both methods demonstrated no destruction of interleukins. While the Holder method retained all lysozyme activity, the F-FH method only retained 78.4% activity (p < 0.0001), and both methods showed a decrease in lactoferrin activity (71.1% Holder vs. 38.6% F-FH; p < 0.0001) and a decrease in the retention of total IgA (78.9% Holder vs. 25.2% F-FH; p < 0.0001). Despite increased destruction of immune components compared to Holder pasteurization, the benefits of F-FH in terms of its low cost, feasibility, safety and retention of immune components make it a valuable resource in low-income countries for pasteurizing human milk, potentially saving infants' lives.

  18. Isothermal Calorimetric Observations of the Effect of Welding on Compatibility of Stainless Steels with High-Test Hydrogen Peroxide Propellant

    NASA Technical Reports Server (NTRS)

    Gostowski, Rudy

    2003-01-01

    High-Test Hydrogen Peroxide (HTP) is receiving renewed interest as a monopropellant and as the oxidizer for bipropellant systems. HTP is hydrogen peroxide having concentrations ranging from 70 to 98%. In these applications the energy and oxygen released during decomposition of HTP is used for propulsion. In propulsion systems components must be fabricated and connected using available joining processes. Welding is a common joining method for metallic components. The goal of this study was to compare the HTP compatibility of welded vs. unwelded stainless steel.

  19. A trial of reliable estimation of non-double-couple component of microearthquakes

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Uchide, T.

    2017-12-01

    Although most tectonic earthquakes are caused by shear failure, it has been reported that injection-induced seismicity and earthquakes occurring in volcanoes and geothermal areas contain non-double-couple (non-DC) components (e.g., Dreger et al., 2000). Small non-DC components are also beginning to be detected in tectonic earthquakes (e.g., Ross et al., 2015). However, the non-DC component can generally be estimated with sufficient accuracy only for relatively large earthquakes. To gain further understanding of fluid-driven earthquakes and fault zone properties, it is important to estimate the full moment tensor of many microearthquakes with high precision. At the last AGU meeting, we proposed a method that iteratively applies the relative moment tensor inversion (RMTI) (Dahm, 1996) to source clusters, improving each moment tensor as well as their relative accuracy. This new method overcomes the problem of RMTI that errors in the mechanisms of reference events lead to biased solutions for other events, while retaining the advantage of RMTI that source mechanisms can be determined without computing Green's functions. The procedure is briefly summarized as follows: (1) Sample co-located multiple earthquakes with focal mechanisms, as initial solutions, determined by an ordinary method. (2) Apply the RMTI to estimate the source mechanism of each event relative to those of the other events. (3) Repeat step 2 for the modified source mechanisms until the reduction of the total residual converges. To confirm whether the method can resolve non-DC components, we conducted numerical tests on synthetic data. Amplitudes were computed assuming non-DC sources, amplified by factors between 0.2 and 4 to simulate site effects, with 10% random noise added. As initial solutions in step 1, we gave DC sources with arbitrary strike, dip and rake angles. In a test with eight sources at 12 stations, for example, all solutions were successively improved by iteration, and non-DC components were successfully resolved in spite of the fact that we gave DC sources as initial solutions. The application of the method to microearthquakes in a geothermal area in Japan will be presented.

  20. Assessing Attitudes about Genetic Testing as a Component of Continuing Medical Education

    ERIC Educational Resources Information Center

    Mrazek, Michael; Koenig, Barbara; Skime, Michelle; Snyder, Karen; Hook, Christopher; Black, John, III; Mrazek, David

    2007-01-01

    Objective: To investigate the attitudes among mental health professionals regarding the use of genetic testing. Methods: Psychiatrists and other mental health professionals (N = 41) who were enrolled in a week-long course in psychiatric genomics completed questionnaires before and after the course designed to assess how diagnostic genetic tests…

  1. Genetic Influences on Cognitive Function Using the Cambridge Neuropsychological Test Automated Battery

    ERIC Educational Resources Information Center

    Singer, Jamie J.; MacGregor, Alex J.; Cherkas, Lynn F.; Spector, Tim D.

    2006-01-01

    The genetic relationship between intelligence and components of cognition remains controversial. Conflicting results may be a function of the limited number of methods used in experimental evaluation. The current study is the first to use CANTAB (The Cambridge Neuropsychological Test Automated Battery). This is a battery of validated computerised…

  2. Pyrolysis kinetics and combustion of thin wood using advanced cone calorimetry test method

    Treesearch

    Mark A. Dietenberger

    2011-01-01

    Mechanistic pyrolysis kinetics analysis of extractives, holocellulose, and lignin in solid wood over the entire heating regime was made possible by a specialized cone calorimeter test and new mathematical analysis tools. Added hardware components include: a modified sample holder for thin specimens with tiny thermocouples, a methane ring burner with stainless steel mesh above the cone...

  3. Forensic Toxicology: An Introduction.

    PubMed

    Smith, Michael P; Bluth, Martin H

    2016-12-01

    This article presents an overview of forensic toxicology. The authors describe the three components that make up forensic toxicology: workplace drug testing, postmortem toxicology, and human performance toxicology. Also discussed are the specimens that are tested, the methods used, and how the results are interpreted in this particular discipline. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Test Design with Cognition in Mind

    ERIC Educational Resources Information Center

    Gorin, Joanna S.

    2006-01-01

    One of the primary themes of the National Research Council's 2001 book "Knowing What Students Know" was the importance of cognition as a component of assessment design and measurement theory (NRC, 2001). One reaction to the book has been an increased use of sophisticated statistical methods to model cognitive information available in test data.…

  5. Fatigue Damage Spectrum calculation in a Mission Synthesis procedure for Sine-on-Random excitations

    NASA Astrophysics Data System (ADS)

    Angeli, Andrea; Cornelis, Bram; Troncossi, Marco

    2016-09-01

    In many real-life environments, certain mechanical and electronic components may be subjected to Sine-on-Random vibrations, i.e. excitations composed of random vibrations superimposed on deterministic (sinusoidal) contributions, in particular sine tones due to some rotating parts of the system (e.g. helicopters, engine-mounted components,...). These components must be designed to withstand the fatigue damage induced by the “composed” vibration environment, and qualification tests are advisable for the most critical ones. In the case of an accelerated qualification test, a proper test tailoring which starts from the real environment (measured vibration signals) and which preserves not only the accumulated fatigue damage but also the “nature” of the excitation (i.e. sinusoidal components plus random process) is important to obtain reliable results. In this paper, the classic time domain approach is taken as a reference for the comparison of different methods for the Fatigue Damage Spectrum (FDS) calculation in case of Sine-on-Random vibration environments. Then, a methodology to compute a Sine-on-Random specification based on a mission FDS is proposed.
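
    As a rough illustration of one ingredient of an FDS calculation, the sketch below computes the fatigue damage a single sine tone inflicts on a single-degree-of-freedom (SDOF) system as a function of its natural frequency, using base-excitation transmissibility and a Basquin-type damage law. The quality factor Q, Basquin exponent b, and strength constant C are illustrative placeholders, and the paper's actual Sine-on-Random formulation must also account for the random part:

```python
import numpy as np

def sine_fds(fn_range, A, f_exc, T, Q=10.0, b=8.0, C=1.0):
    """Fatigue damage spectrum of a pure sine tone (amplitude A at f_exc,
    duration T) over a range of SDOF natural frequencies fn_range.
    Damage follows Miner's rule with a Basquin-type S-N curve."""
    zeta = 1.0 / (2.0 * Q)               # damping ratio from quality factor
    fds = []
    for fn in fn_range:
        r = f_exc / fn                   # frequency ratio
        # steady-state base-excitation transmissibility of the SDOF system
        H = np.sqrt((1 + (2 * zeta * r)**2) /
                    ((1 - r**2)**2 + (2 * zeta * r)**2))
        z = A * H                        # response amplitude
        n = f_exc * T                    # number of stress cycles in T seconds
        fds.append(n * z**b / C)         # accumulated damage (Basquin/Miner)
    return np.array(fds)
```

    As expected, the damage peaks where the natural frequency coincides with the sine tone, since the response amplification (and hence z**b) is largest at resonance.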

  6. Development of Advanced In-Cylinder Components and Tribological Systems for Low Heat Rejection Diesel Engines

    NASA Technical Reports Server (NTRS)

    Yonushonis, T. M.; Wiczynski, P. D.; Myers, M. R.; Anderson, D. D.; McDonald, A. C.; Weber, H. G.; Richardson, D. E.; Stafford, R. J.; Naylor, M. G.

    1999-01-01

    In-cylinder components and tribological system concepts were designed, fabricated and tested at conditions anticipated for a 55% thermal efficiency heavy duty diesel engine for the year 2000 and beyond. A Cummins L10 single cylinder research engine was used to evaluate a spherical joint piston and connecting rod with 19.3 MPa (2800 psi) peak cylinder pressure capability, a thermal fatigue resistant insulated cylinder head, radial combustion seal cylinder liners, a highly compliant steel top compression ring, a variable geometry turbocharger, and a microwave heated particulate trap. Components successfully demonstrated in the final test included spherical joint connecting rod with a fiber reinforced piston, high conformability steel top rings with wear resistant coatings, ceramic exhaust ports with strategic oil cooling and radial combustion seal cylinder liner with cooling jacket transfer fins. A Cummins 6B diesel was used to develop the analytical methods, materials, manufacturing technology and engine components for lighter weight diesel engines without sacrificing performance or durability. A 6B diesel engine was built and tested to calibrate analytical models for the aluminum cylinder head and aluminum block.

  7. An experimental study of an adaptive-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Celik, Zeki; Roberts, Leonard

    1988-01-01

    A series of adaptive wall ventilated wind tunnel experiments was carried out to demonstrate the feasibility of using the side wall pressure distribution as the flow variable for the assessment of compatibility with free air conditions. Iterative and one-step convergence methods were applied using the streamwise velocity component, the side wall pressure distribution and the normal velocity component in order to investigate their relative merits. The advantage of using the side wall pressure as the flow variable is that it reduces the data-taking time, which is one of the major contributors to the total testing time. In ventilated adaptive wall wind tunnel testing, side wall pressure measurements require simple instrumentation, as opposed to the Laser Doppler Velocimetry used to measure the velocity components. Influence coefficients are also required to determine the pressure corrections in the plenum compartment. Experiments were carried out to evaluate the influence coefficients from side wall pressure distributions, and from streamwise and normal velocity distributions at two control levels. Velocity measurements were made using a two-component Laser Doppler Velocimeter system.

  8. Development of replicated optics for AXAF-1 XDA testing

    NASA Technical Reports Server (NTRS)

    Engelhaupt, Darell; Wilson, Michele; Martin, Greg

    1995-01-01

    Advanced optical systems for applications such as grazing incidence Wolter I x-ray mirror assemblies require extraordinary mirror surfaces in terms of fine finish and surface figure. The impeccable mirror surface is on the inside of the rotational mirror form. One practical method of producing devices with these requirements is to first fabricate an exterior surface for the optical device, then replicate that surface to obtain the inverse component with lightweight characteristics. The replicated optic is no better than the master or mandrel from which it is made. This task identifies methods and materials for forming these extremely low roughness optical components. The objectives of this contract were to (1) prepare replication samples of electroless nickel coated aluminum, and determine process requirements for plating the XDA test optic; (2) prepare and assemble plating equipment required to process a demonstration optic; (3) characterize mandrels, replicas and test samples for residual stress, surface contamination, surface roughness and figure using equipment at MSFC; and (4) provide technical expertise in establishing the processes, procedures, supplies and equipment needed to process the XDA test optics.

  9. Effects of a Cooperative Learning Strategy on Teaching and Learning Phases of Matter and One-Component Phase Diagrams

    ERIC Educational Resources Information Center

    Doymus, Kemal

    2007-01-01

    This study aims to determine the effects of cooperative learning (using the jigsaw method) on students' achievement in a general chemistry course. The Chemistry Achievement Test (CAT) and Phase Achievement Test (PAT) were used. The questions on the CAT relate to solids, liquids, gases, bonding, matter, and matter states. This test was given to…

  10. Army Helicopter Crashworthiness

    DTIC Science & Technology

    1983-10-01

    …protect the structure surrounding the occupied cabin volume. … An important part of this program was to evaluate analysis methods that could … rigid (nonstroking) seats and the production BLACK HAWK helicopter crashworthy crewseat. Tests of three embalmed cadavers in the rigid seat gave mixed … [flattened table fragment: "Conditions for Rigid Seat Tests with Embalmed Cadavers", series #1 — columns: test no., cadaver no., age, height, weight (lb), sex, peak accel. (g), fracture condition]

  11. Theory of chromatic noise masking applied to testing linearity of S-cone detection mechanisms.

    PubMed

    Giulianini, Franco; Eskew, Rhea T

    2007-09-01

    A method for testing the linearity of cone combination of chromatic detection mechanisms is applied to S-cone detection. This approach uses the concept of mechanism noise, the noise as seen by a postreceptoral neural mechanism, to represent the effects of superposing chromatic noise components in elevating thresholds and leads to a parameter-free prediction for a linear mechanism. The method also provides a test for the presence of multiple linear detectors and off-axis looking. No evidence for multiple linear mechanisms was found when using either S-cone increment or decrement tests. The results for both S-cone test polarities demonstrate that these mechanisms combine their cone inputs nonlinearly.

  12. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background Bioinformatics data analysis often uses a linear mixture model that represents samples as additive mixtures of components. Properly constrained blind matrix factorization methods extract those components using the mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis; existing methods, by contrast, factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing the control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific or not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of the decomposition, the strength of expression of each feature can vary across samples, yet the feature will still be allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, case- and control-specific components can be used for classification; that is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers; as opposed to standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from the disease- and control-specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally increases prediction accuracy. PMID:22208882
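
    The constrained blind factorization underlying the method belongs to the non-negative matrix factorization family. A minimal, unconstrained NMF with Lee-Seung multiplicative updates (no sparseness penalty, illustrative dimensions — not the authors' implementation) looks like:

```python
import numpy as np

def nmf(V, k, n_iter=200, seed=0):
    """Factor a non-negative matrix V (samples x features) into W @ H
    with k components, using Lee-Seung multiplicative updates that
    minimize the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-6        # non-negative random init
    H = rng.random((k, m)) + 1e-6
    eps = 1e-12                           # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    The multiplicative form keeps W and H non-negative at every step, which is what makes the extracted components interpretable as additive parts of a sample; a sparseness-constrained variant like the paper's would add a penalty term to these updates.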

  13. Mechanistic flexible pavement overlay design program.

    DOT National Transportation Integrated Search

    2009-07-01

    The current Louisiana Department of Transportation and Development (LADOTD) overlay thickness design method follows the Component Analysis procedure provided in the 1993 AASHTO pavement design guide. Since neither field nor laboratory tests a...

  14. Microfluidic systems and methods of transport and lysis of cells and analysis of cell lysate

    DOEpatents

    Culbertson, Christopher T.; Jacobson, Stephen C.; McClain, Maxine A.; Ramsey, J. Michael

    2004-08-31

    Microfluidic systems and methods are disclosed which are adapted to transport and lyse cellular components of a test sample for analysis. The disclosed microfluidic systems and methods, which employ an electric field to rupture the cell membrane, cause unusually rapid lysis, thereby minimizing continued cellular activity and resulting in greater accuracy of analysis of cell processes.

  15. Microfluidic systems and methods for transport and lysis of cells and analysis of cell lysate

    DOEpatents

    Culbertson, Christopher T [Oak Ridge, TN; Jacobson, Stephen C [Knoxville, TN; McClain, Maxine A [Knoxville, TN; Ramsey, J Michael [Knoxville, TN

    2008-09-02

    Microfluidic systems and methods are disclosed which are adapted to transport and lyse cellular components of a test sample for analysis. The disclosed microfluidic systems and methods, which employ an electric field to rupture the cell membrane, cause unusually rapid lysis, thereby minimizing continued cellular activity and resulting in greater accuracy of analysis of cell processes.

  16. Modulation by EEG features of BOLD responses to interictal epileptiform discharges

    PubMed Central

    LeVan, Pierre; Tyvaert, Louise; Gotman, Jean

    2013-01-01

    Introduction EEG-fMRI of interictal epileptiform discharges (IEDs) usually assumes a fixed hemodynamic response function (HRF). This study investigates HRF variability with respect to IED amplitude fluctuations using independent component analysis (ICA), with the goal of improving the specificity of EEG-fMRI analyses. Methods We selected EEG-fMRI data from 10 focal epilepsy patients with a good quality EEG. IED amplitudes were calculated in an average reference montage. The fMRI data were decomposed by ICA and a deconvolution method identified IED-related components by detecting time courses with a significant HRF time-locked to the IEDs (F-test, p < 0.05). Individual HRF amplitudes were then calculated for each IED. Components with a significant HRF/IED amplitude correlation (Spearman test, p < 0.05) were compared to the presumed epileptogenic focus and to results of a general linear model (GLM) analysis. Results In 7 patients, at least one IED-related component was concordant with the focus, but many IED-related components were at distant locations. When considering only components with a significant HRF/IED amplitude correlation, distant components could be discarded, significantly increasing the relative proportion of activated voxels in the focus (p = 0.02). In the 3 patients without concordant IED-related components, no HRF/IED amplitude correlations were detected inside the brain. Integrating IED-related amplitudes in the GLM significantly improved fMRI signal modeling in the epileptogenic focus in 4 patients (p < 0.05). Conclusion Activations in the epileptogenic focus appear to show significant correlations between HRF and IED amplitudes, unlike distant responses. These correlations could be integrated in the analysis to increase the specificity of EEG-fMRI studies in epilepsy. PMID:20026222

  17. Gottingen Wind Tunnel for Testing Aircraft Models

    NASA Technical Reports Server (NTRS)

    Prandtl, L

    1920-01-01

    Given here is a brief description of the Gottingen Wind Tunnel for the testing of aircraft models, preceded by a history of its development. Included are a number of diagrams illustrating, among other things, a sectional elevation of the wind tunnel, the pressure regulator, the entrance cone and method of supporting a model for simple drag tests, a three-component balance, and a propeller testing device, all of which are discussed in the text.

  18. Spin-orbit coupling calculations with the two-component normalized elimination of the small component method

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Zou, Wenli; Cremer, Dieter

    2013-07-01

    A new algorithm for the two-component Normalized Elimination of the Small Component (2cNESC) method is presented and tested in the calculation of spin-orbit (SO) splittings for a series of heavy atoms and their molecules. The 2cNESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac SO splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000), 10.1103/PhysRevB.62.7809]. The use of the screened nucleus potential for the two-electron SO interaction leads to accurate spinor energy splittings, for which the deviations from the accurate Dirac Fock-Coulomb values are on the average far below the deviations observed for other effective one-electron SO operators. For hydrogen halides HX (X = F, Cl, Br, I, At, and Uus) and mercury dihalides HgX2 (X = F, Cl, Br, I) trends in spinor energies and SO splittings as obtained with the 2cNESC method are analyzed and discussed on the basis of coupling schemes and the electronegativity of X.

  19. Correlational structure of ‘frontal’ tests and intelligence tests indicates two components with asymmetrical neurostructural correlates in old age

    PubMed Central

    Cox, Simon R.; MacPherson, Sarah E.; Ferguson, Karen J.; Nissan, Jack; Royle, Natalie A.; MacLullich, Alasdair M.J.; Wardlaw, Joanna M.; Deary, Ian J.

    2014-01-01

    Both general fluid intelligence (gf) and performance on some ‘frontal tests’ of cognition decline with age. Both types of ability are at least partially dependent on the integrity of the frontal lobes, which also deteriorate with age. Overlap between these two methods of assessing complex cognition in older age remains unclear. Such overlap could be investigated using inter-test correlations alone, as in previous studies, but this would be enhanced by ascertaining whether frontal test performance and gf share neurobiological variance. To this end, we examined relationships between gf and 6 frontal tests (Tower, Self-Ordered Pointing, Simon, Moral Dilemmas, Reversal Learning and Faux Pas tests) in 90 healthy males, aged ~ 73 years. We interpreted their correlational structure using principal component analysis, and in relation to MRI-derived regional frontal lobe volumes (relative to maximal healthy brain size). gf correlated significantly and positively (.24 ≤ r ≤ .53) with the majority of frontal test scores. Some frontal test scores also exhibited shared variance after controlling for gf. Principal component analysis of test scores identified units of gf-common and gf-independent variance. The former was associated with variance in the left dorsolateral (DL) and anterior cingulate (AC) regions, and the latter with variance in the right DL and AC regions. Thus, we identify two biologically-meaningful components of variance in complex cognitive performance in older age and suggest that age-related changes to DL and AC have the greatest cognitive impact. PMID:25278641

  20. Comparative Analysis of Metabolic Syndrome Components in over 15,000 African Americans Identifies Pleiotropic Variants: Results from the PAGE Study

    PubMed Central

    Carty, Cara L.; Bhattacharjee, Samsiddhi; Haessler, Jeff; Cheng, Iona; Hindorff, Lucia A.; Aroda, Vanita; Carlson, Christopher S.; Hsu, Chun-Nan; Wilkens, Lynne; Liu, Simin; Selvin, Elizabeth; Jackson, Rebecca; North, Kari E.; Peters, Ulrike; Pankow, James S.; Chatterjee, Nilanjan; Kooperberg, Charles

    2014-01-01

    Background Metabolic syndrome (MetS) refers to the clustering of cardio-metabolic risk factors including dyslipidemia, central adiposity, hypertension and hyperglycemia in individuals. Identification of pleiotropic genetic factors associated with MetS traits may shed light on key pathways or mediators underlying MetS. Methods and Results Using the Metabochip array in 15,148 African Americans (AA) from the PAGE Study, we identify susceptibility loci and investigate pleiotropy among genetic variants using a subset-based meta-analysis method, ASsociation-analysis-based-on-subSETs (ASSET). Unlike conventional models which lack power when associations for MetS components are null or have opposite effects, ASSET uses one-sided tests to detect positive and negative associations for components separately and combines tests accounting for correlations among components. With ASSET, we identify 27 SNPs in 1 glucose and 4 lipids loci (TCF7L2, LPL, APOA5, CETP, LPL, APOC1/APOE/TOMM40) significantly associated with MetS components overall, all P< 2.5e-7, the Bonferroni adjusted P-value. Three loci replicate in a Hispanic population, n=5172. A novel AA-specific variant, rs12721054/APOC1, and rs10096633/LPL are associated with ≥3 MetS components. We find additional evidence of pleiotropy for APOE, TOMM40, TCF7L2 and CETP variants, many with opposing effects; e.g. the same rs7901695/TCF7L2 allele is associated with increased odds of high glucose and decreased odds of central adiposity. Conclusions We highlight a method to increase power in large-scale genomic association analyses, and report a novel variant associated with all MetS components in AA. We also identify pleiotropic associations that may be clinically useful in patient risk profiling and for informing translational research of potential gene targets and medications. PMID:25023634

  1. An effect of humid climate on micro structure and chemical component of natural composite (Boehmeria nivea-Albizia falcata) based wind turbine blade

    NASA Astrophysics Data System (ADS)

    Sudarsono, S.; Purwanto; Sudarsono, Johny W.

    2018-02-01

    In this work, a wind turbine blade with a NACA 4415 profile is fabricated from a natural composite of Boehmeria nivea and Albizia falcata, using the hand lay-up method. The aim of the work is to investigate the effect of the humid climate of a coastal area on the microstructure and chemical composition of the blade's composite material. The wind turbine was tested at Pantai Baru, Bantul, Yogyakarta for 5.5 months. The microstructure was examined with a Scanning Electron Microscope (SEM) and the material composition was measured with an Energy Dispersive X-ray spectrometer (EDS). The samples were tested before and after 5.5 months of use at the location. The results show that the composite material experienced no interface degradation and only insignificant changes in microstructure. From the EDS test, it was observed that Na infiltration reduces C and increases O in the composite material after 5.5 months.

  2. From Laboratory Research to a Clinical Trial

    PubMed Central

    Michels, Harold T.; Keevil, C. William; Salgado, Cassandra D.; Schmidt, Michael G.

    2015-01-01

    Objective: This is a translational science article that discusses copper alloys as antimicrobial environmental surfaces. Bacteria die when they come in contact with copper alloys in laboratory tests. Components made of copper alloys were also found to be efficacious in a clinical trial. Background: There are indications that bacteria found on frequently touched environmental surfaces play a role in infection transmission. Methods: In laboratory testing, copper alloy samples were inoculated with bacteria. In clinical trials, the amount of live bacteria on the surfaces of hospital components made of copper alloys, as well as those made from standard materials, was measured. Finally, infection rates were tracked in the hospital rooms with the copper components and compared to those found in the rooms containing the standard components. Results: Greater than a 99.9% reduction in live bacteria was realized in laboratory tests. In the clinical trials, an 83% reduction in bacteria was seen on the copper alloy components, when compared to the surfaces made from standard materials in the control rooms. Finally, the infection rates were found to be reduced by 58% in patient rooms with components made of copper, when compared to patients' rooms with components made of standard materials. Conclusions: Bacteria die on copper alloy surfaces in both the laboratory and the hospital rooms. Infection rates were lowered in those hospital rooms containing copper components. Thus, based on the presented information, the placement of copper alloy components, in the built environment, may have the potential to reduce not only hospital-acquired infections but also patient treatment costs. PMID:26163568

  3. Simulation Analysis of Helicopter Ground Resonance Nonlinear Dynamics

    NASA Astrophysics Data System (ADS)

    Zhu, Yan; Lu, Yu-hui; Ling, Ai-min

    2017-07-01

    In order to accurately predict the dynamic instability of helicopter ground resonance, a modeling and simulation method is presented that accounts for the nonlinear dynamic characteristics of components (rotor lead-lag damper, landing gear wheel, and absorber). Numerical integration is used to calculate the transient responses of the body and rotor to a simulated disturbance. To quantify instabilities, a Fast Fourier Transform (FFT) is applied to estimate the modal frequencies, and a moving rectangular window method is employed to predict the modal damping from the response time history. Simulation results show that the ground resonance simulation correctly identifies the blade lead-lag regressing mode frequency, and the modal damping obtained from the attenuation curves is close to the test results. The simulation results are consistent with an actual accident case, supporting the correctness of the simulation method. This analysis method can therefore produce results that agree with real helicopter engineering tests.
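    The FFT-plus-moving-window workflow described above can be sketched for a single decaying mode. The signal, sampling rate, and window length of roughly five cycles are illustrative assumptions, not values from the paper:

```python
import numpy as np

def modal_freq_and_damping(x, fs):
    """Estimate the dominant modal frequency from the FFT peak, then the
    damping ratio from a straight-line fit to the log of the peak amplitudes
    in successive rectangular windows (a simple moving-window estimate)."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    f_n = freqs[np.argmax(spectrum[1:]) + 1]       # skip the DC bin
    win = int(5 * fs / f_n)                        # ~5 cycles per window
    amps, times = [], []
    for start in range(0, len(x) - win, win):
        amps.append(np.max(np.abs(x[start:start + win])))
        times.append((start + win / 2.0) / fs)
    slope = np.polyfit(times, np.log(amps), 1)[0]  # d/dt of ln(amplitude)
    zeta = -slope / (2.0 * np.pi * f_n)            # damping ratio
    return f_n, zeta

# Hypothetical free-decay response: an 8 Hz mode with 2% damping.
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.exp(-0.02 * 2.0 * np.pi * 8.0 * t) * np.sin(2.0 * np.pi * 8.0 * t)
f_n, zeta = modal_freq_and_damping(x, fs)
print(f_n, zeta)
```

    The fitted slope of the log-amplitude decay equals -zeta * omega_n, which is why dividing by 2*pi*f_n recovers the damping ratio.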

  4. Collection of quantitative chemical release field data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demirgian, J.; Macha, S.; Loyola Univ.

    1999-01-01

    Detection and quantitation of chemicals in the environment requires Fourier-transform infrared (FTIR) instruments that are properly calibrated and tested. This calibration and testing requires field testing using matrices that are representative of actual instrument use conditions. Three methods commonly used for developing calibration files and training sets in the field are a closed optical cell or chamber, a large-scale chemical release, and a small-scale chemical release. There is no best method. The advantages and limitations of each method should be considered in evaluating field results. Proper calibration characterizes the sensitivity of an instrument, its ability to detect a component in different matrices, and the quantitative accuracy and precision of the results.

  5. Application of lifting wavelet and random forest in compound fault diagnosis of gearbox

    NASA Astrophysics Data System (ADS)

    Chen, Tang; Cui, Yulian; Feng, Fuzhou; Wu, Chunzhi

    2018-03-01

    Aiming at the weak compound-fault characteristic signals of an armored vehicle gearbox and the difficulty of identifying fault types, a fault diagnosis method based on the lifting wavelet and random forest is proposed. First, the method uses the lifting wavelet transform to decompose the original vibration signal into multiple layers and reconstructs the resulting low-frequency and high-frequency components to obtain multiple component signals. Time-domain feature parameters are then computed for each component signal to form feature vectors, which are input into a random forest pattern recognition classifier to determine the compound fault type. Finally, the method is verified on a variety of compound fault data from a gearbox fault simulation test platform; the results show that the recognition accuracy of the combined lifting wavelet and random forest method reaches 99.99%.
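    A minimal sketch of the pipeline: a Haar wavelet implemented with the lifting scheme (a simple stand-in for the paper's lifting wavelet), three common time-domain indicators per band, and a scikit-learn random forest. The signals, labels, and feature choices are all hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def haar_lifting(x, levels=3):
    """Multi-level Haar wavelet transform via the lifting scheme: split
    into even/odd samples, predict (detail = odd - even), then update
    (approx = even + detail / 2).  Returns [approx, d_deep, ..., d_1]."""
    s = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = s[0::2], s[1::2]
        d = odd - even          # predict step
        s = even + d / 2.0      # update step (running average)
        details.append(d)
    return [s] + details[::-1]

def feature_vector(x):
    """RMS, kurtosis, and crest factor of each wavelet band."""
    feats = []
    for band in haar_lifting(x):
        rms = np.sqrt(np.mean(band ** 2))
        kurt = np.mean((band - band.mean()) ** 4) / (band.std() ** 4 + 1e-12)
        feats += [rms, kurt, np.max(np.abs(band)) / (rms + 1e-12)]
    return feats

# Hypothetical vibration snapshots with made-up compound-fault labels.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=60)                    # 3 made-up fault classes
X = np.array([feature_vector(rng.normal(size=256) * (1 + lab)) for lab in y])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

    With a one-level transform, `haar_lifting([1, 2, 3, 4])` yields the running averages [1.5, 3.5] and details [1, 1].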

  6. QSAR study of anthranilic acid sulfonamides as inhibitors of methionine aminopeptidase-2 using LS-SVM and GRNN based on principal components.

    PubMed

    Shahlaei, Mohsen; Sabet, Razieh; Ziari, Maryam Bahman; Moeinifard, Behzad; Fassihi, Afshin; Karbakhsh, Reza

    2010-10-01

    Quantitative relationships between molecular structure and methionine aminopeptidase-2 inhibitory activity of a series of cytotoxic anthranilic acid sulfonamide derivatives were discovered. We have demonstrated the detailed application of two efficient nonlinear methods for evaluation of quantitative structure-activity relationships of the studied compounds. Components produced by principal component analysis were used as inputs to the developed nonlinear models. The performance of the developed models, namely PC-GRNN and PC-LS-SVM, was tested by several validation methods. The resulting PC-LS-SVM model had high statistical quality (R(2)=0.91 and R(CV)(2)=0.81) for predicting the cytotoxic activity of the compounds. Comparison of the predictive ability of PC-GRNN and PC-LS-SVM indicates that the latter method better predicts the activity of the studied molecules. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.
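    The principal-components-into-nonlinear-regressor idea can be sketched with scikit-learn. An RBF-kernel SVR stands in for LS-SVM (which scikit-learn does not provide), and the descriptor matrix and activities are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical descriptor matrix (50 molecules x 80 descriptors) and a
# made-up nonlinear activity response.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 80))
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=50)

# Principal components feed a kernel regressor, mirroring the PC-LS-SVM
# structure (SVR is a stand-in, not the paper's LS-SVM).
model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      SVR(kernel="rbf", C=10.0))
model.fit(X, y)
pred = model.predict(X)
```

    Cross-validated statistics such as R(CV)(2) would be obtained by wrapping the same pipeline in `sklearn.model_selection.cross_val_score`.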

  7. Optimum runway orientation relative to crosswinds

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Brown, S. C.

    1972-01-01

    Specific magnitudes of crosswinds may exist that could be constraints to the success of an aircraft mission such as the landing of the proposed space shuttle. A method is required to determine the orientation or azimuth of the proposed runway which will minimize the probability of certain critical crosswinds. Two procedures for obtaining the optimum runway orientation relative to minimizing a specified crosswind speed are described and illustrated with examples. The empirical procedure requires only hand calculations on an ordinary wind rose. The theoretical method utilizes wind statistics computed after the bivariate normal elliptical distribution is applied to a data sample of component winds. This method requires only the assumption that the wind components are bivariate normally distributed. This assumption seems to be reasonable. Studies are currently in progress for testing wind components for bivariate normality for various stations. The close agreement between the theoretical and empirical results for the example chosen substantiates the bivariate normal assumption.
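    The empirical procedure reduces to a scan over candidate azimuths: for each, compute the crosswind component of every wind observation and count exceedances of the critical speed. The wind sample below is synthetic; the limit of 15 is an illustrative value:

```python
import numpy as np

def crosswind_exceedance(speeds, directions_deg, azimuth_deg, limit):
    """Fraction of wind observations whose crosswind component, relative to
    a runway at `azimuth_deg`, exceeds `limit` (same units as speeds)."""
    rel = np.radians(directions_deg - azimuth_deg)
    crosswind = np.abs(speeds * np.sin(rel))
    return np.mean(crosswind > limit)

def best_azimuth(speeds, directions_deg, limit, step=1.0):
    """Empirical optimum: scan 0-179 deg (a runway serves both headings)."""
    azimuths = np.arange(0.0, 180.0, step)
    p = [crosswind_exceedance(speeds, directions_deg, a, limit)
         for a in azimuths]
    return azimuths[int(np.argmin(p))], min(p)

# Hypothetical sample: winds mostly from around 45 degrees.
rng = np.random.default_rng(2)
speeds = rng.rayleigh(8.0, 1000)
dirs = rng.normal(45.0, 20.0, 1000) % 360.0
az, p = best_azimuth(speeds, dirs, limit=15.0)
print(az, p)
```

    The theoretical procedure in the paper replaces the empirical count with probabilities from the fitted bivariate normal distribution of the wind components.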

  8. Multi-spectrometer calibration transfer based on independent component analysis.

    PubMed

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, by using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of different spectrometers. The differing measurements between spectrometers can then be standardized by correcting the coefficients of the independent components. Two NIR datasets of corn and edible oil samples, measured with three and four spectrometers respectively, were used to test the reliability of this method. The results of both datasets reveal that spectra measured on different spectrometers can be transferred simultaneously and that partial least squares (PLS) models built with measurements from one spectrometer correctly predict spectra transferred from another.
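    A rough sketch of the decompose-and-correct idea using scikit-learn's FastICA. The spectra are synthetic, and the mean-shift correction below is a loose stand-in for the paper's coefficient-correction step, not its actual algorithm:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical aligned spectra: the same 30 samples measured on a master
# and a slave spectrometer (the slave adds noise and a baseline offset).
rng = np.random.default_rng(3)
master = rng.normal(size=(30, 200))
slave = master + 0.05 * rng.normal(size=master.shape) + 0.02

# Decompose the stacked spectral matrix into independent components; each
# spectrum is then represented by its coefficients over those components.
stacked = np.vstack([master, slave])
ica = FastICA(n_components=5, random_state=0)
coeffs = ica.fit_transform(stacked)       # (60, 5) per-spectrum coefficients

# Shift the slave coefficients so their mean matches the master's, then
# reconstruct standardized slave spectra.
shift = coeffs[:30].mean(axis=0) - coeffs[30:].mean(axis=0)
slave_corrected = ica.inverse_transform(coeffs[30:] + shift)
print(slave_corrected.shape)
```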

  9. Laser heterodyne surface profiler

    DOEpatents

    Sommargren, Gary E.

    1982-01-01

    A method and apparatus is disclosed for testing the deviation of the face of an object from a flat smooth surface using a beam of coherent light with two plane-polarized components, one at a frequency constantly greater than the other by a fixed amount, producing a difference frequency with a constant phase to be used as a reference. The beam is split into its two components, which are directed onto spaced-apart points on the face of the object to be tested for smoothness. The object is rotated on an axis coincident with one component, which is directed at the center of the face and thus constitutes a virtual fixed point; this component also serves as a reference. The other component traces a circular track on the face of the object as the object rotates. The two components are recombined after reflection to produce a reflected frequency difference whose phase is proportional to the difference in path length; this phase is compared with the reference phase to produce a signal proportional to the deviation of the surface height along the circular track with respect to the fixed point at the center.
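    The phase-to-height conversion implied by the record is a one-line formula: a surface deviation dh changes the reflected optical path by 2*dh, so dh = phase * wavelength / (4*pi). The 632.8 nm HeNe wavelength below is an assumption, not stated in the record:

```python
import numpy as np

def height_deviation_nm(phase_rad, wavelength_nm=632.8):
    """Heterodyne phase to surface-height deviation: reflection doubles
    the path change, so dh = phase * wavelength / (4 * pi)."""
    return phase_rad * wavelength_nm / (4.0 * np.pi)

print(height_deviation_nm(np.pi / 2.0))   # a quarter-cycle phase shift
```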

  10. 40 CFR 60.564 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... If the carrier component of the gas stream is nitrogen, then an average molecular weight of 28 g/g... from materials balance by good engineering practice. (i) The owner or operator shall determine...

  11. 40 CFR 60.564 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... If the carrier component of the gas stream is nitrogen, then an average molecular weight of 28 g/g... from materials balance by good engineering practice. (i) The owner or operator shall determine...

  12. 40 CFR 60.564 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... If the carrier component of the gas stream is nitrogen, then an average molecular weight of 28 g/g... from materials balance by good engineering practice. (i) The owner or operator shall determine...

  13. 40 CFR 60.564 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... If the carrier component of the gas stream is nitrogen, then an average molecular weight of 28 g/g... from materials balance by good engineering practice. (i) The owner or operator shall determine...

  14. Electrostatic discharge control for STDN stations

    NASA Technical Reports Server (NTRS)

    Mckiernan, J.

    1983-01-01

    This manual defines the requirements and control methods necessary to control the effect of electrostatic discharges that damage or destroy electronic equipment components. Test procedures for measuring the effectiveness of the control are included.

  15. [Study on two preparation methods for beta-CD inclusion compound of four traditional Chinese medicine volatile oils].

    PubMed

    Li, Hailiang; Cui, Xiaoli; Tong, Yan; Gong, Muxin

    2012-04-01

    To compare the inclusion effects and process conditions of two preparation methods, colloid mill and saturated solution, for beta-CD inclusion compounds of four traditional Chinese medicine volatile oils, and to study the relationship between each process condition and the physical properties of the volatile oils as well as the selectivity of inclusion toward volatile oil components. Volatile oils from Nardostachyos Radix et Rhizoma, Amomi Fructus, Zingiberis Rhizoma and Angelicae sinensis Radix were prepared using the two methods in an orthogonal test. The inclusion compounds from the optimized processes were assessed and compared by such methods as TLC, IR and scanning electron microscopy. Included oils were extracted by steam distillation, and the components present before and after inclusion were analyzed by GC-MS. Analysis showed that new inclusion compounds were formed, but the compounds prepared by the two processes differed to some extent. The colloid mill method showed a better inclusion effect than the saturated solution method, and the process conditions were related to the physical properties of the volatile oils. The two methods also differed in their selectivity toward individual components. The colloid mill method is more suitable for industrial requirements. To prepare inclusion compounds of volatile oils with high specific gravity and high refractive index, the colloid mill method needs a longer time and more water, while the saturated solution method requires a higher temperature and more beta-cyclodextrin. The inclusion complex prepared with the colloid mill method covers a wider molecular-weight range of chemical constituents, but the number of component types is reduced.

  16. A Comparison of PSD Enveloping Methods for Nonstationary Vibration

    NASA Technical Reports Server (NTRS)

    Irvine, Tom

    2015-01-01

    There is a need to derive a power spectral density (PSD) envelope for nonstationary acceleration time histories, including launch vehicle data, so that components can be designed and tested accordingly. This paper presents the results of three enveloping methods applied to an actual flight accelerometer record. Guidelines are given for the application of each method to nonstationary data. The methods can be extended to other scenarios, including transportation vibration.
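    One simple enveloping strategy (not necessarily one of the three methods compared in the paper) is to split the nonstationary record into segments, compute a Welch PSD for each, and take the per-frequency maximum. The flight-like record below is synthetic:

```python
import numpy as np
from scipy.signal import welch

def psd_envelope(x, fs, n_segments=8, nperseg=256):
    """Piecewise-max PSD envelope: Welch PSD of each time segment, then
    the per-frequency maximum.  Assumes segments long enough that all
    share the same frequency grid."""
    pieces = np.array_split(np.asarray(x, dtype=float), n_segments)
    psds = []
    for piece in pieces:
        f, pxx = welch(piece, fs=fs, nperseg=min(nperseg, len(piece)))
        psds.append(pxx)
    return f, np.max(psds, axis=0)

# Hypothetical nonstationary record: broadband noise plus a 300 Hz burst.
rng = np.random.default_rng(4)
fs = 2048.0
t = np.arange(0.0, 8.0, 1.0 / fs)
burst = np.exp(-((t - 4.0) ** 2)) * np.sin(2.0 * np.pi * 300.0 * t)
x = 0.1 * rng.normal(size=t.size) + burst
f, env = psd_envelope(x, fs)
```

    The envelope retains the 300 Hz burst energy that a single whole-record PSD would average down, which is the point of enveloping nonstationary data.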

  17. Application to Noninvasive Measurement of Blood Components Based on Infrared Spectroscopy

    NASA Astrophysics Data System (ADS)

    Tamura, Kazuto; Ishizawa, Hiroaki; Fujita, Keiichi; Kaneko, Wataru; Morikawa, Tomotaka; Toba, Eiji; Kobayashi, Hideo

    Recently, lifestyle diseases (diabetes, hyperlipidemia, etc.) have been steadily increasing because of changes in diet, lack of exercise, increased alcohol intake, and increased stress. This is a matter of vital importance: tens of millions of people in Japan are at risk of lifestyle diseases, and they must undergo repeated blood tests to confirm that their physical condition is under control. These repeated measurements of blood components are a heavy burden. This paper describes a new noninvasive measurement of blood components based on optical sensing, using Fourier transform infrared spectroscopy with attenuated total reflection. To address the influence of individual differences, an internal standard method was introduced. This paper describes the details of the internal standard method and its effect on blood component calibration. Significant improvement was obtained by using the internal standard.

  18. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
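    The two-parameter Weibull strength model at the core of CARES/LIFE can be illustrated in scalar form. CARES/LIFE actually integrates this expression over the stressed volume or area and couples it with subcritical crack growth; the version below, with made-up material parameters, is only the basic survival probability:

```python
import numpy as np

def weibull_survival(stress, sigma0, m):
    """Two-parameter Weibull survival probability for a uniformly
    stressed component: R = exp(-(stress / sigma0) ** m), where sigma0
    is the characteristic strength and m the Weibull modulus."""
    return np.exp(-(np.asarray(stress, dtype=float) / sigma0) ** m)

# Hypothetical ceramic: characteristic strength 400 MPa, Weibull modulus 10.
print(weibull_survival([200.0, 300.0, 400.0], sigma0=400.0, m=10.0))
```

    At the characteristic strength the survival probability is exp(-1), about 36.8%, which is what defines sigma0.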

  19. A joint sparse representation-based method for double-trial evoked potentials estimation.

    PubMed

    Yu, Nannan; Liu, Haikuan; Wang, Xiaoyan; Lu, Hanbing

    2013-12-01

    In this paper, we present a novel approach to solving an evoked potentials estimating problem. Generally, the evoked potentials in two consecutive trials obtained by repeated identical stimuli of the nerves are extremely similar. In order to trace evoked potentials, we propose a joint sparse representation-based double-trial evoked potentials estimation method, taking full advantage of this similarity. The estimation process is performed in three stages: first, according to the similarity of evoked potentials and the randomness of a spontaneous electroencephalogram, the two consecutive observations of evoked potentials are considered as superpositions of the common component and the unique components; second, making use of their characteristics, the two sparse dictionaries are constructed; and finally, we apply the joint sparse representation method in order to extract the common component of double-trial observations, instead of the evoked potential in each trial. A series of experiments carried out on simulated and human test responses confirmed the superior performance of our method. © 2013 Elsevier Ltd. Published by Elsevier Ltd. All rights reserved.

  20. A Method to Capture Macroslip at Bolted Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, Ronald Neil; Heitman, Lili Anne Akin

    2015-10-01

    Relative motion at bolted connections can occur for large shock loads as the internal shear force in the bolted connection overcomes the frictional resistive force. This macroslip in a structure dissipates energy and reduces the response of the components above the bolted connection. There is a need to be able to capture macroslip behavior in a structural dynamics model. A linear model and many nonlinear models are not able to predict macroslip effectively. The proposed method to capture macroslip is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. This model includes both static and dynamic friction. The joints are preloaded and the pinning effect when a bolt shank impacts a through hole inside diameter is captured. Substructure representations of the components are included to account for component flexibility and dynamics. This method was applied to a simplified model of an aerospace structure and validation experiments were performed to test the adequacy of the method.

  1. A Method to Capture Macroslip at Bolted Interfaces [PowerPoint]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, Ronald Neil; Heitman, Lili Anne Akin

    2016-01-01

    Relative motion at bolted connections can occur for large shock loads as the internal shear force in the bolted connection overcomes the frictional resistive force. This macroslip in a structure dissipates energy and reduces the response of the components above the bolted connection. There is a need to be able to capture macroslip behavior in a structural dynamics model. A linear model and many nonlinear models are not able to predict macroslip effectively. The proposed method to capture macroslip is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. This model includes both static and dynamic friction. The joints are preloaded and the pinning effect when a bolt shank impacts a through hole inside diameter is captured. Substructure representations of the components are included to account for component flexibility and dynamics. This method was applied to a simplified model of an aerospace structure and validation experiments were performed to test the adequacy of the method.

  2. Measurement of Energy Performances for General-Structured Servers

    NASA Astrophysics Data System (ADS)

    Liu, Ren; Chen, Lili; Li, Pengcheng; Liu, Meng; Chen, Haihong

    2017-11-01

    Energy consumption of servers in data centers is increasing rapidly with the wide application of the Internet and connected devices. To improve the energy efficiency of servers, voluntary or mandatory energy efficiency programs, including voluntary labeling programs and mandatory energy performance standards, have been adopted or are being prepared in the US, the EU, and China. However, the energy performance of servers and the corresponding testing methods are not well defined. This paper presents matrices to measure the energy performance of general-structured servers. The impacts of various server components on energy performance are also analyzed. Based on a set of normalized workloads, a standard method for testing the energy efficiency of servers is proposed. Pilot tests were conducted to assess the energy performance testing methods, and the findings are discussed in the paper.

  3. Hermetic edge sealing of photovoltaic modules

    NASA Astrophysics Data System (ADS)

    1983-02-01

    The edge sealing technique is accomplished by a combination of a chemical bond between glass and aluminum, formed by electrostatic bonding, and a metallurgical bond between aluminum and aluminum, formed by ultrasonic welding. Such a glass to metal seal promises to provide a low cost, long lifetime, highly effective hermetic seal which can protect module components from severe environments. Development of the sealing techniques and demonstration of their effectiveness by fabricating a small number of dummy modules, up to eight inches square in size, and testing them for hermeticity using helium leak testing methods are reviewed. Non-destructive test methods are investigated.

  4. Hermetic edge sealing of photovoltaic modules

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The edge sealing technique is accomplished by a combination of a chemical bond between glass and aluminum, formed by electrostatic bonding, and a metallurgical bond between aluminum and aluminum, formed by ultrasonic welding. Such a glass to metal seal promises to provide a low cost, long lifetime, highly effective hermetic seal which can protect module components from severe environments. Development of the sealing techniques and demonstration of their effectiveness by fabricating a small number of dummy modules, up to eight inches square in size, and testing them for hermeticity using helium leak testing methods are reviewed. Non-destructive test methods are investigated.

  5. The Development of Expansion Plug Wedge Test for Clad Tubing Structure Mechanical Property Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Jiang, Hao

    2016-01-12

    To determine the tensile properties of irradiated fuel cladding in a hot cell, a simple test was developed at the Oak Ridge National Laboratory (ORNL) and is described fully in US Patent Application 20060070455, “Expanded plug method for developing circumferential mechanical properties of tubular materials.” This method is designed for testing fuel rod cladding ductility in a hot cell using an expandable plug to stretch a small ring of irradiated cladding material. The specimen strain is determined using the measured diametrical expansion of the ring. This method removes many complexities associated with specimen preparation and testing. The advantages are the simplicity of measuring the test component assembly in the hot cell and the direct measurement of the specimen’s strain. It was also found that cladding strength could be determined from the test results.

  6. In-vivo quantitative measurement of tissue oxygen saturation of human webbing using a transmission type continuous-wave near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Aizimu, Tuerxun; Adachi, Makoto; Nakano, Kazuya; Ohnishi, Takashi; Nakaguchi, Toshiya; Takahashi, Nozomi; Nakada, Taka-aki; Oda, Shigeto; Haneishi, Hideaki

    2018-02-01

    Near-infrared spectroscopy (NIRS) is a noninvasive method for monitoring tissue oxygen saturation (StO2). Many commercial NIRS devices are presently available. However, the precision of those devices is relatively poor because they use a reflectance model, with which it is difficult to account for the blood volume and the other unchanging components of the tissue. The webbing of the hand is thin and suitable for measuring spectral transmittance. In this paper, we present a method for measuring the StO2 of human webbing from transmissive continuous-wave near-infrared spectroscopy (CW-NIRS) data. The method is based on the modified Beer-Lambert law (MBL) and consists of two steps. In the first step, we apply pressure upstream of the measurement point to perturb the concentrations of deoxy- and oxy-hemoglobin while leaving the other components unchanged, and measure the spectral signals. From the measured data, the spectral absorbance due to components other than hemoglobin is calculated. In the second step, a spectral measurement is performed at an arbitrary time and the spectral absorbance obtained in step 1 is subtracted from the measured absorbance. The tissue oxygen saturation (StO2) is estimated from the remaining data. The method was evaluated with an arterial occlusion test (AOT) and a venous occlusion test (VOT); in these experiments, we confirmed that reasonable StO2 values were obtained by the proposed method.
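    The modified Beer-Lambert step can be sketched as a two-wavelength linear solve: given hemoglobin-only absorbances (the non-hemoglobin term already subtracted), the extinction-coefficient matrix maps the two hemoglobin concentrations to the measured absorbances. The coefficients and absorbance values below are illustrative numbers, not tabulated physiological data:

```python
import numpy as np

# Hypothetical extinction coefficients [HbO2, Hb] at two NIR wavelengths
# (pathlength factors folded in; illustrative values only).
E = np.array([[0.30, 0.80],    # shorter wavelength, Hb-dominant
              [0.60, 0.40]])   # longer wavelength, HbO2-dominant

def sto2(hb_absorbance):
    """Modified Beer-Lambert sketch: solve E @ [c_HbO2, c_Hb] = A for the
    hemoglobin concentrations, then StO2 = c_HbO2 / (c_HbO2 + c_Hb)."""
    c = np.linalg.solve(E, np.asarray(hb_absorbance, dtype=float))
    return c[0] / c.sum()

print(sto2([0.55, 0.50]))
```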

  7. Monitoring the quality of total hip replacement in a tertiary care department using a cumulative summation statistical method (CUSUM).

    PubMed

    Biau, D J; Meziane, M; Bhumbra, R S; Dumaine, V; Babinet, A; Anract, P

    2011-09-01

    The purpose of this study was to define immediate post-operative 'quality' in total hip replacements and to study prospectively the occurrence of failure based on these definitions of quality. The evaluation and assessment of failure were based on ten radiological and clinical criteria. The cumulative summation (CUSUM) test was used to study 200 procedures over a one-year period. Technical criteria defined failure in 17 cases (8.5%), those related to the femoral component in nine (4.5%), the acetabular component in 32 (16%) and those relating to discharge from hospital in five (2.5%). Overall, the procedure was considered to have failed in 57 of the 200 total hip replacements (28.5%). The use of a new design of acetabular component was associated with more failures. For the CUSUM test, the level of adequate performance was set at a rate of failure of 20% and the level of inadequate performance set at a failure rate of 40%; no alarm was raised by the test, indicating that there was no evidence of inadequate performance. The use of a continuous monitoring statistical method is useful to ensure that the quality of total hip replacement is maintained, especially as newer implants are introduced.
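    A Bernoulli CUSUM of the kind used in the study accumulates log-likelihood-ratio scores per procedure, resets at zero, and alarms when a decision limit is crossed. The acceptable (20%) and inadequate (40%) failure rates are from the abstract; the decision limit h = 3.5 and the outcome sequence are hypothetical:

```python
import numpy as np

def cusum(outcomes, p0=0.2, p1=0.4, h=3.5):
    """Bernoulli CUSUM: add log(p1/p0) on a failure, log((1-p1)/(1-p0))
    on a success, clamp at zero, and alarm when the statistic exceeds h."""
    w_fail = np.log(p1 / p0)              # positive score on failure
    w_ok = np.log((1 - p1) / (1 - p0))    # negative score on success
    s, path, alarm_at = 0.0, [], None
    for i, failed in enumerate(outcomes):
        s = max(0.0, s + (w_fail if failed else w_ok))
        path.append(s)
        if alarm_at is None and s > h:
            alarm_at = i
    return np.array(path), alarm_at

# Hypothetical run of 200 procedures with failure rate near the
# acceptable 20% level; no alarm is expected in this regime.
rng = np.random.default_rng(5)
path, alarm = cusum(rng.random(200) < 0.2)
print(alarm)
```

    With these scores, six consecutive failures (6 * ln 2, about 4.16) are enough to cross h = 3.5, so a sudden run of bad outcomes triggers an alarm quickly.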

  8. Gallium Arsenide Pilot Line for High Performance Components

    DTIC Science & Technology

    1988-06-02

    shown in Figure 4. A complete functional and timing verification was performed with the GOALIE, MOTIS, and ADVICE tools. GOALIE was used to convert the... using LTX2 and was verified using GOALIE and ADVICE. The performance of the circuits was measured using 256 test vectors on an Advantest T3340... cycling per MIL-STD-883C, Method 1010.7, Condition C. No evidence of damage was found. A sample of fifteen leads was pull tested per MIL-STD-883C. Method

  9. Design Considerations and Experimental Verification of a Rail Brake Armature Based on Linear Induction Motor Technology

    NASA Astrophysics Data System (ADS)

    Sakamoto, Yasuaki; Kashiwagi, Takayuki; Hasegawa, Hitoshi; Sasakawa, Takashi; Fujii, Nobuo

    This paper describes the design considerations and experimental verification of an LIM rail brake armature. In order to generate power and maximize the braking force density despite the limited area between the armature and the rail and the limited space available for installation, we studied a design method that is suitable for designing an LIM rail brake armature; we considered adoption of a ring winding structure. To examine the validity of the proposed design method, we developed a prototype ring winding armature for the rail brakes and examined its electromagnetic characteristics in a dynamic test system with roller rigs. By repeating various tests, we confirmed that unnecessary magnetic field components, which were expected to be present under high speed running condition or when a ring winding armature was used, were not present. Further, the necessary magnetic field component and braking force attained the desired values. These studies have helped us to develop a basic design method that is suitable for designing the LIM rail brake armatures.

  10. Characterizing Time Series Data Diversity for Wind Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Chartan, Erol Kevin; Feng, Cong

    Wind forecasting plays an important role in integrating variable and uncertain wind power into the power grid. Various forecasting models have been developed to improve the forecasting accuracy. However, it is challenging to accurately compare the true forecasting performances from different methods and forecasters due to the lack of diversity in forecasting test datasets. This paper proposes a time series characteristic analysis approach to visualize and quantify wind time series diversity. The developed method first calculates six time series characteristic indices from various perspectives. Then principal component analysis is performed to reduce the data dimension while preserving the important information. The diversity of the time series dataset is visualized by the geometric distribution of the newly constructed principal component space. The volume of the 3-dimensional (3D) convex polytope (or the length of the 1D number axis, or the area of the 2D convex polygon) is used to quantify the time series data diversity. The method is tested with five datasets with various degrees of diversity.
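    The indices-then-PCA-then-hull-volume pipeline can be sketched directly. The six indices below are illustrative stand-ins (the paper does not list them in this record), and the wind-like series are synthetic random walks:

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

def indices(x):
    """Six simple characteristic indices (illustrative choices): mean,
    standard deviation, skewness, kurtosis, lag-1 autocorrelation, range."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    skew = np.mean(((x - mu) / sd) ** 3)
    kurt = np.mean(((x - mu) / sd) ** 4)
    ac1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    return [mu, sd, skew, kurt, ac1, x.max() - x.min()]

def diversity_volume(series_list):
    """Project the index vectors onto 3 principal components and use the
    convex-hull volume of the point cloud as the diversity measure."""
    feats = np.array([indices(s) for s in series_list])
    pcs = PCA(n_components=3).fit_transform(feats)
    return ConvexHull(pcs).volume

# Hypothetical dataset: 20 wind-speed-like random-walk series.
rng = np.random.default_rng(6)
wind_like = [np.cumsum(rng.normal(size=500)) for _ in range(20)]
print(diversity_volume(wind_like))
```

    A more diverse dataset spreads its points over a larger region of the principal-component space, giving a larger hull volume.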

  11. Recommendations for the treatment of aging in standard technical specifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orton, R.D.; Allen, R.P.

    1995-09-01

    As part of the US Nuclear Regulatory Commission's Nuclear Plant Aging Research Program, Pacific Northwest Laboratory (PNL) evaluated the standard technical specifications for nuclear power plants to determine whether the current surveillance requirements (SRs) were effective in detecting age-related degradation. Nuclear Plant Aging Research findings for selected systems and components were reviewed to identify the stressors and operative aging mechanisms and to evaluate the methods available to detect, differentiate, and trend the resulting aging degradation. Current surveillance and testing requirements for these systems and components were reviewed for their effectiveness in detecting degraded conditions and for potential contributions to premature degradation. When the current surveillance and testing requirements appeared ineffective in detecting aging degradation or potentially could contribute to premature degradation, a possible deficiency in the SRs was identified that could result in undetected degradation. Based on this evaluation, PNL developed recommendations for inspection, surveillance, trending, and condition monitoring methods to be incorporated in the SRs to better detect age-related degradation of these selected systems and components.

  12. 16 CFR 309.10 - Alternative vehicle fuel rating.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Analysis of Natural Gas by Gas Chromatography.” For the purposes of this section, fuel ratings for the... methods set forth in ASTM D 1946-90, “Standard Practice for Analysis of Reformed Gas by Gas Chromatography... the principal component of compressed natural gas are to be determined in accordance with test methods...

  13. 16 CFR 309.10 - Alternative vehicle fuel rating.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Analysis of Natural Gas by Gas Chromatography.” For the purposes of this section, fuel ratings for the... methods set forth in ASTM D 1946-90, “Standard Practice for Analysis of Reformed Gas by Gas Chromatography... the principal component of compressed natural gas are to be determined in accordance with test methods...

  14. Interactive Videodisc as a Component in a Multi-Method Approach to Anatomy and Physiology.

    ERIC Educational Resources Information Center

    Wheeler, Donald A.; Wheeler, Mary Jane

    At Cuyahoga Community College (Ohio), computer-controlled interactive videodisc technology is being used as one of several instructional methods to teach anatomy and physiology. The system has the following features: audio-visual instruction, interaction with immediate feedback, self-pacing, fill-in-the-blank quizzes for testing total recall,…

  15. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.

  16. Fetal ECG extraction using independent component analysis by Jade approach

    NASA Astrophysics Data System (ADS)

    Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian

    2017-11-01

    Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on joint approximate diagonalization of eigenmatrices (JADE). The JADE approach avoids redundancy, which reduces matrix dimensions and computational cost. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method is fast and performs well.
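    The separation step can be sketched with a small ICA example. JADE itself (joint diagonalization of fourth-order cumulant matrices) is not reproduced here; as a stand-in, this sketch whitens two synthetic channel mixtures and unmixes them with a symmetric FastICA iteration. The signals, mixing matrix, and all parameters are illustrative assumptions, not data or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two ECG-like sources (not real abdominal/chest
# recordings): a slow square wave and a faster sinusoid.
t = np.linspace(0.0, 1.0, 2000)
S = np.vstack([np.sign(np.sin(2 * np.pi * 1.4 * t + 0.1)),
               np.sin(2 * np.pi * 9.0 * t)])
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])           # hypothetical mixing matrix
X = A @ S                            # observed "electrode" mixtures

# Center and whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA with a tanh nonlinearity (a stand-in for JADE).
W = rng.standard_normal((2, 2))
for _ in range(300):
    G = np.tanh(W @ Xw)
    W_new = (G @ Xw.T) / Xw.shape[1] - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)  # symmetric decorrelation
    W = U @ Vt

S_hat = W @ Xw                       # recovered sources, up to order/sign/scale
```

On these synthetic mixtures, each row of `S_hat` matches one of the original sources up to permutation, sign, and scale, which is the identifiability that any ICA-based fetal ECG extraction relies on.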

  17. Techniques for evaluating digestibility of energy, amino acids, phosphorus, and calcium in feed ingredients for pigs.

    PubMed

    Zhang, Fengrui; Adeola, Olayiwola

    2017-12-01

    Sound feed formulation depends upon precise evaluation of the energy and nutrient values in feed ingredients. Hence, the methodology for determining the digestibility of energy and nutrients in feedstuffs should be chosen carefully before conducting experiments. The direct and difference procedures are widely used to determine the digestibility of energy and nutrients in feedstuffs. The direct procedure is normally used when the test feedstuff can be formulated as the sole source of the component of interest in the test diet. In cases where the test ingredient can only be formulated to replace a portion of the basal diet, however, the difference procedure can be applied to obtain equally robust values. Depending on the component of interest, ileal digesta or feces can be collected, and different sample collection processes can be used. For amino acids (AA), for example, ileal digesta samples are collected to determine ileal digestibility and thereby avoid the interference of hindgut fermentation; the simple T-cannula and the index method are commonly used techniques for AA digestibility analysis. For energy, phosphorus, and calcium, fecal samples are normally collected to determine total tract digestibility, and the total collection method is therefore recommended to obtain more accurate estimates. Concerns with the use of apparent digestibility values include different estimated values at different inclusion levels and non-additivity in mixtures of feed ingredients. These concerns can be overcome by using standardized or true digestibility, obtained by correcting apparent digestibility values for endogenous losses. In this review, methodologies used to determine energy and nutrient digestibility in pigs are discussed. It is suggested that the methodology be carefully selected based on the component of interest, the feed ingredients, and the available experimental facilities.
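    The direct and difference procedures reduce to simple arithmetic. The sketch below, with entirely hypothetical intake and fecal-output numbers, computes apparent total tract digestibility (ATTD) of energy for a basal diet and a test diet, then backs out the test ingredient's digestibility by the difference procedure, assuming it contributes 30% of dietary energy.

```python
def attd(intake, fecal_output):
    """Apparent total tract digestibility by the direct procedure."""
    return (intake - fecal_output) / intake

# Hypothetical gross-energy flows, MJ/d (illustrative numbers only)
d_basal = attd(18.0, 3.1)   # basal diet fed alone
d_test = attd(18.5, 3.5)    # mixed diet: 70% basal + 30% test ingredient

# Difference procedure: D_test = (1 - p) * D_basal + p * D_ingredient,
# solved for the ingredient's digestibility.
p = 0.30
d_ingredient = (d_test - (1 - p) * d_basal) / p
print(f"basal {d_basal:.3f}, test diet {d_test:.3f}, ingredient {d_ingredient:.3f}")
```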

  18. Engine rotor health monitoring: an experimental approach to fault detection and durability assessment

    NASA Astrophysics Data System (ADS)

    Abdul-Aziz, Ali; Woike, Mark R.; Clem, Michelle; Baaklini, George

    2015-03-01

    Efforts to update and improve turbine engine components to meet flight safety and durability requirements are commitments that engine manufacturers continuously strive to fulfill. Most of their concern and development energy focuses on rotating components such as rotor disks. These components typically undergo rigorous operating conditions and are subject to high centrifugal loading, which exposes them to various failure mechanisms. Thus, developing highly advanced health monitoring technology to screen their efficacy and performance is essential to their prolonged service life and operational success. Nondestructive evaluation techniques are among the many screening methods presently used to detect hidden flaws and small cracks before any catastrophic event occurs. Most of these methods or procedures are confined to evaluating material discontinuities and other defects that have matured to a point where failure is imminent. Hence, development of more robust techniques to predict faults in these components before any catastrophic event is vital. This paper presents the ongoing research efforts at the NASA Glenn Research Center (GRC) rotor dynamics laboratory in support of developing a fault detection system for key critical turbine engine components. Data obtained from spin-test experiments on a rotor disk, concerning the behavior of blade tip clearance, tip timing, and shaft displacement as measured by eddy-current, capacitive, and microwave sensors, are presented. Additional results linking the test data with finite element modeling to characterize the structural durability of a cracked rotor, as it relates to the experimental tests and findings, are also presented. An obvious difference in the vibration response is shown between the notched rotor disk and the un-notched baseline, indicating the presence of some type of irregularity.

  19. Fracture Tests of Etched Components Using a Focused Ion Beam Machine

    NASA Technical Reports Server (NTRS)

    Kuhn, Jonathan L.; Fettig, Rainer K.; Moseley, S. Harvey; Kutyrev, Alexander S.; Orloff, Jon; Powers, Edward I. (Technical Monitor)

    2000-01-01

    Many optical MEMS device designs involve large arrays of thin (0.5 to 1 micron) components subjected to high stresses due to cyclic loading. These devices are fabricated from a variety of materials, and the properties depend strongly on size and processing. Our objective is to develop standard and convenient test methods that can be used to measure the properties of large numbers of witness samples for every device we build. In this work we explore a variety of fracture test configurations for 0.5 micron thick silicon nitride membranes machined using the reactive ion etching (RIE) process. Testing was completed using an FEI 620 dual focused ion beam milling machine. Static loads were applied using a probe, and dynamic loads were applied through a piezoelectric stack mounted at the base of the probe. Results from the tests are presented and compared, and their application to predicting the fracture probability of large arrays of devices is considered.

  20. Design, fabrication and test of graphite/epoxy metering truss structure components, phase 3

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The design, materials, tooling, manufacturing processes, quality control, test procedures, and results associated with the fabrication and test of graphite/epoxy metering truss structure components exhibiting a near-zero coefficient of thermal expansion are described. Analytical methods were utilized, with the aid of a computer program, to define the most efficient laminate configurations in terms of thermal behavior and structural requirements. This was followed by an extensive material characterization and selection program, conducted for several graphite and graphite-hybrid laminate systems, to obtain experimental data in support of the analytical predictions. Mechanical property tests as well as coefficient of thermal expansion tests were run on each laminate under study, the results of which were used as the selection criteria for the single most promising laminate. Further coefficient of thermal expansion measurements were successfully performed on three subcomponent tubes utilizing the selected laminate.

  1. Application of the Bootstrap Statistical Method in Deriving Vibroacoustic Specifications

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; Paez, Thomas L.

    2006-01-01

    This paper discusses the Bootstrap method for deriving vibroacoustic test specifications. Vibroacoustic test specifications are necessary to properly accept or qualify a spacecraft and its components for the acoustic, random vibration, and shock environments expected on an expendable launch vehicle. Traditionally, NASA and the U.S. Air Force have employed methods of Normal Tolerance Limits to derive these test levels, based upon the amount of data available and the probability and confidence levels desired. The Normal Tolerance Limit method contains inherent assumptions about the distribution of the data. The Bootstrap is a distribution-free statistical subsampling method which uses the measured data themselves to establish estimates of statistical measures of random sources. This is achieved by computing large numbers of Bootstrap replicates of a data measure of interest and using these replicates to derive test levels consistent with the probability and confidence desired. The comparison of the results of the two methods is illustrated via an example utilizing actual spacecraft vibroacoustic data.
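    The Bootstrap procedure described above can be sketched in a few lines. The acoustic levels below are made-up illustrative numbers; the replicate statistic (a 95th percentile), the replicate count, and the confidence band are assumptions for demonstration, not the NASA/USAF derivation itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical one-third-octave band sound pressure levels (dB) measured
# on 12 flights; illustrative numbers only.
data = np.array([128.1, 129.4, 127.8, 130.2, 128.9, 131.0,
                 127.5, 129.9, 128.4, 130.6, 129.1, 128.7])

# Bootstrap replicates of the statistic of interest (here a 95th
# percentile): resample the measurements with replacement many times.
n_reps = 10_000
resamples = rng.choice(data, size=(n_reps, data.size), replace=True)
reps = np.percentile(resamples, 95, axis=1)

level = np.median(reps)                    # point estimate of the test level
lo, hi = np.percentile(reps, [2.5, 97.5])  # band from the replicate distribution
print(f"test level ~ {level:.1f} dB (95% interval {lo:.1f}-{hi:.1f} dB)")
```

Because the replicate distribution comes from the data themselves, no normality assumption is needed, which is the key contrast with the Normal Tolerance Limit approach.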

  2. Sorbent, Sublimation, and Icing Modeling Methods: Experimental Validation and Application to an Integrated MTSA Subassembly Thermal Model

    NASA Technical Reports Server (NTRS)

    Bower, Chad; Padilla, Sebastian; Iacomini, Christie; Paul, Heather L.

    2010-01-01

    This paper details the validation of modeling methods for the three core components of a Metabolic heat regenerated Temperature Swing Adsorption (MTSA) subassembly, developed for use in a Portable Life Support System (PLSS). The first core component in the subassembly is a sorbent bed, used to capture and reject metabolically produced carbon dioxide (CO2). The sorbent bed performance can be augmented with a temperature swing driven by a liquid CO2 (LCO2) sublimation heat exchanger (SHX) for cooling the sorbent bed, and a condensing, icing heat exchanger (CIHX) for warming the sorbent bed. As part of the overall MTSA effort, scaled design validation test articles for each of these three components have been independently tested in laboratory conditions. Previously described modeling methodologies developed for implementation in Thermal Desktop and SINDA/FLUINT are reviewed and updated, their application in test article models outlined, and the results of those model correlations relayed. Assessment of the applicability of each modeling methodology to the challenge of simulating the response of the test articles, and of their extensibility to a full-scale integrated subassembly model, is given. The independently verified and validated modeling methods are applied to the development of a MTSA subassembly prototype model, and predictions of the subassembly performance are given. These models and modeling methodologies capture simulation of several challenging and novel physical phenomena in the Thermal Desktop and SINDA/FLUINT software suite. Novel methodologies include CO2 adsorption front tracking and associated thermal response in the sorbent bed, heat transfer associated with sublimation of entrained solid CO2 in the SHX, and water mass transfer in the form of ice at temperatures as low as 210 K in the CIHX.

  3. Water monitor system: Phase 1 test report

    NASA Technical Reports Server (NTRS)

    Taylor, R. E.; Jeffers, E. L.

    1976-01-01

    Automatic water monitor system was tested with the objectives of assuring high-quality effluent standards and accelerating the practice of reclamation and reuse of water. The NASA water monitor system is described. Various components of the system, including the necessary sensors, the sample collection system, and the data acquisition and display system, are discussed. The test facility and the analysis methods are described. Test results are reviewed, and recommendations for water monitor system design improvement are presented.

  4. High Energy/LET Radiation EEE Parts Certification Handbook

    NASA Technical Reports Server (NTRS)

    Reddell, Brandon

    2012-01-01

    Certifying electronic components is a very involved process. It includes pre-coordination with the radiation test facility for time, schedule, and cost, as well as intimate work with designers to develop test procedures and hardware. It also involves work with radiation engineers to understand the effects of the radiation field on the test article and setup, as well as the analysis and production of a test report. The technical content of traditional ionizing radiation testing protocol is in wide use and generally follows established standards (ref. Appendix C). This document is not intended to cover all these areas but to cover the methodology of using Variable Depth Bragg Peak (VDBP) to accomplish the goal of characterizing an electronic component. The VDBP test method is primarily used for deep space applications of electronics. However, it can be used on any part for any radiation environment, especially those parts where the sensitive volume cannot be reached by the radiation beam. Examples of this problem are the issues that arise in de-lidding of parts or in parts with flip-chip designs. The VDBP method is ideally suited to test modern avionics designs, which increasingly incorporate commercial off-the-shelf (COTS) parts and units. Software developed at Johnson Space Center (JSC) assists users in deriving the radiation characterization data from the raw test data.

  5. Computational method to predict thermodynamic, transport, and flow properties for the modified Langley 8-foot high-temperature tunnel

    NASA Technical Reports Server (NTRS)

    Venkateswaran, S.; Hunt, L. Roane; Prabhu, Ramadas K.

    1992-01-01

    The Langley 8-foot high-temperature tunnel (8 ft HTT) is used to test components of hypersonic vehicles for aerothermal loads definition and structural component verification. The test medium of the 8 ft HTT is obtained by burning a mixture of methane and air under high pressure; the combustion products are expanded through an axisymmetric conical contoured nozzle to simulate atmospheric flight at Mach 7. This facility was modified to raise the oxygen content of the test medium to match that of air and to include Mach 4 and Mach 5 capabilities. These modifications will facilitate the testing of hypersonic air-breathing propulsion systems over a wide range of flight conditions. A computational method to predict the thermodynamic, transport, and flow properties of the equilibrium chemically reacting oxygen-enriched methane-air combustion products was implemented in a computer code. This code calculates the fuel, air, and oxygen mass flow rates and the test section flow properties for the Mach 7, 5, and 4 nozzle configurations for given combustor and mixer conditions. Salient features of the 8 ft HTT are described, and some of the predicted tunnel operational characteristics are presented in carpet plots to assist users in preparing test plans.

  6. Testing of Military Towbars

    DTIC Science & Technology

    2016-09-28

    pin diameters, lunette diameter, clevis end details, cross section, and overall tube length and straightness. b. Weld failures, voids, cracks...etc., should be considered failures if they are identified visually or using a nondestructive weld inspection test method, per the applicable American... Welding Society standard for the specific material being inspected. c. Broken or cracked components, or catastrophic damage should be considered

  7. Pyrolysis kinetics and combustion of thin wood by an advanced cone calorimetry test method

    Treesearch

    Mark Dietenberger

    2012-01-01

    Pyrolysis kinetics analysis of extractives, holocellulose, and lignin in solid redwood over the entire heating regime was made possible by a specialized cone calorimeter test and new mathematical analysis tools. Added hardware components include: a modified sample holder for the thin specimen with tiny thermocouples, the methane ring burner with stainless-steel mesh above...

  8. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  9. Method for obtaining aerodynamic data on hypersonic configurations with scramjet exhaust flow simulation

    NASA Technical Reports Server (NTRS)

    Hartill, W. R.

    1977-01-01

    A hypersonic wind tunnel test method for obtaining credible aerodynamic data on a complete hypersonic vehicle (generic X-24c) with scramjet exhaust flow simulation is described. The general problems of simulating the scramjet exhaust as well as accounting for scramjet inlet flow and vehicle forces are analyzed, and candidate test methods are described and compared. The method selected as most useful makes use of a thrust-minus-drag flow-through balance with a completely metric model. Inlet flow is diverted by a fairing. The incremental effect of the fairing is determined in the testing of two reference models. The net thrust of the scramjet module is an input to be determined in large-scale module tests with scramjet combustion. Force accounting is described, and examples of force component levels are predicted. Compatibility of the test method with candidate wind tunnel facilities is described, and a preliminary model mechanical arrangement drawing is presented. The balance design and performance requirements are described in a detailed specification. Calibration procedures, model instrumentation, and a test plan for the model are outlined.

  10. Reduction hybrid artifacts of EMG-EOG in electroencephalography evoked by prefrontal transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Wan, Xiaohong; Zeng, Ke; Ni, Yinmei; Qiu, Lirong; Li, Xiaoli

    2016-12-01

    Objective. When prefrontal transcranial magnetic stimulation (p-TMS) is performed, it may evoke hybrid artifacts mixing muscle and blink activity in EEG recordings. Reducing this kind of hybrid artifact challenges traditional preprocessing methods. We aim to explore a method for removing p-TMS-evoked hybrid artifacts. Approach. We propose a novel method used as an independent component analysis (ICA) post-processing step to reduce the p-TMS-evoked hybrid artifacts. Ensemble empirical mode decomposition (EEMD) was used to decompose the signal into multiple components; the components were then separated, and the artifacts reduced, by a blind source separation (BSS) method. Three standard BSS methods were tested: ICA, independent vector analysis, and canonical correlation analysis (CCA). Main results. Synthetic results showed that EEMD-CCA outperformed the others as an ICA post-processing step in hybrid artifact reduction. Its superiority was clearer when the signal-to-noise ratio (SNR) was lower. In application to a real experiment, the SNR was significantly increased and the p-TMS evoked potential could be recovered from the artifact-contaminated signal. Our proposed method can effectively reduce p-TMS-evoked hybrid artifacts. Significance. Our proposed method may facilitate future prefrontal TMS-EEG research.
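    The CCA flavor of the BSS step can be sketched compactly: canonical correlation analysis between a multichannel signal and its one-sample delay orders components by lag-one autocorrelation, so broadband muscle-like components fall to the bottom and can be zeroed out. The two synthetic sources and the mixing matrix below are assumptions for illustration, not the paper's EEMD-CCA pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
t = np.arange(n) / 500.0             # assumed 500 Hz sampling

# Two sources: a smooth EEG-like rhythm (high lag-one autocorrelation)
# and a broadband EMG-like artifact (near-zero lag-one autocorrelation).
s_eeg = np.sin(2 * np.pi * 10.0 * t)
s_emg = rng.standard_normal(n)
X = np.array([[1.0, 0.8],
              [0.6, 1.0]]) @ np.vstack([s_eeg, s_emg])

# CCA between X(t) and X(t-1), solved as a generalized eigenproblem.
X0 = X[:, 1:] - X[:, 1:].mean(axis=1, keepdims=True)
X1 = X[:, :-1] - X[:, :-1].mean(axis=1, keepdims=True)
C00, C11, C01 = X0 @ X0.T, X1 @ X1.T, X0 @ X1.T
M = np.linalg.solve(C00, C01) @ np.linalg.solve(C11, C01.T)
vals, vecs = np.linalg.eig(M)
order = np.argsort(vals.real)[::-1]  # squared canonical correlations, best first
W = vecs.real[:, order]

sources = W.T @ X0                   # row 0: rhythmic component;
                                     # last row: artifact-like component
```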

  11. Evaluation of Four Methods for Predicting Carbon Stocks of Korean Pine Plantations in Heilongjiang Province, China

    PubMed Central

    Gao, Huilin; Dong, Lihu; Li, Fengri; Zhang, Lianjun

    2015-01-01

    A total of 89 trees of Korean pine (Pinus koraiensis) were destructively sampled from plantations in Heilongjiang Province, P.R. China. The sample trees were measured to calculate the biomass and carbon stocks of tree components (i.e., stem, branch, foliage, and root). Compatible biomass and carbon stock models were developed with the total biomass and total carbon stocks as the constraints, respectively. Four methods were used to evaluate the carbon stocks of tree components. The first method predicted carbon stocks directly by the compatible carbon stock models (Method 1). The other three methods predicted the carbon stocks indirectly in two steps: (1) estimating the biomass by the compatible biomass models, and (2) multiplying the estimated biomass by three different carbon conversion factors: a generic carbon conversion factor of 0.5 (Method 2), the average carbon concentration of the sample trees (Method 3), and the average carbon concentration of each tree component (Method 4). The prediction errors of the carbon stock estimates were compared and tested for differences between the four methods. The results showed that the compatible biomass and carbon models with tree diameter (D) as the sole independent variable performed well, so that Method 1 was the best method for predicting the carbon stocks of tree components and of the total. There were significant differences among the four methods for the carbon stock of the stem. Method 2 produced the largest error, especially for the stem and the total. Methods 3 and 4 were slightly worse than Method 1, but the differences were not statistically significant. In practice, the indirect method using the mean carbon concentration of individual trees is sufficient to obtain accurate carbon stock estimates if carbon stock models are not available. PMID:26659257
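    The indirect methods (Methods 2 through 4) are simply a biomass estimate multiplied by a carbon factor. The sketch below contrasts Method 2 (generic factor 0.5) with Method 4 (component-specific concentrations) using hypothetical biomass and concentration values, not the paper's Korean pine data.

```python
# Hypothetical per-tree component biomass (kg) and carbon concentrations;
# illustrative numbers only, not the paper's measurements.
biomass = {"stem": 120.0, "branch": 25.0, "foliage": 8.0, "root": 30.0}
conc = {"stem": 0.49, "branch": 0.48, "foliage": 0.51, "root": 0.47}

# Method 2: generic carbon conversion factor of 0.5
c_method2 = {k: b * 0.5 for k, b in biomass.items()}

# Method 4: component-specific average carbon concentration
c_method4 = {k: b * conc[k] for k, b in biomass.items()}

total2 = sum(c_method2.values())
total4 = sum(c_method4.values())
print(f"Method 2 total: {total2:.2f} kg C, Method 4 total: {total4:.2f} kg C")
```

The gap between the two totals illustrates why a blanket 0.5 factor can overestimate carbon stocks when true component concentrations sit below 0.5.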

  12. Latent component-based gear tooth fault detection filter using advanced parametric modeling

    NASA Astrophysics Data System (ADS)

    Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.

    2009-10-01

    In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. Designing the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instantaneous modules (IMs) and instantaneous frequencies (IFs), and then using the component with the lowest IM as the filter output for detecting gearbox faults. The filter parameters are estimated using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox to detect simulated gear faults. In addition, the method is used for quality inspection of the production Nissan-Junior vehicle gearbox by detecting gear profile errors on an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is considered the filter output and Yule-Walker and Kalman filters are implemented for estimating the parameters. The results confirm the high performance of the newly proposed fault detection method.

  13. Adobe Photoshop quantification (PSQ) rather than point-counting: A rapid and precise method for quantifying rock textural data and porosities

    NASA Astrophysics Data System (ADS)

    Zhang, Xuefeng; Liu, Bo; Wang, Jieqiong; Zhang, Zhe; Shi, Kaibo; Wu, Shuanglin

    2014-08-01

    Commonly used petrological quantification methods are visual estimation, point-counting, and image analysis. In this article, however, an Adobe Photoshop-based method is recommended for quantifying rock textural data and porosities. Adobe Photoshop provides versatile tools for selecting an area of interest, and the pixel count of a selection can be read and used to calculate its area percentage. Adobe Photoshop can therefore be used to rapidly quantify textural components, such as the content of grains, cements, and porosities, including total porosity and porosities of different genetic types. This method is named Adobe Photoshop quantification (PSQ). The workflow of the PSQ method is introduced using oolitic dolomite samples from the Triassic Feixianguan Formation, northeastern Sichuan Basin, China, as an example. The method was tested by comparison with Folk's and Shvetsov's "standard" diagrams. In both cases, there is close agreement between the "standard" percentages and those determined by the PSQ method, with very small counting and operator errors, small standard deviations, and high confidence levels. The porosities quantified by PSQ were evaluated against those determined by the whole-rock helium gas expansion method to test for specimen errors. Results show that the porosities quantified by PSQ correlate well with those determined by the conventional helium gas expansion method. The generally small discrepancies (mostly ranging from -3% to 3%) are caused by microporosities, which lead to a systematic underestimation of about 2%, and/or by macroporosities, which cause underestimation or overestimation in different cases. Adobe Photoshop can thus be used to quantify rock textural components and porosities. The method has been shown to be precise and accurate, and it saves time compared with the usual methods.
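    The core of the PSQ idea, an area percentage derived from the pixel count of a selection, can be mimicked outside Photoshop with a labeled image array. The toy image below is randomly generated; the labels and proportions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 100x100 "thin section" with pixel labels 0 = grain, 1 = cement,
# 2 = pore, drawn with assumed proportions.
img = rng.choice([0, 1, 2], size=(100, 100), p=[0.65, 0.25, 0.10])

# Area percentage of each textural component = pixel count / total pixels.
total = img.size
grain_pct = np.count_nonzero(img == 0) / total * 100.0
cement_pct = np.count_nonzero(img == 1) / total * 100.0
porosity_pct = np.count_nonzero(img == 2) / total * 100.0
print(f"grain {grain_pct:.1f}%, cement {cement_pct:.1f}%, "
      f"porosity {porosity_pct:.1f}%")
```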

  14. Short communication: Principal components and factor analytic models for test-day milk yield in Brazilian Holstein cattle.

    PubMed

    Bignardi, A B; El Faro, L; Rosa, G J M; Cardoso, V L; Machado, P F; Albuquerque, L G

    2012-04-01

    A total of 46,089 individual monthly test-day (TD) milk yields (10 test-days) from 7,331 complete first lactations of Holstein cattle were analyzed. A standard multivariate analysis (MV), reduced-rank analyses fitting the first 2, 3, and 4 genetic principal components (PC2, PC3, PC4), and analyses fitting a factor analytic structure with 2, 3, and 4 factors (FAS2, FAS3, FAS4) were carried out. The models included the random animal genetic effect and fixed effects of the contemporary groups (herd-year-month of test-day), age of cow (linear and quadratic effects), and days in milk (linear effect). The residual covariance matrix was assumed to have full rank. Moreover, 2 random regression models were applied. Variance components were estimated by the restricted maximum likelihood method. The heritability estimates ranged from 0.11 to 0.24. The genetic correlation estimates between TDs obtained with the PC2 model were higher than those obtained with the MV model, and were close to unity for adjacent test-days at the end of lactation. The results indicate that, for the data considered in this study, only 2 principal components are required to summarize the bulk of genetic variation among the 10 traits. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
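    A reduced-rank (principal components) approximation of a genetic covariance matrix can be sketched as an eigendecomposition truncated to the leading components, analogous to the PC2 model. The smooth 10x10 covariance below is a hypothetical stand-in for the estimated test-day genetic covariance, not the paper's estimates.

```python
import numpy as np

# Hypothetical 10x10 genetic covariance over 10 monthly test-days: a
# smooth correlation decay with genetic SDs increasing along lactation.
days = np.arange(10)
corr = np.exp(-0.15 * np.abs(days[:, None] - days[None, :]))
sd = np.linspace(2.0, 3.0, 10)
G = corr * np.outer(sd, sd)

# Eigendecomposition; keep the leading k principal components (cf. PC2).
w, V = np.linalg.eigh(G)
w, V = w[::-1], V[:, ::-1]              # descending eigenvalues
explained = np.cumsum(w) / w.sum()      # cumulative variance explained

k = 2
G_k = (V[:, :k] * w[:k]) @ V[:, :k].T   # rank-2 approximation of G
```

With a strongly correlated matrix like this one, two components already capture most of the genetic variance, which is the rationale for fitting PC2 instead of the full-rank MV model.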

  15. Bulk Electrical Cable Non-Destructive Examination Methods for Nuclear Power Plant Cable Aging Management Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glass, Samuel W.; Jones, Anthony M.; Fifield, Leonard S.

    This Pacific Northwest National Laboratory milestone report describes progress to date on the investigation of nondestructive test methods, focusing particularly on bulk electrical test methods that provide key indicators of cable aging and damage. The work includes a review of relevant literature as well as hands-on experimental verification of inspection capabilities. As nuclear power plants consider applying for second, or subsequent, license renewal to extend their operating period from 60 years to 80 years, it is important to understand how the materials installed in plant systems and components will age during that time and to develop aging management programs to assure continued safe operation under normal and design basis events (DBE). Normal component and system tests typically confirm that the cables can perform their normal operational function. The focus of the cable test program, however, is directed toward the more demanding challenge of assuring cable function under accident or DBE conditions. The industry has adopted 50% elongation at break (EAB) relative to the un-aged cable condition as the acceptability standard. All tests are benchmarked against the cable EAB test. EAB, however, is a destructive test, so test programs must apply an array of other nondestructive examination (NDE) tests to assure or infer the integrity of the overall cable system. Assessment of cable integrity is further complicated in many cases by vendors' use of dissimilar materials for jacket and insulation. Frequently the jacket will degrade more rapidly than the underlying insulation. Although this can serve as an early alert to cable damage, direct testing of the cable insulation without violating the protective jacket becomes problematic. This report addresses the range of bulk electrical NDE cable tests that are or could be practically implemented in a field-test situation, with a particular focus on frequency domain reflectometry (FDR).
    The FDR test method offers numerous advantages over many other bulk electrical tests. Two commercial FDR systems plus a laboratory vector network analyzer were used to test an array of aged and un-aged cables under identical conditions. Several conclusions are set forth, and a number of knowledge gaps are identified.

  16. Agreement among the Productivity Components of Eight Presenteeism Tests in a Sample of Health Care Workers.

    PubMed

    Thompson, Angus H; Waye, Arianna

    2018-06-01

    Presenteeism (reduced productivity at work) is thought to be responsible for large economic costs. Nevertheless, much of the research supporting this is based on self-report questionnaires that have not been adequately evaluated. To examine the level of agreement among leading tests of presenteeism and to determine the inter-relationship of the two productivity subcategories, amount and quality, within the context of construct validity and method variance. Just under 500 health care workers from an urban health area were asked to complete a questionnaire containing the productivity items from eight presenteeism instruments. The analysis included an examination of test intercorrelations, separately for amount and quality, supplemented by principal-component analyses to determine whether either construct could be described by a single factor. A multitest, multiconstruct analysis was performed on the four tests that assessed both amount and quality to test for the relative contributions of construct and method variance. A total of 137 questionnaires were completed. Agreement among tests was positive, but modest. Pearson r ranges were 0 to 0.64 (mean = 0.32) for Amount and 0.03 to 0.38 (mean = 0.25) for Quality. Further analysis suggested that agreement was influenced more by method variance than by the productivity constructs the tests were designed to measure. The results suggest that presenteeism tests do not accurately assess work performance. Given their importance in the determination of policy-relevant conclusions, attention needs to be given to test improvement in the context of criterion validity assessment. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  17. Product formulation for ohmic heating: blanching as a pretreatment method to improve uniformity in heating of solid-liquid food mixtures.

    PubMed

    Sarang, S; Sastry, S K; Gaines, J; Yang, T C S; Dunne, P

    2007-06-01

    The electrical conductivity of food components is critical to ohmic heating: components of different electrical conductivities heat at different rates. While equal electrical conductivities of all phases are desirable, real food products may behave differently. In the present study of chicken chow mein, consisting of a sauce and several solid components (celery, water chestnuts, mushrooms, bean sprouts, and chicken), the sauce was observed to be more conductive than all of the solid components over the measured temperature range. To improve heating uniformity, a blanching method was developed to increase the ionic content of the solid components. By blanching the different solid components in a highly conductive sauce at 100 degrees C for different lengths of time, it was possible to adjust their conductivity to that of the sauce. Chicken chow mein samples containing blanched particulates were compared with untreated samples with respect to ohmic heating uniformity at 60 Hz up to 140 degrees C. All components of the treated product containing blanched solids heated more uniformly than the untreated product. In sensory tests, three different formulations of the blanched product showed good quality attributes and overall acceptability, demonstrating the practical feasibility of the blanching protocol.

  18. Revisiting tests for neglected nonlinearity using artificial neural networks.

    PubMed

    Cho, Jin Seo; Ishida, Isao; White, Halbert

    2011-05-01

    Tests for regression neglected nonlinearity based on artificial neural networks (ANNs) have so far been studied by separately analyzing the two ways in which the null of regression linearity can hold. This implies that the asymptotic behavior of general ANN-based tests for neglected nonlinearity is still an open question. Here we analyze a convenient ANN-based quasi-likelihood ratio statistic for testing neglected nonlinearity, paying careful attention to both components of the null. We derive the asymptotic null distribution under each component separately and analyze their interaction. Somewhat remarkably, it turns out that the previously known asymptotic null distribution for the type 1 case still applies, but under somewhat stronger conditions than previously recognized. We present Monte Carlo experiments corroborating our theoretical results and showing that standard methods can yield misleading inference when our new, stronger regularity conditions are violated.

  19. Fracture toughness in Mode I (GIC) for ductile adhesives

    NASA Astrophysics Data System (ADS)

    Gálvez, P.; Carbas, RJC; Campilho, RDSG; Abenojar, J.; Martínez, MA; da Silva, LFM

    2017-05-01

    The work reported in this publication belongs to a project that seeks to replace welded joints with adhesive joints at stress-concentration nodes in bus structures. Fracture toughness in Mode I (GIC) has been measured for two different ductile adhesives, SikaTack Drive and SikaForce 7720. SikaTack Drive is a single-component polyurethane adhesive with high viscoelasticity (more than 100%), mainly used for car-glass joining; SikaForce 7720 is a two-component structural polyurethane adhesive. Experimental work was carried out using the Double Cantilever Beam (DCB) test, with two steel beams as adherends and adhesive thicknesses matching the problem posed in the project: 2 and 3 mm for SikaForce 7720 and SikaTack Drive, respectively. Three different methods have been used to obtain the Mode I fracture toughness (GIC) from the experimental DCB data for each adhesive: Corrected Beam Theory (CBT), the Compliance Calibration Method (CCM) and the Compliance Based Beam Method (CBBM). Four DCB specimens were tested for each adhesive. The dispersion of each GIC calculation method has been studied for each adhesive, as have the variations among the three methods.
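    Of the three data-reduction methods named above, Corrected Beam Theory (CBT) has a compact closed form, GIC = 3Pδ / (2b(a + |Δ|)); a minimal sketch under that standard formula (the numbers in the usage note are illustrative, not the paper's measurements):

```python
def gic_cbt(load_n, delta_m, width_m, crack_m, corr_m):
    """Mode I fracture toughness (J/m^2) by Corrected Beam Theory:
    G_IC = 3*P*delta / (2*b*(a + |Delta|)), with P the load, delta the
    opening displacement, b the width, a the crack length and Delta the
    crack-length correction (all in SI units)."""
    return 3.0 * load_n * delta_m / (2.0 * width_m * (crack_m + abs(corr_m)))
```

    For example, P = 200 N, δ = 4 mm, b = 25 mm, a = 50 mm and Δ = 5 mm give roughly 873 J/m².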

  20. Sustained prediction ability of net analyte preprocessing methods using reduced calibration sets. Theoretical and experimental study involving the spectrophotometric analysis of multicomponent mixtures.

    PubMed

    Goicoechea, H C; Olivieri, A C

    2001-07-01

    A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size for the simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted residual error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
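    The PRESS criterion used above for window selection is a leave-one-out statistic; a stdlib-only sketch for a univariate linear calibration (a simplified stand-in for the multivariate case, with purely illustrative data):

```python
def press(x, y):
    """Leave-one-out PRESS for a univariate linear calibration
    y = b0 + b1*x: refit without each sample, sum the squared
    prediction errors on the held-out samples."""
    n = len(x)
    total = 0.0
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx, my = sum(xs) / (n - 1), sum(ys) / (n - 1)
        b1 = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
              / sum((a - mx) ** 2 for a in xs))
        b0 = my - b1 * mx
        total += (y[i] - (b0 + b1 * x[i])) ** 2
    return total
```

    In the paper this quantity is evaluated over a moving spectral window and the window with the smallest PRESS is retained.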

  1. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application of optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are performed by blockwise bi-directional two-dimensional principal component analysis of the palmprint images to extract feature matrixes, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments carried out on a palmprint database show that this method is more robust against position and illumination changes in palmprint images and achieves a higher palmprint recognition rate.
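    The final step, classifying by reconstruction residual against the dictionary, can be sketched in a heavily simplified one-atom-per-class form (the labels and feature vectors are illustrative; the paper's dictionary holds many feature matrixes per class and is solved by orthogonal matching pursuit):

```python
def classify_by_residual(y, dictionary):
    """Assign the test vector y to the class whose (single) dictionary
    atom reconstructs it with the smallest residual norm."""
    best_label, best_res = None, float("inf")
    for label, atom in dictionary.items():
        # least-squares coefficient of y on this atom
        scale = sum(v * a for v, a in zip(y, atom)) / sum(a * a for a in atom)
        # norm of the reconstruction residual
        res = sum((v - scale * a) ** 2 for v, a in zip(y, atom)) ** 0.5
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```

    The same argmin-over-residuals rule carries over when each class contributes a whole group of atoms.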

  2. Dual laser optical system and method for studying fluid flow

    NASA Technical Reports Server (NTRS)

    Owen, R. B.; Witherow, W. K. (Inventor)

    1983-01-01

    A dual laser optical system and method is disclosed for visualization of phenomena in transparent substances which induce refractive index gradients, such as fluid flow and pressure and temperature gradients in fluids and gases. Two images representing mutually perpendicular components of the refractive index gradients may be viewed simultaneously on a screen. Two lasers having wavelengths in the visible range but separated by about 1000 angstroms are used to provide beams which are collimated into a single beam containing components of the two wavelengths. The collimated beam is passed through a test volume of the transparent substance, then separated into its two wavelength components and focused onto a pair of mutually perpendicular knife edges to produce and project the images onto the screen.

  3. How many fish in a tank? Constructing an automated fish counting system by using PTV analysis

    NASA Astrophysics Data System (ADS)

    Abe, S.; Takagi, T.; Takehara, K.; Kimura, N.; Hiraishi, T.; Komeyama, K.; Torisawa, S.; Asaumi, S.

    2017-02-01

    Because escape from a net cage and mortality are constant problems in fish farming, health control and management of facilities are important in aquaculture. In particular, the development of an accurate fish counting system has been strongly desired by the Pacific bluefin tuna farming industry owing to the high market value of these fish. The current fish counting method, which relies on human counting, has poor accuracy; moreover, it is cumbersome because the aquaculture net cage is so large that fish can only be counted when they are moved to another net cage. Therefore, we have developed an automated fish counting system by applying particle tracking velocimetry (PTV) analysis to a shoal of swimming fish inside a net cage. In essence, we treated the swimming fish as tracer particles and estimated the number of fish by analyzing the corresponding motion vectors. The proposed fish counting system comprises two main components, image processing and motion analysis, where the image-processing component extracts the foreground and the motion-analysis component traces each individual's motion. In this study, we developed a Region Extraction and Centroid Computation (RECC) method for the first component and a Kalman filter and Chi-square (KC) test for the second. To evaluate the efficiency of our method, we constructed a closed system, placed an underwater video camera with a spherical curved lens at the bottom of the tank, and recorded a 360° view of a swimming school of Japanese rice fish (Oryzias latipes). Our study showed that almost all fish could be extracted by the RECC method and that their motion vectors could be calculated by the KC test. The recognition rate was approximately 90% when more than 180 individuals were observed within the frame of the video camera. These results suggest that the presented method has potential application as a fish counting system for industrial aquaculture.
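    A scalar sketch of the KC idea, assuming a one-dimensional track state: the chi-square test gates the innovation before the Kalman update, so a detection that jumps too far is treated as belonging to a different fish (3.84 is the 5% critical value for one degree of freedom; all names are illustrative, not the paper's implementation):

```python
def kalman_gate(x_pred, p_pred, z, r, chi2_thresh=3.84):
    """Scalar Kalman update with a chi-square gate on the innovation.

    Returns (accepted, x_new, p_new); a rejected measurement leaves the
    track state unchanged."""
    nu = z - x_pred                  # innovation
    s = p_pred + r                   # innovation variance
    if nu * nu / s > chi2_thresh:    # chi-square test, 1 dof, 5% level
        return False, x_pred, p_pred
    k = p_pred / s                   # Kalman gain
    return True, x_pred + k * nu, (1.0 - k) * p_pred
```

    A full tracker would run this per coordinate (or with matrix state) for every centroid produced by the region-extraction step.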

  4. A DATA-DRIVEN MODEL FOR SPECTRA: FINDING DOUBLE REDSHIFTS IN THE SLOAN DIGITAL SKY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsalmantza, P.; Hogg, David W., E-mail: vivitsal@mpia.de

    2012-07-10

    We present a data-driven method, heteroscedastic matrix factorization (a kind of probabilistic factor analysis), for modeling or performing dimensionality reduction on observed spectra or other high-dimensional data with known but non-uniform observational uncertainties. The method uses an iterative inverse-variance-weighted least-squares minimization procedure to generate a best set of basis functions. The method is similar to principal components analysis (PCA), but with the substantial advantage that it uses measurement uncertainties in a responsible way and accounts naturally for poorly measured and missing data; it models the variance in the noise-deconvolved data space. A regularization can be applied, in the form of a smoothness prior (inspired by Gaussian processes) or a non-negative constraint, without making the method prohibitively slow. Because the method optimizes a justified scalar (related to the likelihood), the basis provides a better fit to the data in a probabilistic sense than any PCA basis. We test the method on Sloan Digital Sky Survey (SDSS) spectra, concentrating on spectra known to contain two redshift components: these are spectra of gravitational lens candidates and massive black hole binaries. We apply a hypothesis test to compare one-redshift and two-redshift models for these spectra, utilizing the data-driven model trained on a random subset of all SDSS spectra. This test confirms 129 of the 131 lens candidates in our sample and all of the known binary candidates, and turns up very few false positives.
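    The alternating inverse-variance-weighted least-squares idea can be sketched for a single basis function (a heavily simplified rank-1 version of the paper's factorization; missing data enter as zero weights, and all names are illustrative):

```python
def rank1_hmf(x, w, iters=50):
    """Fit x[i][j] ~ a[i] * g[j] by alternating inverse-variance-weighted
    least squares; w[i][j] is the inverse variance (0 for missing data)."""
    n, m = len(x), len(x[0])
    g = [1.0] * m
    a = [0.0] * n
    for _ in range(iters):
        for i in range(n):   # update coefficients at fixed basis
            den = sum(w[i][j] * g[j] * g[j] for j in range(m))
            a[i] = sum(w[i][j] * x[i][j] * g[j] for j in range(m)) / den if den else 0.0
        for j in range(m):   # update basis at fixed coefficients
            den = sum(w[i][j] * a[i] * a[i] for i in range(n))
            g[j] = sum(w[i][j] * x[i][j] * a[i] for i in range(n)) / den if den else 0.0
    return a, g
```

    The full method iterates the same weighted updates over many basis functions, with optional smoothness or non-negativity regularization.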

  5. Repeated decompositions reveal the stability of infomax decomposition of fMRI data

    PubMed Central

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2010-01-01

    In this study, we decomposed 12 fMRI data sets from six subjects, 101 times each, using the infomax algorithm. The first decomposition was taken as a reference; the others were used to form a 100-by-100 component matrix. Equivalence relations between components in this matrix, defined by maximum spatial correlation with the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even under vector-angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
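    The matching step, pairing repeat components to reference components by maximum absolute spatial correlation, can be sketched with a brute-force assignment that is equivalent to the Hungarian method for small matrices (the correlation matrix is illustrative):

```python
from itertools import permutations

def match_components(corr):
    """Pair reference components (rows) with repeat components (columns)
    so the total absolute spatial correlation is maximal; brute force
    over permutations, feasible only for small matrices."""
    n = len(corr)
    best_perm, best_score = None, -1.0
    for perm in permutations(range(n)):
        score = sum(abs(corr[i][perm[i]]) for i in range(n))
        if score > best_score:
            best_perm, best_score = list(perm), score
    return best_perm, best_score
```

    For the 100-by-100 case in the study, a polynomial-time Hungarian solver (e.g. scipy.optimize.linear_sum_assignment) replaces the brute force.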

  6. The effectiveness of formative assessment with understanding by design (UbD) stages in forming habits of mind in prospective teachers

    NASA Astrophysics Data System (ADS)

    Gloria, R. Y.; Sudarmin, S.; Wiyanto; Indriyanti, D. R.

    2018-03-01

    Habits of mind are intelligent thinking dispositions that every individual needs to have, and effort is needed to form them as expected. A behavior can be formed by continuous practice; therefore students' habits of mind can also be formed and trained. One strategy that can be used to encourage the formation of habits of mind is formative assessment with the stages of UbD (Understanding by Design), and a study is needed to prove it. This study aims to determine the contribution of formative assessment to the habits of mind of prospective teachers. The method used is a quantitative method with a quasi-experimental design. To determine the effectiveness of formative assessment with UbD stages on the formation of habits of mind, a correlation test and regression analysis were conducted on a formative assessment questionnaire consisting of three components, i.e. feedback, peer assessment and self assessment, and on habits of mind. The results show that of the three components of the formative assessment, only the feedback component shows no correlation with students' habits of mind (r = 0.323), while the peer assessment (r = 0.732) and self assessment (r = 0.625) components both show correlations. From the regression test, the components of the formative assessment together contributed 57.1% to habits of mind. It can be concluded that formative assessment with UbD stages is effective and contributes to forming students' habits of mind; the components that contributed the most are peer assessment and self assessment, and the greatest contribution is to the "thinking interdependently" category.

  7. Measurement of fracture stress for 6000-series extruded aluminum alloy tube using multiaxial tube expansion testing method

    NASA Astrophysics Data System (ADS)

    Nagai, Keisuke; Kuwabara, Toshihiko; Ilinich, Andrey; Luckey, George

    2018-05-01

    A servo-controlled tension-internal pressure testing machine with an optical 3D digital image correlation (DIC) system is used to measure the multiaxial deformation behavior of an extruded aluminum alloy tube over a strain range from initial yield to fracture. The outer diameter of the test sample is 50.8 mm and the wall thickness is 2.8 mm. Nine linear stress paths are applied to the specimens: σɸ (axial true stress component) : σθ (circumferential true stress component) = 1:0, 4:1, 2:1, 4:3, 1:1, 3:4, 1:2, 1:4, and 0:1. The equivalent strain rate is held approximately constant at 5 × 10⁻⁴ s⁻¹. The forming limit curve (FLC) and the forming limit stress curve (FLSC) are also measured. Moreover, the average true stress components inside the localized necking area are determined for each specimen from the thickness strain data for that area and the geometry of the fracture surface.

  8. Preliminary test results in support of integrated EPP and SMT design methods development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yanli; Jetter, Robert I.; Sham, T. -L.

    2016-02-09

    The proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology consists of incorporating an SMT data-based approach for creep-fatigue damage evaluation into the EPP methodology to avoid using the creep-fatigue interaction diagram (the D diagram) and to minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed code rules and to verify their applicability, a series of thermomechanical tests have been initiated. One test concept, the Simplified Model Test (SMT), takes into account the stress and strain redistribution in real structures by including representative follow-up characteristics in the test specimen. The second test concept is the two-bar thermal ratcheting test with cyclic loading at high temperatures, using specimens representing key features of potential component designs. This report summarizes the previous SMT results on Alloy 617, SS316H and SS304H and presents the recent development of the SMT approach on Alloy 617. These SMT specimen data are also representative of component loading conditions and have been used as part of the verification of the proposed integrated EPP and SMT design methods development. The previous two-bar thermal ratcheting test results on Alloy 617 and SS316H are also summarized, and new results from two-bar thermal ratcheting tests on SS316H at a lower temperature range are reported.

  9. Rock-Magnetic Method for Post Nuclear Detonation Diagnostics

    NASA Astrophysics Data System (ADS)

    Englert, J.; Petrosky, J.; Bailey, W.; Watts, D. R.; Tauxe, L.; Heger, A. S.

    2011-12-01

    A magnetic signature characteristic of a Nuclear Electromagnetic Pulse (NEMP) may still be detectable near the sites of atmospheric nuclear tests conducted at what is now the Nevada National Security Site. This signature is due to a secondary magnetization component of the natural remanent magnetization of material containing traces of ferromagnetic particles that have been exposed to a strong pulse of magnetic field. We apply a rock-magnetic method introduced by Verrier et al. (2002), previously tested on samples exposed to artificial lightning, to samples of rock and building materials (e.g. bricks, concrete) retrieved from several above-ground nuclear test sites. The results of magnetization measurements are compared to NEMP simulations and historic test measurements.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, L; Huang, S; Kang, M

    Purpose: The purpose of this manuscript is to demonstrate the utility of a comprehensive test pattern in validating calculation models of the low-dose tails of proton pencil beam scanning (PBS) spots. Such a pattern has been used previously for quality assurance purposes to assess spot shape and location, and for determining monitor units. Methods: In this study, a scintillation detector was used to measure the test pattern in air at isocenter for two proton beam energies (115 and 225 MeV) of two IBA universal nozzles (UN). Planar measurements were compared with calculated dose distribution based on the weighted superposition of spot profiles previously measured using a pair-magnification method. Results: Including the halo component below 1% of the central dose is shown to improve the gamma-map comparison between calculation and measurement from 94.9% to 98.4% using 2 mm/2% criteria for the 115 MeV proton beam of UN #1. In contrast, including the halo component below 1% of the central dose does not improve the gamma agreement for the 115 MeV proton beam of UN #2, due to the cutoff of the halo component at off-axis locations. When location-dependent spot profiles are used for calculation instead of spot profiles at central axis, the gamma agreement is improved from 98.0% to 99.5% using 2 mm/2% criteria. The cutoff of the halo component is smaller at higher energies, and is not observable for the 225 MeV proton beam for UN #2. Conclusion: In conclusion, the use of a comprehensive test pattern can facilitate the validation of the halo component of proton PBS spots at off axis locations. The cutoff of the halo component should be taken into consideration for large fields or PBS systems that intend to trim spot profiles using apertures. This work was supported by the US Army Medical Research and Materiel Command under Contract Agreement No. DAMD17-W81XWH-07-2-0121 and W81XWH-09-2-0174.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Liyong, E-mail: linl@uphs.upenn.edu; Huang, Sheng; Kang, Minglei

    Purpose: The purpose of this paper is to demonstrate the utility of a comprehensive test pattern in validating calculation models that include the halo component (low-dose tails) of proton pencil beam scanning (PBS) spots. Such a pattern has been used previously for quality assurance purposes to assess spot shape, position, and dose. Methods: In this study, a scintillation detector was used to measure the test pattern in air at isocenter for two proton beam energies (115 and 225 MeV) of two IBA universal nozzles (UN #1 and UN #2). Planar measurements were compared with calculated dose distributions based on the weighted superposition of location-independent (UN #1) or location-dependent (UN #2) spot profiles, previously measured using a pair-magnification method and between two nozzles. Results: Including the halo component below 1% of the central dose is shown to improve the gamma-map comparison between calculation and measurement from 94.9% to 98.4% using 2 mm/2% criteria for the 115 MeV proton beam of UN #1. In contrast, including the halo component below 1% of the central dose does not improve the gamma agreement for the 115 MeV proton beam of UN #2, due to the cutoff of the halo component at off-axis locations. When location-dependent spot profiles are used for calculation instead of spot profiles at central axis, the gamma agreement is improved from 98.0% to 99.5% using 2 mm/2% criteria. The two nozzles clearly have different characteristics, as a direct comparison of measured data shows a passing rate of 89.7% for the 115 MeV proton beam. At 225 MeV, the corresponding gamma comparisons agree better between measurement and calculation, and between measurements in the two nozzles. Conclusions: In addition to confirming the primary component of individual PBS spot profiles, a comprehensive test pattern is useful for the validation of the halo component at off-axis locations, especially for low energy protons.
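    The 2 mm/2% gamma comparison used in both reports combines a distance-to-agreement term and a dose-difference term; a one-dimensional sketch, where a point passes if the returned value is at most 1 (the profile values are illustrative):

```python
def gamma_index(x_eval, d_eval, ref, dta=2.0, dd=0.02):
    """1-D gamma index of one evaluated point against a reference
    profile [(position_mm, relative_dose), ...]; dta in mm, dd as a
    fraction of the reference dose."""
    best = float("inf")
    for x_ref, d_ref in ref:
        g = ((x_eval - x_ref) / dta) ** 2 + ((d_eval - d_ref) / dd) ** 2
        best = min(best, g)
    return best ** 0.5
```

    The reported passing rates (e.g. 98.4%) are the fraction of evaluated points whose gamma value is at most 1; 2-D planar measurements extend the search over both spatial coordinates.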

  12. Above-knee prosthesis design based on fatigue life using finite element method and design of experiment.

    PubMed

    Phanphet, Suwattanarwong; Dechjarern, Surangsee; Jomjanyong, Sermkiat

    2017-05-01

    The main objective of this work is to improve the standard of the existing design of the knee prosthesis developed by Thailand's Prostheses Foundation of Her Royal Highness The Princess Mother. Experimental structural tests of the existing design, based on ISO 10328, showed that a few components failed by fatigue under normal cyclic loading below the required number of cycles. Finite element (FE) simulations of the structural tests on the knee prosthesis were carried out. Fatigue life predictions of the knee component materials were modeled using Morrow's approach. The fatigue life prediction based on the FE model was validated against the corresponding structural test, and the results agreed well. New designs of the failed components were studied using a design of experiments approach and finite element analysis of the ISO 10328 structural test of knee prostheses under two separate loading cases. Under ultimate loading, the peak von Mises stress in the knee prosthesis must be less than the yield strength of the component's material, and the total knee deflection must be lower than 2.5 mm. The fatigue life prediction of all knee components must exceed 3,000,000 cycles under normal cyclic loading. The design parameters are the thickness of the joint bars, the diameter of the lower connector and the thickness of the absorber-stopper. The optimized knee prosthesis design meeting all the requirements was recommended. An experimental ISO 10328 structural test of the fabricated knee prosthesis based on the optimized design confirmed the finite element prediction. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
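    Morrow's strain-life relation referenced above, Δε/2 = ((σ'f − σm)/E)(2Nf)^b + ε'f (2Nf)^c, can be solved for the reversals to failure by bisection; a sketch with generic steel-like constants for illustration (not the prosthesis material data):

```python
def morrow_life(strain_amp, sf, ef, b, c, e_mod, mean_stress=0.0):
    """Reversals to failure 2Nf from Morrow's mean-stress-corrected
    strain-life equation, solved by log-space bisection (the left-hand
    side decreases monotonically with life since b and c are negative)."""
    def resid(two_n):
        return ((sf - mean_stress) / e_mod) * two_n ** b + ef * two_n ** c - strain_amp
    lo, hi = 1.0, 1e9
    for _ in range(200):
        mid = (lo * hi) ** 0.5       # geometric midpoint
        if resid(mid) > 0.0:         # strain capacity above target: life is longer
            lo = mid
        else:
            hi = mid
    return lo
```

    Dividing the resulting 2Nf by two gives the cycle count compared against the 3,000,000-cycle requirement.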

  13. Radioactive nondestructive test method

    NASA Technical Reports Server (NTRS)

    Obrien, J. R.; Pullen, K. E.

    1971-01-01

    Various radioisotope techniques were used as diagnostic tools for determining the performance of spacecraft propulsion feed system elements. Applications were studied in four tasks. The first two required experimental testing involving the propellant liquid oxygen difluoride (OF2): the neutron activation analysis of dissolved or suspended metals, and the use of radioactive tracers to evaluate the probability of constrictions in passive components (orifices and filters) becoming clogged by matter dissolved or suspended in the OF2. The other tasks were an appraisal of the applicability of radioisotope techniques to problems arising from the exposure of components to liquid/gas combinations, and an assessment of the applicability of the techniques to other propellants.

  14. Design, fabrication, testing, and delivery of a solar energy collector system for residential heating and cooling

    NASA Technical Reports Server (NTRS)

    Holland, T. H.; Borzoni, J. T.

    1976-01-01

    A low cost flat plate solar energy collector was designed for the heating and cooling of residential buildings. The system meets specified performance requirements, at the desired system operating levels, for a useful life of 15 to 20 years, at minimum cost and uses state-of-the-art materials and technology. The rationale for the design method was based on identifying possible material candidates for various collector components and then selecting the components which best meet the solar collector design requirements. The criteria used to eliminate certain materials were: performance and durability test results, cost analysis, and prior solar collector fabrication experience.

  15. Covariate analysis of bivariate survival data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators, which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values, where the expected values are determined from a specified parametric distribution. The model estimation is based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models are compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
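    In the parametric approach, each censored point is replaced by its conditional expectation under an assumed distribution; a minimal sketch for the exponential case, where memorylessness gives E[T | T > c] = c + 1/λ (the data and rate are illustrative, and the full method does this per component with estimated parameters):

```python
def impute_censored(times, censored, rate):
    """Parametric Buckley-James-style imputation: each right-censored
    time c becomes E[T | T > c] = c + 1/rate under an assumed
    exponential(rate) model; uncensored times are kept as observed."""
    return [t + 1.0 / rate if is_cens else t
            for t, is_cens in zip(times, censored)]
```

    The regression is then refit on the revised data set of observed and imputed times, and the procedure iterates until the parameter estimates stabilize.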

  16. A proposal for refining the forced swim test in Swiss mice.

    PubMed

    Costa, Ana Paula Ramos; Vieira, Cintia; Bohner, Lauren O L; Silva, Cristiane Felisbino; Santos, Evelyn Cristina da Silva; De Lima, Thereza Christina Monteiro; Lino-de-Oliveira, Cilene

    2013-08-01

    The forced swim test (FST) is a preclinical test for the screening of antidepressants based on rat or mouse behaviours, and it is also sensitive to stimulants of motor activity. This work standardised and validated a method to register the active and passive behaviours of Swiss mice during the FST in order to strengthen the specificity of the test. Adult male Swiss mice were subjected to the FST for 6 min without any treatment or after intraperitoneal injection of saline (0.1 ml/10 g), antidepressants (imipramine, desipramine, or fluoxetine, 30 mg/kg) or stimulants (caffeine, 30 mg/kg, or apomorphine, 10 mg/kg). The latency, frequency and duration of behaviours (immobility, swimming, and climbing) were scored and summarised in bins of 6, 4, 2 or 1 min. Parameters were first analysed using Principal Components Analysis, generating components putatively related to antidepressant effects (first and second) or to stimulant effects (third). Antidepressants and stimulants affected the parameters grouped into all components similarly. Effects of stimulants on climbing were better distinguished from those of antidepressants when analysed during the last 4 min of the FST. Surprisingly, the effects of antidepressants on immobility were better distinguished from saline when parameters were scored in the first 2 min. The method proposed here is able to distinguish antidepressants from stimulants of motor activity using Swiss mice in the FST. This refinement should reduce the number of mice used in the preclinical evaluation of antidepressants. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Standardization of shape memory alloy test methods toward certification of aerospace applications

    NASA Astrophysics Data System (ADS)

    Hartl, D. J.; Mabe, J. H.; Benafan, O.; Coda, A.; Conduit, B.; Padan, R.; Van Doren, B.

    2015-08-01

    The response of shape memory alloy (SMA) components employed as actuators has enabled a number of adaptable aero-structural solutions. However, there are currently no industry or government-accepted standardized test methods for SMA materials when used as actuators and their transition to commercialization and production has been hindered. This brief fast track communication introduces to the community a recently initiated collaborative and pre-competitive SMA specification and standardization effort that is expected to deliver the first ever regulatory agency-accepted material specification and test standards for SMA as employed as actuators for commercial and military aviation applications. In the first phase of this effort, described herein, the team is working to review past efforts and deliver a set of agreed-upon properties to be included in future material certification specifications as well as the associated experiments needed to obtain them in a consistent manner. Essential for the success of this project is the participation and input from a number of organizations and individuals, including engineers and designers working in materials and processing development, application design, SMA component fabrication, and testing at the material, component, and system level. Going forward, strong consensus among this diverse body of participants and the SMA research community at large is needed to advance standardization concepts for universal adoption by the greater aerospace community and especially regulatory bodies. It is expected that the development and release of public standards will be done in collaboration with an established standards development organization.

  18. The Effect of Simulated Flash-Heat Pasteurization on Immune Components of Human Milk

    PubMed Central

    Daniels, Brodie; Schmidt, Stefan; King, Tracy; Israel-Ballard, Kiersten; Amundson Mansen, Kimberly; Coutsoudis, Anna

    2017-01-01

    A pasteurization temperature monitoring system has been designed using FoneAstra, a cellphone-based networked sensing system, to monitor simulated flash-heat (FH) pasteurization. This study compared the effect of the FoneAstra FH (F-FH) method with the Sterifeed Holder method currently used by human milk banks on human milk immune components (immunoglobulin A (IgA), lactoferrin activity, lysozyme activity, interleukin (IL)-8 and IL-10). Donor milk samples (N = 50) were obtained from a human milk bank, and pasteurized. Concentrations of IgA, IL-8, IL-10, lysozyme activity and lactoferrin activity were compared to their controls using the Student’s t-test. Both methods demonstrated no destruction of interleukins. While the Holder method retained all lysozyme activity, the F-FH method only retained 78.4% activity (p < 0.0001), and both methods showed a decrease in lactoferrin activity (71.1% Holder vs. 38.6% F-FH; p < 0.0001) and a decrease in the retention of total IgA (78.9% Holder vs. 25.2% F-FH; p < 0.0001). Despite increased destruction of immune components compared to Holder pasteurization, the benefits of F-FH in terms of its low cost, feasibility, safety and retention of immune components make it a valuable resource in low-income countries for pasteurizing human milk, potentially saving infants’ lives. PMID:28241418
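
    The record's comparison of each pasteurized sample against its own unpasteurized control is a textbook paired (dependent-samples) t-test. A minimal stdlib sketch, using hypothetical lysozyme-activity values (not the study's data) and the tabulated two-sided critical value for df = 5:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical lysozyme activities for six paired aliquots
# (pre- and post-pasteurization); these are NOT the study's values.
pre  = [10.2, 9.8, 11.1, 10.5, 9.9, 10.8]
post = [ 8.1, 7.9,  8.6,  8.3, 7.7,  8.5]

diffs = [a - b for a, b in zip(pre, post)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t statistic

retention = 100.0 * sum(post) / sum(pre)   # percent activity retained
# Tabulated two-sided critical value for alpha = 0.05, df = 5:
significant = abs(t) > 2.571
```

A library routine such as `scipy.stats.ttest_rel` would also report the exact p-value; the stdlib version above only compares against a fixed critical value.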

  19. Chemical composition of fingerprints for gender determination.

    PubMed

    Asano, Keiji G; Bayne, Charles K; Horsman, Katie M; Buchanan, Michelle V

    2002-07-01

    This work investigates the chemical nature of fingerprints to ascertain whether differences in chemical composition or the existence of chemical markers can be used to determine personal traits, such as age, gender, and personal habits. This type of information could be useful for reducing the pool of potential suspects in criminal investigations when latent fingerprints are unsuitable for comparison by traditional methods. Fingertip residue deposited onto a bead was extracted with a solvent such as chloroform. Samples were analyzed by gas chromatography/mass spectrometry (GC/MS). The chemical components identified include fatty acids, long chain fatty acid esters, cholesterol and squalene. The area ratios of ten selected components relative to squalene were calculated for a small preliminary experiment, which showed a slight gender difference for three of these components. However, when the study was repeated as a larger, statistically designed experiment, no significant differences between genders were detected for any of the component ratios. The multivariate Hotelling's T2 test, which tested all ten component ratios simultaneously, also showed no gender differences at the 5% significance level.
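
    Hotelling's T2 generalizes the two-sample t-test to several variables at once, which is how the record tests all ten component ratios simultaneously. A sketch of the two-sample form (the data below are synthetic stand-ins, not the fingerprint measurements):

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 statistic and its F approximation.

    Rows are samples, columns are variables (e.g. component ratios).
    """
    n1, p = x.shape
    n2 = y.shape[0]
    d = x.mean(axis=0) - y.mean(axis=0)
    # Pooled sample covariance of the two groups.
    s = ((n1 - 1) * np.cov(x, rowvar=False)
         + (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = n1 * n2 / (n1 + n2) * d @ np.linalg.solve(s, d)
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2  # ~ F(p, n1+n2-p-1)
    return t2, f

rng = np.random.default_rng(0)
males   = rng.normal(1.0, 0.1, size=(12, 3))  # synthetic ratio vectors
females = rng.normal(1.0, 0.1, size=(12, 3))
t2, f = hotelling_t2(males, females)
```

The F transformation allows the statistic to be compared against standard F tables at the chosen significance level.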

  20. Self-testing for HIV: a new option for HIV prevention?

    PubMed

    Spielberg, Freya; Levine, Ruth O; Weaver, Marcia

    2004-10-01

    Self-testing has the potential to be an innovative component of community-wide HIV-prevention strategies. This testing method could serve populations who do not have access to standard voluntary counselling and testing services, or who, because of privacy concerns, stigma, transport costs, or other barriers, do not use facility-based standard HIV testing. This paper reviews recent research on the acceptability, feasibility, and cost of rapid testing and home-specimen collection for HIV, and suggests that self-testing may be another important strategy for diagnosing HIV infection. Several research questions are posed that should be answered before self-testing is realised.

  1. A complementation assay for in vivo protein structure/function analysis in Physcomitrella patens (Funariaceae)

    DOE PAGES

    Scavuzzo-Duggan, Tess R.; Chaves, Arielle M.; Roberts, Alison W.

    2015-07-14

    Here, a method for rapid in vivo functional analysis of engineered proteins was developed using Physcomitrella patens. A complementation assay was designed for testing structure/function relationships in cellulose synthase (CESA) proteins. The components of the assay include (1) construction of test vectors that drive expression of epitope-tagged PpCESA5 carrying engineered mutations, (2) transformation of a ppcesa5 knockout line that fails to produce gametophores with test and control vectors, (3) scoring the stable transformants for gametophore production, (4) statistical analysis comparing complementation rates for test vectors to positive and negative control vectors, and (5) analysis of transgenic protein expression by Western blotting. The assay distinguished mutations that generate fully functional, nonfunctional, and partially functional proteins. In conclusion, compared with existing methods for in vivo testing of protein function, this complementation assay provides a rapid method for investigating protein structure/function relationships in plants.
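
    Step (4), comparing complementation rates between a test vector and a control vector, is a comparison of two proportions with small counts, for which Fisher's exact test is a usual choice. A one-sided stdlib sketch (the counts are invented for illustration; the record does not give its sample sizes):

```python
from math import comb

def fisher_one_sided(x1, n1, x2, n2):
    """P(group-1 successes <= x1) under the hypergeometric null
    with both table margins fixed (one-sided Fisher exact test)."""
    k_total = x1 + x2
    lo = max(0, k_total - n2)
    total = comb(n1 + n2, k_total)
    return sum(comb(n1, k) * comb(n2, k_total - k)
               for k in range(lo, x1 + 1)) / total

# Hypothetical counts: an engineered variant complements 2 of 20
# transformants, the positive control 15 of 20.
p = fisher_one_sided(2, 20, 15, 20)
```

A small p here would indicate the engineered variant complements at a significantly lower rate than the positive control, i.e. the mutation impairs function.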

  2. Surface cleanliness of fluid systems, specification for

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This specification establishes surface cleanliness levels, test methods, cleaning and packaging requirements, and protection and inspection procedures for determining surface cleanliness. These surfaces pertain to aerospace parts, components, assemblies, subsystems, and systems in contact with any fluid medium.

  3. Laser transit anemometer software development program

    NASA Technical Reports Server (NTRS)

    Abbiss, John B.

    1989-01-01

    Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
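
    Under the two-dimensional Gaussian PDF assumption, the mean velocities, standard deviations, and correlation coefficient are the first two moments of the velocity-space data. The sketch below estimates them from the sample centroid and covariance, a stand-in for the paper's ellipse-fitting step; the flow parameters are invented for illustration:

```python
import numpy as np

# Synthetic velocity-space data: correlated (u, v) samples drawn from an
# assumed 2-D Gaussian flow PDF (parameters chosen for illustration).
rng = np.random.default_rng(1)
true_mean = np.array([10.0, 2.0])       # mean velocity components (m/s)
true_cov = np.array([[1.00, 0.30],      # sigma_u = 1.0, sigma_v = 0.5,
                     [0.30, 0.25]])     # rho = 0.30 / (1.0 * 0.5) = 0.6
uv = rng.multivariate_normal(true_mean, true_cov, size=20000)

mean_uv = uv.mean(axis=0)               # data-ensemble centroid
cov = np.cov(uv, rowvar=False)
sigma_u, sigma_v = np.sqrt(np.diag(cov))  # velocity standard deviations
rho = cov[0, 1] / (sigma_u * sigma_v)     # correlation coefficient
```

With 20,000 samples the moment estimates land within a few percent of the true parameters, which is the kind of check the paper's simulated test matrix performs.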

  4. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
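
    PCA, one of the two extraction methods tested, projects each ROI feature vector onto the directions of greatest variance. A minimal numpy sketch using a synthetic feature matrix (the ATR system's actual pipeline is not reproduced here):

```python
import numpy as np

def pca_features(X, k):
    """Return the top-k principal-component scores and the components.

    X has one row per ROI feature vector; columns are raw features.
    """
    Xc = X - X.mean(axis=0)                  # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]             # scores, principal directions

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))               # 200 synthetic ROIs, 32 features
scores, components = pca_features(X, k=8)    # reduced 8-D feature vectors
```

The reduced `scores` matrix is what a downstream SVM or neural-net classifier would be trained on in a pipeline of this shape.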

  5. Feature extraction and selection strategies for automated target recognition

    NASA Astrophysics Data System (ADS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-04-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  6. Embedding of multidimensional time-dependent observations.

    PubMed

    Barnard, J P; Aldrich, C; Gerber, M

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
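
    The first step of the proposed method, the Takens time-delay embedding, can be sketched in a few lines; the ICA rotation into linearly independent phase variables would then be applied to the embedded matrix (not shown). The delay and dimension values here are illustrative:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens time-delay embedding of a scalar series.

    Row t of the result is (x[t], x[t+tau], ..., x[t+(dim-1)*tau]).
    """
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

x = np.sin(0.1 * np.arange(500))     # toy observable of a dynamic process
emb = delay_embed(x, dim=3, tau=7)   # illustrative embedding parameters
```

Each row of `emb` is one reconstructed phase-space point; an ICA step then seeks a linear transform of these columns into statistically independent coordinates.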

  7. Embedding of multidimensional time-dependent observations

    NASA Astrophysics Data System (ADS)

    Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.

  8. Low-cycle fatigue testing methods

    NASA Technical Reports Server (NTRS)

    Lieurade, H. P.

    1978-01-01

    Good design of highly stressed mechanical components requires accurate knowledge of the service behavior of materials. The main methods for solving designers' problems are: determination of the mechanical properties of the material after cyclic stabilization; plotting of resistance-to-plastic-deformation curves; assessment of the effect of temperature on low-cycle fatigue life; and simulation of the behavior of notched parts.

  9. Field efficiency and bias of snag inventory methods

    Treesearch

    Robert S. Kenning; Mark J. Ducey; John C. Brissette; Jeffery H. Gove

    2005-01-01

    Snags and cavity trees are important components of forests, but can be difficult to inventory precisely and are not always included in inventories because of limited resources. We tested the application of N-tree distance sampling as a time-saving snag sampling method and compared N-tree distance sampling to fixed-area sampling and modified horizontal line sampling in...

  10. Methods and benefits of experimental seismic evaluation of nuclear power plants. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-07-01

    This study reviews experimental techniques, instrumentation requirements, safety considerations, and benefits of performing vibration tests on nuclear power plant containments and internal components. The emphasis is on testing to improve seismic structural models. Techniques for identification of resonant frequencies, damping, and mode shapes are discussed. The benefits of testing with regard to increased damping and more accurate computer models are outlined. A test plan, schedule and budget are presented for a typical PWR nuclear power plant.

  11. Properties of Syntactic Foam for Simulation of Mechanical Insults.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, Neal Benson; Haulenbeek, Kimberly K.; Spletzer, Matthew A.

    Syntactic foam encapsulation protects sensitive components. The energy mitigated by the foam is calculated with numerical simulations. The properties of a syntactic foam consisting of a mixture of an epoxy-rubber adduct and glass microballoons are obtained from published literature and test results. The conditions and outcomes of the tests are discussed. The method for converting published properties and test results to input for finite element models is described. Simulations of the test conditions are performed to validate the inputs.

  12. Working memory subsystems are impaired in chronic drug dependents.

    PubMed

    Soliman, Abdrabo Moghazy; Gadelrab, Hesham Fathy; Elfar, Rania Mohamed

    2013-06-01

    A large body of research has investigated substance dependence and working memory (WM) resources, yet no prior study has used a comprehensive test battery to examine the impact of chronic drug dependence on WM as a multi-component system. This study examined the efficiency of several WM components in participants who were chronic drug dependents. In addition, the functioning of the four WM components was compared among dependents of various types of drugs. In total, 128 chronic drug dependents participated in this study. Their average age was 38.48 years, and they were classified into four drug-dependence groups. Chronic drug dependents were compared with a 36-participant control group that had a mean age of 37.6 years. A WM test battery that comprised eight tests and that assessed each of four WM components was administered to each participant. Compared with the control group, all four groups of drug dependents had significantly poorer test performance on all of the WM tasks. Among the four groups of drug users, the polydrug group had the poorest performance scores on each of the eight tasks, and the performance scores of the marijuana group were the least affected. Finally, the forward digit span task and the logical memory tasks were less sensitive than other tasks when differentiating between marijuana users and the normal participants. The four components of WM are impaired among chronic drug dependents. These results have implications for the development of tools, classification methods and therapeutic strategies for drug dependents.

  13. Preliminary development of POEAW in enhancing K-11 students’ understanding level on impulse and momentum

    NASA Astrophysics Data System (ADS)

    Luthfiani, T. A.; Sinaga, P.; Samsudin, A.

    2018-05-01

    Our analysis showed that there has been limited research on Predict-Observe-Explain approaches that use a writing process with a conceptual change text strategy. This study aims to develop a learning model, Predict-Observe-Explain-Apply-Writing (POEAW), that is able to enhance students' understanding levels. The research method utilized the 4D model (Defining, Designing, Developing and Disseminating), formally limited here to the Developing stage. Four experts judged the learning components (syntax, lesson plan, teaching material and student worksheet) and the matter components (learning quality and content). The experts' validity test scores averaged 87% for the learning component and 89% for the matter component, which means the POEAW model is valid and can be tested in classroom learning. This research produced a POEAW learning model with five main steps: Predict, Observe, Explain, Apply and Write. In sum, we have completed the early development of POEAW for enhancing K-11 students' understanding levels on impulse and momentum.

  14. Flow Induced Vibration Program at Argonne National Laboratory

    NASA Astrophysics Data System (ADS)

    1984-01-01

    The Argonne National Laboratory's Flow Induced Vibration Program, currently residing in the Laboratory's Components Technology Division, is discussed. Throughout its existence, the overall objective of the program was to develop and apply new and/or improved methods of analysis and testing for the design evaluation of nuclear reactor plant components and heat exchange equipment from the standpoint of flow induced vibration. Historically, the majority of the program activities were funded by the US Atomic Energy Commission, the Energy Research and Development Administration, and the Department of Energy. Current DOE funding is from the Breeder Mechanical Component Development Division, Office of Breeder Technology Projects; the Energy Conversion and Utilization Technology Program, Office of Energy Systems Research; and the Division of Engineering, Mathematical and Geosciences, Office of Basic Energy Sciences. Testing of Clinch River Breeder Reactor upper plenum components was funded by the Clinch River Breeder Reactor Plant Project Office. Work was also performed under contract with Foster Wheeler, General Electric, Duke Power Company, the US Nuclear Regulatory Commission, and Westinghouse.

  15. The 727/JT8D refan side nacelle airloads

    NASA Technical Reports Server (NTRS)

    Bailey, R. W.; Vadset, H. J.

    1974-01-01

    Airloads on the 727/JT8D refan side engine nacelle are presented. These consist of surface static pressure distributions from two low speed wind tunnel tests. External nacelle surface pressures are from testing of a flow-through, body mounted nacelle model, and internal inlet surface pressures are from performance testing of a forced air inlet model. The method for obtaining critical airloads on nacelle components and a representative example are discussed.

  16. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images

    PubMed Central

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Hsu, Hsian-He

    2018-01-01

    Purpose We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. Methods The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey’s, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Results Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, and ellipsoidal and Linskey’s formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). Conclusion The ice cream cone method and other two-component formulas, including the ellipsoidal and Linskey’s formulas, allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas. PMID:29438424
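
    The one-component estimators named in the record are standard closed forms built from three orthogonal tumor diameters a, b, c; note that the cuboidal form exceeds the ellipsoidal one by exactly 6/π ≈ 1.91, consistent with the roughly 1.9-fold overestimation reported. A sketch of these common forms (the spherical variant's use of the mean diameter is an assumption here, and the two-component ice cream cone formula itself is not reproduced):

```python
from math import pi

def cuboidal(a, b, c):
    return a * b * c                  # bounding-box volume

def ellipsoidal(a, b, c):
    return pi / 6 * a * b * c         # ellipsoid with diameters a, b, c

def spherical(a, b, c):
    d = (a + b + c) / 3               # mean diameter (assumed convention)
    return pi / 6 * d ** 3

# Illustrative diameters in cm (not patient data):
a, b, c = 2.1, 1.8, 1.5
estimates = {f.__name__: f(a, b, c) for f in (cuboidal, ellipsoidal, spherical)}
```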

  17. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H. W.; Kurth, R. E.

    1991-01-01

    The work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods is described. Three probabilistic methods were implemented for the engine system influence model. RASCAL was chosen as the principal method, as most component load models were implemented with it. Validation of RASCAL was performed; accuracy comparable to the Monte Carlo method can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, either a real flight or a test, has three mission phases: the engine start transient phase, the steady state phase, and the engine cutoff transient phase. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is for engine transient phase calculations. In addition, a few generic probabilistic load models were developed for specific conditions: the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads under specific conditions. For the SSME, turbine blades, transfer ducts, the LOX post, and the high pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static and dynamic pressure loads for all four components, centrifugal force for the turbine blade, temperatures for thermal loads for all four components, and structural vibration loads for the ducts and LOX posts.
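
    Of the generic probabilistic load models listed, the Poisson arrival transient spike model is the simplest to sketch: spike times arrive as a homogeneous Poisson process superimposed on the phase's steady load. The rate and duration below are illustrative values, not SSME parameters:

```python
import random

def poisson_spike_times(rate, duration, rng):
    """Spike arrival times (s) from a homogeneous Poisson process:
    exponential inter-arrival gaps with mean 1/rate."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= duration:
            return times
        times.append(t)

rng = random.Random(42)
# ~200 spikes expected over a hypothetical 100 s steady-state phase:
spikes = poisson_spike_times(rate=2.0, duration=100.0, rng=rng)
```

In a load-combination run, each arrival time would be assigned a spike magnitude drawn from its own distribution and added to the steady-state load history.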

  18. Reusable Software Component Retrieval via Normalized Algebraic Specifications

    DTIC Science & Technology

    1991-12-01

    outputs. In fact, this method of query is simpler for matching since it relieves the system from the burden of generating a test set. Eichmann [Eich91]...September 1991. [Eich91] Eichmann, David A., "Selecting Reusable Components Using Algebraic Specifications", Proceedings of the Second International...

  19. Noncontact methods for optical testing of convex aspheric mirrors for future large telescopes

    NASA Astrophysics Data System (ADS)

    Goncharov, Alexander V.; Druzhin, Vladislav V.; Batshev, Vladislav I.

    2009-06-01

    Non-contact methods for testing large rotationally symmetric convex aspheric mirrors are proposed. These methods are based on non-null testing with side illumination schemes, in which a narrow collimated beam is reflected from the meridional aspheric profile of a mirror. The figure error of the mirror is deduced from the intensity pattern of the reflected beam obtained on a screen, which is positioned in the tangential plane (containing the optical axis) and perpendicular to the incoming beam. Testing of the entire surface is carried out by rotating the mirror about its optical axis and registering the characteristics of the intensity pattern on the screen. The intensity pattern can be formed using three different techniques: a modified Hartmann test, interference, and a boundary curve test. All these techniques are well known but have not previously been used in the proposed side illumination scheme. Analytical expressions characterizing the shape and location of the intensity pattern on the screen or a CCD have been developed for all types of conic surfaces. The main advantage of these testing methods compared with existing methods (Hindle sphere, null lens, computer generated hologram) is that the reference system does not require large optical components.

  20. An Assessment of Nondestructive Evaluation Capability for Complex Additive Manufacturing Aerospace Components

    NASA Technical Reports Server (NTRS)

    Walker, James; Beshears, Ron; Lambert, Dennis; Tilson, William

    2016-01-01

    The primary focus of this work is to investigate some of the fundamental relationships between processing, mechanical testing, materials characterization, and NDE for additively manufactured (AM) components made using the powder bed fusion direct metal laser sintering process. The goal is to understand the criticality of defects unique to the AM process and then how conventional nondestructive evaluation methods, as well as some of the more non-traditional methods such as computed tomography, are affected by the AM material. Specific defects including cracking, porosity and partially/unfused powder will be addressed. Beyond line-of-sight NDE, these inspection capabilities will, as appropriate, be put into the context of complex AM geometries where hidden features obscure or inhibit traditional NDE methods.

  1. Radial expansion of the tail current disruption during substorms - A new approach to the substorm onset region

    NASA Technical Reports Server (NTRS)

    Ohtani, S.; Kokubun, S.; Russell, C. T.

    1992-01-01

    A new method is used to examine the radial expansion of the tail current disruption and the substorm onset region. The expansion of the disruption region is specified by examining the time sequence (phase relationship) between the north-south component and the sun-earth component. This method is tested by applying it to the March 6, 1979, event. The phase relationship indicates that the current disruption started on the earthward side of the spacecraft, and expanded tailward past the spacecraft. The method was used for 13 events selected from the ISEE magnetometer data. The results indicate that the current disruption usually starts in the near-earth magnetotail and often within 15 RE from the earth.

  2. Interlaboratory comparison for the determination of the soluble fraction of metals in welding fume samples.

    PubMed

    Berlinger, Balazs; Harper, Martin

    2018-02-01

    There is interest in the bioaccessible metal components of aerosols, but this has been minimally studied because standardized sampling and analytical methods have not yet been developed. An interlaboratory study (ILS) has been carried out to evaluate a method for determining the water-soluble component of realistic welding fume (WF) air samples. Replicate samples were generated in the laboratory and distributed to participating laboratories to be analyzed according to a standardized procedure. Within-laboratory precision of replicate sample analysis (repeatability) was very good. Reproducibility between laboratories was not as good, but within limits of acceptability for the analysis of typical aerosol samples. These results can be used to support the development of a standardized test method.
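
    Repeatability (within-laboratory) and reproducibility (between-laboratory) precision, as reported in this ILS, are conventionally separated with a one-way ANOVA over laboratories in the manner of ISO 5725. A stdlib sketch with invented replicate results (not the ILS data):

```python
from statistics import mean

# Hypothetical soluble-metal results from three labs, three replicate
# filters each; these are NOT the interlaboratory study's values.
labs = [[10.0, 10.2, 9.8], [11.0, 11.2, 10.8], [10.3, 10.7, 10.5]]
n = 3                                       # replicates per lab

grand = mean(v for lab in labs for v in lab)
ms_between = n * sum((mean(lab) - grand) ** 2 for lab in labs) / (len(labs) - 1)
ms_within = sum((v - mean(lab)) ** 2
                for lab in labs for v in lab) / (len(labs) * (n - 1))

s_r = ms_within ** 0.5                      # repeatability std dev
s_L2 = max(0.0, (ms_between - ms_within) / n)   # between-lab variance
s_R = (ms_within + s_L2) ** 0.5             # reproducibility std dev
```

As in the record, `s_r` can be good while `s_R` is notably larger, since the latter folds in the systematic differences between laboratories.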

  3. Bayes Factor Covariance Testing in Item Response Models.

    PubMed

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
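
    The key algebraic fact in the record, that an orthogonal Helmert transform diagonalizes a compound-symmetry covariance so the common covariance component separates cleanly, is easy to verify numerically. A sketch with arbitrary values for the two variance components:

```python
import numpy as np

def helmert(p):
    """Orthonormal Helmert matrix: first row is the normalized mean
    contrast, the remaining rows are orthogonal contrasts."""
    h = np.zeros((p, p))
    h[0] = 1.0 / np.sqrt(p)
    for k in range(1, p):
        h[k, :k] = 1.0 / np.sqrt(k * (k + 1))
        h[k, k] = -k / np.sqrt(k * (k + 1))
    return h

p, a, b = 5, 1.0, 0.3                        # illustrative variance components
sigma = a * np.eye(p) + b * np.ones((p, p))  # compound symmetry: a*I + b*J
h = helmert(p)
d = h @ sigma @ h.T                          # diagonal: (a + p*b, a, ..., a)
```

Because only the first transformed coordinate carries the common component `a + p*b`, its posterior can be handled in closed form separately from the remaining coordinates, which is what enables the shifted-inverse-gamma result described above.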

  4. Multiresidue determination of pesticides in tea by gas chromatography-tandem mass spectrometry.

    PubMed

    Saito-Shida, Shizuka; Nemoto, Satoru; Teshima, Reiko

    2015-01-01

    An efficient and reliable GC-MS/MS method for the multiresidue determination of pesticides in tea was developed by modifying the Japanese official multiresidue method. Sample preparation was carefully optimized for the efficient removal of coextracted matrix components. The optimal sample preparation procedure involved swelling of the sample in water; extraction with acetonitrile; removal of water by salting-out; and sequential cleanup by ODS, graphitized carbon black/primary secondary amine (GCB/PSA) and silica gel cartridges prior to GC-MS/MS analysis. The recoveries of 162 pesticides from fortified (at 0.01 mg kg(-1)) green tea, oolong tea, black tea and matcha (powdered green tea) were mostly (95-98% of the tested pesticides) within the range of 70-120%, with relative standard deviations of <20%. Poor recovery of triazole pesticides was considered to be due to low recovery from the silica gel cartridges. The test solutions obtained by the modified method contained relatively small amounts of pigments, caffeine and other matrix components and were cleaner than those obtained by the original Japanese official multiresidue method. No interfering peaks were observed in the blank chromatograms, indicating the high selectivity of the modified method. The overall results suggest that the developed method is suitable for the quantitative analysis of GC-amenable pesticide residues in tea.
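
    The acceptance criteria quoted (recoveries of 70-120% with RSD < 20%) reduce to simple arithmetic over replicate spiked samples. A stdlib sketch with invented replicate concentrations at the record's 0.01 mg/kg fortification level:

```python
from statistics import mean, stdev

fortified = 0.010                      # mg/kg spiking level (from the record)
# Hypothetical measured concentrations for five replicates (mg/kg):
measured = [0.0092, 0.0088, 0.0095, 0.0090, 0.0086]

recoveries = [100.0 * m / fortified for m in measured]
mean_recovery = mean(recoveries)
rsd = 100.0 * stdev(recoveries) / mean_recovery  # relative std deviation

acceptable = 70.0 <= mean_recovery <= 120.0 and rsd < 20.0
```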

  5. Measured airtightness of an installed skylight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, C.Y.; Magee, R.J.; Poirier, G.F.

    2000-07-01

    An art gallery building had problems with moisture. Inspections using thermographic techniques suggested that air leakage through the skylights could be the main cause of the problem. The air leakage rate of an installed metal frame skylight, 26 m long x 8.5 m wide, was measured using the balanced fan depressurization method. Also, fan depressurization tests were performed on the glazing/upstand interface on the south side of the skylight. The air leakage rates were measured through the full interface and on the west and east halves separately. The methods used for field testing of such components and the test results are discussed.

  6. Automatic Processing of Metallurgical Abstracts for the Purpose of Information Retrieval. Final Report.

    ERIC Educational Resources Information Center

    Melton, Jessica S.

    Objectives of this project were to develop and test a method for automatically processing the text of abstracts for a document retrieval system. The test corpus consisted of 768 abstracts from the metallurgical section of Chemical Abstracts (CA). The system, based on a subject indexing rational, had two components: (1) a stored dictionary of words…

  7. Fatigue Lives Of Laser-Cut Metals

    NASA Technical Reports Server (NTRS)

    Martin, Michael R.

    1988-01-01

    Fatigue lives of laser-cut metals were made to approach those attainable by traditional grinding methods. Fatigue-test specimens were prepared from four metallic alloys, and material was removed from the specimens by manual grinding, by Nd:glass laser, and by Nd:YAG laser. Results of fatigue tests of all specimens indicated a reduction of fatigue strength in the laser-machined specimens. Nevertheless, laser machining holds promise for improved balancing of components of gas turbines.

  8. Do Two or More Multicomponent Instruments Measure the Same Construct? Testing Construct Congruence Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Tong, Bing

    2016-01-01

    A latent variable modeling procedure is discussed that can be used to test if two or more homogeneous multicomponent instruments with distinct components are measuring the same underlying construct. The method is widely applicable in scale construction and development research and can also be of special interest in construct validation studies.…

  9. GSA-PCA: gene set generation by principal component analysis of the Laplacian matrix of a metabolic network

    PubMed Central

    2012-01-01

    Background Gene Set Analysis (GSA) has proven to be a useful approach to microarray analysis. However, most of the method development for GSA has focused on the statistical tests to be used rather than on the generation of sets that will be tested. Existing methods of set generation are often overly simplistic. The creation of sets from individual pathways (in isolation) is a poor reflection of the complexity of the underlying metabolic network. We have developed a novel approach to set generation via the use of Principal Component Analysis of the Laplacian matrix of a metabolic network. We have analysed a relatively simple data set to show the difference in results between our method and the current state-of-the-art pathway-based sets. Results The sets generated with this method are semi-exhaustive and capture much of the topological complexity of the metabolic network. The semi-exhaustive nature of this method has also allowed us to design a hypergeometric enrichment test to determine which genes are likely responsible for set significance. We show that our method finds significant aspects of biology that would be missed (i.e. false negatives) and addresses the false positive rates found with the use of simple pathway-based sets. Conclusions The set generation step for GSA is often neglected but is a crucial part of the analysis as it defines the full context for the analysis. As such, set generation methods should be robust and yield as complete a representation of the extant biological knowledge as possible. The method reported here achieves this goal and is demonstrably superior to previous set analysis methods. PMID:22876834
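
    The set-generation idea can be sketched on a toy graph. The 5-node network and the 0.3 loading threshold below are illustrative assumptions, not the authors' exact pipeline: eigenvectors (principal components) of the graph Laplacian define candidate sets of topologically related nodes.

```python
import numpy as np

# Adjacency matrix of a small invented metabolic network (undirected).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

L = np.diag(A.sum(axis=1)) - A          # graph Laplacian, L = D - A
eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order

# One candidate set per leading non-trivial component: the nodes whose
# loading magnitude exceeds an (assumed) threshold of 0.3.
gene_sets = [set(np.where(np.abs(eigvecs[:, k]) > 0.3)[0])
             for k in (1, 2)]           # skip the constant eigenvector at 0
print(gene_sets)
```

    Unlike sets built from isolated pathways, each eigenvector mixes nodes according to the network's connectivity, which is the sense in which the sets capture topological complexity.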

  10. Effect of Particle Damping on an Acoustically Excited Curved Vehicle Panel Structure with varied Equipment Assemblies

    NASA Technical Reports Server (NTRS)

    Parsons, David; Smith, Andrew; Knight, Brent; Hunt, Ron; LaVerde, Bruce; Craigmyle, Ben

    2012-01-01

    Particle dampers provide a mechanism for diverting energy away from resonant structural vibrations. This experimental study provides data from trials to determine how effective these dampers might be for equipment mounted to a curved orthogrid vehicle panel. Trends for damping are examined for variations in damper fill level, component mass, and excitation energy. A significant response reduction at the component level would suggest that comparatively small, thoughtfully placed particle dampers might be advantageously used in vehicle design. The results of this test will be compared with baseline acoustic response tests and other follow-on testing involving a range of isolation and damping methods. Data from accelerometers, microphones, and still photography will be collected to correlate with the analytical results.

  11. General aviation crash safety program at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.

    1976-01-01

    The purpose of the crash safety program is to support development of the technology to define and demonstrate new structural concepts for improved crash safety and occupant survivability in general aviation aircraft. The program involves three basic areas of research: full-scale crash simulation testing, nonlinear structural analyses necessary to predict failure modes and collapse mechanisms of the vehicle, and evaluation of energy absorption concepts for specific component design. Both analytical and experimental methods are being used to develop expertise in these areas. Analyses include both simplified procedures for estimating energy absorption capabilities and more complex computer programs for analysis of general airframe response. Full-scale tests of typical structures as well as tests on structural components are being used to verify the analyses and to demonstrate improved design concepts.

  12. ATLAS Test Program Generator II (AGEN II). Volume I. Executive Software System.

    DTIC Science & Technology

    1980-08-01

    features. C. To provide detailed descriptions of each of the system components and modules and their corresponding flowcharts. D. To describe methods of ... contains the FORTRAN source code listings to enable the programmer to do the expansions and modifications. The methods and details of adding another ... characteristics of the network. The top-down implementation method is therefore suggested. This method starts at the top by designing the IVT modules in

  13. Electro-impulse de-icing testing analysis and design

    NASA Technical Reports Server (NTRS)

    Zumwalt, G. W.; Schrag, R. L.; Bernhart, W. D.; Friedberg, R. A.

    1988-01-01

    Electro-Impulse De-Icing (EIDI) is a method of ice removal by sharp blows delivered by a transient electromagnetic field. Detailed results are given for studies of the electrodynamic phenomena. Structural dynamic tests and computations are described. Also reported are ten sets of tests at NASA's Icing Research Tunnel and flight tests by NASA and Cessna Aircraft Company. Fabrication of system components is described and illustrated. Fatigue and electromagnetic interference tests are reported. Here, the necessary information for the design of an EIDI system for aircraft is provided.

  14. Design of an impact evaluation using a mixed methods model--an explanatory assessment of the effects of results-based financing mechanisms on maternal healthcare services in Malawi.

    PubMed

    Brenner, Stephan; Muula, Adamson S; Robyn, Paul Jacob; Bärnighausen, Till; Sarker, Malabika; Mathanga, Don P; Bossert, Thomas; De Allegri, Manuela

    2014-04-22

    In this article we present a study design to evaluate the causal impact of providing supply-side performance-based financing incentives in combination with a demand-side cash transfer component on equitable access to and quality of maternal and neonatal healthcare services. This intervention is introduced to selected emergency obstetric care facilities and catchment area populations in four districts in Malawi. We here describe and discuss our study protocol with regard to the research aims, the local implementation context, and our rationale for selecting a mixed methods explanatory design with a quasi-experimental quantitative component. The quantitative research component consists of a controlled pre- and post-test design with multiple post-test measurements. This allows us to quantitatively measure 'equitable access to healthcare services' at the community level and 'healthcare quality' at the health facility level. Guided by a theoretical framework of causal relationships, we determined a number of input, process, and output indicators to evaluate both intended and unintended effects of the intervention. Overall causal impact estimates will result from a difference-in-difference analysis comparing selected indicators across intervention and control facilities/catchment populations over time. To further explain heterogeneity of quantitatively observed effects and to understand the experiential dimensions of financial incentives on clients and providers, we designed a qualitative component in line with the overall explanatory mixed methods approach. This component consists of in-depth interviews and focus group discussions with providers, service users, non-users, and policy stakeholders. 
In this explanatory design, comprehensive understanding of expected and unexpected effects of the intervention on both access and quality will emerge through careful triangulation at two levels: across multiple quantitative elements and across quantitative and qualitative elements. Combining a traditional quasi-experimental controlled pre- and post-test design with an explanatory mixed methods model permits an additional assessment of organizational and behavioral changes affecting complex processes. Through this impact evaluation approach, our design will not only create robust evidence measures for the outcome of interest, but also generate insights on how and why the investigated interventions produce certain intended and unintended effects, and allow for a more in-depth evaluation approach.
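
    The difference-in-difference step can be sketched with invented group means. The indicator and all numbers below are illustrative, not values from the study.

```python
# Pre/post means of an illustrative access indicator (e.g. share of
# supervised facility deliveries) for the two facility groups.
pre_treat, post_treat = 0.40, 0.65   # intervention facilities
pre_ctrl, post_ctrl = 0.42, 0.50     # control facilities

# Difference-in-differences estimate: change in the intervention group
# minus change in the control group, netting out the shared time trend.
did = (post_treat - pre_treat) - (post_ctrl - pre_ctrl)
print(round(did, 3))  # 0.17
```

    In practice the estimate comes from a regression with group, period, and interaction terms so that covariates and standard errors can be handled properly; the arithmetic above is only the core identification idea.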

  15. Proportion of general factor variance in a hierarchical multiple-component measuring instrument: a note on a confidence interval estimation procedure.

    PubMed

    Raykov, Tenko; Zinbarg, Richard E

    2011-05-01

    A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples. ©2010 The British Psychological Society.
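
    The quantity whose confidence interval is being constructed can be sketched for a simplified case with a single general factor and no group factors. The loadings below are invented for illustration.

```python
import numpy as np

# Standardized general-factor loadings for a 4-component scale (invented).
g = np.array([0.70, 0.60, 0.65, 0.55])
u = 1 - g**2                          # unique variances (standardized items)

Sigma = np.outer(g, g) + np.diag(u)   # implied item covariance matrix
total_var = Sigma.sum()               # variance of the total (sum) score
proportion = g.sum()**2 / total_var   # share due to the general factor
print(round(proportion, 3))
```

    The point estimate is this proportion computed from estimated loadings; the cited procedure then wraps an interval around it (e.g. via the delta method or a transformation), which this sketch does not attempt.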

  16. The oxygen uptake slow component at submaximal intensities in breaststroke swimming

    PubMed Central

    Oliveira, Diogo R.; Gonçalves, Lio F.; Reis, António M.; Fernandes, Ricardo J.; Garrido, Nuno D.

    2016-01-01

    Abstract The present work proposed to study the oxygen uptake slow component (VO2 SC) of breaststroke swimmers at four different intensities of submaximal exercise, via mathematical modeling of a multi-exponential function. The slow component (SC) was also assessed with two different fixed interval methods and the three methods were compared. Twelve male swimmers performed a test comprising four submaximal 300 m bouts at different intensities where all expired gases were collected breath by breath. Multi-exponential modeling showed values above 450 ml·min−1 of the SC in the two last bouts of exercise (those with intensities above the lactate threshold). A significant effect of the method that was used to calculate the VO2 SC was revealed. Higher mean values were observed when using mathematical modeling compared with the fixed interval 3rd min method (F=7.111; p=0.012; η2=0.587); furthermore, differences were detected among the two fixed interval methods. No significant relationship was found between the SC determined by any method and the blood lactate measured at each of the four exercise intensities. In addition, no significant association between the SC and peak oxygen uptake was found. It was concluded that in trained breaststroke swimmers, the presence of the VO2 SC may be observed at intensities above that corresponding to the 3.5 mM-1 threshold. Moreover, mathematical modeling of the oxygen uptake on-kinetics tended to show a higher slow component as compared to fixed interval methods. PMID:28149379
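
    The multi-exponential modeling approach can be sketched with synthetic data. All parameter values below are invented (the 450 ml·min−1 slow-component amplitude simply mirrors the magnitude reported above), and the slow-component onset is fixed at 120 s for simplicity rather than fitted.

```python
import numpy as np
from scipy.optimize import curve_fit

D2 = 120.0  # assumed onset of the slow component, s (fixed, not fitted)

def vo2(t, a0, a1, tau1, a2, tau2):
    # Primary exponential rise plus a delayed slow component.
    slow = np.where(t > D2, 1 - np.exp(-(t - D2) / tau2), 0.0)
    return a0 + a1 * (1 - np.exp(-t / tau1)) + a2 * slow

t = np.linspace(0, 300, 301)                 # one bout, ~1 s samples
y = vo2(t, 800, 1500, 25, 450, 90)           # synthetic "true" VO2, ml/min
y = y + np.random.default_rng(0).normal(0, 20, t.size)  # measurement noise

popt, _ = curve_fit(vo2, t, y, p0=[700, 1400, 20, 300, 60])
sc_amplitude = popt[3]                       # estimated SC amplitude, ml/min
print(round(sc_amplitude))
```

    The fitted a2 is the model-based slow-component amplitude; the fixed-interval alternatives in the abstract instead difference VO2 between, e.g., the 3rd and final minute.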

  17. United States Advanced Ultra-Supercritical Component Test Facility for 760°C Steam Power Plants ComTest Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hack, Horst; Purgert, Robert Michael

    Following the successful completion of a 15-year effort to develop and test materials that would allow coal-fired power plants to be operated at advanced ultra-supercritical (A-USC) steam conditions, a United States-based consortium is presently engaged in a project to build an A-USC component test facility (ComTest). A-USC steam cycles have the potential to improve cycle efficiency, reduce fuel costs, and reduce greenhouse gas emissions. Current development and demonstration efforts are focused on enabling the construction of A-USC plants, operating with steam temperatures as high as 1400°F (760°C) and steam pressures up to 5000 psi (35 MPa), which can potentially increase cycle efficiencies to 47% HHV (higher heating value), or approximately 50% LHV (lower heating value), and reduce CO2 emissions by roughly 25%, compared to today’s U.S. fleet. A-USC technology provides a lower-cost method to reduce CO2 emissions, compared to CO2 capture technologies, while retaining a viable coal option for owners of coal generation assets. Among the goals of the ComTest facility are to validate that components made from advanced nickel-based alloys can operate and perform under A-USC conditions, to accelerate the development of a U.S.-based supply chain for the full complement of A-USC components, and to decrease the uncertainty of cost estimates for future A-USC power plants. The configuration of the ComTest facility would include the key A-USC technology components that were identified for expanded operational testing, including a gas-fired superheater, high-temperature steam piping, steam turbine valve, and cycling header component. Membrane walls in the superheater have been designed to operate at the full temperatures expected in a commercial A-USC boiler, but at a lower (intermediate) operating pressure. This superheater has been designed to increase the temperature of the steam supplied by the host utility boiler up to 1400°F (760°C). 
The steam turbine stop and control valve component has been designed to operate at full A-USC temperatures, and would be tested both in throttling operation and to accumulate accelerated, repetitive stroke cycles. A cycling header component has been designed to confirm the suitability of new high-temperature nickel alloys to cycling operation, expected of future coal-fired power plants. Current test plans would subject these components to A-USC operating conditions for at least 8,000 hours by September 2020. The ComTest project is managed by Energy Industries of Ohio, and technically directed by the Electric Power Research Institute, Inc., with General Electric designing the A-USC components. This consortium is completing the Detailed Engineering phase of the project, with procurement scheduled to begin in late 2017. The effort is primarily funded by the U.S. Department of Energy, through the National Energy Technology Laboratory, along with the Ohio Development Services Agency. This presentation outlines the motivation for the project, explains the project’s structure and schedule, and provides technical details on the design of the ComTest facility.

  18. Calculation of three-dimensional (3-D) internal flow by means of the velocity-vorticity formulation on a staggered grid

    NASA Technical Reports Server (NTRS)

    Stremel, Paul M.

    1995-01-01

    A method has been developed to accurately compute the viscous flow in three-dimensional (3-D) enclosures. This method is the 3-D extension of a two-dimensional (2-D) method developed for the calculation of flow over airfoils. The 2-D method has been tested extensively and has been shown to accurately reproduce experimental results. As in the 2-D method, the 3-D method provides for the non-iterative solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body fitted computational mesh incorporating a staggered grid methodology. In the staggered grid method, the three components of vorticity are defined at the centers of the computational cell sides, while the velocity components are defined as normal vectors at the centers of the computational cell faces. The staggered grid orientation provides for the accurate definition of the vorticity components at the vorticity locations, the divergence of vorticity at the mesh cell nodes and the conservation of mass at the mesh cell centers. The solution is obtained by utilizing a fractional step solution technique in the three coordinate directions. The boundary conditions for the vorticity and velocity are calculated implicitly as part of the solution. The method provides for the non-iterative solution of the flow field and satisfies the conservation of mass and divergence of vorticity to machine zero at each time step. To test the method, simple driven cavity flows have been computed. The driven cavity flow is defined as the flow in an enclosure driven by a moving upper plate at the top of the enclosure. To demonstrate the ability of the method to predict the flow in arbitrary cavities, results will be shown for both cubic and curved cavities.

  19. Design of the 15 GHz BPM test bench for the CLIC test facility to perform precise stretched-wire RF measurements

    NASA Astrophysics Data System (ADS)

    Zorzetti, Silvia; Fanucci, Luca; Galindo Muñoz, Natalia; Wendt, Manfred

    2015-09-01

    The Compact Linear Collider (CLIC) requires low-emittance beam transport and preservation, and thus precise control of the beam orbit in the sub-μm regime along up to 50 km of accelerator components. Within the PACMAN (Particle Accelerator Components Metrology and Alignment to the Nanometer Scale) PhD training action, a study has been initiated with the objective of pre-aligning the electrical centre of a 15 GHz cavity beam position monitor (BPM) to the magnetic centre of the main beam quadrupole. Of particular importance is the design of a specific test bench to study the stretched-wire setup for the CLIC Test Facility (CTF3) BPM, focusing on the aspects of microwave signal excitation, transmission and impedance-matching, as well as the mechanical setup and reproducibility of the measurement method.

  20. Construction of mathematical model for measuring material concentration by colorimetric method

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Gao, Lingceng; Yu, Kairong; Tan, Xianghua

    2018-06-01

    This paper uses multiple linear regression to analyze the data of Problem C of the 2017 mathematical modeling competition. First, we established regression models for the concentrations of five substances, but only the model for the concentration of urea in milk passed the significance test. The regression model established from the second set of data passed the significance test but suffered from serious multicollinearity. We therefore improved the model by principal component analysis. The improved model is used to control the system so that material concentration can be measured by a direct colorimetric method.
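
    The multicollinearity remedy described above, regressing on principal components instead of the raw predictors, can be sketched as follows. The data are synthetic and the near-collinear pair of predictors is an invented stand-in for the correlated absorbance channels.

```python
import numpy as np

# Two nearly collinear predictors (e.g. two correlated color channels).
rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)     # almost identical to x1
X = np.column_stack([x1, x2])
y = 3 * x1 + rng.normal(scale=0.1, size=n)   # concentration response

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S[0] ** 2 / (S ** 2).sum()       # variance share of PC 1
Z = Xc @ Vt[:1].T                            # scores on the first component

# Ordinary least squares on the (well-conditioned) component scores.
A = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
yhat = A @ beta
print(round(explained, 4))
```

    Because the first component absorbs nearly all predictor variance, the regression is stable where the raw two-predictor fit would have an ill-conditioned normal-equations matrix.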

  1. System Testing of Ground Cooling System Components

    NASA Technical Reports Server (NTRS)

    Ensey, Tyler Steven

    2014-01-01

    This internship focused primarily upon software unit testing of Ground Cooling System (GCS) components, one of the three types of tests (unit, integrated, and COTS/regression) utilized in software verification. Unit tests are used to test the software of necessary components before it is implemented into the hardware. A unit test exercises the control data, usage procedures, and operating procedures of a particular component to determine whether the program is fit for use. Three different files are used to make and complete an efficient unit test. These files include the following: Model Test file (.mdl), Simulink SystemTest (.test), and autotest (.m). The Model Test file includes the component that is being tested with the appropriate Discrete Physical Interface (DPI) for testing. The Simulink SystemTest is a program used to test all of the requirements of the component. The autotest tests that the component passes Model Advisor and System Testing, and puts the results into proper files. Once unit testing is completed on the GCS components, they can then be implemented into the GCS Schematic and the software of the GCS model as a whole can be tested using integrated testing. Unit testing is a critical part of software verification; it allows for the testing of more basic components before a model of higher fidelity is tested, making the process of testing flow in an orderly manner.

  2. Elementary signaling modes predict the essentiality of signal transduction network components

    PubMed Central

    2011-01-01

    Background Understanding how signals propagate through signaling pathways and networks is a central goal in systems biology. Quantitative dynamic models help to achieve this understanding, but are difficult to construct and validate because of the scarcity of known mechanistic details and kinetic parameters. Structural and qualitative analysis is emerging as a feasible and useful alternative for interpreting signal transduction. Results In this work, we present an integrative computational method for evaluating the essentiality of components in signaling networks. This approach expands an existing signaling network to a richer representation that incorporates the positive or negative nature of interactions and the synergistic behaviors among multiple components. Our method simulates both knockout and constitutive activation of components as node disruptions, and takes into account the possible cascading effects of a node's disruption. We introduce the concept of elementary signaling mode (ESM), as the minimal set of nodes that can perform signal transduction independently. Our method ranks the importance of signaling components by the effects of their perturbation on the ESMs of the network. Validation on several signaling networks describing the immune response of mammals to bacteria, guard cell abscisic acid signaling in plants, and T cell receptor signaling shows that this method can effectively uncover the essentiality of components mediating a signal transduction process and results in strong agreement with the results of Boolean (logical) dynamic models and experimental observations. Conclusions This integrative method is an efficient procedure for exploratory analysis of large signaling and regulatory networks where dynamic modeling or experimental tests are impractical. Its results serve as testable predictions, provide insights into signal transduction and regulatory mechanisms and can guide targeted computational or experimental follow-up studies. 
The source codes for the algorithms developed in this study can be found at http://www.phys.psu.edu/~ralbert/ESM. PMID:21426566
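
    The essentiality-ranking idea can be illustrated on a toy network. As a simplified stand-in for elementary signaling modes, the sketch counts simple input-to-output paths and scores each node by how many paths its knockout destroys; the graph is invented and ignores the signs and synergies the full method handles.

```python
# Toy signed-free signaling graph: edges from each node to its targets.
edges = {"in": ["a", "b"], "a": ["c"], "b": ["c", "out"], "c": ["out"]}

def paths(frm, removed, seen=()):
    # Count simple paths from `frm` to "out", skipping the knocked-out node.
    if frm == "out":
        return 1
    return sum(paths(n, removed, seen + (frm,))
               for n in edges.get(frm, []) if n != removed and n not in seen)

baseline = paths("in", None)
impact = {n: baseline - paths("in", n) for n in ["a", "b", "c"]}
print(baseline, impact)
```

    Here knocking out "b" or "c" destroys two of the three routes while "a" destroys only one, so "b" and "c" rank as more essential, which is the flavor of ranking the ESM analysis produces at scale.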

  3. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley P.

    2004-01-01

    Propulsion ground test facilities face the daily challenges of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Due to budgetary and schedule constraints, NASA and industry customers are pushing to test more components, for less money, in a shorter period of time. As these new rocket engine component test programs are undertaken, the lack of technology maturity in the test articles, combined with pushing the test facilities' capabilities to their limits, tends to lead to an increase in facility breakdowns and unsuccessful tests. Over the last five years Stennis Space Center's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and broken numerous test facility and test article parts. While various initiatives have been implemented to provide better propulsion test techniques and improve the quality, reliability, and maintainability of goods and parts used in the propulsion test facilities, unexpected failures during testing still occur quite regularly due to the harsh environment in which the propulsion test facilities operate. Previous attempts at modeling the lifecycle of a propulsion component test project have met with little success. Each of the attempts suffered from incomplete or inconsistent data on which to base the models. By focusing on the actual test phase of the test project rather than the formulation, design, or construction phases, the quality and quantity of available data increase dramatically. A logistic regression model has been developed from the data collected over the last five years, allowing the probability of successfully completing a rocket propulsion component test to be calculated. 
A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X1, X2, ..., Xk to a binary or dichotomous dependent variable Y, where Y can only be one of two possible outcomes, in this case Success or Failure. Logistic regression has primarily been used in the fields of epidemiology and biomedical research, but lends itself to many other applications. As indicated, the use of logistic regression is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from the models provide project managers with insight and confidence into the effectiveness of rocket engine component ground test projects. The initial success in modeling rocket propulsion ground test projects clears the way for more complex models to be developed in this area.
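
    A minimal logistic-regression sketch of the idea follows. The predictors (a normalized test duration and test-article maturity score) and every coefficient are invented for illustration; they are not the variables or estimates from the Stennis data.

```python
import numpy as np

# Synthetic test records: two invented predictors and a binary outcome.
rng = np.random.default_rng(2)
n = 500
duration = rng.uniform(0, 1, n)      # normalized test duration
maturity = rng.uniform(0, 1, n)      # normalized test-article maturity
logit = -1.0 - 2.0 * duration + 3.0 * maturity
success = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

# Fit by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), duration, maturity])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.5 * X.T @ (success - p) / n

# Predicted success probability for one hypothetical upcoming test.
p_new = 1 / (1 + np.exp(-w @ np.array([1.0, 0.5, 0.8])))
print(round(p_new, 2))
```

    The fitted coefficients recover the assumed signs (longer tests hurt, higher maturity helps), and the model output is exactly the kind of success probability the abstract describes handing to project managers.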

  4. Cantilever testing of sintered-silver interconnects

    DOE PAGES

    Wereszczak, Andrew A.; Chen, Branndon R.; Jadaan, Osama M.; ...

    2017-10-19

    Cantilever testing is an underutilized test method from which results and interpretations promote greater understanding of the tensile and shear failure responses of interconnects, metallizations, or bonded joints. The use and analysis of this method were pursued through the mechanical testing of sintered-silver interconnects that joined Ni/Au-plated copper pillars or Ti/Ni/Ag-plated silicon pillars to Ag-plated direct bonded copper substrates. Sintered-silver was chosen as the interconnect test medium because of its high electrical and thermal conductivities and high-temperature capability—attractive characteristics for a candidate interconnect in power electronic components and other devices. Deep beam theory was used to improve upon the estimations of the tensile and shear stresses calculated from classical beam theory. The failure stresses of the sintered-silver interconnects were observed to be dependent on test-condition and test-material-system. In conclusion, the experimental simplicity of cantilever testing, and the ability to analytically calculate tensile and shear stresses at failure, result in it being an attractive mechanical test method to evaluate the failure response of interconnects.
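
    The classical beam-theory estimates that deep beam theory refines can be sketched with the standard formulas for a rectangular cantilever under an end load. All dimensions and the load below are invented; note that for a stubby pillar (length comparable to height) the classical result is exactly where deep-beam corrections matter.

```python
# Invented cantilever geometry and load.
F = 50.0      # transverse load at the free end, N
L = 4.0e-3    # cantilever length, m
b = 3.0e-3    # cross-section width, m
h = 3.0e-3    # cross-section height, m

M = F * L                        # bending moment at the fixed (joint) end, N*m
sigma = 6 * M / (b * h ** 2)     # peak bending (tensile) stress, Pa
tau = 1.5 * F / (b * h)          # peak transverse shear stress, Pa
print(round(sigma / 1e6, 1), round(tau / 1e6, 2))  # 44.4 8.33 (MPa)
```

    With L/h ≈ 1.3 the shear stress is a sizable fraction of the bending stress, so the plane-sections assumption behind these formulas breaks down, motivating the deep-beam analysis used in the study.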

  6. Hyperspectral Image Denoising Using a Nonlocal Spectral Spatial Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Li, D.; Xu, L.; Peng, J.; Ma, J.

    2018-04-01

    Hyperspectral image (HSI) denoising is a critical research area in image processing due to its importance in improving the quality of HSIs, since noise has a negative impact on object detection, classification, and other tasks. In this paper, we develop a noise reduction method for hyperspectral imagery based on principal component analysis (PCA), which depends on the assumption that the noise can be removed by retaining only the leading principal components. The main contribution of this paper is to introduce the spectral-spatial structure and nonlocal similarity of HSIs into the PCA denoising model. PCA with spectral-spatial structure can exploit the spectral and spatial correlation of an HSI by using 3D blocks instead of 2D patches. Nonlocal similarity refers to the similarity between the reference pixel and other pixels in a nonlocal area, where the Mahalanobis distance is used to estimate spatial-spectral similarity by calculating distances between 3D blocks. The proposed method is tested on both simulated and real hyperspectral images; the results demonstrate that it is superior to several other popular HSI denoising methods.
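
    The leading-components assumption can be sketched in its basic (non-nonlocal) form: project noisy spectra onto the top principal components and reconstruct. The cube dimensions, subspace rank, and noise level below are invented for illustration.

```python
import numpy as np

# Synthetic hyperspectral data: clean spectra lie in a rank-3 subspace.
rng = np.random.default_rng(3)
bands, pixels = 50, 1000
basis = rng.normal(size=(bands, 3))
clean = basis @ rng.normal(size=(3, pixels))
noisy = clean + rng.normal(scale=0.5, size=(bands, pixels))

# PCA denoising: keep only the 3 leading components.
mean = noisy.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(noisy - mean, full_matrices=False)
P = U[:, :3]
denoised = P @ (P.T @ (noisy - mean)) + mean

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)   # True: reconstruction is closer to clean
```

    The paper's contribution replaces these per-pixel spectra with 3D blocks grouped by nonlocal (Mahalanobis-distance) similarity before the PCA step.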

  7. Joint multifractal analysis based on wavelet leaders

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Yang, Yan-Hong; Wang, Gang-Jin; Zhou, Wei-Xing

    2017-12-01

    Mutually interacting components form complex systems and these components usually have long-range cross-correlated outputs. Using wavelet leaders, we propose a method for characterizing the joint multifractal nature of these long-range cross correlations; we call this method joint multifractal analysis based on wavelet leaders (MF-X-WL). We test the validity of the MF-X-WL method by performing extensive numerical experiments on dual binomial measures with multifractal cross correlations and bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. Both experiments indicate that MF-X-WL is capable of detecting cross correlations in synthetic data with acceptable estimating errors. We also apply the MF-X-WL method to pairs of series from financial markets (returns and volatilities) and online worlds (online numbers of different genders and different societies) and determine intriguing joint multifractal behavior.

  8. A new compound control method for sine-on-random mixed vibration test

    NASA Astrophysics Data System (ADS)

    Zhang, Buyun; Wang, Ruochen; Zeng, Falin

    2017-09-01

    Vibration environmental testing (VET) is one of the important and effective methods of supporting the strength design, reliability, and durability testing of mechanical products. A new separation control strategy was proposed for multiple-input multiple-output (MIMO) sine-on-random (SOR) mixed-mode vibration testing, an advanced and intensive type of VET. As the key element of the strategy, a correlation integral method was applied to separate the mixed signals into their random and sinusoidal components. The feedback control formula of a MIMO linear random vibration system was systematically deduced in the frequency domain, and a Jacobi control algorithm was proposed based on elements of the power spectral density (PSD) matrix such as auto-spectrum, coherence, and phase. Because excitation tends to be over-corrected in sine vibration testing, a compression factor was introduced to reduce the excitation correction and avoid damage to the vibration table or other devices. The two methods were combined and applied in a MIMO SOR vibration test system. Finally, a verification test system with the vibration of a cantilever beam as the control object was established to verify the reliability and effectiveness of the proposed methods. The test results show that exceedance values can be accurately controlled within the tolerance range of the references, and the method can supply theoretical and practical support for mechanical engineering.
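
    The separation step can be sketched by correlating the mixed signal with reference carriers over an integer number of cycles, a simple stand-in for the correlation integral method named above. The tone frequency, amplitude, and noise level are invented.

```python
import numpy as np

# Synthetic sine-on-random signal: a 100 Hz tone buried in Gaussian noise.
rng = np.random.default_rng(5)
fs, n = 2048, 2048 * 8
t = np.arange(n) / fs
f0, amp, phase = 100.0, 2.0, 0.7
mixed = amp * np.sin(2 * np.pi * f0 * t + phase) + rng.normal(size=n)

# Correlate with in-phase and quadrature references; averaging over an
# integer number of cycles rejects the random component.
i_part = 2 * np.mean(mixed * np.sin(2 * np.pi * f0 * t))
q_part = 2 * np.mean(mixed * np.cos(2 * np.pi * f0 * t))
amp_est = np.hypot(i_part, q_part)   # recovered sinusoid amplitude
print(round(float(amp_est), 2))
```

    The residual after subtracting the recovered sinusoid can then be controlled as the random part, while the sinusoidal part is controlled separately, which is the essence of the compound strategy.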

  9. Characterizing performance of ultra-sensitive accelerometers

    NASA Technical Reports Server (NTRS)

    Sebesta, Henry

    1990-01-01

    An overview is given of the methodology and test results pertaining to the characterization of ultra-sensitive accelerometers. Two issues are of primary concern. First, the term ultra-sensitive accelerometer implies instruments whose noise floors and resolution are at the state of the art; hence, the typical approach of verifying an instrument's performance by measuring it against a still higher-quality instrument (or standard) is not practical. Second, it is difficult to find or create an environment with sufficiently low background acceleration: typical laboratory acceleration levels are several orders of magnitude above the noise floor of the most sensitive accelerometers, and this background must be treated as unknown since the best instrument available is the one being tested. A test methodology was developed in which two or more like instruments are subjected to the same, but unknown, background acceleration. Appropriately selected spectral analysis techniques are used to separate the sensors' output spectra into coherent and incoherent components. The coherent part corresponds to the background acceleration measured by the sensors under test; the incoherent part is attributed to sensor noise and to data acquisition and processing noise. The method works well for estimating noise floors 40 to 50 dB below the motion applied to the test accelerometers. The accelerometers being tested are intended for use as feedback sensors in a system to actively stabilize an inertial guidance component test platform.
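The coherent/incoherent separation described in this abstract can be sketched with standard cross-spectral tools. The function below is an illustrative reconstruction (its name and parameters are assumptions, not taken from the report), using Welch-averaged spectra and the magnitude-squared coherence:

```python
import numpy as np
from scipy import signal

def split_coherent_incoherent(x, y, fs, nperseg=256):
    """Split sensor x's output PSD into a part coherent with sensor y
    (the shared background acceleration) and an incoherent remainder
    (sensor/DAQ noise). Illustrative sketch, not the report's code."""
    f, pxx = signal.welch(x, fs=fs, nperseg=nperseg)
    _, cxy = signal.coherence(x, y, fs=fs, nperseg=nperseg)
    coherent = cxy * pxx            # attributed to the common input
    incoherent = (1.0 - cxy) * pxx  # attributed to noise
    return f, coherent, incoherent
```

With many Welch averages, the incoherent part estimates the noise floor even when it lies far below the motion applied to both sensors.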

  10. Noise deconvolution based on the L1-metric and decomposition of discrete distributions of postsynaptic responses.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1997-04-25

    A statistical approach to the analysis of amplitude fluctuations of postsynaptic responses is described. It includes (1) minimisation in the space of distribution functions using an L1-metric, with linear programming methods applied to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; and (2) deconvolution of the resulting discrete distribution, determining the release probabilities and the quantal amplitude for cases with a small number (<5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios that mimicked those observed in physiological experiments. In computer simulations, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions, overcoming 'over-fitting' phenomena, and constraining the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.

  11. Determination of main components and anaerobic rumen digestibility of aquatic plants in vitro using near-infrared-reflectance spectroscopy.

    PubMed

    Yue, Zheng-Bo; Zhang, Meng-Lin; Sheng, Guo-Ping; Liu, Rong-Hua; Long, Ying; Xiang, Bing-Ren; Wang, Jin; Yu, Han-Qing

    2010-04-01

    A near-infrared reflectance (NIR) spectroscopy-based method is established to determine the main components of aquatic plants as well as their anaerobic rumen biodegradability. The developed method is more rapid and accurate than conventional chemical analysis and biodegradability tests. Moisture, volatile solids, Klason lignin, and ash in whole aquatic plants could be accurately predicted with coefficients of determination (r²) of 0.952, 0.916, 0.939, and 0.950, respectively. In addition, the anaerobic rumen biodegradability of aquatic plants, expressed as biogas and methane yields, could also be predicted well. Continuous wavelet transform pretreatment of the NIR spectral data greatly enhances the robustness and predictive ability of the analysis. These results indicate that NIR spectroscopy can be used to predict the main components of aquatic plants and their anaerobic biodegradability. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  12. Two-Component Noncollinear Time-Dependent Spin Density Functional Theory for Excited State Calculations.

    PubMed

    Egidi, Franco; Sun, Shichao; Goings, Joshua J; Scalmani, Giovanni; Frisch, Michael J; Li, Xiaosong

    2017-06-13

    We present a linear response formalism for the description of the electronic excitations of a noncollinear reference defined via Kohn-Sham spin density functional methods. A set of auxiliary variables, defined using the density and noncollinear magnetization density vector, allows the generalization of spin density functional kernels commonly used in collinear DFT to noncollinear cases, including local density, GGA, meta-GGA and hybrid functionals. Working equations and derivations of functional second derivatives with respect to the noncollinear density, required in the linear response noncollinear TDDFT formalism, are presented in this work. This formalism takes all components of the spin magnetization into account independent of the type of reference state (open or closed shell). As a result, the method introduced here is able to afford a nonzero local xc torque on the spin magnetization while still satisfying the zero-torque theorem globally. The formalism is applied to a few test cases using the variational exact-two-component reference including spin-orbit coupling to illustrate the capabilities of the method.

  13. Tuning the Stiffness Balance Using Characteristic Frequencies as a Criterion for a Superconducting Gravity Gradiometer.

    PubMed

    Liu, Xikai; Ma, Dong; Chen, Liang; Liu, Xiangdong

    2018-02-08

    Tuning the stiffness balance is crucial to full-band common-mode rejection in a superconducting gravity gradiometer (SGG). A reliable method to do so has been proposed and experimentally tested. In the tuning scheme, the frequency response functions of the displacement of each test mass under common-mode accelerations were measured to determine a characteristic frequency for each test mass. A reduced difference between the characteristic frequencies of the two test masses served as the criterion for effective tuning. Since the measurement of the characteristic frequencies does not depend on the scale factors of the displacement detection, the stiffness can be tuned independently. We tested this new method on a single-component SGG and obtained a two-order-of-magnitude reduction in stiffness mismatch.

  14. Status of the Flooding Fragility Testing Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, C. L.; Savage, B.; Bhandari, B.

    2016-06-01

    This report provides an update on research addressing nuclear power plant component reliability under flooding conditions. The research includes use of the Component Flooding Evaluation Laboratory (CFEL), where individual components and component subassemblies will be tested to failure under various flooding conditions. The resulting component reliability data can then be incorporated into risk simulation strategies to provide a more thorough representation of overall plant risk. The CFEL development strategy consists of four interleaved phases. Phase 1 addresses the design and application of CFEL with water-rise and water-spray capabilities, allowing testing of passive and active components, including fully electrified components. Phase 2 addresses research into wave generation techniques, followed by the design and addition of a wave generation capability to CFEL. Phase 3 addresses methodology development activities, including small-scale component testing, development of a full-scale component testing protocol, and simulation techniques including Smoothed Particle Hydrodynamics (SPH) based computer codes. Phase 4 involves full-scale component testing, including work in a surrogate CFEL testing apparatus.

  15. Analysis of the torsional storage modulus of human hair and its relation to hair morphology and cosmetic processing.

    PubMed

    Wortmann, Franz J; Wortmann, Gabriele; Haake, Hans-Martin; Eisfeld, Wolf

    2014-01-01

    Measurements of three different hair samples (virgin and treated) by the torsional pendulum method (22 °C, 22% RH) show a systematic decrease of the torsional storage modulus G' with increasing fiber diameter, i.e., polar moment of inertia; G' is therefore not a material constant for hair. This dependence introduces a systematic component of data variance that significantly limits the torsional method for cosmetic claim support. Fitting the data with a core/shell model for cortex and cuticle makes it possible to separate this systematic component of variance and greatly enhance the discriminative power of the test. The fitting procedure also yields values for the torsional storage moduli of the morphological components, confirming that the cuticle modulus is substantially higher than that of the cortex. The results give consistent insight into the changes imparted to the morphological components by the cosmetic treatments.
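The core/shell idea can be illustrated with a small fit: in torsion, each annulus contributes in proportion to its share of the polar moment of inertia, so the measured modulus is a fourth-power weighted mix of cortex and cuticle moduli. The sketch below is hypothetical (the functional form and the cuticle thickness value are assumptions for illustration, not the paper's exact model):

```python
import numpy as np

def fit_core_shell_moduli(radius_um, g_measured, t_cuticle_um=2.5):
    """Least-squares fit of a core/shell torsion model:
    G'(R) = [G_cortex*(R-t)^4 + G_cuticle*(R^4-(R-t)^4)] / R^4.
    With the cuticle thickness t fixed, the model is linear in the
    two moduli. t = 2.5 um is an assumed, illustrative value."""
    q = ((radius_um - t_cuticle_um) / radius_um) ** 4  # cortex share of polar moment
    A = np.column_stack([q, 1.0 - q])                  # weights for cortex, cuticle
    (g_cortex, g_cuticle), *_ = np.linalg.lstsq(A, g_measured, rcond=None)
    return g_cortex, g_cuticle
```

Because the cuticle weight (1 - q) shrinks rapidly with radius, G' falls toward the cortex value for thick fibers, which is consistent with the diameter dependence reported when the cuticle is the stiffer phase.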

  16. Single-shot color fringe projection for three-dimensional shape measurement of objects with discontinuities.

    PubMed

    Dai, Meiling; Yang, Fujun; He, Xiaoyuan

    2012-04-20

    A simple but effective fringe projection profilometry method is proposed to measure 3D shape using a single snapshot of a color sinusoidal fringe pattern. A color pattern encoding a sinusoidal fringe (as the red component) and a uniform-intensity pattern (as the blue component) is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color pattern is separated into its RGB components, and the red channel is divided by the blue channel to suppress the spatially varying reflectance. Shape information of the tested object is decoded by applying an arcsine algorithm to the normalized fringe pattern with subpixel resolution. Where fringe discontinuities are caused by height steps or spatially isolated surfaces, the separated blue component is binarized and used to correct the phase demodulation. A simple and robust method is also introduced to compensate for the nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
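The channel-division and arcsine steps can be sketched as follows. The normalization constants are assumptions (a hypothetical red channel of the form r·(0.5 + 0.5·sin φ) over a uniform blue channel r), not the paper's exact encoding:

```python
import numpy as np

def arcsine_demodulate(red, blue, eps=1e-9):
    """Divide the sinusoidal red channel by the uniform blue channel so
    the surface reflectivity r(x, y) cancels, then recover the wrapped
    phase with arcsin. Assumes red = r*(0.5 + 0.5*sin(phi)), blue = r."""
    ratio = red / (blue + eps)                 # reflectivity cancels here
    s = np.clip(2.0 * ratio - 1.0, -1.0, 1.0)  # back to sin(phi) in [-1, 1]
    return np.arcsin(s)                        # wrapped phase in [-pi/2, pi/2]
```

The arcsine returns the wrapped phase only; as in the paper, unwrapping and discontinuity handling (via the binarized blue channel) would follow this step.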

  17. Data collection and analysis software development for rotor dynamics testing in spin laboratory

    NASA Astrophysics Data System (ADS)

    Abdul-Aziz, Ali; Arble, Daniel; Woike, Mark

    2017-04-01

    Gas turbine engine components undergo high rotational loading and other complex environmental conditions. Such an operating environment can lead these components to develop damage and cracks that may cause catastrophic failure in flight. Traditional crack detection and health monitoring methodologies rely on periodic routine maintenance and nondestructive inspections that often involve disassembly of the engine and its components. These methods also do not offer adequate information about faults, especially if the faults are subsurface or not clearly evident. At NASA Glenn Research Center, the rotor dynamics laboratory is developing newer techniques, highly dependent on sensor technology, to enable health monitoring and prediction of damage and cracks in rotor disks. These approaches are noninvasive and relatively economical. Spin tests are performed using a subscale test article mimicking a turbine rotor disk under rotational load. Non-contact instruments such as capacitive and microwave sensors are used to measure the blade tip gap displacement and blade vibration characteristics in an attempt to develop a physics-based model to assess and predict faults in the rotor disk. Data collection is a major component of this experimental-analytical procedure, and as a result an older version of the data acquisition software, based on LabVIEW, has been upgraded to support efficient test running and analysis. Outcomes obtained from the test data, related experimental and analytical rotor dynamics modeling, and key features of the updated software are presented and discussed.

  18. Technology-delivered adaptations of motivational interviewing for health-related behaviors: A systematic review of the current research

    PubMed Central

    Shingleton, Rebecca M.; Palfai, Tibor P.

    2015-01-01

    Objectives The aims of this paper were to describe and evaluate the methods and efficacy of technology-delivered motivational interviewing interventions (TAMIs), discuss the challenges and opportunities of TAMIs, and provide a framework for future research. Methods We reviewed studies that reported using motivational interviewing (MI) based components delivered via technology and conducted ratings on technology description, comprehensiveness of MI, and study methods. Results The majority of studies were fully-automated and included at least one form of media rich technology to deliver the TAMI. Few studies provided complete descriptions of how MI components were delivered via technology. Of the studies that isolated the TAMI effects, positive changes were reported. Conclusion Researchers have used a range of technologies to deliver TAMIs suggesting feasibility of these methods. However, there are limited data regarding their efficacy, and strategies to deliver relational components remain a challenge. Future research should better characterize the components of TAMIs, empirically test the efficacy of TAMIs with randomized controlled trials, and incorporate fidelity measures. Practice Implications TAMIs are feasible to implement and well accepted. These approaches offer considerable potential to reduce costs, minimize therapist and training burden, and expand the range of clients that may benefit from adaptations of MI. PMID:26298219

  19. Weighted divergence correction scheme and its fast implementation

    NASA Astrophysics Data System (ADS)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2017-05-01

    Forcing experimental volumetric velocity fields to satisfy mass conservation has proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS and WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are used to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer; WDCS achieves better performance than DCS in improving some flow statistics.

  20. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with the fault-free performance calculated by a baseline engine performance model. Expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic, and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often used for engine fault diagnosis because of their good learning performance, but they suffer from reduced accuracy and long training times when the learning database is large, and they require a very complex structure to effectively identify single or multiple gas path component faults. This work inversely builds a baseline performance model of a turboprop engine for a high-altitude UAV from measured performance data, and proposes a fault diagnostic system using the baseline model together with artificial intelligence methods, namely Fuzzy Logic and a Neural Network. The proposed system first isolates the faulty components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault learning database generated from the baseline performance model. The NN is trained with the Feed-Forward Back-Propagation (FFBP) method. Finally, several test examples verify that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
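A generic FFBP training loop of the kind referred to above can be sketched as follows (a minimal one-hidden-layer network with hand-coded back-propagation; the architecture, learning rate, and toy target are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def train_ffbp(X, Y, hidden=8, lr=0.2, epochs=3000, seed=0):
    """Minimal one-hidden-layer feed-forward network trained with
    back-propagation (full-batch gradient descent, tanh hidden layer,
    linear output). A generic FFBP sketch, not the paper's network."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1], hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, Y.shape[1])) * 0.5
    b2 = np.zeros(Y.shape[1])
    n = X.shape[0]
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # forward pass
        P = H @ W2 + b2
        dP = (P - Y) / n                  # backward pass (MSE gradient)
        dH = (dP @ W2.T) * (1.0 - H**2)
        W2 -= lr * (H.T @ dP)
        b2 -= lr * dP.sum(axis=0)
        W1 -= lr * (X.T @ dH)
        b1 -= lr * dH.sum(axis=0)
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2
```

In a diagnostic setting, X would hold measurement deviations and Y the component fault quantities from the fault learning database; here a toy mapping stands in for that data.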

  1. Analytical Bias Exceeding Desirable Quality Goal in 4 out of 5 Common Immunoassays: Results of a Native Single Serum Sample External Quality Assessment Program for Cobalamin, Folate, Ferritin, Thyroid-Stimulating Hormone, and Free T4 Analyses.

    PubMed

    Kristensen, Gunn B B; Rustad, Pål; Berg, Jens P; Aakre, Kristin M

    2016-09-01

    We undertook this study to evaluate method differences for 5 components analyzed by immunoassays, to explore whether the use of method-dependent reference intervals may compensate for method differences, and to investigate commutability of external quality assessment (EQA) materials. Twenty fresh native single serum samples, a fresh native serum pool, Nordic Federation of Clinical Chemistry Reference Serum X (serum X) (serum pool), and 2 EQA materials were sent to 38 laboratories for measurement of cobalamin, folate, ferritin, free T4, and thyroid-stimulating hormone (TSH) by 5 different measurement procedures [Roche Cobas (n = 15), Roche Modular (n = 4), Abbott Architect (n = 8), Beckman Coulter Unicel (n = 2), and Siemens ADVIA Centaur (n = 9)]. The target value for each component was calculated based on the mean of method means or measured by a reference measurement procedure (free T4). Quality specifications were based on biological variation. Local reference intervals were reported from all laboratories. Method differences that exceeded acceptable bias were found for all components except folate. Free T4 differences from the uncommonly used reference measurement procedure were large. Reference intervals differed between measurement procedures but also within 1 measurement procedure. The serum X material was commutable for all components and measurement procedures, whereas the EQA materials were noncommutable in 13 of 50 occasions (5 components, 5 methods, 2 EQA materials). The bias between the measurement procedures was unacceptably large in 4/5 tested components. Traceability to reference materials as claimed by the manufacturers did not lead to acceptable harmonization. Adjustment of reference intervals in accordance with method differences and use of commutable EQA samples are not implemented commonly. © 2016 American Association for Clinical Chemistry.

  2. Advanced Turbine Technology Applications Project (ATTAP)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    ATTAP activities during the past year included test-bed engine design and development, ceramic component design, materials and component characterization, ceramic component process development and fabrication, ceramic component rig testing, and test-bed engine fabrication and testing. Significant technical challenges remain, but all areas exhibited progress. Test-bed engine design and development included engine mechanical design, combustion system design, alternate aerodynamic designs of gasifier scrolls, and engine system integration aimed at upgrading the AGT-5 from a 1038 C (1900 F) metal engine to a durable 1372 C (2500 F) structural ceramic component test-bed engine. ATTAP-defined ceramic and associated ceramic/metal component design activities completed include the ceramic gasifier turbine static structure, the ceramic gasifier turbine rotor, ceramic combustors, the ceramic regenerator disk, the ceramic power turbine rotors, and the ceramic/metal power turbine static structure. The material and component characterization efforts included the testing and evaluation of seven candidate materials and three development components. Ceramic component process development and fabrication proceeded for the gasifier turbine rotor, gasifier turbine scroll, gasifier turbine vanes and vane platform, extruded regenerator disks, and thermal insulation. Component rig activities included the development of both rigs and the necessary test procedures, and conduct of rig testing of the ceramic components and assemblies. Test-bed engine fabrication, testing, and development supported improvements in ceramic component technology that permit the achievement of both program performance and durability goals. Total test time in 1991 amounted to 847 hours, of which 128 hours were engine testing, and 719 were hot rig testing.

  3. Modification of a compressor performance test bench for liquid slugging observation in refrigeration compressors

    NASA Astrophysics Data System (ADS)

    Ola, Max; Thomas, Christiane; Hesse, Ullrich

    2017-08-01

    Compressor performance test procedures are defined by the standard DIN EN 13771, which suggests a variety of possible calorimeter and flow-rate measurement methods. One option is to select two independent measurement methods, in which case the accuracies of both are essential. The second option requires only one method, but the measurement accuracy of the device used has to be verified and recalibrated on a regular basis. The compressor performance test facility at the Technische Universitaet Dresden uses a calibrated flow measurement sensor, a hot-gas bypass, and a mixed-flow heat exchanger. The test bench can easily be modified for tests of various compressor types at different operating ranges and with various refrigerants. In addition, the modified test setup enables the investigation of long-term liquid slugging and its effects on the compressor. The modification comprises observational components, adjustments to the control system, safety measures, and a customized oil recirculation system for compressors that do not contain an integrated oil sump or oil-level regulation system. This paper describes the setup of the test bench, its functional principle, the key modifications, first test results, and an evaluation of the energy balance.

  4. Gene set analysis using variance component tests.

    PubMed

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases result from the joint alteration of numerous genes. Genes often coordinate as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most existing gene set analysis methods do not fully account for the correlation among genes. Here we propose to exploit this important feature of gene sets to improve statistical power. We model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for gene set effects that assumes a common distribution for the regression coefficients in the multivariate linear regression model, and calculate p-values using permutation and a scaled chi-square approximation. Simulations show that the type I error is protected under different choices of working covariance matrix and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which the correlation among genes in a gene set is ignored. Using both simulated data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework that directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulation and a diabetes microarray dataset.
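The permutation step can be illustrated with a simplified statistic (a sum of squared per-gene association scores, standing in for the TEGS variance-component statistic; the exact TEGS form and its working covariance are in the paper):

```python
import numpy as np

def gene_set_permutation_pvalue(X, y, n_perm=999, seed=0):
    """Permutation p-value for association between a binary covariate y
    (length n) and a gene-set expression matrix X (n samples x p genes).
    Statistic: sum over genes of squared covariate-expression scores.
    A simplified stand-in for the TEGS variance-component statistic."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    stat = lambda lbl: float(np.sum(((lbl - lbl.mean()) @ Xc) ** 2))
    observed = stat(y)
    hits = sum(stat(rng.permutation(y)) >= observed for _ in range(n_perm))
    return (1 + hits) / (n_perm + 1)  # add-one correction keeps p > 0
```

Permuting the covariate rather than individual gene columns preserves the correlation structure within the gene set, which is the feature the paper emphasizes.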

  5. The Partially Flipped Classroom: The Effects of Flipping a Module on "Junk Science" in a Large Methods Course

    ERIC Educational Resources Information Center

    Burgoyne, Stephanie; Eaton, Judy

    2018-01-01

    Flipped classrooms are gaining popularity, especially in psychology statistics courses. However, not all courses lend themselves to a fully flipped design, and some instructors might not want to commit to flipping every class. We tested the effectiveness of flipping just one component (a module on junk science) of a large methods course. We…

  6. Nondestructive structural evaluation of wood floor systems with a vibration technique.

    Treesearch

    Xiping Wang; Robert J. Ross; Lawrence Andrew Soltis

    2002-01-01

    The objective of this study was to determine if transverse vibration methods could be used to effectively assess the structural integrity of wood floors as component systems. A total of 10 wood floor systems, including 3 laboratory-built floor sections and 7 in-place floors in historic buildings, were tested. A forced vibration method was applied to the floor systems...

  7. Investigation of noise in gear transmissions by the method of mathematical smoothing of experiments

    NASA Technical Reports Server (NTRS)

    Sheftel, B. T.; Lipskiy, G. K.; Ananov, P. P.; Chernenko, I. K.

    1973-01-01

    A rotatable central component smoothing method is used to analyze rotating gear noise spectra. A matrix is formulated in which the randomized rows correspond to various tests and the columns to factor values. Canonical analysis of the obtained regression equation permits the calculation of optimal speed and load at a previous assigned noise level.

  8. Efficient techniques for forced response involving linear modal components interconnected by discrete nonlinear connection elements

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; O'Callahan, John

    2009-01-01

    Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested separately, and their individual behavior may be essentially linear compared to that of the total assembled system; however, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine-tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.

  9. [Effects of endophytic fungi from Dendrobium officinale on host growth and components metabolism of tissue culture seedlings].

    PubMed

    Zhu, Bo; Liu, Jing-Jing; Si, Jin-Ping; Qin, Lu-Ping; Han, Ting; Zhao, Li; Wu, Ling-Shang

    2016-05-01

    This paper studies the effects of endophytic fungi from D. officinale cultivated on living trees on the growth and component metabolism of tissue culture seedlings. Morphological characteristics and agronomic characters of tissue culture seedlings infected and uninfected by endophytic fungi were observed and measured. Polysaccharide and alcohol-soluble extract contents were determined by the phenol-sulfuric acid method and the hot-dip method, respectively. The monosaccharide composition of the polysaccharides and the components of the alcohol-soluble extracts were analyzed by pre-column derivatization HPLC and HPLC, respectively. The results showed that the endophytic fungi could change the purpling of the stem nodes and could affect the contents and constitutions of the polysaccharides and alcohol-soluble extracts. All strains tested except DO34 promoted growth and polysaccharide content of the tissue culture seedlings, and all except DO12 promoted the accumulation of mannose. Furthermore, DO18, DO19, and DO120 increased the alcohol-soluble extracts. On this basis, four superior strains were selected for research on the mechanisms between endophytic fungi and their hosts and for microbiology engineering. Copyright© by the Chinese Pharmaceutical Association.

  10. [Discrimination of types of polyacrylamide based on near infrared spectroscopy coupled with least square support vector machine].

    PubMed

    Zhang, Hong-Guang; Yang, Qin-Min; Lu, Jian-Gang

    2014-04-01

In this paper, a novel discriminant methodology based on near infrared spectroscopic analysis and the least square support vector machine was proposed for rapid and nondestructive discrimination of different types of Polyacrylamide. The diffuse reflectance spectra of samples of Non-ionic Polyacrylamide, Anionic Polyacrylamide and Cationic Polyacrylamide were measured. Principal component analysis was then applied to reduce the dimension of the spectral data and extract the principal components. The first three principal components were used for cluster analysis of the three different types of Polyacrylamide. These principal components were also used as inputs to the least square support vector machine model. The parameters and the number of principal components used as inputs to the model were optimized through cross validation based on a grid search. Sixty samples of each type of Polyacrylamide were collected, giving a total of 180 samples. Of these, 135 samples (45 of each type) were randomly split into a training set to build the calibration model, and the remaining 45 samples were used as a test set to evaluate the performance of the developed model. In addition, 5 Cationic Polyacrylamide samples and 5 Anionic Polyacrylamide samples adulterated with different proportions of Non-ionic Polyacrylamide were prepared to show the feasibility of the proposed method for discriminating adulterated Polyacrylamide samples. The prediction error threshold for each type of Polyacrylamide was determined by an F significance test based on the cross-validation prediction errors of the training set for the corresponding type. The discrimination accuracy of the built model was 100% on the test set. The predictions for the 10 adulterated samples are also presented, and all of them were accurately discriminated as adulterated. The overall results demonstrate that the proposed discrimination method can rapidly and nondestructively discriminate the different types of Polyacrylamide and adulterated Polyacrylamide samples, and offers a new approach to discriminating the types of Polyacrylamide.
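The PCA-then-classify pipeline described above can be sketched in a few lines. This is an illustrative toy only: the 4-band "spectra", class means, and noise level are invented, PCA is computed by simple power iteration, and a nearest-centroid classifier stands in for the paper's LS-SVM.

```python
import math, random

def pca_components(X, k=2, iters=300):
    """First k principal components via power iteration with deflation (minimal sketch)."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    comps = []
    for _ in range(k):
        v = [1.0 / math.sqrt(d)] * d
        for _ in range(iters):
            w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
            norm = math.sqrt(sum(x * x for x in w)) or 1.0
            v = [x / norm for x in w]
        lam = sum(v[a] * sum(C[a][b] * v[b] for b in range(d)) for a in range(d))
        comps.append(v)
        C = [[C[a][b] - lam * v[a] * v[b] for b in range(d)] for a in range(d)]  # deflate
    return means, comps

def project(x, means, comps):
    xc = [x[j] - means[j] for j in range(len(x))]
    return [sum(v[j] * xc[j] for j in range(len(xc))) for v in comps]

# invented 4-band "spectra" for three product types (stand-in for real NIR data)
random.seed(7)
centers = {"nonionic": [1, 0, 0, 0], "anionic": [0, 1, 0, 0], "cationic": [0, 0, 1, 0]}
train = [(label, [c + random.gauss(0, 0.05) for c in mu])
         for label, mu in centers.items() for _ in range(20)]

means, comps = pca_components([x for _, x in train], k=2)
centroids = {}
for label in centers:
    pts = [project(x, means, comps) for lb, x in train if lb == label]
    centroids[label] = [sum(p[i] for p in pts) / len(pts) for i in range(2)]

def classify(x):
    """Nearest centroid in PC space; a stand-in for the LS-SVM of the paper."""
    p = project(x, means, comps)
    return min(centroids, key=lambda lb: sum((a - b) ** 2 for a, b in zip(p, centroids[lb])))
```

The real study would tune the number of components and the classifier by cross validation rather than fixing k = 2 as done here.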

  11. Functional Heart Valve Scaffolds Obtained by Complete Decellularization of Porcine Aortic Roots in a Novel Differential Pressure Gradient Perfusion System

    PubMed Central

    Sierad, Leslie Neil; Shaw, Eliza Laine; Bina, Alexander; Brazile, Bryn; Rierson, Nicholas; Patnaik, Sourav S.; Kennamer, Allison; Odum, Rebekah; Cotoi, Ovidiu; Terezia, Preda; Branzaniuc, Klara; Smallwood, Harrison; Deac, Radu; Egyed, Imre; Pavai, Zoltan; Szanto, Annamaria; Harceaga, Lucian; Suciu, Horatiu; Raicea, Victor; Olah, Peter; Simionescu, Agneta; Liao, Jun; Movileanu, Ionela

    2015-01-01

There is a great need for living valve replacements for patients of all ages. Such constructs could be built by tissue engineering, guided by the unique structure and biology of the aortic root. The aortic valve root is composed of several different tissues, and careful structural and functional consideration has to be given to each segment and component. Previous work has shown that immersion techniques are inadequate for whole-root decellularization, with the aortic wall segment being particularly resistant to decellularization. The aim of this study was to develop a differential pressure gradient perfusion system rigorous enough to decellularize the aortic root wall while gentle enough to preserve the integrity of the cusps. Fresh porcine aortic roots were subjected to various regimens of perfusion decellularization using detergents and enzymes, and the results were compared to immersion-decellularized roots. Success criteria for evaluation of each root segment (cusp, muscle, sinus, wall) for decellularization completeness, tissue integrity, and valve functionality were defined using complementary methods of cell analysis (histology with nuclear and matrix stains and DNA analysis), biomechanics (biaxial and bending tests), and physiologic heart valve bioreactor testing (with advanced image analysis of open-close cycles and geometric orifice area measurement). Fully acellular porcine roots treated with the optimized method exhibited preserved macroscopic structures and microscopic matrix components, which translated into conserved anisotropic mechanical properties, including bending, and excellent valve functionality when tested in aortic flow and pressure conditions.
This study highlighted the importance of (1) adapting decellularization methods to specific target tissues, (2) combining several methods of cell analysis compared to relying solely on histology, (3) developing relevant valve-specific mechanical tests, and (4) in vitro testing of valve functionality. PMID:26467108

  12. Apparatus and method for defect testing of integrated circuits

    DOEpatents

    Cole, Jr., Edward I.; Soden, Jerry M.

    2000-01-01

    An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V.sub.DD, to an IC under test and measures a transient voltage component, V.sub.DDT, signal that is produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V.sub.DDT signal can be used to distinguish between defective and defect-free (i.e. known good) ICs. The V.sub.DDT signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.
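The pass/fail decision described in the patent abstract can be illustrated with a simple amplitude discriminant. The tolerance and the known-good peak value below are invented for the sketch, not values from the patent, and a real tester would also examine the time delay of the V.sub.DDT signal.

```python
def classify_ic(vddt_trace, good_peak, tol=0.10):
    """Flag an IC whose peak V_DDT amplitude deviates by more than a fractional
    tolerance from a known-good signature (hypothetical threshold rule)."""
    peak = max(abs(v) for v in vddt_trace)
    return "defect-free" if abs(peak - good_peak) <= tol * good_peak else "defective"
```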

  13. Estimation of sample size and testing power (part 5).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

Estimation of sample size and testing power is an important component of research design. This article introduces methods for estimating sample size and testing power in difference tests for quantitative and qualitative data with the single-group, paired, or crossover design. Specifically, it presents formulas for sample size and testing power estimation for these three designs, their realization based on the formulas and on the POWER procedure of SAS software, and elaborates them with examples, which will benefit researchers in implementing the repetition principle.
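For the paired design, the standard two-sided normal-approximation formula can be sketched as follows. This is the generic z-approximation, not a reproduction of the article's formulas or of the SAS POWER procedure; `delta` is the mean within-pair difference to detect and `sd_diff` the standard deviation of the differences.

```python
import math
from statistics import NormalDist

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
    """Pairs needed to detect mean within-pair difference delta (two-sided z approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(((z_a + z_b) * sd_diff / delta) ** 2)

def achieved_power(n, delta, sd_diff, alpha=0.05):
    """Approximate power of the two-sided test with n pairs."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(delta / sd_diff * math.sqrt(n) - z_a)
```

For example, detecting a half-standard-deviation difference at 80% power requires 32 pairs under this approximation.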

  14. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    PubMed Central

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

Background Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  15. Management System for Integrating Basic Skills 2 Training and Unit Training Programs

    DTIC Science & Technology

    1983-09-01

Social Sciences. NOTE: The findings in this report are not to be construed as an official Department of the Army position, unless so designated by other... This report describes methods used and results obtained in the design, development, and field test of a management system and curriculum components... for integrating the Army's Basic Skills Education Program, Phase II (BSEP II) and unit training programs. The curriculum components are designed to

  16. Optical contacting of quartz

    NASA Technical Reports Server (NTRS)

    Payne, L. L.

    1982-01-01

The strength of the bond between optically contacted quartz surfaces was investigated. The Gravity Probe-B (GP-B) experiment to test the theories of general relativity requires extremely precise measurements. The quartz components of the instruments to make these measurements must be held together in a very stable unit. Optical contacting is suggested as a possible method of joining these components. The fundamental forces involved in optical contacting are reviewed, and calculations of these forces are related to the results obtained in experiments.

  17. Prompt identification of tsunamigenic earthquakes from 3-component seismic data

    NASA Astrophysics Data System (ADS)

    Kundu, Ajit; Bhadauria, Y. S.; Basu, S.; Mukhopadhyay, S.

    2016-10-01

An Artificial Neural Network (ANN) based algorithm for prompt identification of shallow focus (depth < 70 km) tsunamigenic earthquakes at a regional distance is proposed in the paper. The promptness here refers to decision making as fast as 5 min after the arrival of the LR phase in the seismogram. The root mean square amplitudes of seismic phases recorded by a single 3-component station have been considered as inputs, besides location and magnitude. The trained ANN has been found to categorize 100% of the new earthquakes successfully as tsunamigenic or non-tsunamigenic. The proposed method has been corroborated by an alternate mapping technique of earthquake category estimation. The second method involves computation of focal parameters, estimation of the water volume displaced at the source, and eventually deciding the category of the earthquake. This method has been found to identify 95% of the new earthquakes successfully. Both methods have been tested using three-component broadband seismic data recorded at the PALK (Pallekele, Sri Lanka) station provided by IRIS, for earthquakes of magnitude 6 and above originating from the Sumatra region. The fair agreement between the methods ensures that a prompt alert system could be developed based on the proposed method. The method would prove extremely useful for regions that are not adequately instrumented for azimuthal coverage.
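The classification step can be sketched with a minimal single-layer perceptron, a stand-in for the paper's ANN. The feature ranges below (LR-phase RMS amplitude, focal depth, magnitude) are invented so that the two classes are separable; they are not the study's data.

```python
import random

def train_perceptron(X, y, max_epochs=2000, lr=0.1):
    """Minimal single-layer perceptron; a toy stand-in for the paper's ANN."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            if pred != yi:
                errors += 1
                w = [wj + lr * (yi - pred) * xj for wj, xj in zip(w, xi)]
                b += lr * (yi - pred)
        if errors == 0:  # converged on the training set
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# invented features: [RMS amplitude of the LR phase, focal depth / 100 km, magnitude];
# tsunamigenic events (label 1) get strong LR energy and shallow depth by construction
random.seed(3)
events = [([random.uniform(3.0, 5.0), random.uniform(0.05, 0.5), random.uniform(6.5, 9.0)], 1)
          for _ in range(30)]
events += [([random.uniform(0.3, 1.5), random.uniform(0.3, 6.0), random.uniform(6.0, 8.0)], 0)
           for _ in range(30)]
X = [f for f, _ in events]
y = [lbl for _, lbl in events]
w, b = train_perceptron(X, y)
```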

  18. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

Construction of 3D geological visualization systems has attracted much concern in the GIS, computer modeling, simulation, and visualization fields. Such systems not only effectively support geological interpretation and analysis work, but can also help raise the level of professional geosciences education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, the borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component was utilized to provide the capability of accessing the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Last, borehole data acquired from geological surveys were used to test the system, and the test results have shown that the methods of this paper have certain application value.
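The data-access step can be sketched with Python's stdlib `sqlite3` standing in for the paper's SQL Server plus JDBC stack. The borehole table schema and records below are hypothetical.

```python
import sqlite3

# hypothetical borehole table; sqlite3 stands in for the paper's SQL Server + JDBC stack
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE borehole (id TEXT, top_depth_m REAL, lithology TEXT)")
conn.executemany("INSERT INTO borehole VALUES (?, ?, ?)",
                 [("BH-01", 12.5, "clay"), ("BH-01", 30.0, "sandstone"),
                  ("BH-02", 18.2, "shale")])

def layers_for(bh_id):
    """Return (top depth, lithology) rows for one borehole, ordered by depth,
    as a 3D viewer would request them for rendering."""
    cur = conn.execute("SELECT top_depth_m, lithology FROM borehole "
                       "WHERE id = ? ORDER BY top_depth_m", (bh_id,))
    return cur.fetchall()
```

In the paper's architecture this query layer sits behind the applet UI; here it is shown standalone.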

  19. An examination of effect estimation in factorial and standardly-tailored designs

    PubMed Central

    Allore, Heather G; Murphy, Terrence E

    2012-01-01

Background Many clinical trials are designed to test an intervention arm against a control arm wherein all subjects are equally eligible for all interventional components. Factorial designs have extended this to test multiple intervention components and their interactions. A newer design, referred to as a ‘standardly-tailored’ design, is a multicomponent interventional trial that applies individual interventional components to modify risk factors identified a priori and tests whether health outcomes differ between treatment arms. Standardly-tailored designs do not require that all subjects be eligible for every interventional component. Although standardly-tailored designs yield an estimate for the net effect of the multicomponent intervention, it has not yet been shown if they permit separate, unbiased estimation of individual component effects. The ability to estimate the most potent interventional components has direct bearing on conducting second stage translational research. Purpose We present statistical issues related to the estimation of individual component effects in trials of geriatric conditions using factorial and standardly-tailored designs. The medical community is interested in second stage translational research involving the transfer of results from a randomized clinical trial to a community setting. Before such research is undertaken, main effects and synergistic and/or antagonistic interactions between them should be identified. Knowledge of the relative strength and direction of the effects of the individual components and their interactions facilitates the successful transfer of clinically significant findings and may potentially reduce the number of interventional components needed. Therefore the current inability of the standardly-tailored design to provide unbiased estimates of individual interventional components is a serious limitation in their applicability to second stage translational research.
Methods We discuss estimation of individual component effects from the family of factorial designs and this limitation for standardly-tailored designs. We use the phrase ‘factorial designs’ to describe full-factorial designs and their derivatives including the fractional factorial, partial factorial, incomplete factorial and modified reciprocal designs. We suggest two potential directions for designing multicomponent interventions to facilitate unbiased estimates of individual interventional components. Results Full factorial designs and their variants are the most common multicomponent trial design described in the literature and differ meaningfully from standardly-tailored designs. Factorial and standardly-tailored designs result in similar estimates of net effect with different levels of precision. Unbiased estimation of individual component effects from a standardly-tailored design will require new methodology. Limitations Although clinically relevant in geriatrics, previous applications of standardly-tailored designs have not provided unbiased estimates of the effects of individual interventional components. Discussion Future directions to estimate individual component effects from standardly-tailored designs include applying D-optimal designs and creating independent linear combinations of risk factors analogous to factor analysis. Conclusion Methods are needed to extract unbiased estimates of the effects of individual interventional components from standardly-tailored designs. PMID:18375650
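As an illustration of why full factorial designs permit separate component estimates, main and interaction effects in a 2x2 factorial can be recovered directly from contrasts of cell means. The coded outcome model and effect sizes below are invented for the sketch.

```python
import random
from itertools import product

random.seed(1)

# hypothetical outcome model: two intervention components A and B coded -1/+1,
# with main effects 2.0 and 1.0 and an A-by-B interaction of 0.5
def outcome(a, b):
    return 10.0 + 2.0 * a + 1.0 * b + 0.5 * a * b + random.gauss(0, 0.01)

# full 2x2 factorial: every subject is eligible for every component
data = [(a, b, outcome(a, b)) for a, b in product([-1, 1], [-1, 1]) for _ in range(50)]

def half_effect(data, code):
    """Half the difference between mean outcomes at the high and low levels of a contrast;
    with -1/+1 coding this recovers the model coefficient."""
    hi = [y for a, b, y in data if code(a, b) > 0]
    lo = [y for a, b, y in data if code(a, b) < 0]
    return (sum(hi) / len(hi) - sum(lo) / len(lo)) / 2

a_eff = half_effect(data, lambda a, b: a)       # estimates 2.0
b_eff = half_effect(data, lambda a, b: b)       # estimates 1.0
ab_eff = half_effect(data, lambda a, b: a * b)  # estimates 0.5
```

In a standardly-tailored design, eligibility depends on risk factors, so these simple balanced contrasts are unavailable, which is exactly the estimation difficulty the paper discusses.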

  20. The application test system: Experiences to date and future plans

    NASA Technical Reports Server (NTRS)

    May, G. A.; Ashburn, P.; Hansen, H. L. (Principal Investigator)

    1979-01-01

The ATS analysis component is presented, focusing on the methods by which the varied data sources are used by the ATS analyst. Analyst training and initial processing of data are discussed, along with short- and long-term plans for the ATS.

  1. 40 CFR 265.112 - Closure plan; amendment of plan.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... residues and contaminated containment system components, equipment, structures, and soils during partial... contaminated soils, methods for sampling and testing surrounding soils, and criteria for determining the extent of decontamination necessary to satisfy the closure performance standard; and (5) A detailed...

  2. 40 CFR 264.112 - Closure plan; amendment of plan.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... residues and contaminated containment system components, equipment, structures, and soils during partial... contaminated soils, methods for sampling and testing surrounding soils, and criteria for determining the extent of decontamination required to satisfy the closure performance standard; and (5) A detailed...

  3. Microbial Resistant Test Method Development

    EPA Science Inventory

    Because humans spend most of their time in the indoor environment, environmental analysis of the quality of indoor air has become an important research topic. A major component of the aerosol in the indoor environment consists of biological particles, called bioaerosols, and fur...

  4. Flexural Properties of PLA Components Under Various Test Condition Manufactured by 3D Printer

    NASA Astrophysics Data System (ADS)

    Jaya Christiyan, K. G.; Chandrasekhar, U.; Venkateswarlu, K.

    2018-06-01

Rapid Prototyping (RP) technologies have emerged in the recent past as a fabrication method for obtaining engineering components. Desktop 3D printing, also referred to as additive layer manufacturing, is a powerful RP technique that can fabricate three-dimensional engineering components. In this method, 3D digital data are converted into a real product. In the present investigation, Polylactic Acid (PLA) was considered as the starting material. The flexural strength of the PLA material was evaluated using the 3-point bend test, as per the ASTM D790 standard. Specimens with flat (0°) and vertical (90°) orientations were considered, with layer thicknesses of 0.2, 0.25, and 0.3 mm. To fabricate these specimens, printing speeds of 38 and 52 mm/s were used, with a nozzle diameter of 0.4 mm and 40% infill density. Based on the experimental results, it was observed that the 0° orientation, 38 mm/s printing speed, and 0.2 mm layer thickness resulted in the maximum flexural strength compared to all other specimens. The improved flexural strength was attributed to the lower layer thickness (0.2 mm) compared with the specimens made with 0.25 and 0.30 mm layer thicknesses. It was concluded that flexural strength is strongly influenced by layer thickness, printing speed, and orientation, with lower layer thickness yielding higher strength.

  5. Stress analysis and damage evaluation of flawed composite laminates by hybrid-numerical methods

    NASA Technical Reports Server (NTRS)

    Yang, Yii-Ching

    1992-01-01

Structural components in flight vehicles often inherit flaws such as microcracks, voids, holes, and delamination. These defects degrade structures in the same way as damage incurred in service, such as impact, corrosion, and erosion. It is very important to know whether a structural component can remain useful and survive with these flaws and damage. To understand the behavior and limitations of such structural components, researchers usually perform experimental tests or theoretical analyses on structures with simulated flaws. However, neither approach has been completely successful. As Durelli states, 'Seldom does one method give a complete solution, with the most efficiency.' An example of this principle is seen in photomechanics, in which additional strain-gage testing can only average stresses at locations of high concentration. On the other hand, theoretical analyses, including numerical analyses, are implemented with simplified assumptions which may not reflect actual boundary conditions. Hybrid-numerical methods, which combine photomechanics and numerical analysis, have been used to correct this inefficiency since the 1950s, but their application was limited until the 1970s, when modern computer codes became available. In recent years, researchers have enhanced the data obtained from photoelasticity, laser speckle, holography, and moire interferometry as input for finite element analysis on metals. Nevertheless, little of this literature addresses composite laminates. Therefore, this research is dedicated to this highly anisotropic material.

  6. Errors incurred in profile reconstruction and methods for increasing inversion accuracies for occultation type measurements

    NASA Technical Reports Server (NTRS)

    Gross, S. H.; Pirraglia, J. A.

    1972-01-01

A method for augmenting the occultation experiment is described for slightly refractive media. This method, which permits separation of the components of the gradient of refractivity, appears applicable to most of the planets for a major portion of their atmospheres and ionospheres. The analytic theory is given, and the results of numerical tests with a radially and angularly varying model of an ionosphere are discussed.

  7. Hypothesis test of mediation effect in causal mediation model with high-dimensional continuous mediators.

    PubMed

    Huang, Yen-Tsung; Pan, Wen-Chi

    2016-06-01

    Causal mediation modeling has become a popular approach for studying the effect of an exposure on an outcome through a mediator. However, current methods are not applicable to the setting with a large number of mediators. We propose a testing procedure for mediation effects of high-dimensional continuous mediators. We characterize the marginal mediation effect, the multivariate component-wise mediation effects, and the L2 norm of the component-wise effects, and develop a Monte-Carlo procedure for evaluating their statistical significance. To accommodate the setting with a large number of mediators and a small sample size, we further propose a transformation model using the spectral decomposition. Under the transformation model, mediation effects can be estimated using a series of regression models with a univariate transformed mediator, and examined by our proposed testing procedure. Extensive simulation studies are conducted to assess the performance of our methods for continuous and dichotomous outcomes. We apply the methods to analyze genomic data investigating the effect of microRNA miR-223 on a dichotomous survival status of patients with glioblastoma multiforme (GBM). We identify nine gene ontology sets with expression values that significantly mediate the effect of miR-223 on GBM survival. © 2015, The International Biometric Society.
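The Monte-Carlo flavor of such a test can be sketched for a single mediator: draw from the sampling distributions of the two path estimates and examine the sign of their product. This is the generic Monte-Carlo indirect-effect test, not the authors' high-dimensional procedure, and the estimates and standard errors below are invented.

```python
import random

def mc_mediation_pvalue(a_hat, se_a, b_hat, se_b, n_draws=20000, seed=0):
    """Monte-Carlo two-sided test of the indirect effect a*b for one mediator.
    a_hat: exposure->mediator path estimate; b_hat: mediator->outcome path estimate."""
    rng = random.Random(seed)
    draws = [rng.gauss(a_hat, se_a) * rng.gauss(b_hat, se_b) for _ in range(n_draws)]
    est = a_hat * b_hat
    # fraction of draws on the far side of zero from the point estimate
    if est >= 0:
        tail = sum(1 for d in draws if d <= 0) / n_draws
    else:
        tail = sum(1 for d in draws if d >= 0) / n_draws
    return min(1.0, 2 * tail)
```

The high-dimensional setting of the paper replaces the single product with multivariate component-wise effects and their L2 norm, but the simulation-based significance logic is analogous.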

  8. A mixed method Poisson solver for three-dimensional self-gravitating astrophysical fluid dynamical systems

    NASA Technical Reports Server (NTRS)

    Duncan, Comer; Jones, Jim

    1993-01-01

    A key ingredient in the simulation of self-gravitating astrophysical fluid dynamical systems is the gravitational potential and its gradient. This paper focuses on the development of a mixed method multigrid solver of the Poisson equation formulated so that both the potential and the Cartesian components of its gradient are self-consistently and accurately generated. The method achieves this goal by formulating the problem as a system of four equations for the gravitational potential and the three Cartesian components of the gradient and solves them using a distributed relaxation technique combined with conventional full multigrid V-cycles. The method is described, some tests are presented, and the accuracy of the method is assessed. We also describe how the method has been incorporated into our three-dimensional hydrodynamics code and give an example of an application to the collision of two stars. We end with some remarks about the future developments of the method and some of the applications in which it will be used in astrophysics.
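A minimal stand-in for the solver idea: plain Jacobi relaxation on the 2D Poisson equation with a manufactured solution, plus centered differences for the Cartesian gradient components. The paper's code uses distributed relaxation with full multigrid V-cycles and solves for the gradient components self-consistently; this sketch omits both and is only meant to show the discretization.

```python
import math

def solve_poisson(f, n, iters=2000):
    """Plain Jacobi sweeps for the 5-point Laplacian on the unit square, zero Dirichlet BCs."""
    h = 1.0 / (n + 1)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1]
                                    - h * h * f(i * h, j * h))
        u = new
    return u, h

def gradient(u, h, i, j):
    """Centered-difference Cartesian components of grad(u) at interior node (i, j)."""
    return ((u[i + 1][j] - u[i - 1][j]) / (2 * h),
            (u[i][j + 1] - u[i][j - 1]) / (2 * h))

# manufactured solution u = sin(pi x) sin(pi y), so laplacian(u) = -2 pi^2 u
f = lambda x, y: -2 * math.pi ** 2 * math.sin(math.pi * x) * math.sin(math.pi * y)
u, h = solve_poisson(f, 15)
```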

  9. Lifetime Reliability Evaluation of Structural Ceramic Parts with the CARES/LIFE Computer Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker equation. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), Weibull's normal stress averaging method (NSA), or Batdorf's theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating cyclic fatigue parameter estimation and component reliability analysis with proof testing are included.
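The basic building blocks can be sketched in a few lines: the two-parameter Weibull survival probability, and the reliability gain from a proof test. This is a simplified uniaxial, unit-volume illustration with invented parameters; CARES/LIFE itself handles multiaxial stress states, size effects, and subcritical crack growth, none of which appear here.

```python
import math

def weibull_survival(sigma, m, sigma0):
    """Two-parameter Weibull probability of survival at applied stress sigma
    (m: Weibull modulus, sigma0: characteristic strength)."""
    return math.exp(-((sigma / sigma0) ** m))

def attenuated_reliability(sigma_service, sigma_proof, m, sigma0):
    """Reliability at service stress conditional on having survived a proof test:
    P(S > sigma_service | S > sigma_proof)."""
    if sigma_service <= sigma_proof:
        return 1.0
    return weibull_survival(sigma_service, m, sigma0) / weibull_survival(sigma_proof, m, sigma0)
```

Proof testing truncates the weak tail of the strength distribution, which is why the conditional reliability exceeds the unconditional one.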

  10. Influence of Sample Size of Polymer Materials on Aging Characteristics in the Salt Fog Test

    NASA Astrophysics Data System (ADS)

    Otsubo, Masahisa; Anami, Naoya; Yamashita, Seiji; Honda, Chikahisa; Takenouchi, Osamu; Hashimoto, Yousuke

Polymer insulators have been used worldwide because of superior properties compared with porcelain insulators: light weight, high mechanical strength, good hydrophobicity, etc. In this paper, the effect of sample size on aging characteristics in the salt fog test is examined. Leakage current was measured using a 100 MHz A/D board or a 100 MHz digital oscilloscope and separated into three components (conductive current, corona discharge current, and dry band arc discharge current) using the FFT and a newly proposed current differential method. The cumulative charge of each component was estimated automatically by a personal computer. The results show that when the sample size increased under the same average applied electric field, the peak values of the leakage current and of each component current increased. In particular, the cumulative charge and arc length of the dry band arc discharge increased remarkably with increasing gap length.
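The frequency-based part of the separation can be sketched with a plain DFT on a synthetic leakage current. The 200 Hz split point, signal amplitudes, and frequencies below are invented, and the paper's current differential method is not reproduced; the sketch only shows splitting spectral energy into a low band (conductive component) and a high band (discharge activity).

```python
import math

def dft_mag(x):
    """One-sided DFT magnitudes, normalized so a sine of amplitude A shows A/2 at its bin."""
    N = len(x)
    return [abs(sum(x[n] * complex(math.cos(2 * math.pi * k * n / N),
                                   -math.sin(2 * math.pi * k * n / N))
                    for n in range(N))) / N
            for k in range(N // 2 + 1)]

# synthetic leakage current: 50 Hz conductive component plus a 1 kHz discharge-like component
fs = 3200                       # sampling rate, Hz (invented)
N = fs // 10                    # 0.1 s window
current = [1.0 * math.sin(2 * math.pi * 50 * n / fs)
           + 0.2 * math.sin(2 * math.pi * 1000 * n / fs) for n in range(N)]

mags = dft_mag(current)
df = fs / N                     # bin width, 10 Hz
low = sum(m for k, m in enumerate(mags) if k * df <= 200)   # conductive band
high = sum(m for k, m in enumerate(mags) if k * df > 200)   # discharge band
```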

  11. Determination of the optimal number of components in independent components analysis.

    PubMed

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
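The ICA_corr_y idea (select the number of ICs whose extracted signals correlate best with a known source signal) can be sketched as follows. The candidate "models" below are hand-made mixtures rather than actual ICA output, so this only illustrates the selection rule, not the decomposition itself.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def best_n_ics(models, known_source):
    """models maps a candidate number of ICs to the signals that model extracted;
    pick the n whose best signal correlates most strongly with the known source."""
    scores = {n: max(abs(pearson(sig, known_source)) for sig in sigs)
              for n, sigs in models.items()}
    return max(scores, key=scores.get)

# hand-made stand-ins for ICA output at 1, 2, and 3 extracted components
t = [i / 100 for i in range(200)]
source = [math.sin(2 * math.pi * 1.5 * ti) for ti in t]
other = [math.cos(2 * math.pi * 4.0 * ti) for ti in t]
models = {
    1: [[0.6 * s + 0.8 * o for s, o in zip(source, other)]],            # under-fitted mixture
    2: [list(source), list(other)],                                     # sources resolved
    3: [[s + 0.3 * o for s, o in zip(source, other)], list(other),      # over-split, leaky
        [0.5 * s - 0.5 * o for s, o in zip(source, other)]],
}
```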

  12. Embedded electronics for intelligent structures

    NASA Astrophysics Data System (ADS)

    Warkentin, David J.; Crawley, Edward F.

The signal, power, and communications provisions for distributed control processing, sensing, and actuation in an intelligent structure could benefit from a method of physically embedding some electronic components. The preliminary feasibility of embedding electronic components in load-bearing intelligent composite structures is addressed. A technique for embedding integrated circuits on silicon chips within graphite/epoxy composite structures is presented which addresses the problems of electrical, mechanical, and chemical isolation. The mechanical and chemical isolation of test articles manufactured by this technique is tested by subjecting them to static and cyclic mechanical loads and a temperature/humidity/bias environment. The likely failure modes under these conditions are identified, and suggestions for further improvements in the technique are discussed.

  13. Evaluation of Fatigue Crack Growth and Fracture Properties of Cryogenic Model Materials

    NASA Technical Reports Server (NTRS)

    Newman, John A.; Forth, Scott C.; Everett, Richard A., Jr.; Newman, James C., Jr.; Kimmel, William M.

    2002-01-01

    The criteria used to prevent failure of wind-tunnel models and support hardware were revised as part of a project to enhance the capabilities of cryogenic wind tunnel testing at NASA Langley Research Center. Specifically, damage-tolerance fatigue life prediction methods are now required for critical components, and material selection criteria are more general and based on laboratory test data. The suitability of two candidate model alloys (AerMet 100 and C-250 steel) was investigated by obtaining the fatigue crack growth and fracture data required for a damage-tolerance fatigue life analysis. Finally, an example is presented to illustrate the newly implemented damage tolerance analyses required of wind-tunnel model system components.

  14. The application of probabilistic design theory to high temperature low cycle fatigue

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.

    1981-01-01

    Metal fatigue under stress and thermal cycling is a principal mode of failure in gas turbine engine hot section components such as turbine blades and disks and combustor liners. Designing for fatigue is subject to considerable uncertainty, e.g., scatter in cycles to failure, available fatigue test data and operating environment data, uncertainties in the models used to predict stresses, etc. Methods of analyzing fatigue test data for probabilistic design purposes are summarized. The general strain life as well as homo- and hetero-scedastic models are considered. Modern probabilistic design theory is reviewed and examples are presented which illustrate application to reliability analysis of gas turbine engine components.
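The probabilistic treatment of scatter in cycles to failure can be sketched with a lognormal life model. The strain-life constants and coefficient of variation below are invented for illustration, and the article's homo- and hetero-scedastic models are not reproduced.

```python
import math
import random

def median_life(strain_amp, A=0.01, b=-0.1):
    """Hypothetical strain-life curve strain = A * N**b, inverted for median cycles N."""
    return (strain_amp / A) ** (1.0 / b)

def reliability(n_service, strain_amp, cov=0.5, n_draws=20000, seed=42):
    """Monte-Carlo P(cycles to failure > n_service) with lognormal scatter about the median."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + cov ** 2))   # lognormal shape from coefficient of variation
    mu = math.log(median_life(strain_amp))        # lognormal median = exp(mu)
    survive = sum(1 for _ in range(n_draws) if rng.lognormvariate(mu, sigma) > n_service)
    return survive / n_draws
```

By construction, reliability is about 0.5 at the median life and falls off as the required service life grows.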

  15. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images.

    PubMed

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung

    2018-01-01

We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method and the ellipsoidal and Linskey's formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and other two-component formulas, including the ellipsoidal and Linskey's formulas, allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas.
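The competing estimators can be sketched as simple closed-form volumes. The one-component forms below are the standard cuboidal, ellipsoidal, and mean-diameter spherical formulas; the two-component "ice cream cone" is an assumed cone-plus-half-ellipsoid decomposition for illustration, and the paper's exact formula may differ.

```python
import math

# one-component estimators from three orthogonal diameters a, b, c (cm)
def cuboidal(a, b, c):
    return a * b * c

def ellipsoidal(a, b, c):
    return math.pi / 6 * a * b * c

def spherical(a, b, c):
    d = (a + b + c) / 3          # mean diameter
    return math.pi / 6 * d ** 3

# assumed two-component "ice cream cone": a cone for the intracanalicular part plus
# half an ellipsoid for the cisternal part (hypothetical decomposition)
def ice_cream_cone(r_canal, h_canal, a, b, c):
    cone = math.pi / 3 * r_canal ** 2 * h_canal
    half_ellipsoid = math.pi / 12 * a * b * c
    return cone + half_ellipsoid
```

Since pi/6 is about 0.52, the ellipsoidal estimate is roughly half the cuboidal one for the same diameters, consistent with the cuboidal overestimation reported above.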

  16. Non-stationary signal analysis based on general parameterized time-frequency transform and its application in the feature extraction of a rotary machine

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming

    2018-06-01

    With the development of large rotary machines for faster and more integrated performance, their condition monitoring and fault diagnosis are becoming more challenging. Since the time-frequency (TF) pattern of a rotary machine's vibration signal often contains condition information and fault features, methods based on TF analysis have been widely used to solve these two problems in the industrial community. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT), which is obtained by inserting a rotation operator and a shift operator into the short-time Fourier transform. The method can produce a highly concentrated TF pattern with a general kernel. Based on this transform, a multi-component instantaneous frequency (IF) extraction method is proposed, in which the IF of each component is estimated by defining a spectrum concentration index (SCI); this IF estimation process is iterated until all components are extracted. Tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of the method.
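    The GPTFT itself requires a parameterized kernel; as a simpler illustration of the ridge-based IF estimation such methods build on, a short-time Fourier transform ridge estimate for a linear chirp can be sketched (all signal and window parameters here are assumptions, not values from the paper):

    ```python
    import numpy as np

    fs = 1000
    t = np.arange(0, 1, 1 / fs)
    # Linear chirp: instantaneous frequency rises from 50 Hz to 150 Hz.
    x = np.cos(2 * np.pi * (50 * t + 50 * t ** 2))

    win, hop = 128, 16
    window = np.hanning(win)
    frames = []
    for start in range(0, len(x) - win, hop):
        seg = x[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(seg)))   # magnitude spectrum per frame
    S = np.array(frames)                          # time x frequency magnitudes
    freqs = np.fft.rfftfreq(win, 1 / fs)
    if_est = freqs[np.argmax(S, axis=1)]          # ridge = peak frequency per frame
    ```

    The estimated ridge rises from roughly 50 Hz toward 150 Hz; a parameterized transform like the GPTFT sharpens this ridge before the peak is picked.
    
    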

  17. Metal-coated optical fibers for high temperature sensing applications

    NASA Astrophysics Data System (ADS)

    Fidelus, Janusz D.; Wysokiński, Karol; Stańczyk, Tomasz; Kołakowska, Agnieszka; Nasiłowski, Piotr; Lipiński, Stanisław; Tenderenda, Tadeusz; Nasiłowski, Tomasz

    2017-10-01

    A novel low-temperature method was used to enhance the corrosion resistance of copper- and gold-coated optical fibers. A characterization of the fabricated materials is presented, together with reports on selected studies such as cyclic temperature tests and tensile tests. Gold-coated optical fibers are proposed as a component of optical fiber sensors working in oxidizing atmospheres at temperatures exceeding 900 °C.

  18. Measuring air-water interfacial area for soils using the mass balance surfactant-tracer method.

    PubMed

    Araujo, Juliana B; Mainhagu, Jon; Brusseau, Mark L

    2015-09-01

    There are several methods for conducting interfacial partitioning tracer tests to measure air-water interfacial area in porous media. One such approach is the mass balance surfactant tracer method. An advantage of the mass-balance method compared to other tracer-based methods is that a single test can produce multiple interfacial area measurements over a wide range of water saturations. The mass-balance method has been used to date only for glass beads or treated quartz sand. The purpose of this research is to investigate the effectiveness and implementability of the mass-balance method for application to more complex porous media. The results indicate that interfacial areas measured with the mass-balance method are consistent with values obtained with the miscible-displacement method. This includes results for a soil, for which solid-phase adsorption was a significant component of total tracer retention. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2016-01-01

    A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must be tested separately for linear independence, because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs, in combination with the variance inflation factor, are used to test for linear independence. A unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature-recommended threshold of five. Data from the calibration of a six-component force balance are used to illustrate the application of the new test to real-world data.
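    The linear-independence check described here, computing variance inflation factors and comparing the maximum against the threshold of five, can be sketched as a generic VIF computation (not the author's code):

    ```python
    import numpy as np

    def max_vif(X):
        # Variance inflation factor per column: regress each column on the
        # remaining columns (with an intercept) and compute 1 / (1 - R^2).
        X = np.asarray(X, dtype=float)
        n, p = X.shape
        vifs = []
        for j in range(p):
            y = X[:, j]
            A = np.column_stack([np.delete(X, j, axis=1), np.ones(n)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
            vifs.append(1.0 / max(1.0 - r2, 1e-12))   # guard against division by zero
        return max(vifs)
    ```

    Applied to the columns of a calibration load (or bridge output) matrix, a result below five indicates the set is effectively linearly independent; a near-collinear column drives the maximum VIF far above the threshold.
    
    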

  20. A Method for Evaluating Information Security Governance (ISG) Components in Banking Environment

    NASA Astrophysics Data System (ADS)

    Ula, M.; Ula, M.; Fuadi, W.

    2017-02-01

    As modern banking increasingly relies on the internet and computer technologies to operate its businesses and market interactions, threats and security breaches have increased sharply in recent years. Insider and outsider attacks have cost global businesses trillions of dollars a year. There is therefore a need for a proper framework to govern information security in the banking system. The aim of this research is to propose and design an enhanced method to evaluate information security governance (ISG) implementation in a banking environment. This research examines and compares elements from the commonly used information security governance frameworks, standards, and best practices, considering the strengths and weaknesses of their approaches. An initial framework for governing information security in the banking system was constructed from a document review. The framework was categorized into three levels: the governance level, the managerial level, and the technical level. The study further conducted an online survey of banking security professionals to obtain their professional judgment about the most critical ISG components and the importance of each ISG component that should be implemented in a banking environment. Data from the survey were used to construct a mathematical model for ISG evaluation, with the component importance data used as weighting coefficients for the related components in the model. The research further develops a method for evaluating ISG implementation in banking based on this mathematical model. The proposed method was tested through a real case study at a local Indonesian bank. The study shows that the proposed method has sufficient coverage of ISG in the banking environment and effectively evaluates ISG implementation.
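    The weighted evaluation model described above can be sketched as a weighted sum; the component names, weights, and scores below are purely hypothetical stand-ins for the survey-derived coefficients:

    ```python
    # Hypothetical ISG evaluation sketch: survey-derived importance weights per
    # framework level, normalized, combined with assessed maturity scores (1-5).
    weights = {"governance": 0.40, "managerial": 0.35, "technical": 0.25}  # assumed
    scores = {"governance": 3.5, "managerial": 4.0, "technical": 2.5}      # assumed

    total_w = sum(weights.values())
    isg_score = sum(weights[k] / total_w * scores[k] for k in weights)
    ```

    Normalizing by the total weight keeps the overall score on the same 1-5 scale as the component scores regardless of how the raw survey weights sum.
    
    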

  1. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program, essentially based on modal synthesis combined with a central finite difference numerical integration scheme. The methodology leads to a modular, building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refining or extending the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are also described.
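    The central finite difference integration scheme paired with modal synthesis can be illustrated on a single damped modal oscillator (all modal parameters below are assumed; TETRA itself integrates many coupled modal equations):

    ```python
    import numpy as np

    # Central-difference time integration of q'' + 2*zeta*w*q' + w^2*q = f(t)
    # for one modal coordinate (sketch; parameters assumed).
    w = 2 * np.pi * 5.0      # modal frequency, rad/s
    zeta = 0.02              # modal damping ratio
    dt = 1e-4                # time step, well below the stability limit 2/w
    steps = 20000

    q_prev = 0.0             # q(0) = 0
    q = 1.0 * dt             # start-up step from initial velocity q'(0) = 1
    hist = []
    for n in range(steps):
        f = 0.0              # free response after the initial velocity kick
        a = f - 2 * zeta * w * (q - q_prev) / dt - w ** 2 * q
        q_next = 2 * q - q_prev + a * dt ** 2   # central-difference update
        hist.append(q_next)
        q_prev, q = q, q_next
    ```

    The explicit update needs no matrix factorization per step, which is what makes the scheme attractive for the modular, building-block engine models the abstract describes; the price is a conditional stability limit on the time step.
    
    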

  2. Non-destructive testing of composite materials used in military applications by eddy current thermography method

    NASA Astrophysics Data System (ADS)

    Swiderski, Waldemar

    2016-10-01

    Eddy current thermography is a new NDT technique for the detection of cracks in electrically conductive materials. It combines the well-established inspection techniques of eddy current testing and thermography: induced eddy currents heat the sample being tested, and defect detection is based on the changes in induced eddy current flow revealed by thermal visualization captured with an infrared camera. The advantage of this method is that it exploits the high performance of eddy current testing while eliminating the well-known edge effect problem. For components of complex geometry in particular, this is an important factor that may outweigh the increased expense of the inspection set-up. The paper presents the possibility of applying the eddy current thermography method to detecting defects in ballistic covers made of carbon fiber reinforced composites used in the construction of military vehicles.

  3. ENVIRONMENTALLY BENIGN MITIGATION OF MICROBIOLOGICALLY INFLUENCED CORROSION (MIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Robert Paterek; Gemma Husmillo; Amrutha Daram

    The overall program objective is to develop and evaluate environmentally benign agents or products that are effective in the prevention, inhibition, and mitigation of microbially influenced corrosion (MIC) on the internal surfaces of metallic natural gas pipelines. The goal is to develop one or more environmentally benign (a.k.a. ''green'') products that can be applied to maintain the structure and dependability of the natural gas infrastructure. The technical approach for this quarter included the application of new methods of Capsicum sp. (pepper) extraction by the Soxhlet method and analysis of a new set of extracts by thin layer chromatography (TLC) and high-performance liquid chromatography (HPLC); isolation and cultivation of MIC-causing microorganisms from corroded pipeline samples; and evaluation of the antimicrobial activities of the old set of pepper extracts in comparison with major components of known biocides and corrosion inhibitors. Twelve new extracts from three varieties of Capsicum sp. (Serrano, Habanero, and Chile de Arbol) were obtained by Soxhlet extraction using four different solvents. TLC of these extracts showed the presence of capsaicin and some phenolic compounds, while HPLC detected capsaicin and dihydrocapsaicin peaks. More tests will be done to determine specific components. Additional isolates from the group of heterotrophic, acid-producing, denitrifying, and sulfate-reducing bacteria were obtained from the pipeline samples submitted by gas companies. Isolates of interest will be used in subsequent antimicrobial testing and test-loop simulation system experiments. Results of antimicrobial screening of Capsicum sp. extracts and components of known commercial biocides showed comparable activities when tested against two strains of sulfate-reducing bacteria.

  4. Characterization of selected municipal solid waste components to estimate their biodegradability.

    PubMed

    Bayard, R; Benbelkacem, H; Gourdon, R; Buffière, P

    2018-06-15

    Biological treatments of Residual Municipal Solid Waste (RMSW) make it possible to divert biodegradable materials from landfilling and recover valuable alternative resources. However, the biodegradability of the waste components needs to be assessed in order to design the bioprocesses properly. The present study investigated complementary approaches to aerobic and anaerobic biotests for a more rapid evaluation. A representative sample of residual MSW was collected from a Mechanical Biological Treatment (MBT) plant and sorted into 13 fractions according to the French standard procedure MODECOM™. The different fractions were analyzed for organic matter content, leaching behavior, contents of biochemical constituents (determined by Van Soest's acid detergent fiber method), Biochemical Oxygen Demand (BOD), and Bio-Methane Potential (BMP). Experimental data were statistically treated by Principal Components Analysis (PCA). Cumulative oxygen consumption from BOD tests and cumulative methane production from BMP tests were found to be positively correlated across all waste fractions. No correlation was observed between the results of the BOD or BMP bioassays and the contents of cellulose-like, hemicellulose-like, or labile organic compounds, nor with the results of leaching tests (soluble COD). The content of lignin-like compounds, evaluated as the non-extracted RES fraction in Van Soest's method, was, however, found to negatively affect the biodegradability assessed by BOD or BMP tests. Since cellulose, hemicelluloses, and lignin are the polymers responsible for the structure of lignocellulosic complexes, it was concluded that the structural organization of the organic matter in the different waste fractions was more determinant for biodegradability than the respective contents of individual biopolymers. Copyright © 2017 Elsevier Ltd. All rights reserved.
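    The kind of PCA applied to the waste-fraction data can be sketched on synthetic data (all values assumed): two positively correlated biodegradability indicators, such as BOD and BMP, should load together on the leading principal component, while an unrelated variable separates out:

    ```python
    import numpy as np

    # Synthetic stand-ins for the measured variables (assumed, not study data).
    rng = np.random.default_rng(1)
    bod = rng.normal(size=200)
    bmp = 0.9 * bod + 0.1 * rng.normal(size=200)   # positively correlated with BOD
    lignin = rng.normal(size=200)                  # independent of both

    X = np.column_stack([bod, bmp, lignin])
    Xc = X - X.mean(axis=0)                        # center before PCA
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                           # leading principal component
    ```

    The BOD and BMP loadings on `pc1` come out large and of the same sign, mirroring the correlation the study reports, while the independent variable contributes little to the leading component.
    
    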

  5. In Pursuit of Change: Youth Response to Intensive Goal Setting Embedded in a Serious Video Game

    PubMed Central

    Thompson, Debbe; Baranowski, Tom; Buday, Richard; Baranowski, Janice; Juliano, Melissa; Frazior, McKee; Wilsdon, Jon; Jago, Russell

    2007-01-01

    Background Type 2 diabetes has increased in prevalence among youth, paralleling the increase in pediatric obesity. Helping youth achieve energy balance by changing diet and physical activity behaviors should decrease the risk for type 2 diabetes and obesity. Goal setting and goal review are critical components of behavior change. Theory-informed video games that emphasize development and refinement of goal setting and goal review skills provide a method for achieving energy balance in an informative, entertaining format. This article reports alpha-testing results for early versions of the theory-informed goal setting and goal review components of two diabetes and obesity prevention video games for preadolescents. Method Two episodes each of two video games were alpha tested with 9- to 11-year-old youth from multiple ethnic groups. Alpha testing included observed game play followed by a scripted interview. The staff was trained in observation and interview techniques prior to data collection. Results Although some difficulties were encountered, alpha testers generally understood the goal setting and review components and comprehended that they were setting personal goals. Although goal setting and review involved multiple steps, youth were generally able to complete them quickly, with minimal difficulty. Few technical issues arose; however, several usability and comprehension problems were identified. Conclusions Theory-informed video games may be an effective medium for promoting youth diabetes and obesity prevention. Alpha testing helps identify problems likely to have a negative effect on functionality, usability, and comprehension during development, thereby providing an opportunity to correct these issues prior to final production. PMID:19885165

  6. Design of the HPTN 065 (TLC-Plus) study: A study to evaluate the feasibility of an enhanced test, link-to-care, plus treat approach for HIV prevention in the United States

    PubMed Central

    Gamble, Theresa; Branson, Bernard; Donnell, Deborah; Hall, H Irene; King, Georgette; Cutler, Blayne; Hader, Shannon; Burns, David; Leider, Jason; Wood, Angela Fulwood; Volpp, Kevin G.; Buchacz, Kate; El-Sadr, Wafaa M

    2017-01-01

    Background/Aims HIV continues to be a major public health threat in the United States, and mathematical modeling has demonstrated that the universal effective use of antiretroviral therapy among all HIV-positive individuals (i.e. the “test and treat” approach) has the potential to control HIV. However, to accomplish this, all the steps that define the HIV care continuum must be achieved at high levels, including HIV testing and diagnosis, linkage to and retention in clinical care, antiretroviral medication initiation, and adherence to achieve and maintain viral suppression. The HPTN 065 (Test, Link-to-Care Plus Treat [TLC-Plus]) study was designed to determine the feasibility of the “test and treat” approach in the United States. Methods HPTN 065 was conducted in two intervention communities, Bronx, NY, and Washington, DC, along with four non-intervention communities, Chicago, IL; Houston, TX; Miami, FL; and Philadelphia, PA. The study consisted of five components: (1) exploring the feasibility of expanded HIV testing via social mobilization and the universal offer of testing in hospital settings, (2) evaluating the effectiveness of financial incentives to increase linkage to care, (3) evaluating the effectiveness of financial incentives to increase viral suppression, (4) evaluating the effectiveness of a computer-delivered intervention to decrease risk behavior in HIV-positive patients in healthcare settings, and (5) administering provider and patient surveys to assess knowledge and attitudes regarding the use of antiretroviral therapy for prevention and the use of financial incentives to improve health outcomes. The study used observational cohorts, cluster and individual randomization, and made novel use of the existing national HIV surveillance data infrastructure. All components were developed with input from a community advisory board, and pragmatic methods were used to implement and assess the outcomes for each study component. 
Results A total of 76 sites in Washington, DC, and the Bronx, NY, participated in the study: 37 HIV test sites, including 16 hospitals, and 39 HIV care sites. Between September 2010 and December 2014, all study components were successfully implemented at these sites and resulted in valid outcomes. Our pragmatic approach to the study design, implementation, and the assessment of study outcomes allowed the study to be conducted within established programmatic structures and processes. In addition, it was successfully layered on the ongoing standard of care and existing data infrastructure without disrupting health services. Conclusion The HPTN 065 study demonstrated the feasibility of implementing and evaluating a multi-component “test and treat” trial that included a large number of community sites and involved pragmatic approaches to study implementation and evaluation. PMID:28627929

  7. Design of the HPTN 065 (TLC-Plus) study: A study to evaluate the feasibility of an enhanced test, link-to-care, plus treat approach for HIV prevention in the United States.

    PubMed

    Gamble, Theresa; Branson, Bernard; Donnell, Deborah; Hall, H Irene; King, Georgette; Cutler, Blayne; Hader, Shannon; Burns, David; Leider, Jason; Wood, Angela Fulwood; Volpp, Kevin G; Buchacz, Kate; El-Sadr, Wafaa M

    2017-08-01

    Background/Aims HIV continues to be a major public health threat in the United States, and mathematical modeling has demonstrated that the universal effective use of antiretroviral therapy among all HIV-positive individuals (i.e. the "test and treat" approach) has the potential to control HIV. However, to accomplish this, all the steps that define the HIV care continuum must be achieved at high levels, including HIV testing and diagnosis, linkage to and retention in clinical care, antiretroviral medication initiation, and adherence to achieve and maintain viral suppression. The HPTN 065 (Test, Link-to-Care Plus Treat [TLC-Plus]) study was designed to determine the feasibility of the "test and treat" approach in the United States. Methods HPTN 065 was conducted in two intervention communities, Bronx, NY, and Washington, DC, along with four non-intervention communities, Chicago, IL; Houston, TX; Miami, FL; and Philadelphia, PA. The study consisted of five components: (1) exploring the feasibility of expanded HIV testing via social mobilization and the universal offer of testing in hospital settings, (2) evaluating the effectiveness of financial incentives to increase linkage to care, (3) evaluating the effectiveness of financial incentives to increase viral suppression, (4) evaluating the effectiveness of a computer-delivered intervention to decrease risk behavior in HIV-positive patients in healthcare settings, and (5) administering provider and patient surveys to assess knowledge and attitudes regarding the use of antiretroviral therapy for prevention and the use of financial incentives to improve health outcomes. The study used observational cohorts, cluster and individual randomization, and made novel use of the existing national HIV surveillance data infrastructure. All components were developed with input from a community advisory board, and pragmatic methods were used to implement and assess the outcomes for each study component. 
Results A total of 76 sites in Washington, DC, and the Bronx, NY, participated in the study: 37 HIV test sites, including 16 hospitals, and 39 HIV care sites. Between September 2010 and December 2014, all study components were successfully implemented at these sites and resulted in valid outcomes. Our pragmatic approach to the study design, implementation, and the assessment of study outcomes allowed the study to be conducted within established programmatic structures and processes. In addition, it was successfully layered on the ongoing standard of care and existing data infrastructure without disrupting health services. Conclusion The HPTN 065 study demonstrated the feasibility of implementing and evaluating a multi-component "test and treat" trial that included a large number of community sites and involved pragmatic approaches to study implementation and evaluation.

  8. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for MathWorks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
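    A generic randomized truncated SVD in the style of such algorithms (a sketch in NumPy, not the authors' MATLAB implementation) samples the range of the matrix with a Gaussian test matrix, optionally sharpens it with power iterations, and then solves a small dense SVD:

    ```python
    import numpy as np

    def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
        # Randomized rank-k SVD: sample the range of A with a Gaussian test
        # matrix, apply power iterations, then reduce to a small SVD.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        Omega = rng.standard_normal((n, k + oversample))
        Y = A @ Omega                       # sample of the column space of A
        for _ in range(n_iter):             # power iterations sharpen the spectrum
            Y = A @ (A.T @ Y)
        Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sample
        B = Q.T @ A                         # small (k + oversample) x n matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :k], s[:k], Vt[:k]
    ```

    For a matrix of exact rank k, the oversampled Gaussian sketch captures the range essentially exactly, so the truncated factors reconstruct the input to machine precision.
    
    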

  9. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBAs) which engage in a collaborative problem-solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  10. Development of Modal Test Techniques for Validation of a Solar Sail Design

    NASA Technical Reports Server (NTRS)

    Gaspar, James L.; Mann, Troy; Behun, Vaughn; Wilkie, W. Keats; Pappa, Richard

    2004-01-01

    This paper focuses on the development of modal test techniques for validation of a solar sail gossamer space structure design. The major focus is on validating and comparing the capabilities of various excitation techniques for modal testing of solar sail components. One triangular quadrant of a solar sail membrane was tested in a 1 Torr vacuum environment using various excitation techniques, including magnetic excitation and surface-bonded piezoelectric patch actuators. Results from modal tests performed on the sail using piezoelectric patches at different positions are discussed. The excitation methods were evaluated for their applicability to in-vacuum ground testing and to the development of on-orbit flight test techniques. The solar sail membrane was tested in the horizontal configuration at various tension levels to assess the variation in frequency with tension in a vacuum environment. A segment of a solar sail mast prototype was also tested in ambient atmospheric conditions using various excitation techniques, and these methods are likewise assessed for their ground test capabilities and applicability to on-orbit flight testing.

  11. Multilevel geometry optimization

    NASA Astrophysics Data System (ADS)

    Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.

    2000-02-01

    Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
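    The distinction between unit-coefficient and multicoefficient combinations can be illustrated with hypothetical component energies (all numbers and labels below are assumed for illustration, not values from the paper):

    ```python
    # Multilevel energy combination sketch: Gaussian-x-style methods add and
    # subtract component energies with unit coefficients, whereas
    # multicoefficient methods use fitted noninteger coefficients.
    components = {"HF/large": -100.20, "MP2/small": -100.45, "MP4/small": -100.50}  # hartree, assumed
    unit_coeffs = {"HF/large": 1.0, "MP2/small": -1.0, "MP4/small": 1.0}            # Gaussian-x style
    fitted_coeffs = {"HF/large": 0.98, "MP2/small": -0.92, "MP4/small": 1.05}       # assumed fit

    e_unit = sum(unit_coeffs[k] * components[k] for k in components)
    e_fit = sum(fitted_coeffs[k] * components[k] for k in components)
    ```

    In practice the noninteger coefficients are fitted against reference data, which is the source of the accuracy gain the abstract reports.
    
    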

  12. Using a fuzzy comprehensive evaluation method to determine product usability: A test case

    PubMed Central

    Zhou, Ronggang; Chan, Alan H. S.

    2016-01-01

    BACKGROUND: In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitably vague judgments from the multiple stages of the product evaluation process. OBJECTIVE AND METHODS: In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software package. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and the uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities of the fuzzy approach and two typical conventional methods that combine percentage-based metrics. RESULTS AND CONCLUSIONS: This case study showed that the fuzzy evaluation technique can be applied successfully to combine summative usability testing data into an overall usability quality for the network software evaluated. Greater differences in confidence interval widths between the equally averaged percentage method and the weighted evaluation methods, including the method of weighted percentage averages, verified the strength of the fuzzy method. PMID:28035942
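    The core fuzzy aggregation step, AHP-style criterion weights combined with fuzzy membership vectors over rating grades, can be sketched as follows (the criteria, weights, and memberships are assumed for illustration):

    ```python
    import numpy as np

    # Fuzzy comprehensive evaluation sketch (assumed numbers): each usability
    # criterion has an AHP-derived weight and a fuzzy membership vector over
    # rating grades; weighted aggregation yields the overall grade vector.
    grades = ["poor", "fair", "good", "excellent"]
    weights = np.array([0.5, 0.3, 0.2])      # e.g. effectiveness, efficiency, satisfaction
    R = np.array([                           # membership of each criterion in each grade
        [0.0, 0.1, 0.6, 0.3],
        [0.1, 0.2, 0.5, 0.2],
        [0.0, 0.3, 0.4, 0.3],
    ])
    B = weights @ R                          # weighted-average fuzzy aggregation
    B = B / B.sum()                          # normalize to a membership distribution
    overall = grades[int(np.argmax(B))]      # defuzzify by maximum membership
    ```

    The weighted-average operator is one of several standard fuzzy composition operators; max-min composition is a common alternative when the most influential criterion should dominate.
    
    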

  13. Strategies for lowering attrition rates and raising NCLEX-RN pass rates.

    PubMed

    Higgins, Bonnie

    2005-12-01

    This study was designed to determine strategies to raise the NCLEX-RN pass rate and lower the attrition rate in a community college nursing program. Ex post facto data were collected from 213 former nursing student records. Qualitative data were collected from 10 full-time faculty, 30 new graduates, and 45 directors of associate degree nursing programs in Texas. The findings linked the academic variables of two biology courses and three components of the preadmission test to completion of the nursing program. A relationship was found between one biology course, the science component of the preadmission test, the HESI Exit Examination score, the nursing skills course, and passing the NCLEX-RN. Qualitative data indicated that preadmission requirements, campus counselors, remediation, faculty, test-item writing, and teaching methods were instrumental in completion of the program and passing the NCLEX-RN.

  14. Standardization Efforts for Mechanical Testing and Design of Advanced Ceramic Materials and Components

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Jenkins, Michael G.

    2003-01-01

    Advanced aerospace systems occasionally require the use of very brittle materials such as sapphire and ultra-high temperature ceramics. Although great progress has been made in the development of methods and standards for machining, testing, and design of components from these materials, additional development and dissemination of standard practices is needed. ASTM Committee C28 on Advanced Ceramics and ISO TC 206 have taken a lead role in the standardization of testing for ceramics, and recent efforts and needs in standards development by Committee C28 will be summarized. In some cases, the engineers involved are unaware of the latest developments, and traditional approaches applicable to other material systems are applied instead. Two examples of flight hardware failures that might have been prevented through education and standardization will be presented.

  15. A laboratory medicine residency training program that includes clinical consultation and research.

    PubMed

    Spitzer, E D; Pierce, G F; McDonald, J M

    1990-04-01

    We describe a laboratory medicine residency training program that includes ongoing interaction with both clinical laboratories and clinical services as well as significant research experience. Laboratory medicine residents serve as on-call consultants in the interpretation of test results, design of testing strategies, and assurance of test quality. The consultative on-call beeper system was evaluated and is presented as an effective method of clinical pathology training that is well accepted by the clinical staff. The research component of the residency program is also described. Together, these components provide training in real-time clinical problem solving and prepare residents for the changing technological environment of the clinical laboratory. At the completion of the residency, the majority of the residents are qualified laboratory subspecialists and are also capable of running an independent research program.

  16. Characterization of Triaxial Braided Composite Material Properties for Impact Simulation

    NASA Technical Reports Server (NTRS)

    Roberts, Gary D.; Goldberg, Robert K.; Binienda, Wieslaw K.; Arnold, William A.; Littell, Justin D.; Kohlman, Lee W.

    2009-01-01

    The reliability of impact simulations for aircraft components made with triaxial braided carbon fiber composites is currently limited by inadequate material property data and a lack of validated material models for analysis. Improvements to standard quasi-static test methods are needed to account for the large unit cell size and localized damage within the unit cell. The deformation and damage of a triaxial braided composite material were examined using standard quasi-static in-plane tension, compression, and shear tests. Some modifications to standard test specimen geometries are suggested, and methods for measuring the local strain at the onset of failure within the braid unit cell are presented. Deformation and damage at higher strain rates were examined using ballistic impact tests on 61- by 61-cm by 3.2-mm (24- by 24- by 0.125-in.) composite panels. Digital image correlation techniques were used to examine full-field deformation and damage during both quasi-static and impact tests. An impact analysis method is presented that utilizes both local and global deformation and failure information from the quasi-static tests as input for impact simulations. Improvements needed in test and analysis methods for better predictive capability are examined.

  17. On the Stress Analysis of Rails and Ties

    DOT National Transportation Integrated Search

    1976-09-01

    This report covers first the methods presented in the literature for the stress analysis of railroad track components and results of a variety of validation tests. It was found that a formula can yield deflections and bending stresses in the rails of...

  18. 40 CFR 63.1325 - Batch process vents-performance test methods and procedures to determine compliance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Cj=Average inlet or outlet concentration of TOC or sample organic HAP component j of the gas stream...), where standard temperature is 20 °C. Cj=Inlet or outlet concentration of TOC or sample organic HAP...

  19. 40 CFR 63.1325 - Batch process vents-performance test methods and procedures to determine compliance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Cj=Average inlet or outlet concentration of TOC or sample organic HAP component j of the gas stream...), where standard temperature is 20 °C. Cj=Inlet or outlet concentration of TOC or sample organic HAP...

  20. 40 CFR 63.1325 - Batch process vents-performance test methods and procedures to determine compliance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Cj=Average inlet or outlet concentration of TOC or sample organic HAP component j of the gas stream...), where standard temperature is 20 °C. Cj=Inlet or outlet concentration of TOC or sample organic HAP...
